Mother of OpenAI Whistleblower Suchir Balaji Calls for FBI Probe into Son’s Death; Elon Musk Says "Doesn't Seem Like a Suicide"
The mother of Suchir Balaji, a former OpenAI employee, has called for an FBI investigation into the circumstances surrounding her son’s death. Balaji, a 26-year-old researcher and whistleblower, was found dead in his San Francisco apartment on November 26. While authorities initially ruled his death a suicide, his mother, Poornima Ramarao, is disputing that conclusion.
Taking to the social media platform ‘X’ on Sunday, Ramarao revealed that a private investigation and a second autopsy had been conducted, neither of which confirmed the cause of death stated by the police. She also alleged that Balaji’s apartment, located on Buchanan Street, appeared to have been “ransacked” and showed signs of a struggle, particularly in the bathroom. “It looks like someone hit him in the bathroom based on blood spots,” she said. Ramarao called her son’s death “a cold-blooded murder” disguised as a suicide and demanded that the FBI investigate.
In her post, Ramarao also tagged Elon Musk and Vivek Ramaswamy, an Indian-American tech entrepreneur, both of whom are expected to join the incoming Donald Trump administration. Musk responded by saying, “This doesn’t seem like a suicide.”
Balaji’s death has raised questions, especially given his public role as an OpenAI whistleblower. He had resigned from the company in August 2024 after nearly four years, and he had been vocal about his concerns over the company’s practices, specifically the training of its AI models on copyrighted content scraped from the internet without authorization. Balaji argued that the practice could violate copyright law, telling the New York Times earlier this year, “If you believe what I believe, you have to just leave the company.”
On his personal website, Balaji explained his concerns further, arguing that OpenAI’s collection of data for model training could amount to copyright infringement. He noted that while generative models rarely produce outputs identical to their training data, the copying of copyrighted material during training could violate copyright law unless it is protected under the doctrine of “fair use.” “This is not a sustainable model for the internet ecosystem as a whole,” Balaji said.
In response, OpenAI denied the allegations, asserting that its data usage was in line with fair use principles and legal precedents. The company stated, “We build our AI models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents.” It argued that this practice is fair to creators, necessary for innovation, and critical for U.S. competitiveness.