A group of former OpenAI employees has accused the company of straying from its original mission, warning that its pursuit of profit is putting global safety at risk. In a report titled “The OpenAI Files”, ex-staffers say the company is sidelining safety in favor of rapid product releases and unlimited investor returns.
OpenAI was founded as a nonprofit, later adding a capped-profit arm, to ensure any breakthrough artificial general intelligence (AGI) would benefit humanity—not just shareholders. But insiders say that promise is being scrapped under pressure from investors.
“This feels like a profound betrayal,” said former OpenAI researcher Carroll Wainwright. “The nonprofit structure was supposed to do the right thing when the stakes got high. Now it’s being abandoned.”
The report points to CEO Sam Altman as a central figure in the shift, citing concerns about deceptive leadership raised by former executives including Mira Murati and co-founder Ilya Sutskever. “I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever reportedly said.
Jan Leike, who co-led OpenAI’s long-term safety team, resigned in May 2024, citing a lack of resources and a growing emphasis on “shiny products” over safety. Another former employee, William Saunders, testified before the U.S. Senate that security lapses could have allowed internal engineers to steal GPT-4.
The group is calling for sweeping reforms: restoring the nonprofit’s authority, launching an independent investigation into Altman, instituting robust whistleblower protections, and reestablishing the profit cap.
“This isn’t just a company dispute,” warned former board member Helen Toner. “Internal guardrails are fragile when money is on the line.”
As OpenAI continues developing transformative technologies, its former employees are asking a critical question: can the world trust its leadership to put public safety ahead of private gain?