There’s an old rule of journalism, Betteridge’s law: any headline phrased as a question can be answered with no. So when The New Yorker asks whether you can trust OpenAI’s Sam Altman to make the right calls on how A.I. is used “in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones” – what do you think the answer is? (Ronan Farrow and Andrew Marantz)
- Altman was fired by OpenAI's board in 2023 after colleagues compiled roughly 70 pages of Slack messages and HR documents alleging a consistent pattern of deception toward executives, board members, and safety teams.
- He was reinstated within five days after threatening the board's existence through investor pressure, staff walkouts, and a Microsoft-backed escape hatch, then oversaw an investigation into himself that produced no written findings.
- The piece documents a decade-long gap between Altman's public safety commitments and internal reality: promised compute for safety research shrank from 20 percent to under 2 percent; safety teams were dissolved; a charter clause that could have wound down OpenAI if a safer competitor pulled ahead was quietly abandoned; and the company cut a Pentagon deal just days after positioning itself as the ethical alternative to rivals who wouldn't.
- OpenAI's conversion from nonprofit to for-profit entity involved recording a dissenting board member's vote as an abstention, apparently without that member's consent.
- The investigation that cleared Altman was never written down, on the advice of the new board members' personal attorneys.