- A New Yorker profile drawn from more than 100 interviews quotes Sam Altman saying his “vibes don’t really fit with a lot of this traditional A.I.-safety stuff,” offering the most direct explanation yet for the departure of safety researchers from OpenAI.
- Altman told employees who raised concerns about OpenAI’s Pentagon contracts that geopolitical judgments about military strikes fall outside the scope of internal input.
- A former OpenAI board member described Altman as “indifferent to the consequences of potential deceptions”; Altman attributed his shifting positions to the pace of change in the field.
- Anthropic, now a direct OpenAI competitor, was founded in 2021 by former OpenAI safety researchers who departed over concerns about the company’s safety culture.
What Happened
The New Yorker published an extended profile of OpenAI CEO Sam Altman in April 2026, based on more than 100 interviews and access to internal company documents. Reporting on the profile by The Decoder highlights Altman’s explanation for the steady departure of safety researchers from OpenAI: “My vibes don’t really fit with a lot of this traditional A.I.-safety stuff.” The quote is Altman’s most candid public accounting of the cultural divergence that has driven researchers out of the company for years.
Why It Matters
The attrition of safety-focused talent from OpenAI is not new, and it is well-documented. In 2021, Dario Amodei and Daniela Amodei — along with several other OpenAI researchers — left to found Anthropic, a company whose stated mission centers on AI safety. In 2024, OpenAI’s Superalignment team, created to address risks from advanced AI systems, was disbanded following the resignation of co-lead Jan Leike, who stated publicly that safety culture at OpenAI had deteriorated. The New Yorker profile places Altman’s perspective at the center of that record.
Altman acknowledged in the profile that the same cultural mismatch he describes contributed directly to the creation of Anthropic. That company has since received substantial investment and produced the Claude model family, positioning it as a credible rival in both the enterprise and consumer AI markets.
Technical Details
The profile documents a concrete pattern of shifting positions. In 2019, Altman publicly argued against releasing the GPT-2 language model in full, characterizing the system as too risky for unrestricted distribution. Within a few years, OpenAI was releasing models it described as significantly more capable — including GPT-3.5 and GPT-4 — at no cost through its consumer interface, ChatGPT. Altman addressed the inconsistency directly in the profile: “I think what some people want is a leader who is going to be absolutely sure of what they think and stick with it, and it’s not going to change. And we are in a field, in an area, where things change extremely quickly.”
The profile also describes OpenAI disbanding safety-focused teams and allegedly scaling back safety evaluations, without specifying the precise evaluations involved. When employees raised concerns following OpenAI’s entry into contracts with the U.S. Department of Defense, Altman drew a clear line on the scope of internal input: “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that.”
Who’s Affected
The primary population affected is current and former OpenAI employees who have raised safety-related concerns — some of whom have departed for Anthropic, Google DeepMind, or independent research organizations. The profile’s characterizations also carry weight for the broader AI safety research community, which has tracked organizational and cultural conditions at major AI labs since at least 2023.
A former OpenAI board member quoted in the profile described Altman as “deeply polarizing” and characterized him as “eager to please yet indifferent to the consequences of potential deceptions.” Those characterizations, now published in a major national magazine and based on more than 100 interviews, add to a public record that regulators, investors, and prospective employees will weigh when assessing OpenAI’s governance.
What’s Next
The profile arrives as OpenAI continues restructuring from a nonprofit-governed entity to a for-profit public benefit corporation, a transition under scrutiny from state attorneys general in California and Delaware, as well as from former employees and early investors. No formal response from OpenAI had been published as of April 7, 2026.
Altman’s on-record remarks about the limits of internal dissent on Pentagon contracts are likely to draw additional attention from AI policy researchers and civil society organizations tracking the company’s government and defense relationships.