Sam Altman Addresses Home Attack and *New Yorker* Profile Questions in New Public Blog Post

On Friday evening, OpenAI CEO Sam Altman published a blog entry responding to two high-stakes recent events: an alleged targeted attack on his personal residence, and a deeply reported New Yorker profile that has raised widespread questions about his trustworthiness.

The attack unfolded early that same Friday, when an individual allegedly threw a Molotov cocktail at Altman’s home in San Francisco. The San Francisco Police Department confirmed no one was injured in the incident, and the suspect was later arrested at OpenAI’s headquarters after threatening to burn down the building.

Though law enforcement has not publicly released the suspect’s identity, Altman noted the attack occurred just days after the publication of what he called an “incendiary article” profiling him. He said others had warned him that releasing the piece “at a time of great anxiety about AI” could put him at increased risk.

“I brushed it aside,” Altman wrote. “Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.”

The controversial profile in question is a lengthy investigative feature co-written by Ronan Farrow — a Pulitzer Prize-winning journalist famous for his exposé of Harvey Weinstein’s pattern of sexual abuse — and Andrew Marantz, a veteran reporter who has covered technology and politics extensively. After interviewing more than 100 people with first-hand knowledge of Altman’s business conduct, Farrow and Marantz wrote that most respondents described Altman as having “a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart.”

Echoing claims from past Altman profiles, the pair noted that multiple sources raised questions about his trustworthiness. One anonymous former OpenAI board member told the reporters Altman combines “a strong desire to please people, to be liked in any given interaction” with “a sociopathic lack of concern for the consequences that may come from deceiving someone.”


In his response to the profile, Altman acknowledged that reflecting on his tenure, he can point to “a lot of things I’m proud of and a bunch of mistakes.” One key mistake he highlighted is his longstanding tendency toward “being conflict-averse,” a trait he says has “caused great pain for me and OpenAI.”

“I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company,” Altman said, in an apparent reference to his 2023 ousting and rapid reinstatement as OpenAI CEO. “I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission.”

He added, “I am sorry to people I’ve hurt and wish I had learned more faster.”

Altman also addressed the pervasive tension across the AI industry, noting there seems to be “so much Shakespearean drama between the companies in our field,” which he attributes to a “‘ring of power’ dynamic” that “makes people do crazy things.”

He clarified he does not view artificial general intelligence (AGI) itself as the ring, but rather the totalizing ideology of “being the one to control AGI.” His proposed solution to this harmful dynamic is “to orient towards sharing the technology with people broadly, and for no one to have the ring.”

Altman closed his post by saying he welcomes “good-faith criticism and debate,” while reiterating his core belief that “technological progress can make the future unbelievably good, for your family and mine.”

“While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally,” he said.