“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” OpenAI CEO Sam Altman wrote on X in his announcement of the “head of preparedness” job at OpenAI on Saturday.
In exchange for $555,000 per year, according to OpenAI’s job ad, the head of preparedness is supposed to “expand, strengthen, and guide” the existing preparedness program within OpenAI’s safety systems department. This side of OpenAI builds the safeguards that, in theory, make OpenAI’s models “behave as intended in real-world settings.”
But hey, wait a minute, are they saying OpenAI’s models behave as intended in real-world settings now? In 2025, ChatGPT continued to hallucinate in legal filings, attracted hundreds of FTC complaints, including complaints that it was triggering mental health crises in users, and evidently turned pictures of clothed women into bikini deepfakes. And OpenAI had to revoke Sora’s ability to make videos of figures like Martin Luther King, Jr. because users were abusing the privilege to make revered historical figures say basically anything.
When cases related to problems with OpenAI products reach the courts—as with the wrongful death suit filed by the family of Adam Raine, who, it is alleged, received advice and encouragement from ChatGPT that led to his death—there’s a legal argument to be made that users were abusing OpenAI’s products. In November, a filing from OpenAI’s lawyers cited Raine’s alleged rule violations as a potential cause of his death.
Whether you buy the abuse argument or not, it’s clearly a big part of the way OpenAI makes sense of what its products are doing in society. Altman acknowledges in his X post about the head of preparedness job that the company’s models can impact people’s mental health, and can find security vulnerabilities. We are, he says, “entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”
After all, if the goal were purely to not ever cause any harm, the quickest way to make sure of that would be to just remove ChatGPT and Sora from the market altogether.
The head of preparedness at OpenAI, then, is someone who will thread this needle and “[o]wn OpenAI’s preparedness strategy end-to-end,” figuring out how to evaluate the models for unwanted abilities and designing ways to mitigate them. The ad says this person will have to “evolve the preparedness framework as new risks, capabilities, or external expectations emerge.” This can only mean figuring out new potential ways OpenAI products might be able to harm people or society, and coming up with the rubric for allowing the products to exist, while demonstrating, presumably, that the risks have been dulled enough that OpenAI isn’t legally liable for the seemingly inevitable future “downsides.”
It would be bad enough having to do all this for a company that’s treading water, but OpenAI has to take drastic steps to bring in revenue and release cutting-edge products in a hurry. In an interview last month, Altman strongly implied that he would take the company’s revenue from where it is now—apparently somewhere north of $13 billion per year—to $100 billion in less than two years. Altman said his company’s “consumer device business will be a significant and important thing,” and that “AI that can automate science will create huge value.”
So if you would like to oversee “mitigation design” across new versions of OpenAI’s existing products, along with new physical gadgets, and platforms that don’t exist yet, but are supposed to do things like “automate science,” all while the CEO is breathing down your neck about needing to make approximately the same amount of annual revenue as Walt Disney the year after next, enjoy being the head of preparedness at OpenAI. Try not to fuck up the entire world at your new job.







