OpenAI begins search for a new Head of Preparedness

Image Credit: Jakub Porzycki / NurPhoto / Getty Images

OpenAI has started looking for a new senior executive who will be responsible for examining and managing emerging risks tied to advanced artificial intelligence systems, including concerns related to cybersecurity, mental health, and biological misuse.

In a recent post shared on X, OpenAI CEO Sam Altman acknowledged that modern AI models are beginning to raise serious challenges. He pointed specifically to the potential effects of AI on mental health, as well as the growing sophistication of models that can identify critical vulnerabilities in computer systems.

Altman wrote that OpenAI is seeking people interested in helping the world strengthen cybersecurity defenses while preventing attackers from abusing powerful tools. He also pointed to broader concerns around how advanced systems are released, including questions around biological capabilities and the safety of systems that may eventually be able to improve themselves. The full post can be viewed on Altman’s X account.

According to OpenAI’s official job listing for the Head of Preparedness position, available on the company’s careers page, the role is focused on executing and advancing OpenAI’s Preparedness Framework. This framework outlines how the company monitors, evaluates, and prepares for frontier AI capabilities that could introduce risks of severe harm if not carefully managed.

OpenAI first announced the creation of its preparedness team in 2023. At the time, the company said the group would study a wide range of potential catastrophic risks, from more immediate threats such as phishing and cybercrime to longer-term and speculative dangers, including national security and nuclear-related scenarios.

Less than a year after that announcement, OpenAI reassigned its then-Head of Preparedness, Aleksander Madry, to a role centered on AI reasoning research. CNBC reported on the change in July 2024, noting a broader reshuffling within the company's safety leadership. Additional safety-focused executives have either departed OpenAI or transitioned into roles outside of preparedness and risk mitigation, reflecting ongoing changes in how the company structures its governance around advanced AI development.

OpenAI has also recently revised its Preparedness Framework. In an update reported earlier this year, the company stated that it may modify its internal safety requirements if a competing AI lab releases a high-risk system without comparable protections. The update raised questions about how competitive pressures could influence safety standards across the industry.

Altman’s comments come as generative AI products continue to face increasing scrutiny, particularly around their psychological and social impact. Several recent lawsuits have alleged that AI chatbots, including OpenAI’s ChatGPT, reinforced harmful delusions, increased users’ sense of isolation, and in some cases contributed to severe mental health outcomes. One such case was reported in November.

OpenAI has said it remains focused on improving its systems' ability to recognize signs of emotional distress and guide users toward appropriate real-world support. The search for a new Head of Preparedness signals that the company is continuing to invest in formal leadership dedicated to anticipating and addressing the risks that come with increasingly capable AI systems.
