OpenAI is looking to hire a new executive responsible for researching emerging risks related to AI in areas ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including “the potential impact of models on mental health,” and that “models that are very good at computer security are starting to uncover critical vulnerabilities.”
“If you want to help the world understand how to make all systems more secure, ideally enabling cybersecurity defenders with cutting-edge capabilities while preventing attackers from using them to cause harm, please consider applying to similarly unlock biological capabilities and gain confidence in the safety of running systems that can self-improve,” Altman wrote.
OpenAI’s job listing for the readiness officer role describes the position as responsible for implementing the company’s readiness framework, “a framework that describes OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of significant harm.”
The company first announced the creation of a preparedness team in 2023, saying it would be responsible for investigating potential “catastrophic risks,” whether immediate, like phishing attacks, or more speculative, like nuclear threats.
Less than a year later, OpenAI reassigned readiness director Aleksander Madry to a job focused on AI inference. Other safety executives at OpenAI have also left the company or taken on new roles outside of readiness and safety.
The company also recently updated its readiness framework and said it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.
As Altman alluded to in his post, generative AI chatbots are facing increased scrutiny over their impact on mental health. A recent lawsuit alleges that OpenAI’s ChatGPT fostered paranoia in users, deepened social isolation, and even led some users to suicide. (The company said it continues to work on improving ChatGPT’s ability to recognize signs of mental distress and connect users with real-world support.)
