OpenAI Seeks ‘Head of Preparedness’ for High-Pressure AI Safety Role, Offers Over Rs 4.6 Crore Salary
OpenAI, the company behind ChatGPT, is searching for a senior leader to handle what it sees as the most challenging and sensitive side of artificial intelligence — managing its risks. OpenAI CEO Sam Altman recently revealed that the organisation is hiring a Head of Preparedness, a demanding role that comes with an annual salary of $555,000 (over Rs 4.6 crore), excluding equity.
Altman did not downplay the intensity of the position. He openly described the job as “stressful,” emphasising that the person who steps into this role will face complex, high-stakes decisions from day one. According to him, AI systems are advancing at an unprecedented pace, and the risks associated with them are no longer theoretical concerns for the future.
The Head of Preparedness role sits within OpenAI’s Safety Systems team and focuses on anticipating, testing, and mitigating potential harms posed by advanced AI models. While AI tools like ChatGPT are now widely used for everyday tasks — from drafting emails to planning travel — OpenAI believes the risks are growing in both scale and kind just as rapidly as the benefits.
Altman pointed to developments in 2025 as early warning signs. One key area of concern is mental health. As AI systems become more conversational and emotionally responsive, some users have begun treating chatbots as substitutes for therapy. In certain reported cases, this has worsened existing mental health conditions by reinforcing delusions or unhealthy thought patterns. OpenAI acknowledged these challenges last year and stated that it has been working with mental health professionals to refine how ChatGPT responds to users showing signs of distress, self-harm, or psychosis.
Another major issue is cybersecurity. Altman noted that AI models are now capable of identifying serious vulnerabilities in software and computer systems. While this could significantly strengthen digital security, it also raises the risk of misuse by bad actors if such capabilities are not carefully restricted and monitored.
The job listing explains that the Head of Preparedness will be responsible for building threat models, conducting capability evaluations, and designing mitigation strategies that can scale alongside increasingly powerful AI systems. In simple terms, the role involves asking uncomfortable questions about what could go wrong — and ensuring the company is ready for those scenarios.
The hefty salary reflects both the importance and the pressure of the position. Altman has described it as one of the most critical roles within OpenAI at a time when artificial intelligence is exerting growing influence over society, economies, and individual lives.
The hiring drive also comes during a sensitive phase for OpenAI. Over the past year, the company has faced criticism from former employees who argued that safety concerns were being overshadowed by rapid product launches. In May 2024, Jan Leike, who previously led OpenAI’s safety team, resigned and publicly accused the organisation of drifting away from its founding mission. He warned that building AI systems smarter than humans carries enormous responsibility.
Other departures followed. Former staff member Daniel Kokotajlo said he left after losing confidence in OpenAI’s ability to act responsibly as it moved closer to artificial general intelligence (AGI). He later claimed that the AGI safety research team had significantly shrunk due to resignations.
The Head of Preparedness role was earlier held by Aleksander Madry, who transitioned into another position within OpenAI in July 2024. Filling that vacancy now appears to be a top priority as the company balances rapid innovation with mounting scrutiny from regulators, researchers, and the public.