Anthropic AI Safety Researcher Mrinank Sharma Quits, Chooses Poetry Over Tech Amid Concerns About World’s Future

An Anthropic AI safety researcher resigns, citing global crises and ethical tensions, choosing poetry and reflection over continued tech development.
In a surprising move that has sparked conversations across the technology community, AI safety researcher Mrinank Sharma has stepped away from his role at Anthropic, one of the world’s most closely watched artificial intelligence companies. His departure, effective February 9, 2026, was accompanied by a deeply personal and reflective message that suggested both philosophical unease and ethical conflict about the direction of modern AI work.
Anthropic, known for its Claude AI models and strong public messaging around responsible AI development, has positioned itself as a leader in building safer systems. Yet Sharma’s exit hints that working on safety from inside the industry may be more complicated than it appears.
Educated at some of the world’s most prestigious institutions, Sharma earned a DPhil in Machine Learning from the University of Oxford and previously completed a Master of Engineering in Machine Learning at the University of Cambridge. With such credentials, his career path seemed firmly rooted in advanced AI research. Instead, he is now turning toward writing — and even considering a formal degree in poetry.
In a post that quickly circulated online, Sharma spoke less like a technologist and more like a philosopher confronting an uncertain era. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” he wrote. “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
The note suggested that his concerns stretch beyond artificial intelligence alone. Rather, he sees today’s technological acceleration as part of a broader global instability — one that demands deeper reflection and human understanding, not just technical solutions.
He also hinted at tensions between ideals and reality inside organisations working on AI. “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
Instead of continuing to refine systems or improve model behaviour, Sharma says he feels drawn toward creative expression. He described being “called to writing that addresses and engages fully with the place we find ourselves,” hoping to place “poetic truth alongside scientific truth as equally valid ways of knowing, both of which I believe have something essential to contribute when developing new technology.”
As he put it plainly, he plans “to explore a poetry degree and devote myself to the practice of courageous speech”.
Sharma ended his farewell by sharing lines from William Stafford’s poem “The Way It Is”, a quiet meditation on holding onto one’s moral thread despite life’s turbulence — a fitting metaphor for someone choosing conscience over career momentum.
His exit echoes earlier moments in tech history when researchers, including Google’s Timnit Gebru, publicly questioned whether companies live up to their ethical commitments. Together, such stories underline a growing debate: as AI grows more powerful, can the people building it still align innovation with humanity?
For Sharma, the answer, at least for now, lies not in code — but in words.








