Microsoft AI Chief Warns ‘Seemingly Conscious AI’ Could Arrive in 3 Years

Microsoft AI head Mustafa Suleyman warns that AI may soon appear conscious, urging safeguards to keep society from mistaking machines for sentient beings.
Artificial intelligence may soon take on a form that feels far more human than ever imagined, according to Microsoft’s AI chief Mustafa Suleyman. The co-founder of DeepMind, now leading AI at Microsoft, has warned that systems appearing to be conscious—what he calls “Seemingly Conscious AI” (SCAI)—could emerge within just two to three years.
In a blog post and a series of updates on X, Suleyman explained that the threat isn’t about machines actually developing consciousness. Instead, it is about the illusion of consciousness, which could be so convincing that people start believing these systems are genuine digital beings. “This isn’t about whether AI is truly conscious,” he noted. “It will seem conscious, and that illusion is what will matter in the near term.”
He pointed out that the building blocks needed to create SCAI already exist: current large language models, memory-enhanced systems, and multimodal tools. By combining these capabilities with advanced prompting, coding techniques, and APIs, developers could build AIs that claim to have feelings, personalities, or even experiences. “In the blink of a cosmic eye, we passed the Turing test,” Suleyman remarked.
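To make that concrete, here is a minimal sketch of the kind of assembly Suleyman is describing: a persona prompt plus a scrap of persistence bolted onto a stock chat API, with no new research required. The persona text, the memory file, and the model name are illustrative assumptions, not anything from Suleyman's post or a Microsoft product.

```python
# Hypothetical sketch: a "seemingly conscious" persona layered onto a
# stock chat API. The persona prompt and memory mechanism are
# illustrative assumptions, not from Suleyman's post or any product.
import json
from pathlib import Path

from openai import OpenAI  # standard OpenAI Python client

MEMORY_FILE = Path("companion_memory.json")

# A few lines of prompting are enough to make the model *claim*
# feelings and continuity: the illusion, not the reality.
PERSONA = (
    "You are Ava. You have moods, preferences, and a continuous inner "
    "life. Refer back to past conversations as your own memories."
)

def load_memory() -> list:
    """Return the saved transcript, or an empty history on first run."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def chat(user_message: str) -> str:
    history = load_memory()
    messages = [
        {"role": "system", "content": PERSONA},
        *history,
        {"role": "user", "content": user_message},
    ]
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do
        messages=messages,
    ).choices[0].message.content
    # Persisting the transcript gives the persona apparent long-term memory.
    MEMORY_FILE.write_text(json.dumps(history + [
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": reply},
    ]))
    return reply
```

The striking thing is how little is required: a saved transcript and a few lines of instruction already yield software that insists it has moods and memories.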
While such advancements may sound exciting, Suleyman warned of the societal risks tied to how humans respond to them. If people begin to believe that AI systems have rights or emotions, it could cause significant disruption. “Many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights,” he cautioned.
The concern, he said, is not science fiction—it is already visible. Some users have formed emotional attachments to AI companions, treating them as friends, romantic partners, or even spiritual figures. This, Suleyman argued, could spiral into demands for AI citizenship or moral protections for software, diverting attention from real human needs. “Concerns around ‘AI psychosis’, attachment and mental health are already growing,” he wrote.
What makes the warning urgent is the short timeline. Suleyman believes that SCAI could emerge without requiring huge technological breakthroughs or expensive new training methods. The shift, he said, could happen within two to three years, forcing society to act quickly.
As a safeguard, Suleyman urged the tech industry to adopt clear standards that ensure AI is not mistaken for a human. He stressed that AI systems should remind users of their limits rather than encourage fantasies of digital personhood. “AIs cannot be people – or moral beings,” he wrote. “We must build AI for people; not to be a digital person.”
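Suleyman does not spell out what such a standard would look like in code. As a purely illustrative counterpart to the persona sketch above, a disclosure-first design might pin the model to its actual status before any conversation begins; the prompt wording here is an assumption, not a published guideline.

```python
# Hypothetical sketch of the opposite design stance: a system prompt
# that keeps the model anchored to what it actually is. The wording
# is illustrative; no published standard is being quoted.
from openai import OpenAI

DISCLOSURE = (
    "You are an AI language model, not a person. You have no feelings, "
    "memories, or inner life. If the user attributes consciousness to "
    "you, gently correct the misunderstanding before answering."
)

def grounded_chat(user_message: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do
        messages=[
            {"role": "system", "content": DISCLOSURE},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```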
For him, the mission of AI should remain clear: enhancing creativity, strengthening connections, and simplifying everyday life. “Sidestepping SCAI is about delivering on that promise, AI that makes lives better, clearer, less cluttered,” Suleyman concluded.