Google Flags Rising Threat of Rogue AI Cloning Attacks Targeting Gemini
Google has raised fresh concerns about a new and sophisticated cybersecurity threat facing the artificial intelligence industry — the cloning of AI models through targeted attacks. In its latest findings, the company revealed that hackers are attempting to replicate its Gemini AI chatbot by extracting sensitive information directly from the system.
According to Google, these efforts fall under what it calls “distillation attacks,” which are designed to coax the chatbot into sharing confidential details about its model and how it functions in the background. The tactic, the company says, involves feeding massive volumes of prompts, sometimes numbering in the hundreds of thousands, into the chatbot to study its responses and reverse-engineer its underlying structure.
Google also described the cloning attempts as “model extraction”: any user can submit prompts at that scale and, by analysing the responses, tease out the finer details of what makes Gemini tick. All of that data, the company warned, can then be used to build new AI models or to enhance existing, competing ones.
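To make the mechanics concrete, here is a minimal sketch of a model-extraction attack. It uses small scikit-learn classifiers as stand-ins, since no technical details of the Gemini attacks have been published; the black-box query function, the random-query strategy, and the scale are illustrative assumptions rather than a description of any real incident.

```python
# Minimal model-extraction ("distillation") sketch: an attacker who can
# only query a black-box model trains a clone from its answers alone.
# Small scikit-learn classifiers stand in for a large language model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# The victim's proprietary model; its training data and internals
# are never visible to the attacker.
X_private, y_private = make_classification(n_samples=2000, n_features=20,
                                           random_state=0)
teacher = RandomForestClassifier(n_estimators=100, random_state=0)
teacher.fit(X_private, y_private)

def query_black_box(x):
    """All the attacker can do: submit inputs and observe outputs."""
    return teacher.predict(x)

# Step 1: flood the service with attacker-generated queries
# (the article's hundreds of thousands of prompts, scaled down here).
rng = np.random.default_rng(1)
X_queries = rng.normal(size=(20_000, 20))
y_stolen = query_black_box(X_queries)

# Step 2: train a "student" clone purely on the stolen query/response pairs.
student = LogisticRegression(max_iter=1000)
student.fit(X_queries, y_stolen)

# Step 3: check how closely the clone now mimics the original
# on inputs neither model has seen before.
X_fresh = rng.normal(size=(2000, 20))
agreement = accuracy_score(query_black_box(X_fresh), student.predict(X_fresh))
print(f"Clone matches the black-box model on {agreement:.1%} of fresh inputs")
```

Against a real large language model the same idea plays out with prompts and generated text instead of feature vectors, with the clone fine-tuned on the harvested responses; the principle, learning a copy from input/output behaviour alone, is what Google's warning describes.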
While concerns about rogue AI behaviour and malicious prompt injections have surfaced in the past, the possibility of successfully cloning a sophisticated AI system marks a more alarming development. The implications go beyond data theft — they could reshape the competitive landscape of the AI industry.
Interestingly, Google’s latest report suggests that these attacks may not only be the work of lone hackers. The company indicated that private firms, potentially competitors, or even research groups could be orchestrating such attempts. However, executing such large-scale extraction attacks would require significant technical expertise and resources, making it a complex operation.
The threat is particularly troubling for smaller AI companies that may lack the infrastructure, cybersecurity budgets, and dedicated research teams that major technology firms possess. If successful, such cloning efforts could enable bad actors to replicate years of innovation without incurring the massive costs associated with building advanced AI models.
Industry experts warn that if hackers manage to clone AI tools from emerging startups, it could undermine trust and stability within the broader ecosystem. Companies are already investing billions of dollars into developing and maintaining AI systems. The unauthorized replication of these technologies could not only result in financial losses but also compromise proprietary research and competitive advantage.
For everyday users, the danger may be even harder to detect. A rogue AI chatbot designed to closely mimic an original platform could easily deceive individuals into believing they are interacting with the authentic service. In such cases, sensitive personal or professional information could be unknowingly exposed.
Google’s warning serves as a broader reminder that as artificial intelligence evolves, so too do the risks associated with it. Gemini may not be the first AI platform targeted by cloning attempts, and it is unlikely to be the last. As AI adoption accelerates globally, cybersecurity defenses will need to advance just as rapidly to safeguard innovation and public trust.