AI and militancy make a potent combination
It has often been debated across media platforms how militants and criminals manage to stay a step ahead of the law enforcers, even when the latter have technology and state backing on their side. Mounting counter-offensives on the establishment, the desperadoes have, many a time, drawn first blood, surprising the very authorities entrusted to protect and defend the country. Recent Washington-based agency reports confirm what many had feared but few were willing to say in public.
It reads: “As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren’t sure exactly what to do with it. For extremist organisations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.” Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English. “Some intelligence agencies worry that AI will contribute (to) recruiting,” the user continued.
“So, make their nightmares into reality.” IS, which had seized territory in Iraq and Syria years ago but is now a decentralised alliance of militant groups that share a violent ideology, realised long back that social media could be a potent tool for recruitment and disinformation, so it’s not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups — or even an individual bad actor with a web connection — AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence. “For any adversary, AI really makes it much easier to do things,” said John Laliberte, a former vulnerability researcher at the National Security Agency in America. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”
A host of new possibilities has emerged with programmes such as ChatGPT, even as generative AI tools are being used effortlessly to spread fake images and one-sided propaganda to influence impressionable and willing followers around the world. Militant outfits like IS and Al-Qaida are already holding workshops on the technology, if global intelligence agencies are to be believed, while social media platforms like Twitter have long been used by them to spread their message and keep followers updated. Hackers are also reported to be using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks.
They can also use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. “Our policies and capabilities must keep pace with the threats of tomorrow,” a security expert rightly opines. For now, the desperadoes are keeping the law enforcers wide awake, bracing for newer challenges day in and day out, not only on the ground but in cyberspace as well.