Time to nip tech abuse in the bud


Malicious use and abuse of new-age technologies to steal information and identities for pecuniary gain or other purposes has been prevalent for years. But several incidents have come to light of late in which individuals' faces are morphed to derive satisfaction from their embarrassment and torment. Innocents, helpless against invisible criminals, suffer in stoic silence and are at times even subjected to blackmail. The Indian government has largely ignored the problem and failed to curb the consequences of this criminal intent.

But it has taken a celebrity's torment to jolt the government out of its slumber and indifference and make it realise the gravity of the situation. It issued stern advisories to social media platforms, including Facebook, Instagram and YouTube. It reiterated that Section 66D of the IT Act prescribes punishment for cheating by personation using computer resources, with imprisonment of up to three years and a fine of up to Rs 1 lakh. It also cited Rule 3(2)(b) of the Information Technology Rules, which requires social media platforms to pull down misleading and malicious content in the nature of impersonation, including artificially morphed images of an individual, within 24 hours of receipt of a complaint.

We have come far from the days when technology merely caused disinformation and hoaxes. It now feeds hatred, conflicts and wars. Deepfakes – the manipulation of audio, video and images with the help of AI – have taken social media abuse to a new level. These impersonating technologies are said to be capable of fooling even biometric and facial recognition systems.

There are far worse videos than that of Rashmika Mandanna circulating on the Web, made easy by the widespread abuse of deepfake technology. The government must not be content with mere advisories, as a similar advisory issued in February this year failed to rein in such tech misuse. As Rashmika urged, "We need to address this as a community and with urgency before more of us are affected by such identity theft." More than enacting a stronger legal and regulatory framework, the government must set up and empower enforcement wings to spring into action as and when deepfake abuse is reported. This assumes greater significance in view of the upcoming Global Partnership on AI (GPAI) Summit this December, which is to be chaired by India, a country that accounts for a fifth of global internet users.

India's announcement came on the heels of US President Joe Biden's executive order for more robust standards and regulations to ensure AI safety and security, and was followed by around two dozen countries resolving to seek a global alliance to combat AI-related risks such as disinformation. As the societal impact of AI keeps evolving, developers cannot fully foresee how their products might be misused. Hence, a multi-stakeholder approach to a comprehensive and enforceable digital policy is a must. Innovation has to be nurtured and fostered to develop new-era digital forensics that keep pace with evolving forgery techniques and bolster detection applications.

Governments, IT companies and concerned techies need to join hands to ensure the availability of easy-to-use solutions that let the public both zero in on deepfakes and report them to governments and media platforms. It has become the bounden duty of all to counteract misinformation. AI itself, along with subsets such as Machine Learning (ML), should be used to scour huge volumes of data to detect and defeat deepfakes, followed by punitive action.
