The malicious use and abuse of new-age technologies to steal information and identities for pecuniary gain or other purposes has been prevalent for years. But several incidents have come to light of late in which individuals' faces are morphed so that perpetrators can derive satisfaction from their embarrassment and torment. Innocents, helpless against invisible criminals, suffer in stoic silence and are at times even subjected to blackmail. The Indian government has largely ignored the problem and failed to check the consequences of this criminal intent.
But it has taken a celebrity’s torment to jolt the government out of its slumber and indifference and make it realise the gravity of the situation. It has issued stern advisories to social media platforms, including Facebook, Instagram and YouTube. It reiterated that Section 66D of the IT Act prescribes punishment for cheating by personation using computer resources, with imprisonment of up to three years and a fine of up to Rs 1 lakh. It also cited Rule 3(2)(b) of the Information Technology Rules, which requires social media platforms to pull down misleading and malicious content in the nature of impersonation, including artificially morphed images of an individual, within 24 hours of receiving a complaint.
We have come far from the days when technology merely spread disinformation and hoaxes. It now feeds hatred, conflicts and wars. Deepfakes – audio, video and images manipulated with the help of AI – have taken social media abuse to a new level. These impersonating technologies are said to be capable of fooling even biometrics and facial recognition systems.
There are far worse videos than the one of Rashmika Mandanna circulating on the Web, made easy by the widespread abuse of deepfake technology. The government must not be content with mere advisories; a similar advisory issued in February this year failed to rein in such misuse of technology. As Rashmika urged, “We need to address this as a community and with urgency before more of us are affected by such identity theft.” More than enacting a stronger legal and regulatory framework, the government must set up and empower enforcement wings that can spring into action as and when deepfake abuse is reported. This assumes greater significance in view of the upcoming Global Partnership on AI (GPAI) Summit this December, which will be chaired by India, a country that accounts for a fifth of global internet users.
India’s announcement came on the heels of US President Joe Biden’s executive order seeking more robust standards and regulations to ensure AI safety and security, and was followed by around two dozen countries resolving to seek a global alliance to combat AI-related risks such as disinformation. As the societal impact of AI keeps evolving, developers cannot fully foresee how their products might be misused. Hence, a multi-stakeholder approach to a comprehensive and enforceable digital policy is a must. Innovation has to be nurtured and fostered so that new-era digital forensics can keep pace with evolving forgery techniques and strengthen detection applications.
Governments, IT companies and concerned techies need to join hands to ensure that easy-to-use solutions are available to the public, both to zero in on deepfakes and to report them to governments and media platforms. It has become the bounden duty of all to counteract misinformation. AI itself, along with subsets such as Machine Learning (ML), should be used to scour huge volumes of data to detect and defeat deepfakes, followed by punitive action.
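For readers curious about what such ML-based detection can look like in practice, here is a minimal, purely illustrative sketch: a pretrained image classifier fine-tuned to label face images as real or manipulated. The folder names ("data/train", "data/val") and the choice of a ResNet-18 backbone are assumptions made for illustration only, not a reference to any specific tool used by the government or the platforms mentioned above.

```python
# Illustrative sketch: fine-tune a pretrained CNN as a binary
# real-vs-deepfake image classifier. The dataset layout is assumed to be:
#   data/train/real/*.jpg   data/train/fake/*.jpg
#   data/val/real/*.jpg     data/val/fake/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing so the pretrained backbone sees
# inputs in the distribution it was trained on.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("data/train", transform=preprocess)
val_ds = datasets.ImageFolder("data/val", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

# ResNet-18 backbone with its final layer replaced by a 2-class head
# (class 0 = "fake", class 1 = "real", per ImageFolder's alphabetical order).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(3):  # a few epochs suffice for a demonstration
    model.train()
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Simple held-out accuracy check after each epoch.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_dl:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch + 1}: val accuracy {correct / total:.3f}")
```

Real-world detection systems are far more elaborate, combining face forensics, audio analysis and provenance metadata, but the sketch conveys the basic idea of training a model on labelled examples to flag manipulated content at scale.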