A Global Summit on AI Amid Malicious Abuses

Update: 2025-02-06 08:27 IST

There’s no question we are in an era of AI and data revolution. Researchers are working at a feverish pace to come up with better models that can churn through huge volumes of data and address the related risks and challenges. Artificial Intelligence, however, is not a new phenomenon; its roots go back to the mid-20th century, and it has continued to evolve since. In the wake of machine learning and deep learning, the 21st-century world is amazed at the latest innovations in AI research: virtual assistants (Amazon Alexa, Apple Siri, Google Assistant), recommendations from OTT or music applications, image and speech recognition, autonomous vehicles, and diagnostic tools.

A few years ago, generative AI models such as DALL-E and ChatGPT first showed their stunning capabilities in generating images and human-like text, respectively, sparking a scramble among tech companies the world over. These applications have been providing users with efficiency, personalisation, and experiences never seen before. Now, challenging established Western models such as OpenAI’s, Chinese company DeepSeek’s R1, built at a fraction of the cost, is topping popularity charts.

While AI is being integrated into businesses and public systems, policymakers have begun to take notice of concerns that it may worsen inequality between genders and among nations, besides engendering class and racial biases that affect recruitment and immigration processing. We have already seen deepfakes flooding the internet with fake videos, audio, text, and images; AI has made them much cheaper and far more realistic. GPT-4 can pen articles in a jiffy, and the world may be flooded with fake news, views, and discussions. People will be at a loss to distinguish the fake from the real.

That AI can also have a sinister influence on the human mind is discernible from the fact that there have been more than 3 billion search results for ‘AI girlfriend’ on Google. Companies can turn lonely, vulnerable people into addicts by shaping their behaviours and opinions. AI is also powering the defence sector: suicide drones are already in use, and giving such weapons full autonomy to act on their own could have horrific consequences if errors or bugs take effect.

As such, governments need to regulate the misuse of AI, create safety standards, and bolster fact-checking systems. Concerns regarding data reliability, fundamental rights, tech isolationism, and equitable access will be deliberated upon by a host of global leaders, researchers, and innovators gathering in Paris on February 10-11 to chart responsible, sustainable AI development.

The Artificial Intelligence (AI) Action Summit, co-chaired by India and France, is expected to pave the way for global AI standards and promote ethical governance. It may be recalled that two key summits have taken place before. The AI Safety Summit at Bletchley Park in Britain in 2023 saw 25 nations sign the Bletchley Declaration on AI safety. Later, a summit in Seoul gathered 16 top AI companies, which made voluntary commitments to develop AI transparently.

The Paris summit will build on the strengths and pledges of these previous meets. Why Paris? With AI threatening to disrupt labour incomes and employment numbers, Europe is playing catch-up as American and Chinese companies race to develop new applications and capabilities. India, too, is trailing. Prime Minister Modi will confer with other leaders on ways to provide access to independent, safe, and reliable AI for all. They will also deliberate on making AI development inclusive and environmentally friendly. It is hoped that PM Modi will return with insights into best practices by governments in building a robust talent pool through industry-academia collaboration, creating a large skilled workforce to spur the AI startup space.
