AI as a job-killer vis-à-vis its opportunities remains an open question

Highlights

AI technology is currently advancing at a breakneck speed, much like the exponential growth experienced by database technology in the late twentieth century. Databases have grown to become the core infrastructure that drives enterprise-level software. Similarly, most of the new value added from software over the coming decades is expected to be driven, at least in part, by AI.

According to the Harvard Business Review, although computer automation is not causing a net loss of jobs, it does imply a substantial displacement of jobs from some occupations. Moreover, the burden of displacement falls disproportionately on workers in low-wage occupations, mainly because low-wage occupations use computers far less than high-wage occupations do.

Office and administrative support positions grew from less than 12 per cent of US employment in 1950 to a peak of about 17 per cent by 1980. By 2019, mass adoption of personal computers had returned the administrative support share to the level of the 1950s.

AI technology is currently advancing at a breakneck speed, much like the exponential growth experienced by database technology in the late twentieth century. Databases have grown to become the core infrastructure that drives enterprise-level software. Similarly, most of the new value added from software over the coming decades is expected to be driven, at least in part, by AI.

“Artificial intelligence will change the workforce,” affirms Carolyn Frantz, Microsoft’s Corporate Secretary. The bleak view of AI as a job-killer is but one side of the coin: while some 75 million jobs may disappear, around 133 million new roles that are more engaging and less repetitive are expected to be created. AI “is an opportunity for workers to focus on the parts of their jobs that may also be the most satisfying to them,” says Frantz.

Less paperwork, quicker responses, and a more efficient bureaucracy – AI has the power to drastically change public administration. But are governments ready? The technology comes with both risks and opportunities that need to be understood and duly evaluated.

AI has the potential to make health care “much more accessible and more affordable,” insists Paul Bates, director of NHS services at Babylon Health. Babylon, an app that offers symptom-checking and fast access to physicians, is providing advice to more than one million residents in central London through an AI-powered chatbot. Patients can get an accurate, safe, and convenient answer in a matter of seconds – and save health care providers money too.

One of the essential purposes of AI is to automate tasks that would previously have required human intelligence. Cutting down on the labour an organization must employ to complete a project, or the time an individual must devote to routine tasks, enables tremendous gains in efficiency. For instance, chatbots can field customer service questions, and medical assistant AI can help diagnose diseases based on patients’ symptoms.

Computational creativity is drastically changing the nature of art. Software is becoming a creative collaborator rather than a mere tool, merging the roles of computer scientist and artist. As Austrian artist Sonja Baumel puts it, “The exhibition space becomes a lab; art becomes an expression of science, and the artist is the researcher.”

AI will be deployed to augment both defensive and offensive cyber operations. Additionally, new means of cyber-attack will be invented to take advantage of the particular weaknesses of AI technology. Finally, the importance of data will be amplified by AI’s appetite for large amounts of training data, redefining how we must think about data protection. Prudent governance at the global level will be essential to ensure that this era-defining technology will bring about broadly shared safety and prosperity.

Big data and AI have a special relationship. Recent breakthroughs in AI development stem mostly from “machine learning.” Instead of dictating a static set of directions for an AI to follow, this technique trains AI by using large data sets.

For example, AI chatbots can be trained on data sets containing text records of human conversations collected from messaging apps to learn how to understand what humans say and to come up with appropriate responses. One could say that big data is the raw material that fuels AI algorithms and models.
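To make the contrast between hand-written rules and data-trained behaviour concrete, here is a minimal sketch, assuming scikit-learn and a handful of invented customer messages, intent labels, and canned replies (none of it drawn from any real chatbot): instead of coding reply rules by hand, a small intent classifier is fitted to labelled examples and then used to pick a response.

```python
# A minimal sketch, assuming scikit-learn; the messages, intent labels, and
# canned replies are invented for illustration, not real chatbot data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "where is my order", "my package has not arrived",
    "how do I reset my password", "I forgot my login password",
    "I want a refund", "please cancel my order and refund me",
]
intents = ["delivery", "delivery", "account", "account", "refund", "refund"]

# Bag-of-words features plus Naive Bayes stand in for the far larger
# conversation data sets a real chatbot would be trained on.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, intents)

replies = {
    "delivery": "Let me check the delivery status for you.",
    "account": "I can help you reset your password.",
    "refund": "I will start the refund process for you.",
}
print(replies[model.predict(["my parcel is late"])[0]])
```

The point is the workflow rather than the particular model: the behaviour comes from the labelled examples, so feeding in more data changes the chatbot without rewriting any rules.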

In a simplified model of how AI could be applied to cyber defence, log lines of recorded activity from servers and network components can be labelled as “hostile” or “non-hostile,” and an AI system can be trained using this data set to classify future observations into one of those two classes. The system can then act as an automated sentinel, singling out unusual observations from the vast background noise of normal activity.
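A rough sketch of that simplified model follows, assuming scikit-learn and entirely invented log lines and labels: a classifier is fitted on lines marked “hostile” or “non-hostile” and then used to flag suspicious new activity.

```python
# A minimal sketch, assuming scikit-learn; the log lines and labels below are
# invented for illustration and do not come from any real deployment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

log_lines = [
    "GET /index.html 200 from 10.0.0.5",
    "GET /images/logo.png 200 from 10.0.0.7",
    "POST /login 200 from 10.0.0.9",
    "GET /admin.php?cmd=;cat+/etc/passwd 404 from 203.0.113.4",
    "POST /login 401 from 203.0.113.4 repeated 500 times",
    "GET /../../etc/shadow 403 from 198.51.100.23",
]
labels = ["non-hostile", "non-hostile", "non-hostile",
          "hostile", "hostile", "hostile"]

# Character n-grams cope with the unusual tokens that attack traffic tends
# to contain; the classifier learns which patterns predict each label.
sentinel = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
sentinel.fit(log_lines, labels)

# Act as an automated sentinel over newly observed activity.
new_activity = [
    "GET /index.html 200 from 10.0.0.6",
    "GET /login.php?user=admin'-- 500 from 203.0.113.77",
]
for line, verdict in zip(new_activity, sentinel.predict(new_activity)):
    if verdict == "hostile":
        print("ALERT:", line)
```

In practice the vast background noise of normal activity means hostile examples are rare, so a real sentinel would need far larger data sets and careful handling of class imbalance and false positives.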

Global policy makers have begun turning their attention to the ramifications of widespread AI technology, and to its effect on cyber security in particular.

Governing institutions will need to continue to improve their security posture in these and many other areas, including identity fraud. Since the AI software used for attack purposes is capable of rapidly evolving, this is an ongoing requirement rather than a one-off investment.
