Artificial Intelligence Will Entrench Global Inequality
By Robert Muggah, FP, 29/5/23

The debate about regulating AI urgently needs input from the global south.

The artificial intelligence race is gathering pace, and the stakes could not be higher. Major corporate players—including Alibaba, DeepMind, Google, IBM, Microsoft, OpenAI, and SAP—are leveraging huge computational power to push the boundaries of AI and popularize new AI tools such as GPT-4 and Bard. Hundreds of other private and non-profit players are rolling out apps and plugins, staking their claims in this fast-moving frontier market that some enthusiasts predict will upend the way we work, play, do business, create wealth, and govern.

Amid all the enthusiasm, there is a mounting sense of dread. A growing number of tech titans and computer scientists have expressed deep anxiety about the existential risks of surrendering decision-making to complex algorithms and, in the not-so-distant future, super-intelligent machines that may abruptly find little use for humans. A 2022 survey found that roughly half of all responding AI experts believed there was at least a one-in-10 chance these technologies could doom us all. Whatever the verdict, as recent U.S. congressional testimony from OpenAI CEO Sam Altman reveals, AI represents an unprecedented shift in the social contract that will fundamentally redefine relations between people, institutions, and nations.

Adding to these ominous existential worries is the already lopsided distribution of power and wealth, ensuring that the winnings of future upheaval will accrue disproportionately to the 1 percent. But if AI menaces white-collar jobs and empowers undemocratic interests in privileged countries, what to say about the fallout in those parts of the world where billions toil in the informal sector without safety nets, making them even easier marks for power elites and their digital tools? However the AI disruption plays out worldwide, there is scant hope that, without mitigation, safeguards, and compensation such as universal basic income, the world will be a more equitable place to live, work, or vote.

Fears about the existential risks posed by machine intelligence are hardly new. In his 1872 novel Erewhon, Samuel Butler prophesied that sentient machines would eventually replace humans. In 1942, master science fiction writer Isaac Asimov famously laid out his three laws for robotics: Robots may not injure humans, must obey orders from humans as long as this does not violate the first law, and must protect humans’ existence as long as this does not violate the first two laws. A few years later, in 1950, Alan Turing imagined machines that could converse with humans, while in 1965 Irving John Good predicted a machine-driven “intelligence explosion.” The world had to wait another half century for the promised AI revolution to arrive.

And yet for all the historical premonitions, the current furor over AI is as unprecedented as it is uniquely unsettling. For one, the latest crop of highly advanced large language models and the computational power driving them are no longer confined to the laboratory but are already being used by hundreds of millions of people. Another cause for concern is that some of the most outspoken AI advocates are now convinced that its unregulated use poses a fatal risk to humanity in the near future. What was once floated as a distant theoretical threat is now a clear and present danger—so much so that technologists such as Eliezer Yudkowsky, Geoffrey Hinton, and Max Tegmark and more than 31,000 other people have called for a pause in training the most powerful forms of AI, which they see as among the “most profound risks to society and humanity” today.

Well before the latest outbreak of anxiety, governments, businesses, and universities across North America and Western Europe were debating the real and potential harms associated with AI. Their attention converged on at least four possible threats. The first is the existential threat posed by super-intelligent machines that may quickly dispose of humans. The second is widespread and accelerating unemployment, with Goldman Sachs recently estimating that as many as 300 million jobs are at risk of being replaced by AI. The third major concern relates to AI's disturbing ability to imitate and spread text, voice, and video, which could supercharge misinformation and disinformation. A fourth fear is that AI could be used to build doomsday technologies—such as biological or cyber viruses—with devastating consequences.

We are not yet at the mercy of thinking machines. As awareness of AI risks has grown, so too have standards and guidance to mitigate them. But for the most part, these are voluntary, including hundreds of protocols and principles advocating for responsible design and self-restraint. Common priorities include aligning AI with the best interests of humans and promoting safety in the design and deployment of algorithms. Other objectives include transparency of the algorithms themselves, accountability in relation to their development and application, fairness and equity in their use, privacy and data protection, human oversight and control, and compliance with regulations. The focus on voluntary self-policing is starting to change, with tech companies themselves advocating for the establishment of AI agencies and the enforcement of more robust rules.

Yet the push to create safeguards is far from universal. To date, most of the debate over AI and possible strategies to mitigate unintended harms is concentrated in the West. Most of the government and industry standards now on the table were issued in the European Union, the United States, or member states of the Organization for Economic Cooperation and Development, a club of 38 advanced economies. The EU, for example, is poised to release a new AI Act focusing on applications and systems that pose unacceptable or high risk. This Western focus is hardly surprising given the density of companies, investors, and research institutes working on AI from Silicon Valley to Tel Aviv, Israel.

Even so, it is worth underlining that the needs and concerns of regions such as Latin America, Sub-Saharan Africa, South Asia, and Southeast Asia—where AI is also rapidly expanding and will generate monumental effects—are barely reflected in the AI debate. Put another way, the vast majority of discussion about the consequences and regulation of AI is occurring among countries with a combined population of just 1.3 billion people. Far less attention and far fewer resources are dedicated to addressing these same concerns in the poor and emerging countries that account for the remaining 6.7 billion people.

This is a troubling omission, given that many of the darker consequences of poorly regulated AI are particularly resonant in the so-called global south. Undoubtedly, some anxieties are global, including those over super-intelligence, job losses, and accelerating fake news. Yet the darker portents of AI represent anything but an equal opportunity affliction. Unmitigated AI could deepen social, economic, and digital cleavages between and within countries. The unregulated spread of AI could also concentrate corporate power even more, and deepening techno-authoritarianism could accelerate the corrosion of already damaged democratic institutions.

While these AI-induced harms clearly represent universal threats, their impacts not only will fall unevenly across an already badly divided globe but could also prove particularly paralyzing in lower- and middle-income countries with precarious regulatory guardrails and weak institutions. For one, algorithms and datasets generated in wealthy countries and subsequently applied in developing nations could reproduce and reinforce bias and discrimination, since they are rarely built with local contexts or diverse populations in mind. Moreover, low-wage and low-skill workers already suffering from poor pay and lax labor protections are particularly exposed to the job-killing effects of AI. There are, of course, many potential benefits to the spread of AI in the global south, but these may not be harnessed without adequate AI regulation, ethical governance, and better public awareness of the need to limit AI's damaging effects.

Given the blistering pace of AI advances, the time for building regulatory guardrails and other backstops is now. AI-powered technologies are rapidly being adopted in some of the world's most unequal countries in Africa (including the Central African Republic, Mozambique, and South Africa), the Middle East (including Oman, Qatar, and Saudi Arabia), and Latin America (including Brazil, Chile, and Mexico). Yet many of the basic laws and principles needed to govern safe AI have yet to be fully developed, much less negotiated and publicly debated. Likewise, large U.S., European, and Chinese technology vendors are rapidly introducing powerful AI technologies in many developing countries, securing dominant market share in surveillance and other AI applications, and wiping out the local competition. The use of AI technologies to reinforce illiberal and autocratic governance is already on full display in places such as Cambodia, China, Egypt, Nicaragua, Russia, and Venezuela.

Unfettered AI development is good news for autocrats and power elites who are already set up to reap the spoils of government and monopolize public goods. Unless effective regulations, equitable compensatory mechanisms, social safeguards, and political firewalls can be built, AI is likely to deliver greater uncertainty and collateral damage to the globe's digitally challenged underclass, for whom next-generation technology will be someone else's miracle.

