Who will govern AI? The race of nations to regulate artificial intelligence

Artificial intelligence (AI) is a very broad term: it can refer to many activities undertaken by computing machines, with or without human intervention. Our familiarity with AI technologies depends largely on where they play a role in our lives, for example in facial recognition tools, chatbots, photo editing software or self-driving cars.

The term “artificial intelligence” is also evocative of tech giants – Google, Meta, Alibaba, Baidu – and emerging players – OpenAI and Anthropic, among others. While governments come to mind less easily, they are the ones who shape the rules under which AI systems operate.

Since 2016, various tech-savvy regions and nations in Europe, Asia-Pacific and North America have implemented regulations targeting artificial intelligence. Other nations are lagging behind, such as Australia [editor’s note: where the authors of this article work], which is still studying the possibility of adopting such rules.

There are currently more than 1,600 AI public policies and strategies around the world. The European Union, China, the United States and the United Kingdom have emerged as leaders in the development and governance of AI, as highlighted by the international AI Safety Summit held in the United Kingdom in early November.

Accelerating AI regulation

AI regulatory efforts began to accelerate in April 2021, when the EU proposed an initial regulatory framework called the AI Act. These rules aim to set obligations for providers and users, based on the risks associated with different AI technologies.

While the European AI law was pending, China proposed its own AI regulations. In Chinese media, policymakers have spoken about their desire to be first movers and provide global leadership in AI development and governance.

While the EU has taken a comprehensive approach, China has regulated specific aspects of AI one after another. These range from “algorithmic recommendations” (such as those used by platforms like YouTube) to image and voice synthesis, technologies used to generate “deepfakes”, and generative AI.

Chinese AI governance will be supplemented by other regulations, still to come. This iterative process allows regulators to strengthen their bureaucratic know-how and regulatory capacity, and allows flexibility to implement new legislation in the face of emerging risks.

A warning for the United States?

Progress on Chinese AI regulations may have been a wake-up call for the United States. In April, an influential lawmaker, Chuck Schumer, said his country should not “allow China to take the top position in terms of innovation, or write the rules of the road” when it comes to AI.

On October 30, 2023, the White House issued an executive order on safe, secure, and trustworthy AI. This executive order attempts to address very broad questions of equity and civil rights, while also covering specific applications of the technology.

Alongside the dominant players, countries with growing IT sectors, such as Japan, Taiwan, Brazil, Italy, Sri Lanka and India, have also sought to implement defensive strategies to mitigate potential risks associated with widespread AI integration.

These global AI regulations reflect a race against foreign influence. Geopolitically, the United States competes with China, whether economically or militarily. The EU emphasizes establishing its own digital sovereignty and strives to be independent from the United States.

At the national level, these regulations can be seen as favoring large incumbent technology companies over emerging competitors. This is because it is often costly to comply with legislation, requiring resources that small businesses may lack.

Alphabet, Meta and Tesla have supported calls for AI regulation. At the same time, Alphabet’s Google, like Amazon, has invested billions in Anthropic, a competitor of OpenAI; and xAI, founded by Elon Musk, the boss of Tesla, has just launched its first product, a chatbot called Grok.

A shared vision

The European AI law, China’s AI regulations and the White House executive order show that the countries involved share common interests. Together, they set the stage for the “Bletchley Declaration”, issued on November 1, in which 28 countries, including the United States, the United Kingdom, China, Australia and several EU members [editor’s note: including France, as well as the European Union itself], committed to cooperating on AI safety.

Countries or regions consider that AI contributes to their economic development, their national security, and their international leadership. Despite the recognized risks, all jurisdictions are working to support AI development and innovation.

By 2026, global spending on AI-centric systems could exceed US$300 billion, according to one estimate. By 2032, according to a Bloomberg report, the generative AI market alone could be worth US$1.3 trillion.

Such figures, along with the supposed benefits of AI for technology companies, governments and consulting firms, tend to dominate media coverage of AI. Critical voices are often pushed aside.

Divergent interests

Beyond economic promises, countries are also turning to AI systems for defense, cybersecurity and military applications.

At the international AI Safety Summit in the UK, these tensions were evident. While China endorsed the Bletchley Declaration on the first day of the summit, it was excluded from public events on the second day.

One point of contention is China’s social credit system, which operates with little transparency. The European AI Act considers that social scoring systems of this type create an unacceptable risk.

The United States perceives China’s investments in AI as a threat to its national and economic security, particularly in terms of cyberattacks and disinformation campaigns. These tensions are of course likely to hamper global collaboration on binding AI regulations.

The limits of current rules

Existing AI regulations also have significant limitations. For example, there is no clear and common definition across jurisdictions of different types of AI technologies.

Current legal definitions of AI tend to be very broad, raising concerns about their applicability in practice, since the regulations then cover a wide range of systems that pose different risks and may merit different treatment.

Likewise, many regulations do not clearly define the notions of risk, safety, transparency, fairness and non-discrimination, which makes it difficult to ensure precise legal compliance.

We are also seeing local jurisdictions introduce their own regulations within national frameworks, to address particular concerns and to balance AI regulation against economic development.

For example, California has introduced two bills aimed at regulating AI in employment, while Shanghai has proposed a system for classifying, managing and supervising AI development at the municipal level.

However, narrowly defining AI technologies, as China has done, presents the risk that companies will find ways to circumvent the rules.

Moving forward

Sets of “best practices” for AI governance are emerging from local and national jurisdictions and transnational organizations, overseen by groups such as the UN’s AI advisory body and the United States’ National Institute of Standards and Technology. The forms of governance that exist in the United Kingdom, the United States, Europe and, to a lesser extent, China, are likely to serve as a framework for global governance.

Global collaboration on AI governance will be underpinned by ethical consensus and, more importantly, national and geopolitical interests.

Author Bios: Fan Yang is a Research Fellow at Melbourne Law School, the University of Melbourne, and the ARC Centre of Excellence for Automated Decision-Making and Society; Ausma Bernot is a Postdoctoral Research Fellow at the Australian Graduate School of Policing and Security, Charles Sturt University.