

Regulate or Innovate? Governing AI amid the Race for AI Sovereignty


At a Glance

  • Global AI governance has rapidly shifted from collaborative oversight to competitive development.
  • Linking AI with national sovereignty creates powerful resistance to meaningful regulation, while widening gaps in technical expertise leave policymakers unable to engage effectively.
  • Corporate influence in governance processes threatens to replace public accountability with private rule-making.
  • AI benefits remain concentrated in the Global North, while disruptions disproportionately affect the South, undermining inclusive governance efforts.
  • The path forward requires balancing innovation with responsibility through democratic coalitions, market incentives, risk-based frameworks, and cross-regional solidarity.

The New Sovereignty Battleground

In just fifteen months, from November 2023 to February 2025, the global approach to AI governance underwent a dramatic reversal. The Bletchley Declaration, signed by 28 nations, including France, warned of “serious, even catastrophic harm” from advanced AI systems. Yet at the Paris AI Action Summit in February 2025, French President Emmanuel Macron declared: “If we regulate before we innovate, we won’t have any innovation of our own.”

This whiplash pivot from collaborative oversight to competitive development reflects a fundamental reframing of AI as a sovereign imperative. Governments now treat AI capabilities as essential to national power, with safety concerns increasingly dismissed as impediments to technological competitiveness.

This pivot reflects how rapidly AI has become entangled with perceptions of national power. The transformation began with ChatGPT’s consumer release in late 2022, which sparked a global AI race. Since then, governments have increasingly poured resources into accelerating AI development rather than restraining it, channeling public funds and regulatory muscle toward national capability. The message from capitals worldwide has become clear: innovate first; regulate later, if at all.

As governments pursue innovations in AI, it is important to understand how states, institutions, and industry will combine to govern the technology, and how failure to do so could have unintended consequences. Policymakers designing responsible governance frameworks for AI face three major challenges: the linkage of AI with sovereign ambitions; an expertise gap in understanding the technical complexities of AI; and, relatedly, the outsize role private industry plays in regulating AI. This triple challenge forms the central tension in current AI policy debates.

Stakeholders in AI development and deployment will need to work together to ensure that the competition for sovereignty does not drive a race to the bottom, where ethics are jettisoned in the pursuit of power. This analysis maps the challenges of AI governance in this new landscape. Rather than advocating a single approach, it identifies critical leverage points where targeted interventions can create more democratic, equitable outcomes while preserving innovation.

The Governance Deficit

Of the proliferating AI laws and regulations at the national or supranational level, many focus on developing AI, but far fewer focus on governing it. Most jurisdictions begin with national strategies or ethics policies rather than binding legislation. The most prominent examples include the EU’s AI Act and China’s New Generation Artificial Intelligence Development Plan. In the United States, the Trump administration’s January 2025 executive order “Removing Barriers to American Leadership in Artificial Intelligence” replaced former President Joe Biden’s executive order on safe and trustworthy AI development. Both the Chinese plan and the Trump order emphasize investment and innovation over safeguards. Only the EU’s AI Act takes a comprehensive governance approach, imposing transparency and due diligence obligations on developers.

Challenge 1: Technology as National Identity

At the domestic level, the first challenge of designing governance frameworks is the equivalence, often implicit and increasingly explicit, that governments make between sovereignty and technological advancement. In a 2024 report, for instance, the French government explicitly linked AI to national sovereignty: “Our lag in artificial intelligence undermines our sovereignty. Weak control of technology implies a one-way dependence on other countries.” India’s strategy declares: “We are determined that we must have our own sovereign AI,” though researchers question India’s ability to achieve this. This framing transforms AI from a technology into a national imperative. When capabilities become tied to sovereignty, regulation becomes secondary to innovation. To gain traction, governance frameworks must be positioned as enhancing competitiveness rather than constraining it.

Challenge 2: Knowledge Asymmetry

The second challenge is that regulating AI requires demystifying AI systems. AI experts themselves disagree on how to conceive of AI harms, including on the upper bounds of AI capability and the kinds of danger AI poses. Policymakers often lack the technical understanding needed for effective regulation of new technologies, and the constant churn of political cycles, industry leadership shuffles, and disruptive developments means they also usually lack the time to learn the problem set. This expertise gap, always present between regulators and industry, is particularly acute with AI. The field encompasses everything from simple algorithms to complex neural networks, creating confusion about what actually constitutes “AI.” Some experts warn of existential threats, while others focus on immediate harms like algorithmic discrimination. These divisions within the research community leave policymakers without clear guidance on appropriate guardrails.

The term “AI” covers everything from routine ATM calculations to autocorrect to automated image generation to chatbots, prompting some computer scientists to dismiss the hype over AI products as little more than “snake oil.” While some regulators may seek to close the expertise gap and set guardrails for responsible AI development and use, divisions within the AI research community make it difficult for them to discern the subtlety and substance of differing expert views.

Challenge 3: Corporate Foxes in the Technological Hen House

Tech giants possess computational resources that dwarf those available to academics and many governments. Microsoft, Google, OpenAI, and Meta increasingly shape governance discussions through this technological advantage. The EU’s AI Act exemplifies the dynamic, adopting a “co-regulation” framework that delegates much of compliance to industry leaders. Meanwhile, Google, Microsoft, OpenAI, and Anthropic have formed a self-regulating body, the Frontier Model Forum, to oversee their version of safe and responsible AI development. As regulators hesitate and defer to these arrangements, companies fill the vacuum with private governance regimes that they present as sufficient for public protection. When tech giants draft their own regulatory playbook, it’s not just the fox guarding the hen house; it’s the fox designing the coop, training the guard dogs, and writing the farmer’s manual on poultry security.

The Fraught Path to Global Rules

Principles Without Enforcement

International organizations have engaged with AI governance since 2019, when the OECD released its AI Principles, which 47 countries endorsed. The G20 followed suit soon after, and UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence and subsequent UN initiatives came next. These frameworks acknowledge AI’s global supply chain and the risk of regulatory arbitrage across jurisdictions. From critical minerals to coders, the AI stack is pushing the current boundaries of laws that govern everything from the mining industry to capital and labor. The principles adopted by international organizations attempt, however imperfectly, to address those complexities, and they generally emphasize the need for multilateral cooperation.

Yet principles face practical obstacles. The biggest challenge is conflicting national interests: economic competition creates resistance to binding global cooperation. The United States diverges significantly from EU and Chinese regulatory approaches. The Biden administration relied primarily on voluntary commitments from tech companies. In early 2025, the Trump administration went further, revoking Biden’s executive orders while encouraging massive private investment, including OpenAI and Oracle’s Stargate AI infrastructure project. This regulatory reversal exemplifies how sovereignty narratives can accelerate a global race to the bottom.

The New Digital Divide

AI’s benefits and governance capacity are unevenly distributed. The United States, China, and the EU command the resources to shape AI development and reap the rewards. Business management consultancies like PwC estimate that a staggering 84 percent of AI’s projected $15.7 trillion in economic value will flow to China, North America, and Europe.

A 2024 study indicates the Global South faces compounding disadvantages: limited access to computing resources, data infrastructure shortages, and insufficient AI expertise. Worse, automation threatens to eliminate traditional development pathways. There is a real risk that concentrations of AI capability could worsen the divide between the Global North and South by automating away the labor, such as telemarketing, that poorer countries have relied on, while doing little for the industries, such as agriculture, that they need to grow. As one expert notes: “For poorer countries, this is engendering a new race to the bottom where machines are cheaper than humans and the cheap labor that was once offshored to their lands is now being onshored back to wealthy nations.”

These asymmetries create a governance paradox. The Global South nations that stand to benefit most profoundly from AI innovation are also the most vulnerable to its disruptions, and they have the least influence over its governance. Meanwhile, the countries with the most advanced AI sectors have the weakest incentives for strict regulation. The result is a fragmented international landscape with diminishing prospects for inclusive global frameworks.

Finding a Way Forward

The governance challenges described above demand pragmatic responses that acknowledge technological realities while preserving democratic oversight. Four promising pathways emerge:

Democratic Counterweights

Effective domestic governance requires counterbalancing corporate influence through broad-based coalitions. Universities, civil society organizations, and public interest technologists can demystify AI systems and empower policymakers. These coalitions can provide technical expertise independent from commercial interests, develop alternative governance frameworks, and advocate for public values in AI design and deployment. Distributing knowledge more widely creates the foundation for informed governance.

Market Incentives for Responsible AI

Public-interest AI systems, like philanthropically-funded language models, can create market pressure for higher standards. When ethical alternatives exist, companies face competitive pressure to improve their own practices. The private sector can also provide checks against government overreach, particularly in authoritarian contexts. Consumer and investor pressure in tech companies鈥 home markets remains a powerful lever for global ethical practices. The goal is not to halt innovation but to channel market forces toward responsible development.

Risk-Based Multilateral Frameworks

History shows that nations cooperate despite competing interests when the risks of noncooperation are sufficiently severe. The nuclear nonproliferation regime and Outer Space Treaty demonstrate this principle. Similar approaches could work for AI by focusing on concrete, bounded risks that threaten shared interests. Following the ICANN model for internet governance, technical standards bodies could provide neutral ground for cooperation on specific AI safety issues, sidestepping broader geopolitical conflicts while building trust incrementally.

Digital Solidarity Across Regions

A vision of 鈥渄igital solidarity鈥 could facilitate regional cooperation and more equitable AI development. Nations should acknowledge the limits of purely domestic solutions and leverage global AI supply chains strategically. Developing shared technology stacks, promoting procurement best practices that prevent vendor lock-in, and creating sustainable financing mechanisms similar to trade adjustment assistance could help smaller countries participate meaningfully in the AI economy while building domestic capacity.

Beyond the False Choice

The artificial choice between innovation and regulation threatens both. As countries reframe AI as a sovereignty issue, we face a governance inflection point with long-term consequences for global technology and power distribution.

The three central challenges鈥攕overeignty claims, expertise gaps, and corporate co-regulation鈥攄emand sophisticated responses. Nations must balance legitimate technological ambitions against the need for meaningful oversight. Regulators need technical capacity independent of corporate influence. And governance frameworks must ensure the Global South participates meaningfully rather than merely bearing AI鈥檚 disruptive effects.

Without action, we risk entrenching a world where a handful of companies and countries monopolize AI鈥檚 benefits while distributing its risks globally. The central insight is that regulation need not impede innovation鈥攊t can channel it productively. The task is not to choose between technological advancement and public protection, but to craft governance that enables responsible progress.

The governance paths outlined here offer pragmatic first steps. They acknowledge geopolitical realities while preserving space for democratic values. The complex interplay between sovereignty, expertise, and corporate power makes AI governance uniquely challenging. But the same interconnectedness that creates these challenges also opens opportunities for creative governance solutions that serve broader interests than those of a privileged few.
