

Automated Authority: AI Algorithms


At a Glance

  • AI algorithms can self-modify and create new code when fed new data, raising unprecedented questions about agency and accountability, even as AI produces documented harms across domains, from discriminatory hiring to dangerous content recommendations.
  • U.S. regulation relies primarily on industry self-governance, while the EU and China have developed more assertive oversight frameworks: the EU seeks regulatory influence, while China pursues technological self-sufficiency.
  • As corporate AI power concentrates, public interest approaches are needed to ensure democratic oversight and equitable benefits.
  • Meaningful AI governance must center on human autonomy, balancing innovation with protection against algorithmic harms.

Minds in the Machine

Within two months of its release, ChatGPT reached 100 million monthly active users, setting a record for consumer application adoption by January 2023. AI-powered systems have leapt into our lives in forms ranging from speech recognition to autonomous driving to medical diagnosis. The breakneck speed and vast scope of adoption raise many policy questions, with AI algorithms at their core.

AI can be broadly defined as computer systems that perform tasks typically requiring human intelligence. Different technologies fall under this umbrella, from predictive AI (used in hiring) to generative AI (ChatGPT). Algorithmic governance is a key layer in an "AI Sovereignty Stack," as Luca Belli describes, alongside energy, data, computing power, human talent, and cybersecurity.

AI algorithms are fundamentally different from previous generations. Earlier algorithms included search systems designed for relevance or social media algorithms coded to maximize engagement. Modern AI departs from this "rule-based" approach, as Kai-Fu Lee explains: AI models let machines develop pattern recognition capabilities by learning from enormous numbers of examples, leveraging neural networks. In other words, AI models can self-modify and create new algorithms when fed new data.
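To make the contrast concrete, here is a minimal sketch built on a toy message-filtering task invented for illustration: the first function encodes a human-written rule, while the second derives its decision boundary from labeled examples, a tiny stand-in for the neural networks behind modern AI.

```python
# Hypothetical contrast between a hand-coded rule and a learned model.
# The filtering task, word lists, and data are invented for illustration.
import numpy as np

# Rule-based approach: a human author spells out the decision logic.
def rule_based_flag(text: str) -> bool:
    suspicious_words = {"free", "winner", "urgent"}
    return any(word in text.lower() for word in suspicious_words)

# Learning-based approach: the decision boundary is derived from examples.
# A single perceptron on word-count features stands in for the far larger
# neural networks used in modern AI systems.
def train_perceptron(X: np.ndarray, y: np.ndarray, epochs: int = 20) -> np.ndarray:
    w = np.zeros(X.shape[1] + 1)                     # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append constant feature
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += (target - pred) * xi                # update only on mistakes
    return w

# Toy training data: counts of the words ["free", "winner", "meeting"].
X = np.array([[2, 1, 0], [1, 0, 0], [0, 0, 2], [0, 1, 0], [0, 0, 1]])
y = np.array([1, 1, 0, 1, 0])                        # 1 = flagged, 0 = normal
weights = train_perceptron(X, y)

print(rule_based_flag("You are a winner!"))          # the hand-coded rule fires
print(int(np.append([1.0, 0.0, 0.0], 1.0) @ weights > 0))  # learned decision
```

Feeding the perceptron different examples yields different weights, and therefore a different decision rule; that is the sense in which such systems "self-modify" when fed new data.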

Such abilities raise thorny questions of agency and accountability. Can a computer be an author, as scholars asked as early as 1982? Should AI be granted legal personhood, as Lawrence Solum questioned in 1991? If self-driving cars cause harm, who should be held liable? Should creators and marketers be held responsible for algorithmic harms? Might the Turing test be misguided for determining agency, in light of the Chinese Room argument, which holds that AI merely manipulates symbols without having a "mind" of its own? In one notable 2025 case, a U.S. court ruled that artwork generated by an AI cannot be copyrighted, as the relevant statutes contemplate only human authors.

When Code Discriminates

While AI assistants may improve efficiency in everyday tasks, algorithmic bias can magnify harm, given AI systems' global reach and algorithmic invisibility. Early examples include Microsoft's Tay chatbot (removed after it began producing racist output) and Amazon's experimental recruiting tool (scrapped for gender bias). Government programs have seen disasters like the Dutch childcare benefits scandal, in which a risk-scoring system counted "non-Western appearance" among its fraud indicators, leading to family separations and suicides.
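The mechanism behind such failures is easy to reproduce in miniature. The sketch below uses entirely synthetic, hypothetical data and scikit-learn's logistic regression: even with the protected attribute excluded from training, a correlated proxy feature lets a model trained on skewed historical decisions reproduce the skew.

```python
# Toy illustration of how a model trained on biased historical decisions
# reproduces that bias. All data here is synthetic and hypothetical; it does
# not describe any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Features: years of experience, plus a proxy attribute that correlates with
# a protected group rather than with job performance.
experience = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)                          # protected attribute (0 or 1)
proxy = (group + rng.random(n) > 1.3).astype(float)    # proxy correlated with group 1

# Historical hiring labels were biased: group 1 was hired less often
# at the same experience level.
hired = ((experience / 10 + 0.3 * (1 - group) + rng.normal(0, 0.2, n)) > 0.7).astype(int)

# The protected attribute is excluded, but the proxy sneaks the bias back in.
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, hired)

# Same experience, different proxy value -> different predicted chance of hiring.
print(model.predict_proba([[5.0, 0.0]])[0, 1])
print(model.predict_proba([[5.0, 1.0]])[0, 1])
```

Dropping the protected attribute is therefore not enough on its own; auditing outcomes across groups is what reveals the inherited bias.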

Such issues are stubbornly persistent. Google Gemini produced historically inaccurate images and reportedly told a student seeking homework help to "please die." ChatGPT often "hallucinates," inventing false legal precedents or reporting untrue facts. In an extreme case, a Florida teen died by suicide after developing an obsession with a Character.AI chatbot that encouraged an abusive relationship.

The discourse and pursuit of responsible AI cannot materialize if we fail to address algorithmic opacity. Scholars Lucas Introna and Helen Nissenbaum argued as early as 2000 that search engines raise serious issues of systematic bias. Social media algorithms have similarly shown pernicious effects. Eli Pariser's "filter bubble" concept presciently captured algorithms' impact on polarization, while whistleblower disclosures revealed how Facebook's algorithms harm children's mental health and fuel ethnic violence.
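The filter-bubble dynamic is mechanical rather than mysterious. Here is a minimal sketch, with invented posts and scores, of a ranker that optimizes predicted engagement and boosts topics the user already clicked; each pass narrows what the user sees, which in turn narrows the next round of clicks.

```python
# Minimal sketch of an engagement-maximizing feed ranker and why it can
# narrow exposure. Posts, topics, and scores are invented for illustration.
from collections import Counter

posts = [
    {"id": 1, "topic": "politics_a", "predicted_clicks": 0.9},
    {"id": 2, "topic": "politics_b", "predicted_clicks": 0.4},
    {"id": 3, "topic": "sports",     "predicted_clicks": 0.5},
    {"id": 4, "topic": "politics_a", "predicted_clicks": 0.8},
    {"id": 5, "topic": "science",    "predicted_clicks": 0.3},
]

def rank_by_engagement(posts, user_history):
    """Boost topics the user already engaged with, then sort by predicted clicks."""
    topic_counts = Counter(user_history)
    def score(post):
        affinity = 1.0 + 0.5 * topic_counts[post["topic"]]
        return post["predicted_clicks"] * affinity
    return sorted(posts, key=score, reverse=True)

# A user who clicked "politics_a" twice gets an even more one-sided feed,
# and the loop repeats: more politics_a shown -> more politics_a clicked.
feed = rank_by_engagement(posts, user_history=["politics_a", "politics_a"])
print([p["topic"] for p in feed])
```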

Without oversight, we continue living in what Frank Pasquale calls a "black box society," where technical opacity shields harmful practices. The research community has documented these harms in works like Cathy O'Neil's Weapons of Math Destruction and Virginia Eubanks's Automating Inequality.

Guardrails for the Digital Age

AI's power demands responsible oversight. OpenAI's then chief scientist Ilya Sutskever warned that increasingly potent models could easily cause great harm. Industry leaders including Elon Musk called for a pause in the "out of control" AI race that is creating digital minds their creators cannot reliably control. Despite these risks, U.S. legislation has made little progress toward regulation.

The United States has no comprehensive federal law regulating AI. The Algorithmic Accountability Act, introduced in Congress in 2019 and reintroduced in 2022, has not advanced. President Biden's 2023 executive order lacked enforcement specifics and was rescinded by the Trump administration in 2025. California's SB 1047, a state AI safety bill, was vetoed by Governor Newsom for fear of "chilling effects" on industry.

In the United States, AI firms enjoy broad legal protections through Section 230 of the Communications Decency Act and First Amendment interpretations that treat algorithmic outputs as "opinions." Tim Wu has argued that "the Supreme Court has effectively privatized speech control." However, as Oren Bracha and Frank Pasquale argue, doing nothing is not an option given AI's centrality and potential for harm. AI should not be exempt from the legal responsibility to operate safely.

Who Rules the Digital Kingdom?

With limited U.S. regulation, oversight has emerged elsewhere. Stanford's 2024 AI Index report shows that 75 countries have adopted national AI strategies. Among these, two AI governance regimes show particular influence: the EU and China.

Brussels's Regulatory Gambit

For the EU, "digital sovereignty" is about strategic autonomy to counterbalance the United States and China. The EU's AI Act harmonizes rules across member states using a risk-based approach, classifying AI systems into four risk categories:

  1. Unacceptable: Includes social scoring and real-time facial recognition monitoring.
  2. High: Includes AI in hiring, justice, immigration, and law enforcement.
  3. Limited: Includes chatbots, deepfakes, and emotion recognition systems.
  4. Minimal: Includes AI in gaming and spam filters.

The AI Act requires algorithmic testing and systemic risk assessments for high-risk categories. Like other EU laws such as the General Data Protection Regulation (GDPR), the act has extraterritorial impact (the "Brussels effect"), with noncompliance fines of up to €35 million or 7% of a firm's annual turnover. The EU's record of enforcing earlier digital regulations suggests serious enforcement, though questions remain.
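As a rough, non-authoritative sketch of how an organization might triage its own systems against the four tiers above, the snippet below maps invented keyword lists (a drastic simplification of the act's actual annexes) to risk categories and computes the stated penalty ceiling; it is an illustration, not legal guidance.

```python
# Simplified, hypothetical triage against the AI Act's four risk tiers, plus
# the fine ceiling mentioned above (up to EUR 35 million or 7% of annual
# turnover). Keyword lists are illustrative, not legal definitions.

RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time facial recognition"},
    "high":         {"hiring", "justice", "immigration", "law enforcement"},
    "limited":      {"chatbot", "deepfake", "emotion recognition"},
}

def classify_use_case(description: str) -> str:
    """Return the first matching tier, defaulting to minimal risk."""
    text = description.lower()
    for tier in ("unacceptable", "high", "limited"):
        if any(keyword in text for keyword in RISK_TIERS[tier]):
            return tier
    return "minimal"

def max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the penalty for the most serious violations."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

print(classify_use_case("Resume screening tool for hiring"))   # high
print(classify_use_case("Customer support chatbot"))           # limited
print(f"{max_fine(2_000_000_000):,.0f} EUR")                   # 140,000,000 EUR
```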

Beijing's Technological Ambition

China pursues both industrial capacity and regulatory prowess: sovereignty through AI as well as sovereignty over AI. Despite U.S. sanctions, Chinese labs like DeepSeek have produced models comparable to OpenAI's while using fewer resources, suggesting that sanctions have instead accelerated China's push for technological self-sufficiency.

China has enhanced its data governance through a succession of national data laws and regulations. Its algorithm regulations balance industry incentives with rules against potential harms such as algorithmic discrimination, and China has developed rules for labeling AI-generated content. China is also trying to establish cross-border, non-personal data flow mechanisms with the EU.
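To make the idea of labeling concrete, here is a minimal sketch, with hypothetical field names and label wording, of attaching both a visible notice and machine-readable metadata to generated text; it does not reproduce any official labeling specification.

```python
# Hypothetical illustration of labeling AI-generated content with a visible
# notice plus machine-readable metadata. Field names and label wording are
# invented; this is not any jurisdiction's actual specification.
import json
from datetime import datetime, timezone

def label_generated_text(text: str, model_name: str) -> dict:
    visible_label = "[AI-generated content] "
    metadata = {
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    return {"display_text": visible_label + text, "metadata": metadata}

item = label_generated_text("Sample synthetic paragraph.", model_name="example-model")
print(item["display_text"])
print(json.dumps(item["metadata"], indent=2))
```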

Beyond the Algorithm

Microsoft, Google, and Amazon dominate two-thirds of the cloud computing market for AI training. When a handful of companies control the infrastructure to develop and deploy AI, who are the real AI sovereigns?

To advance an AI development agenda driven by public rather than corporate interest, four policy directions emerge:

  1. Expand public AI infrastructure to counter corporate concentration.
  2. Create algorithmic transparency requirements for high-risk applications.
  3. Establish cross-border regulatory coordination for fundamental rights.
  4. Incorporate civil society in governance through formal advisory roles.

The true measure of AI sovereignty lies not in state control or corporate dominance but in whether AI enhances human autonomy and well-being. Any meaningful approach to algorithmic governance must center public interest and democratic oversight.

More About the Authors

Min Jiang

Non-Resident Scholar, Planetary Politics
