New America

Introduction

Since OpenAI's large-language model ChatGPT burst onto the scene in November 2022, society has seen generative artificial intelligence (AI) rapidly shape conceptions of work and life online. Large-language models are an example of generative AI that is built on recent technical advances in large, multi-purpose "foundation models." But AI encompasses not only the generation of content but also a variety of analytical and predictive tools. AI is a broad umbrella term that people have used for decades "to refer to both a field of study and the machine-based systems that use mathematical models to analyze inputs to complete specific tasks, such as making predictions, recommendations, content, and decisions."1

Generative AI's rapid ascendance in the current zeitgeist has spurred policymakers to focus on governing AI more broadly. In the United States, the Biden administration responded swiftly by issuing a Blueprint for an AI Bill of Rights2 and requirements for federal agencies in Executive Order 14110 on "the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."3 As a new presidential administration and Congress prepare to take power, it is critical that, in this time of experimentation, the United States determine how it wants the AI model ecosystem to shape its democracy.

The history of the internet's evolution offers an important example of the consequences of failing to prioritize openness. The internet's early years were deeply "generative," not in the currently popular sense of "generative AI," but in an older one: an environment that fosters wide-ranging creativity and innovation.4 But due to a confluence of factors,5 including a failure to prioritize openness, that relatively open era has given way to dominance by a few large companies. This era of consolidation has eroded the power of internet users, reduced space for new competitors, and constrained the potential for unexpected innovation to emerge from diverse corners. Many of the companies that have dominated this phase of consolidation are now at the center of rapid advances in AI, and society stands at another important juncture in the internet's evolution. How broadly accessible and competitive does the United States want the AI landscape to be? What choices will help ensure that AI best serves democratic institutions and norms globally?

Three broad categories of intervention receive persistent attention in the discourse about how to accomplish this goal: (1) governmental regulation and oversight of AI,6 (2) developing "public AI" models controlled by non-corporate actors,7 and (3) ensuring that the AI model ecosystem is sufficiently open in terms of code and other transparency measures. While we point out some of the intersections between these categories, this report focuses on the ways in which openness can better align AI with serving the public interest.

Many of the policy debates around openness in AI models have focused narrowly on the risks posed by unpredictability in the downstream uses of open-source models. It is indeed important to study the marginal risk posed by open models,8 but most of the current discourse around risk does not fully account for the benefits that openness can provide. Which lessons about the benefits of openness in AI models should the United States draw from the long history of open-source software? Which aspects of openness beyond code or model weights should be encouraged? Examples from open-source software can help clarify some of the benefits that openness can bring to AI development. While open AI models and open-source software are not perfectly analogous, many of the key benefits found in open-source software will transfer to AI.

We do not argue for a reductive one-to-one equation of open-source software development and open societies, but the principles of open software and open models do reflect the ethos of open societies foundational to democracy.9 The long history of open-source software has demonstrated the importance of openness to several societal benefits that reinforce democratic principles. These benefits include promoting transparency and public accountability, fostering unexpected and iterative innovation, promoting educational and research uses of technology, and bolstering security. All of these benefits are key elements of open societies.

Importantly, the concept of openness in AI models should extend beyond publicly available code or model weights to also encompass transparency into how technical decisions about models are made and who makes them. This broader conceptualization underlies how we use the terms "openness" and "open models."

If policymakers continue to focus disproportionately on the risks of open models, they will help keep the bulk of AI innovation in the hands of a few powerful companies that already dominate social media, cloud, and search capabilities. But this trend is not inevitable. If U.S. policymakers want the benefits of AI to be broadly and equitably distributed and to serve democratic values, then they must consider what kind of AI ecosystem to incentivize and build. An AI ecosystem characterized by open models able to thrive alongside proprietary ones can promote public transparency and accountability, innovation from unexpected corners, new avenues for education and research, and security. To imagine how such an AI ecosystem might look, we first must define the key attributes of an open model.

Citations
  1. Sarah Forland, "Demystifying AI: A Primer," New America's Open Technology Institute, October 7, 2024.
  2. Biden Administration, "Blueprint for an AI Bill of Rights," White House.
  3. Office of Science and Technology Policy, Executive Order 14110, the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (White House, October 30, 2023).
  4. Jonathan L. Zittrain, The Future of the Internet—and How to Stop It (Yale University Press and Penguin UK, 2008).
  5. A non-exhaustive list of these factors should include delays in passing federal privacy legislation, updating pro-competitive legal and regulatory tools, swiftly developing standards for data portability, and purposefully aligning urgent public-interest objectives with needed financial and technical investments.
  6. See, e.g., Ami Fields-Meyer and Janet Haven, "Artificial Intelligence, Illiberalism, and the Threat to Democracy," Foreign Policy, October 31, 2024.
  7. See, e.g., Public AI Network, Public AI: Infrastructure for the Common Good (Public AI Network, August 10, 2024); Ganesh Sitaraman and Alex Pascal, "The National Security Case for Public AI," Vanderbilt Policy Accelerator, September 27, 2024; Nathan Sanders, Bruce Schneier, and Norman Eisen, "How Public AI Can Strengthen Democracy," Brookings, March 4, 2024.
  8. See, e.g., Dual-Use Foundation Models With Widely Available Model Weights (National Telecommunications and Information Administration, 2024).
  9. Lawrence Lessig, "Open Code and Open Societies: Values of Internet Governance," Chicago-Kent Law Review 74 (February 1999): 1405–1420.
