
Benefits of Open-Source AI

Public Transparency and Democratic Accountability

Artificial intelligence (AI) decision-making can be inscrutable in ways that frustrate democratic accountability. The inability to understand how a model reached a decision on any given question presents problems for any process that needs to be auditable, repeatable, and generally subject to external scrutiny.1 Not knowing why a model gave the answer it did also makes AI unreliable for use in critical applications. By offering access to both the technical components of a model and transparency into the decision-making processes of model developers, increased openness in AI offers part of the solution to this inscrutability.

While the technical side of this problem, known as explainability, remains an important unsolved issue requiring attention, good transparency practices implemented by model developers can help explain how certain biases appear in a model and can keep developers accountable for the choices they make in designing and training AI models. Generative AI models trained on data scraped from the public internet often embody the errors and biases in that data, which exacerbates long-standing concerns about algorithmic bias and its discriminatory effects.2 Increased access to information about training data is a vital first step in understanding how bias arises in an AI model.

The risks of AI stem from more than the models; they include the whole systems in which those models are used. As Benjamin Brooks, a fellow at Harvard University's Berkman Klein Center, recently observed in a filing before Australia's Department of Industry, Science, and Resources, "the risk posed by the system will depend on a range of factors, including the intended use-case, specific deployment environment, the extent of human oversight, and the possibility of correction and redress." Similar to the Open Source Initiative, Brooks suggests thinking of AI as a complete system and of a model as "a component: a prediction engine."3 In other words, the details of how a model is set up influence the risks it poses. Those risks can be introduced in a variety of ways that extend beyond the model itself, from insecurities in the deployment environment to the absence of any system for correcting the AI when it makes a mistake. An AI system consists of both a model and a software project run by the people who maintain it.

Transparency and accountability comprise one part of addressing AI explainability by offering a framework for evaluating non-technical aspects of a model, such as project management, oversight, and decisions about technical design. Moving beyond what a model is doing technically enables a shift from treating AI only as a new, mysterious technology to helpfully identifying the aspects AI shares with other software. Open source offers many lessons about how to structure and organize a large, globally distributed technical project in fair and transparent ways. The practices an AI model project could emulate range from community norms for managing code, tracking bugs, and adopting codes of conduct to building a nonprofit's organizational or legal structure around a single software project.

In addition, open models can be more easily modified and deployed in service of specific public-interest goals than closed models. As Divya Siddarth and Saffron Huang of the Collective Intelligence Project (an initiative that aims to direct technological development toward the collective good) and Audrey Tang, Taiwan's first Minister of Digital Affairs, state, "The open-source community can play a large role here, partnering with democratic innovation organizations to train open models that align with public perspectives."4 They emphasize that the key features of transparency and open innovation also bolster trust in AI and broaden the base of people who participate in shaping AI's impact on society.5

Open code can deliver some transparency benefits on its own, but given the explainability challenge facing all of AI, open code, model weights, and training data should not be misconstrued as a silver bullet for fully explained AI decision-making. Understanding how AI systems arrive at any given answer is still a puzzle that must be solved to build trust in AI. Open models, accompanied by other mechanisms of transparency, make freely available all of the pieces that AI researchers will need to solve the puzzle of explainability.

But openness alone cannot democratize AI or equitably distribute its benefits throughout society, partly because of the concentration of power in the technology industry6 and partly because privately run AI models cannot perfectly align with specific democratic or public-interest objectives. This realization has increased calls for the development of a "public" AI infrastructure. Like the spectrum between open and closed systems, it is possible to imagine a spectrum spanning wholly private and wholly public AI models.7 Proponents of public AI define the term differently but consistently note that government-owned or other non-corporate models can be made democratically accountable in ways that privately owned models cannot.8 While a public AI infrastructure could be built with closed AI models, in most contexts the transparency that accompanies open models aligns with many goals of public AI.

Unexpected Innovation and Competition

In 2008, Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School, published The Future of the Internet and How to Stop It. In it, he warned that the internet was losing the "generative" potential of its earlier years and growing increasingly controlled by a few powerful, private gatekeepers. Zittrain wasn't using the term "generative" as many people now do with generative artificial intelligence (AI). He defined generativity as "a system's capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences."9 He issued a warning that the internet's generativity was under threat:

"The serendipity of outside tinkering that has marked that generative era gave us the Web, instant messaging, peer-to-peer networking, Skype, Wikipedia—all ideas out of left field. Now it is disappearing, leaving a handful of new gatekeepers in place, with us and them prisoner to their limited business plans. … Even fully grasping how untenable our old models have become, consolidation and lockdown need not be the only alternative. We can stop that future."10

Those ideas "out of left field" proved profoundly transformative. The generative internet was worth protecting, but Zittrain's prescient warning became a reality. "Web 2.0," the social media era of the internet, was dominated by the rise of large social media companies that became intermediaries for much of society's online activity. The resulting environment is less generative. Internet monocultures have spread. A few corporations with access to massive amounts of data enjoy an outsized ability to determine the boundaries of online privacy, shape content creation and consumption, and narrow the range of experiences readily accessible online.

These same companies are at the forefront of today鈥檚 AI innovation race, and they are poised to extend their consolidation. But an ecosystem in which open models thrive alongside proprietary ones can culturally diversify the technology landscape, promote competition, and spur the kind of unexpected innovation that made the early internet such a powerful force for serving a variety of public-interest objectives.11

The past three decades of free and open-source software development have produced examples of lasting innovative impact, in large, well-known projects like the Linux kernel and in smaller software projects built around the needs of specific communities. Time and again, the insights of people modifying open-source code to fit their own needs, together with open source's tested value as a cost-effective foundation on which to build, have advanced technological development. The ability to examine and modify code has catalyzed innovative new technologies and changed the landscape of how software vulnerabilities are found and fixed.

The success of open source, and of the open standards that make up the internet, provides lessons that can make it easier for AI innovation to be generative in the way Zittrain meant. As Leslie Daigle, former chair of the Internet Architecture Board, stated in a 2019 white paper: "The more proprietary solutions are built and deployed instead of collaborative open standards-based ones, the less the internet survives as a platform for future innovation."12 A more recent essay makes a similar point by invoking an ecological imperative "to rewild the internet" in ways that allow spontaneity and broad-based innovation to flourish again.13 Encouraging the development of more open AI models can play a vital role in such a process. Society cannot foresee the specific innovations that open AI models will create, and that is precisely the point: the lessons of open-source software demonstrate that they will occur and that some will have broad, transformative impacts.

The Linux kernel serves as a concrete example of how open source can have a broad yet unforeseen impact. Linux is everywhere in modern computing and is far more than a platform for tech enthusiasts; billions of non-technical users interact with Linux systems every day. Not only does Google have a long history of running Linux on its servers14 and maintaining its own customized version of Linux for its developers,15 but both Google's mobile operating system, Android, and its laptop operating system, ChromeOS, are built with Linux at their core. All Chromebook and Android users are running Linux. Many web hosting providers also use Linux-based systems that run a whole stack of open-source software to provide internet services, including web servers, databases, and remote code processing. These providers run the gamut in size, from small hosting companies to some of the biggest players in hosting, like Amazon Web Services. Open-source web servers make up roughly half of all web servers on the internet.16 Without Linux and popular open-source projects for tasks like running websites, the internet as we know it might have taken shape quite differently.

While Google, Amazon, Meta, Microsoft, and Apple all have proprietary "closed source" software, those private tools are often built on top of, or interact with, open-source software. Recognizing the broader value of an open ecosystem, all of those companies have contributed code, paid labor, and financial or material support to open-source communities. Even Microsoft, one of the largest historical players in the closed-source software market, has deepened its embrace of open source over the last decade. Notably, the company has simplified the process of running certain open-source software in Windows and purchased GitHub, arguably the world's largest repository of open-source code. There is a shared understanding in tech, from developers of small software projects to large corporate players, that open source has created a solid foundation on which tech innovations, both open and proprietary, are more easily built.

In 1991, when software engineer Linus Torvalds first released the Linux kernel, he could not have predicted that three decades later the software would be a cornerstone of the internet or run on billions of smartphones. The year 1991 was still relatively early in the personal computing revolution, and the internet was in its commercial infancy; the future was not clear. To the extent that society is in the midst of an AI revolution, that revolution is still in its early days. While experts can speculate about possible paths in AI's development, they should do so with an acknowledgment that unexpected paths are inevitable. By analogy, it is as if people in the 1990s, using thirty-pound desktop PCs on dial-up internet, were trying to imagine the era of smartphones. There is simply no way to forecast all the surprising ways in which people will use AI over the next 20 years, but based on historical trends, it is very likely that open models will sit at the foundation of some of the biggest advancements in AI.

Importantly, open models built for all sizes and purposes, not simply the large foundation models dominating discussion today, will spur critical innovation.17 As smaller, bespoke models emerge and have transformative impact, the generative potential of open-source innovation in the AI context will become increasingly clear. Open models can be designed and modified to address needs in fields as diverse as medicine, cybersecurity, urban planning, and climate change. A few dominant proprietary models can perhaps be adapted to tackle a subset of these problems, but the interests of large corporations may not naturally align with context-specific applications that serve the public interest. A healthy open-source ecosystem is far more conducive to communities' ability to define problems and solutions on their own terms.

As has been the case with open-source software, open AI models can allow people of various means and backgrounds to respond to use cases in their communities that aren't being considered by the private sector and to fill the technical gaps they identify. For example, OpenCellular is an open-source effort aimed at allowing communities that are not currently served by mobile network operators (MNOs) to form their own MNOs by distributing both the software and the hardware schematics needed to build them.18 Open-source models can similarly provide a toolset for people who find uses for AI in places where larger AI companies may not be looking, or may not be incentivized to look.

This kind of innovation requires enough time and space to take shape and scale. While the impact of many innovations attributable to open source, like Linux, is easy to identify in hindsight, it was not obvious when they were created. Given the space to develop and the ability to interact with real-world use cases, open-source AI models can follow a trajectory similar to software's and catalyze creative new uses.

We must acknowledge that while open models' innovative potential can contribute to a more competitive AI software ecosystem, they will still exist alongside market concentration elsewhere in the AI supply chain. This is particularly true at the infrastructure level, where large players dominate hardware and cloud computing. Proponents of open models should understand the resource challenges that smaller players currently face in building and training AI, but research toward less resource-intensive model training,19 combined with open-source software's history of enabling competition, can produce a more innovative AI ecosystem.

We should not overstate open AI models' potential as a panacea for stopping the consolidation of AI. But the history of open-source software has shown that openness leads to a more innovative environment and brings competitive benefits to the entire tech ecosystem. This lesson should find an analog in AI. Open-source AI models can draw from the history of open-source software, anticipating that the iterative creativity that has driven hard-to-foresee innovation in open-source software for decades will bring about impactful new uses for AI, particularly because those contributions might come from the most unexpected places.

Educational and Research Purposes

One of the key benefits of open source has been its ability to lower the barriers people face when learning how technologies work. The ability to download, study, and freely modify code has led to a wide availability of open-source programming languages and training materials. This has made possible both new forms of training, like coding bootcamps, and new avenues for computer science research. Lowering barriers is not only about cost but also about the ability to freely use and modify code for both training and research purposes.

Open-source models are vital to democratizing education about artificial intelligence (AI) as a technology and to avoiding the concentration of knowledge among the small group of people to whom a few companies grant access to their closed models. Open models and code empower a wider range of people to access these technologies in ways that allow them to gain a hands-on understanding of how they work. Enabling a wider variety of researchers, students, technologists, and hobbyists, along with companies, to examine, run, and modify AI models in unrestricted ways will lead to greater insights about the technology. Such insights are possible when people have access to a technology and can form deep knowledge of it based on their experiences using it.

Consolidating AI around a few large players with closed-source models risks confining most high-level AI skills and expertise within the walls of the large tech companies that build those models. Without open models, there would be fewer opportunities for those who are not employed by an AI company, or in that company's approved training pipeline, to learn about the fundamentals of AI technologies. Open models can create multiple pathways to learning AI.

Furthermore, the ability to examine and modify how model code works will lower barriers to academic research on AI. A wider variety of people researching the technology will produce active exploration of a larger and more diverse set of questions about AI.

Indeed, open models may already be shaping AI research: researchers have designed experiments using existing open-source AI models to advance the general understanding in the field. For example, in 2024, Carnegie Mellon and Apple researchers used open-source models to explore ways of creating higher-quality training regimens for large language models using synthetic data.20 They found a method for producing more accurate models while using a smaller training corpus and spending less overall time training. This type of research, which could make model training more efficient (and thus less resource intensive), was only possible because the researchers could conduct experiments using an open model. Because they used open tooling and clearly described their methodologies, their more efficient method can be replicated, expanded, and used collectively to strengthen AI models for everyone.
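The rephrase-then-train idea described above can be sketched roughly as follows. This is a simplified illustration, not the researchers' actual pipeline: the `rephrase` function here is a stand-in stub, whereas the real work uses an auxiliary language model to rewrite raw web documents into cleaner text before mixing them back into the pretraining corpus.

```python
def rephrase(document: str) -> str:
    # Stand-in for an auxiliary language model that rewrites a noisy web
    # document in a cleaner target style. This stub only normalizes
    # whitespace so the example stays self-contained and runnable.
    return " ".join(document.split())

def build_training_corpus(raw_docs: list[str]) -> list[str]:
    # Mix each original document with its synthetic paraphrase, so the
    # training corpus pairs raw web text with a cleaner rewrite of it.
    corpus = []
    for doc in raw_docs:
        corpus.append(doc)            # keep the original text
        corpus.append(rephrase(doc))  # add the synthetic paraphrase
    return corpus

docs = ["Noisy   web   page\n\ntext", "Another   document"]
corpus = build_training_corpus(docs)  # 4 entries: each doc plus its rewrite
```

The key point the sketch captures is that an open model can sit in the `rephrase` slot at all, something that is possible only because the researchers could download and run the model themselves.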

In a seminal 1999 article, Lawrence Lessig, the Roy L. Furman Professor of Law and Leadership at Harvard Law School, described the "Open Source Software Moment" taking shape at the time.21 Laying out the ideals of open source, he observed that "putting into the commons one's work product—of giving away what one makes" might seem "alien to our tradition" but actually functioned much like science, where "progress [is] made and given to the next generation."22 Open models become a critical part of ensuring AI knowledge is accessible to people who want to learn about it, whether they are hobbyists or professional researchers, and they make it easier for all of those people to share what they learn with others.

Mitigated Security Risks

Commentary about security in artificial intelligence (AI) often simplistically equates greater model openness with greater risk. Some commentary goes further, claiming that open-source AI is "uniquely dangerous"23 when compared to closed models. All AI models carry security risks, but imprecise claims about open AI models miss the greater nuance required when discussing the security risks of AI and how those risks differ between open and closed models. As the National Telecommunications and Information Administration (NTIA) noted in a report required by Executive Order 14110 on "the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," it is more appropriate to analyze the marginal risks that open models present when compared to closed models and to information that is publicly available online.24

There is a broad list of potential harms from AI, including widespread disinformation, but researchers have concluded there is much less evidence about the marginal risks posed by open models.25 Academic researchers have highlighted certain areas, like the computer generation of child sexual abuse material,26 where there is evidence that open models pose concerning additional risks. We do not suggest that developers, policymakers, or researchers take a cavalier approach to such risks; rather, they should rigorously study, monitor, and mitigate the marginal risks that do emerge in open models.

The NTIA's report espouses this view, concluding that "the government should not restrict the wide availability of model weights for dual-use foundation models at this time."27 Instead, the NTIA calls for building governmental capacity for monitoring risks and better understanding the benefits of open models. The report highlights open models' ability to benefit security, including by strengthening cyber deterrence and defense, advancing safety research and the identification of vulnerabilities, and promoting transparency and accountability through third-party auditing.28

AI comes with security concerns, and a responsible approach to building and maintaining any AI model requires vigilance in monitoring potential security vulnerabilities. Security concerns germane to AI fall into three broad categories. The first consists of concerns that resemble the vulnerabilities long present in other kinds of software development. The second consists of concerns about the downstream uses of AI, which relate to how the technology is used rather than how it is developed. The third consists of concerns that AI models could be used to fuel novel cyberattacks.

All code is at risk of having vulnerabilities, regardless of whether it is released under an open or closed license, and this is likely to hold true for AI as well. As researchers at the Wilson Center have noted, "Vulnerabilities can come from dependency management (what, how, and which software packages are pulled into a new software project) to bad-faith actors (people that intentionally break into systems, or contributors intentionally changing the software to be exploitable) whether the software is developed internally or in the open."29

An additional source of vulnerability that could be added to this list is simple human error. For example, in some coding languages, an error as small as forgetting to put quotes around a single variable (e.g., $FOO versus "$FOO") could create a vulnerability in the code. The more complex a project's codebase becomes, the more likely it is that human error will appear somewhere in that project's code or in one of its dependencies (a risk that grows with every new dependency).
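To make the quoting pitfall concrete, here is a small, hypothetical sketch in Python of the same class of bug: a command string built by naive interpolation (the analog of an unquoted $FOO) versus one where the value is escaped first. The function names and the sample filename are illustrative only.

```python
import shlex

def build_command_unsafe(filename: str) -> str:
    # The value is spliced directly into the command string, so any
    # shell metacharacters it contains (;, |, $, etc.) would be
    # interpreted by the shell rather than treated as part of the name.
    return f"cat {filename}"

def build_command_safe(filename: str) -> str:
    # shlex.quote() escapes the value so the shell sees one literal argument.
    return f"cat {shlex.quote(filename)}"

malicious = "notes.txt; rm -rf /tmp/data"
unsafe_cmd = build_command_unsafe(malicious)
safe_cmd = build_command_safe(malicious)
# unsafe_cmd smuggles in a second command after the semicolon;
# safe_cmd keeps the entire string as a single quoted argument.
```

The unsafe version is easy for a reviewer to miss, which is precisely how small quoting errors slip into large codebases and their dependencies.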

Software vulnerabilities can occur whether the source code is open or closed. With AI, as with all software, there is a strong chance that some of the code running it will have errors that introduce vulnerabilities, regardless of the AI model's licensing. However, discourse around open-source models must also account for the ways in which they confer key security benefits long recognized in open-source software.

Actors from the U.S. private and public sectors have long recognized the ways in which open-source software is central to security. Microsoft's 2023 Digital Defense Report emphasizes open-source software's benefits to cybersecurity, noting that the nature of open-source collaboration is key to mapping the threat landscape and scaling responses to those threats.30 Notably, open-source software has long been critical to the Department of Defense, and military modernization efforts call for broadening the use of open-source software and properly securing it.31 The Cybersecurity and Infrastructure Security Agency has also highlighted open source's relationship to strong security in the context of open-source AI models, noting that "the general consensus among the security community is that the benefits of open sourcing … outweigh the harms that might be leveraged by adversaries."32

The ability to modify codebases and run AI models independently of their creators is unique to open models. While some closed models may allow "red teaming" and provide a method for bug reporting, security testers may still face constraints on what they can inspect about the model, or restrictions on the types of attack they can simulate and when they can simulate them.

By contrast, running code on hardware not owned by a model's creator allows security researchers to use a number of analysis techniques and methods (such as simulating brute-force attacks) that might not be available or allowable when evaluating a publicly available closed model. Such independent security research can uncover ways to manipulate model outputs or alter how an AI system makes choices, which in turn makes future versions of those models more resistant to such attacks.

In addition to allowing researchers to improve models, complete access to an AI model's weights, training data, and code also allows for formal security vetting by independent third parties. Third parties, whether governmental or private, bring their own goals and lend their reputations to such reviews, which can increase public trust in a model.

Bugs and vulnerabilities will exist in AI code, as they do in all code. But given the strong track record of security in open source, there is a whole class of risks, such as those arising from the environment where the model is run, that aren't substantially new or different, and there are many well-established methods for addressing them.

The risks that are novel with AI generally manifest "downstream" from the technology itself; that is, the harm results from the application of AI rather than from the AI itself. There is much well-founded concern about the use of AI in supercharging disinformation campaigns, but thus far there is no clear evidence of differential impact based on whether the AI models used in such campaigns are open or closed. Indeed, Microsoft's own research team notes that large language models running on closed foundation models have been weaponized by adversary governments, including China.33

There are legitimate security concerns when it comes to AI, and they deserve attention and action. However, to ensure that a narrow security lens isn't used to hinder the development of an open AI ecosystem, experts must precisely identify where open models present risks beyond those posed by closed models or by information that is publicly available online.

Citations
  1. See, e.g., Jaden Fiotto-Kaufman, Alexander R. Loftus, et al., "NNsight and NDIF: Democratizing Access to Foundation Model Internals," arXiv, July 18, 2024.
  2. See, e.g., Spandana Singh, Charting a Path Forward: Promoting Fairness, Accountability, and Transparency in Algorithmic Content Shaping (New America's Open Technology Institute, September 9, 2020).
  3. Benjamin Brooks, "Consultation on Safe and Responsible AI in Australia," Department of Industry, Science and Resources, October 4, 2024.
  4. Divya Siddarth, Saffron Huang, and Audrey Tang, "A Vision of Democratic AI," Digitalist Papers, September 22, 2024.
  5. Siddarth, Huang, and Tang, "A Vision of Democratic AI."
  6. "We find that even though there are a handful of meaningfully transparent, reusable, and extensible AI systems, these and all other 'open' AI exists within a deeply concentrated tech company landscape. With scant exceptions that prove the rule, only a few large tech corporations can create and deploy large AI systems at scale. … Given the immense importance of scale to the current trajectory of artificial intelligence, this means 'open' AI cannot, alone, meaningfully 'democratize' AI, nor does it pose a significant challenge to the concentration of power in the tech industry." Widder, West, and Whittaker, "Open (For Business)."
  7. Nik Marda, Jasmine Sun, and Mark Surman, Public AI: Making AI Work for Everyone, By Everyone (Mozilla, September 2024), 7.
  8. See, e.g., Marda, Sun, and Surman, Public AI; Public AI Network, "Public AI"; Sitaraman and Pascal, "The National Security Case for Public AI"; Sanders, Schneier, and Eisen, "How Public AI Can Strengthen Democracy."
  9. Zittrain, The Future of the Internet.
  10. Zittrain, The Future of the Internet, x.
  11. Rishi Bommasani, Sayash Kapoor, et al., "On the Societal Impact of Open Foundation Models," arXiv, February 27, 2024.
  12. Leslie Daigle, The Internet Invariants: The Properties Are Constant, Even as the Internet Is Changing (Thinking Cat, May 16, 2019), 40.
  13. "[The] unpredictability [of] internet infrastructure makes it generative, worthwhile, and deeply human." Maria Farrell and Robin Berjon, "We Need to Rewild the Internet," Noēma Magazine, April 16, 2024.
  14. Marc Merlin, "Live Upgrading Thousands of Servers from an Ancient Red Hat Distribution to a 10-Year Newer Debian Based One," presented at the Large Installation System Administration Conference, Washington, DC, November 3–8, 2013.
  15. Kordian Bruck, Margarita Manterola, and Sven Mueller, "How Google Got to Rolling Linux Releases for Desktops," Google Cloud (blog), July 12, 2022.
  16. "October 2024 Web Server Survey," Netcraft, October 31, 2024.
  17. James Thomason, "Why Small Language Models Are the Next Big Thing in AI," Bloomberg, April 12, 2024.
  18. "OpenCellular," Telecom Infra Project.
  19. Paolo Faraboschi, Ellis Giles, Justin Hotard, Konstanty Owczarek, and Andrew Wheeler, "Reducing the Barriers to Entry for Foundation Model Training," arXiv, October 14, 2024.
  20. Pratyush Maini, Skyler Seto, He Bai, David Grangier, Yizhe Zhang, and Navdeep Jaitly, "Rephrasing the Web: A Recipe for Compute & Data-Efficient Language Modeling," arXiv, January 20, 2024.
  21. Lessig, "Open Code and Open Societies," 104.
  22. Lessig, "Open Code and Open Societies," 1411–1412.
  23. David Evan Harris, "Open-Source AI Is Uniquely Dangerous," Spectrum, January 12, 2024.
  24. Dual-Use Foundation Models, 10.
  25. Bommasani, Kapoor, et al., "On the Societal Impact of Open Foundation Models," 6.
  26. David Thiel et al., Generative ML and CSAM: Implications and Mitigations (Stanford Internet Observatory, June 24, 2023), 7–8.
  27. Dual-Use Foundation Models, 36.
  28. Dual-Use Foundation Models, 17.
  29. Ashley Schuett, Alison Parker, and Alex Long, "Open Source Software and Cybersecurity: How Unique Is This Problem?" CTRL Forward (blog), Wilson Center, November 10, 2022.
  30. Microsoft Threat Intelligence, Microsoft Digital Defense Report 2023 (Microsoft, October 2023), 116.
  31. Ben FitzGerald, Jacqueline Parziale, and Peter L. Levin, Open Source Software and the Department of Defense (Center for a New American Security, August 30, 2016).
  32. Jack Cable and Aeva Black, "With Open Source Artificial Intelligence, Don't Forget the Lessons of Open Source Software," Cybersecurity and Infrastructure Security Agency, July 29, 2024.
  33. Microsoft Threat Intelligence, "Staying Ahead of Threat Actors in the Age of AI," Microsoft Security, February 14, 2024.