
In Short

Unpacking the White House's Executive Order on AI

White House
Alex E. Proimos / Flickr

Last week was defined by big-ticket U.S. government activity on artificial intelligence (AI). On October 30, President Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The document is a sweeping, hundred-plus-page effort that directs federal agencies to pursue multiple policy objectives central to the responsible development and use of AI. Executive Order 14110 ("the EO") focuses on directing federal agency activities, but its effects will be felt throughout our governance ecosystem, including government, the private sector, academia, and civil society. The order instructs federal agencies to advance key policy objectives, including ensuring AI's safety and security, promoting responsible innovation and competition, supporting American workers, advancing equity and civil rights, protecting consumer interests, safeguarding privacy and civil liberties, and promoting global cooperation on AI governance.

Shortly after President Biden signed the order, the Office of Management and Budget issued a draft of its guidance on federal agencies' use of AI ("OMB guidance") for public review. Administration officials have emphasized that these steps constitute the "most significant" action on AI that any government has undertaken. Whether or not one agrees with this assertion, the comprehensive and ambitious nature of the Biden administration's effort to alter the national and global governance landscape is hardly up for debate.

Notable Elements in the Executive Order and the OMB Guidance

What should we take away from last week鈥檚 developments? This analysis outlines key elements of the executive order without attempting to be exhaustive. In particular, we focus on requirements related to safety and security, protecting civil rights and civil liberties, and mitigating harms to people.

Safety and Security

Section 4 establishes a number of requirements on security and safety in AI and is the section of the order that focuses most comprehensively on managing product safety risks. It directs the National Institute of Standards and Technology (NIST) to develop guidelines and best practices "with the aim of promoting consensus industry standards" for trustworthy AI systems. This guidance will include a specific resource on generative AI that will accompany the AI Risk Management Framework. Importantly, the EO directs the Department of Commerce to create a reporting framework for companies developing dual-use foundation models that could pose security risks. The Department of Commerce is also required to assess the risks posed by synthetic content and develop guidance for watermarking and authenticating U.S. government digital content. Additionally, Section 4 prioritizes managing AI-specific risks to critical infrastructure and cybersecurity, as well as risks at the intersection of AI and chemical, biological, radiological, and nuclear threats.

Addressing Harms to People

Multiple sections of the EO take a people-centric approach to discussing potential harms from AI systems. Section 8 focuses on protecting "consumers, patients, passengers, and students" from a range of potential harms that arise from AI, including fraud, discrimination, and threats to privacy. It directs agencies to address these threats across various sectors of the economy, including healthcare, transportation, and communications networks. Section 6 reflects the Biden administration's focus on supporting workers and ensuring employees' wellbeing through the significant economic shifts that AI will engender.

Protecting Civil Rights & Civil Liberties

The EO focuses considerably on the need to center civil rights and civil liberties in an AI governance regime. Section 7's focus on advancing equity and civil rights builds on the White House's Blueprint for an AI Bill of Rights, issued last October. It directs the Attorney General to address civil rights violations and discrimination related to AI, with a focus on the use of AI in the criminal justice system. Section 7 also directs agencies to prevent and remedy discrimination and other harms that could occur when AI is used in federal programs and to administer benefits. Several agencies are directed to take actions to strengthen civil rights enforcement in various sectors of the economy, including housing, financial services, and federal hiring practices.

Section 9 focuses specifically on mitigating privacy risks that arise from large-scale data collection and AI models' inferences about people. In a welcome move, the EO directs federal agencies to invest in privacy-enhancing technologies and methods, such as differential privacy, to ensure that we can reap the benefits of advanced analytics while limiting the risks to people's privacy. Additionally, the White House's messaging, including its accompanying fact sheet for the EO, included a push from the President for Congress to "pass bipartisan federal privacy legislation to protect all Americans."
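For readers unfamiliar with differential privacy, the core idea is to add calibrated random noise to a query's result so that no individual's presence in the data can be confidently inferred. A minimal sketch of the classic Laplace mechanism for a counting query (all names and data here are illustrative, not drawn from the EO or OMB guidance):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so adding Laplace(1/epsilon) noise satisfies
    epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative example: count survey respondents over 40 with a
# privacy budget of epsilon = 0.5 (smaller epsilon = more noise).
ages = [23, 45, 67, 34, 52, 41, 29, 60]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

The released value is close to the true count on average, but its randomness protects any single respondent, which is the trade-off between analytic utility and individual privacy that the EO's investment directive targets.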

The accompanying OMB guidance applies to all federal agencies' use of AI except for national security systems. The guidance provides further detail on the EO's requirements that each agency designate a Chief Artificial Intelligence Officer, remove barriers to responsibly using AI, and submit AI use case inventories to OMB, among other stipulations. Perhaps most significantly, the OMB guidance establishes two important categories: "safety-impacting AI" and "rights-impacting AI." Each designation is accompanied by its own requirements and minimum practices, which include:

  • Complete an AI impact assessment
  • Test the AI for performance in a real-world context
  • Independently evaluate the AI
  • Conduct ongoing monitoring and establish thresholds for periodic human review
  • Mitigate emerging risks to rights and safety
  • Ensure adequate human training and assessment
  • Provide appropriate human consideration in decisions that pose a high risk to rights or safety
  • Provide public notice and clear documentation through the AI use case inventory

The guidance goes on to establish additional minimum practices for rights-impacting AI:

  • Take steps to ensure that AI will advance equity, dignity, and fairness
  • Consult and incorporate feedback from affected groups
  • Conduct ongoing monitoring and mitigation for AI-enabled discrimination
  • Notify negatively affected individuals
  • Maintain human consideration and remedy processes
  • Maintain options to opt out where practicable

Key Takeaways

Stepping back, here are a few broader observations on the impact of the EO and the draft OMB guidance.

  1. Values, ethics, and democratic principles are front and center. The EO is unapologetic about enshrining values and ethics as an explicit part of U.S. government policy and oversight. The importance of safety, addressing bias and equity, and protecting civil rights and civil liberties are emphasized, and federal agencies are directed to take concrete actions with these values as guideposts. In doing so, the administration has lent substance to a U.S. vision for AI governance grounded in core democratic tenets and laid out a rough template that Congress could adapt. This approach is also a signal to U.S. companies and the international community, including both partners and adversaries, that the U.S. governance approach will focus on the relationship between AI and democratic health.
  2. The EO puts innovation and mitigating harms on an equal footing. The order breaks with a long U.S. tradition of adopting a largely laissez-faire approach that prioritizes innovation and treats risks as marginal concerns. The administration's decision to focus squarely on harms is an acknowledgment that developments in AI have the potential to fundamentally reshape societies and economies. The EO sends a clear message that responsible development, prioritizing safety, and protecting rights should be integral parts of governing AI systems, not ancillary issues considered as an afterthought to the race to innovate. Given the administration's rhetoric over the last year and more, this focus isn't surprising, but it is nonetheless significant within the context of the historical U.S. government approach to emerging technologies.
  3. The EO and the accompanying OMB guidance embrace a focus on both risks and rights. The EO builds on the important foundations laid by the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, which adopt rights- and risk-based governance models, respectively. The administration's approach rejects a false choice between these models and attempts to give both full treatment within a single directive. In doing so, both the EO and the draft OMB guidance demonstrate to Congress that it is possible to embrace both the human-centric, rights-based approach and a product safety approach.
  4. Both the EO and the OMB guidance reflect the importance of public participation in AI governance. The EO reflects inputs from civil society that shaped the administration's Blueprint for an AI Bill of Rights, which prioritizes a rights-based approach and a focus on ensuring that AI benefits the most vulnerable groups without disproportionately harming them. The EO incorporates key aspects of the voluntary commitments that companies announced at the White House, which, unfortunately, lacked meaningful civil society input. The administration appears focused here on ensuring meaningful public input by inviting public comments on OMB's draft guidance. This decision ensures that public inputs can shape the implementation of a landmark executive order. OTI looks forward to submitting more detailed comments on the guidance.

More About the Authors

Prem M. Trivedi

Director, Open Technology Institute, New America
