Prem M. Trivedi
Director, Open Technology Institute, New America
Last week was defined by big-ticket U.S. government activity on artificial intelligence (AI). On October 30, President Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The document is a sweeping, hundred-plus-page effort that directs federal agencies to pursue multiple policy objectives central to the responsible development and use of AI. Executive Order 14110 ("the EO") focuses on directing federal agency activities, but its effects will be felt throughout our governance ecosystem, including government, the private sector, academia, and civil society. The order instructs federal agencies to advance key policy objectives, including ensuring AI's safety and security, promoting responsible innovation and competition, supporting American workers, advancing equity and civil rights, protecting consumer interests, safeguarding privacy and civil liberties, and promoting global cooperation on AI governance.
Shortly after President Biden signed the order, the Office of Management and Budget issued its draft guidance on agency use of AI ("OMB guidance") for public review. Administration officials have emphasized that these steps constitute the "most significant" action on AI that any government has undertaken. Whether or not one agrees with this assertion, the comprehensive and ambitious nature of the Biden administration's effort to alter the national and global governance landscape is hardly up for debate.
What should we take away from last week's developments? This analysis outlines key elements of the executive order without attempting to be exhaustive. In particular, we focus on requirements related to safety and security, protecting civil rights and civil liberties, and mitigating harms to people.
Section 4 establishes a number of requirements on security and safety in AI and is the section of the order that focuses most comprehensively on managing product safety risks. It directs the National Institute of Standards and Technology (NIST) to develop guidelines and best practices "with the aim of promoting consensus industry standards" for trustworthy AI systems. This guidance will include a specific resource on generative AI that will accompany the AI Risk Management Framework. Importantly, the EO directs the Department of Commerce to create a reporting framework for companies developing dual-use foundation models that could pose security risks. The Department of Commerce is also required to assess the risks posed by synthetic content and develop guidance for watermarking and authenticating U.S. government digital content. Additionally, Section 4 prioritizes managing AI-specific risks to critical infrastructure and cybersecurity, as well as the intersection of AI and chemical, biological, radiological, and nuclear threats.
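The authentication measures contemplated here generally rest on a simple cryptographic idea: the issuing agency binds content to a secret key, and any later alteration of the content invalidates the binding. The sketch below illustrates the principle with an HMAC; it is purely illustrative (real provenance systems would use public-key signatures and standards such as C2PA, and the key and function names here are our own).

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    """Produce an authentication tag binding the content to the key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, tag: str) -> bool:
    """Check the tag; any change to the content invalidates it."""
    expected = sign_content(content, key)
    return hmac.compare_digest(expected, tag)

key = b"agency-signing-key"   # illustrative; real keys live in managed key stores
doc = b"official agency statement"
tag = sign_content(doc, key)

assert verify_content(doc, key, tag)             # authentic content verifies
assert not verify_content(doc + b"!", key, tag)  # tampering is detected
```

Watermarking, by contrast, embeds the provenance signal inside the content itself (for example, in pixel or token statistics) rather than attaching it alongside, which is why the EO treats the two techniques as complementary.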
Multiple sections of the EO take a people-centric approach to discussing potential harms from AI systems. Section 8 focuses on protecting "consumers, patients, passengers, and students" from a range of potential harms that arise from AI, including fraud, discrimination, and threats to privacy. It directs agencies to address these threats across various sectors of the economy, including healthcare, transportation, and communications networks. Section 6 reflects the Biden administration's focus on supporting workers and ensuring employees' wellbeing through the significant economic shifts that AI will engender.
The EO focuses considerably on the need to center civil rights and civil liberties in an AI governance regime. Section 7's focus on advancing equity and civil rights builds on the White House's Blueprint for an AI Bill of Rights, issued last October. It directs the Attorney General to address civil rights violations and discrimination related to AI, with a focus on the use of AI in the criminal justice system. Section 7 also directs agencies to prevent and remedy discrimination and other harms that could occur when AI is used in federal programs and to administer benefits. Several agencies are directed to take actions to strengthen civil rights enforcement in various sectors of the economy, including housing, financial services, and federal hiring practices.
Section 9 focuses specifically on mitigating privacy risks that arise from large-scale data collection and AI models' inferences about people. In a welcome move, the EO directs federal agencies to invest in privacy-enhancing technologies and methods, such as differential privacy, to ensure that we can reap the benefits of advanced analytics while limiting the risks to people's privacy. Additionally, the White House's messaging, including its accompanying fact sheet for the EO, included a push from the President to Congress to "pass bipartisan federal privacy legislation to protect all Americans."
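To make the differential-privacy idea concrete, here is a minimal sketch of the Laplace mechanism, the canonical technique: calibrated random noise is added to an aggregate statistic so that any single person's record has a provably small effect on the published output. All names and data below are illustrative, not drawn from any agency system.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two
    exponential draws (a standard sampling identity)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records: list[bool], epsilon: float) -> float:
    """Release a count satisfying epsilon-differential privacy.
    A counting query has sensitivity 1 (adding or removing one
    person changes the true answer by at most 1), so Laplace
    noise with scale = 1/epsilon suffices."""
    return sum(records) + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many records match a sensitive attribute?
records = [True, False, True, True, False]   # true count is 3
noisy = private_count(records, epsilon=1.0)  # 3 plus noise of scale 1
```

Smaller values of epsilon mean stronger privacy but noisier answers; choosing that tradeoff is exactly the kind of policy question the EO asks agencies to grapple with.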
The accompanying OMB guidance applies to all federal agencies' use of AI except for national security systems. The guidance provides further detail on the EO's requirements that each agency designate a Chief Artificial Intelligence Officer, remove barriers to responsibly using AI, and submit AI use case inventories to OMB, among other stipulations. Perhaps most significantly, the OMB guidance establishes two important categories: "safety-impacting AI" and "rights-impacting AI." Each designation carries its own requirements and minimum practices.
The guidance goes on to establish additional minimum practices for rights-impacting AI.
Stepping back, here are a few broader observations on the impact of the EO and the draft OMB guidance.