Biden’s executive order sets new standards for AI development
While an executive order is a significant step towards regulating AI in the US, it is unclear to what extent it will be enforceable without further legislative action.
On Monday, President Biden issued a sweeping executive order to guide the development of safe, secure, and trustworthy AI. The order is part of the Biden-Harris Administration’s broader strategy for responsible AI innovation and contains several actions and measures, including:
- New standards for AI safety testing: The president will invoke an emergency federal power, the Defense Production Act, to marshal the tech industry in support of national defense. The National Institute of Standards and Technology (NIST) will develop new testing standards, and developers of powerful AI models will have to share their safety test results and other essential data with the government before any public release.
- New AI safety and security standards: The National Security Council and the White House Chief of Staff are directed to develop a National Security Memorandum guiding further actions on AI and security. The document will also ensure that the US military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct measures to counter adversaries’ military use of AI.
- Protecting privacy: AI makes it easier to extract and collect personal data at scale, which developers use to train their models. The order seeks to establish safeguards to protect individuals whose data is used in large-scale model training.
- AI-generated content: Establish guidelines and protocols for identifying AI-generated material and verifying official content, protecting citizens from AI-driven fraud and deception. The Department of Commerce will develop guidance for content authentication and watermarking so that AI-generated content is clearly labeled.
- Healthcare sector: The president is directing the Department of Health and Human Services to establish a safety program that receives reports of harmful or unsafe healthcare practices involving AI and acts promptly to remedy them. The administration is also expanding grants for AI research in healthcare.
- Protecting against dangerous biomaterials: The president’s order aims to develop robust standards to prevent the use of AI to engineer dangerous biological materials.
- AI and the future of work: Develop principles and best practices to minimize the negative impact of AI-driven automation on the workforce.
Why does it matter?
Driven by OpenAI’s GPT and other foundation models, the emergence of generative AI has fueled a global debate on the need for guardrails and prompted a flurry of AI-related policy news. In May, at the G7 summit in Japan, leaders launched the Hiroshima AI Process to develop guiding principles and a voluntary code of conduct for AI developers. On 30 October, the G7 leaders agreed on an 11-point code of conduct for companies developing advanced AI systems.
Last week, the UN established a High-Level Advisory Body on AI to offer perspectives on AI governance as the EU was zeroing in on its long-awaited AI Act. This week, UK Prime Minister Rishi Sunak is convening a global AI safety summit at historic Bletchley Park, with US Vice President Kamala Harris in attendance. Monday’s White House announcement builds on prior actions taken by the president, including securing voluntary commitments from 15 leading tech companies—including Google, IBM, OpenAI, Microsoft, Meta, Nvidia, and Amazon—to allow third-party testing of their AI systems before public release and to ensure AI-generated content is marked and identified.
An executive order is a significant step towards regulating AI in the US, but its enforceability without further legislative action remains unclear. To that end, the Biden-Harris Administration has called on Congress to follow suit, suggesting that more binding instruments will be required.