What Biden's executive order on AI does and means
President Biden on Monday signed an executive order establishing new standards for the safety and privacy of artificial intelligence, a move the White House insists will safeguard Americans' information, promote innovation and competition, and advance U.S. leadership in the industry.
With laws lagging far behind technological advances, the administration is touting the new EO as building on prior voluntary commitments from some of the leading tech companies on the safe and secure development of AI. In remarks Monday, the president called his executive order "the most significant action any government anywhere in the world has ever taken on AI safety, security and trust."
"We're going to see more technological change in the next 10, maybe the next five years, than we've seen in the last 50 years," Mr. Biden said. "And that's a fact. And the most consequential technology of our time, artificial intelligence, is accelerating that change. It's going to accelerate it at warp speed. AI is all around us."
AI provides incredible opportunities, but comes with significant risks, the president said.
"One thing is clear — to realize the promise of AI and avoid the risk, we need to govern this technology," he said. "There's no other way around it, in my view. It must be governed."
The president specifically mentioned "deepfakes," fake videos that mimic a person's voice and appear to show that person saying or doing something he or she never did.
"I've watched one of me. I said, 'When the hell did I say that?'" the president said, to laughs.
A senior administration official told reporters the EO "has the force of law" and they'll be using executive powers "pretty fulsomely," but the president will still pursue various bipartisan legislation with Congress.
What the executive order does
The executive order puts in place additional standards and requirements.
- The order requires that developers of AI systems share their safety test results with the federal government. The White House says this requirement is made under the Defense Production Act: companies developing a model that could pose a risk to national security, national public health or national economic security must notify the federal government and share the results of their safety tests.
- The administration will also develop standards for biological synthesis screening, aimed at protecting against the risk of AI being used to create dangerous biological materials. These standards will be a condition of federal funding.
- The National Institute of Standards and Technology will set standards for safety before public release, and the Department of Homeland Security will apply those standards to critical infrastructure sectors and establish an AI Safety and Security Board. The Department of Energy will work with DHS to address threats to infrastructure as well as chemical, biological and other types of risks.
- The order also aims to strengthen privacy by evaluating how agencies collect and use commercially available information and by developing guidelines for federal agencies to assess how effective privacy-preserving techniques are. The administration also wants to strengthen privacy-preserving technologies and research, such as cryptographic tools.
- The president's order also addresses what it calls algorithmic discrimination, directing the Department of Justice and federal civil rights offices to coordinate on how best to investigate and prosecute civil rights violations related to AI. The administration also intends to develop best practices for the use of AI in sentencing, pretrial release and detention, risk assessments, surveillance and crime forecasting, among other parts of the criminal justice system.
- The executive order also calls for developing best practices to minimize the harms and harness the benefits of AI when it comes to job displacement and labor standards.
- The administration also wants to make it easier for highly skilled immigrants and nonimmigrants with expertise in key areas to stay, study and work in the U.S. by making the visa interview and review process more efficient.
The White House says the administration has consulted on AI governance frameworks with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
Why now?
Mr. Biden has urged Congress to craft legislation on AI, but the technology is accelerating quickly and Congress has a lot on its plate, the senior administration official who spoke to reporters said. The administration expects Congress to continue working on AI legislation.
— Kristin Brown contributed to this report