President Biden’s executive order introduces six new standards for the safety and security of AI technology and sets out clear objectives for promoting ethical AI deployment across the federal government.
On October 30, 2023, the Biden administration issued an executive order outlining its vision for AI safety and security.
The directive builds explicitly on prior initiatives, including the voluntary AI safety commitments made by 15 prominent companies in the industry.
The guidelines encompass six key standards, along with strategies for fostering ethical AI adoption in government operations, safeguarding citizens’ privacy, and protecting consumers.
The first standard requires developers of advanced AI systems to share their safety test results and other critical information about their systems with the government.
Secondly, the National Institute of Standards and Technology will create tools and tests that can be used to assess the safety, security, and trustworthiness of AI systems.
Additionally, the administration aims to reduce the risks of AI being used to engineer “hazardous biological materials” by establishing new screening standards for biological synthesis.
Another facet of the standards involves making efforts to shield against the potential for AI-enabled fraudulent activities and misleading practices.
The document commits to developing standards and best practices for detecting AI-generated content and authenticating official content.
The administration also plans to build on its existing AI Cyber Challenge, a cybersecurity initiative dedicated to creating AI tools that detect and fix vulnerabilities in critical software.
Lastly, it has mandated the creation of a national security memorandum that will provide additional guidance on actions related to AI security.
The order also addressed the privacy risks of AI:
“Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”
In response, the president has formally called on Congress to pass bipartisan data privacy legislation and has urged the allocation of federal resources to advance research into privacy-preserving techniques and technologies.
U.S. officials are also committed to advancing equity and civil rights in AI. They aim to use the technology responsibly to benefit consumers and to closely monitor its effects on employment and other aspects of society.
Finally, the executive order outlines the administration’s intention to engage in international AI regulatory efforts.
The United States recently joined the other G7 nations in establishing a set of non-binding guiding principles for AI developers.
Domestically, the government has pledged to introduce clear criteria for protecting rights and safety, to improve AI procurement processes, and to strengthen AI deployment. It will also provide AI training for all eligible employees in relevant fields.
In July, U.S. senators held a closed-door session at the White House to discuss potential regulations for the technology. The Senate has since organized a series of “AI Insight Forums” to gather input from leading experts in the AI industry.