The White House has convened a group of companies that have volunteered commitments around the development of AI. To advance the responsible development of artificial intelligence (AI), the Biden-Harris Administration secured voluntary commitments from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
Around the world, concerns about AI are growing, with some experts warning about the dangers that uncontrolled AI development could pose to humanity. Some of these fears center on job losses, as AI is already changing how people work worldwide.
In response to these fears and other concerns, AI professionals and governments alike are calling for a responsible development path that benefits humanity. To that end, the US has taken steps to collaborate with leading companies to properly harness the technology's potential.
Collaboration with AI experts
This latest round of commitments builds on the groundwork laid in July, when the Biden-Harris Administration secured pledges from seven leading AI companies, including Google, Microsoft, and OpenAI. The collective aim is to harness the benefits of AI while mitigating its potential risks.
“These commitments represent an important bridge to government action, and are just one part of the Biden-Harris Administration’s comprehensive approach to seizing the promise and managing the risks of AI,” the White House said.
“The Administration is developing an Executive Order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development.”
Guiding principles and pledge
The White House noted that the principles underlying these commitments emphasize three key pillars for the future of AI: safety, security, and trust. To that end, the eight companies involved have made specific pledges.
Firstly, they want to ensure that products are safe before introducing them to the public. The companies commit to rigorous internal and external security testing of their AI systems before release, with input from independent experts. This safeguards against significant AI risks such as biosecurity, cybersecurity, and societal impacts.
Secondly, they pledge to build systems that put security first. Investment in cybersecurity and insider threat safeguards to protect proprietary and unreleased models will be a priority, as AI models will only be released when intended and deemed secure.
Thirdly, to earn the public's trust, the companies pledge to develop technical mechanisms, such as watermarking, that enable users to identify AI-generated content. They also commit to transparent reporting on their AI systems' capabilities, limitations, and appropriate and inappropriate uses, covering both security and societal risks.
The companies also committed to prioritizing research on the societal risks posed by AI systems, including harmful bias and discrimination, and to protecting privacy. In addition, they pledge to develop and deploy advanced AI systems to help address pressing societal challenges.
AI regulation in Europe
Meanwhile, in Europe, Spain reached a milestone by launching the continent's first AI regulatory agency, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA).
AESIA is an autonomous agency under Spain's Ministry of Economy, responsible for oversight, advice, public awareness, and training regarding the proper use of AI.
AESIA was created to oversee the development and monitoring of AI projects within the framework of Spain's National AI Strategy, as well as projects promoted by the European Union.