In a bid to make AI safer and more efficient, the White House, OpenAI, and other AI firms have come together to shape the future of AI technology. The organizations involved have voluntarily pledged to reinforce the safety, security, and trustworthiness of AI technology and services.
The initiative is coordinated by the White House and aims to advance meaningful and effective AI governance on a global scale.
As part of its mission to build safe and beneficial AGI (Artificial General Intelligence), OpenAI commits to piloting and refining governance practices tailored to highly capable foundation models like the ones it produces. It will also invest in research to inform regulation, particularly in areas such as assessing potentially dangerous capabilities in AI models.
In its update, OpenAI outlined the set of commitments coordinated by the White House for the development of AI. They include:
- Safety: Committing to internal and external red-teaming of AI models or systems in areas such as misuse, societal risks, and national security concerns. Companies will develop specialized red-teaming regimes and publicly disclose safety procedures in their transparency reports.
- Information Sharing: Working towards information sharing among companies and governments regarding trust and safety risks, dangerous capabilities, and attempts to circumvent safeguards. They aim to establish forums or mechanisms to develop shared standards and best practices for AI safety.
- Security: Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Companies will incentivize third-party discovery and reporting of issues and vulnerabilities.
- Trust: Developing mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated content.
The companies also commit to publicly reporting their models' or systems' capabilities, limitations, and domains of appropriate and inappropriate use, as well as prioritizing research on the societal risks posed by AI systems and deploying frontier AI systems to help address society's greatest challenges.
Overall, the commitments aim to promote responsible AI development, transparency, and collaboration in the AI community and with governments, civil society organizations, and academia.
By voluntarily adopting these practices, companies aim to contribute to the ongoing discussion on AI governance and create a more trustworthy and secure AI ecosystem for the benefit of all.
Speaking about the commitments, Anna Makanju, OpenAI's VP of Global Affairs, said that policymakers around the world are contemplating new legislation for highly capable AI systems, and that the commitments the organizations have made offer concrete, specific practices to inform that ongoing dialogue.
“This announcement is part of our ongoing collaboration with governments, civil society organizations, and others around the world to advance AI governance.”