OpenAI unveils Preparedness Team to address Frontier Risk

OpenAI is taking bold steps to ensure the safety of highly capable AI systems. In a recent blog post, the organization outlined its commitment to addressing catastrophic risks by introducing a new initiative, the Preparedness team, along with an accompanying challenge.

OpenAI is a private research laboratory that aims to develop and direct artificial intelligence (AI) in ways that benefit humanity as a whole. 

The company was founded in 2015 by Elon Musk, Sam Altman, and others, and is headquartered in San Francisco.

OpenAI was created in part because of its founders’ existential concerns about the potential for catastrophe resulting from carelessness and misuse of general-purpose AI. 

The company has a long-term focus on fundamental advances in AI and its capabilities, to ensure that artificial general intelligence (AGI) benefits all of humanity. 

OpenAI faces steep challenges, notably fixing its products’ issues with accuracy, bias, and harm. The company has deployed the Moderation API, an AI-based moderation system trained to help developers determine whether language violates OpenAI’s content policy.
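In practice, a developer sends text to the Moderation endpoint and checks whether the response flags it. The sketch below shows one plausible way to structure such a check; the endpoint URL and `{"input": ...}` payload follow OpenAI’s public API documentation, while the helper names and the flag-interpretation logic are illustrative assumptions rather than part of any official SDK.

```python
import json

# Public Moderation endpoint per OpenAI's API docs.
MODERATION_URL = "https://api.openai.com/v1/moderations"


def build_moderation_request(text: str, api_key: str) -> tuple[str, dict, bytes]:
    """Build (url, headers, body) for a Moderation API call.

    Hypothetical helper: separating request construction from the network
    call keeps the payload shape easy to test without an API key.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"input": text}).encode("utf-8")
    return MODERATION_URL, headers, body


def is_flagged(response_json: dict) -> bool:
    """Return True if any moderation result in the response is flagged."""
    return any(r.get("flagged", False) for r in response_json.get("results", []))
```

The actual HTTP call (e.g. via `urllib.request` or the `openai` client library) would POST the body to the URL with those headers and pass the parsed JSON to `is_flagged`.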

Why the Preparedness team?

In response to the need for heightened preparedness, OpenAI is forming a new team called Preparedness. 

The Preparedness team’s mission is to monitor, evaluate, forecast, and safeguard against catastrophic risks in various categories, including individualized persuasion; cybersecurity; chemical, biological, radiological, and nuclear (CBRN) threats; and autonomous replication and adaptation (ARA).

OpenAI said that the team will work on developing a Risk-Informed Development Policy (RDP). This policy will outline OpenAI’s strategy for rigorous evaluation of frontier model capabilities, monitoring, protective measures, and the establishment of a governance structure to ensure accountability and oversight throughout the development process. 

The RDP is designed to complement OpenAI’s existing risk mitigation efforts, contributing to the safety and alignment of new, highly capable AI systems. 

The team, led by Aleksander Madry, will play a pivotal role in tightly integrating capability assessment, evaluations, and internal red teaming for frontier models. Its focus will extend from models developed in the near future to those with AGI-level capabilities.

Read also: OpenAI in talks for historic share sale, eyes $90 billion valuation
