OpenAI forms “Preparedness Team” with board’s final AI safety authority

OpenAI aims to mitigate the risks associated with developing advanced AI systems through its new “Preparedness Framework.”

To get ahead of potential risks, OpenAI has announced a “Preparedness Framework,” which establishes a dedicated team to evaluate and forecast the risks posed by advanced AI models.

OpenAI’s Dec. 18 blog post outlines that the company’s “Preparedness team” will act as a link between the safety and policy teams, fostering closer collaboration.

The blog post emphasizes that the “checks and balances” system provided by the various teams will help mitigate the “catastrophic risks” associated with the development of advanced AI models.

OpenAI underscores in the blog post that it will deploy its AI technology only if it is deemed safe.

The new “Preparedness team” will be responsible for reviewing safety reports, which will then be sent to senior executives and the OpenAI board.

The blog post clarifies that while the company’s executives hold ultimate responsibility for final decisions, the board now has the authority to challenge or overturn any decisions regarding the safety of AI models.

 

The recent changes at the company were likely influenced by the leadership upheaval in November, when Sam Altman was unexpectedly fired and then reinstated as CEO.

Following Altman’s reinstatement, OpenAI announced new board members, including Bret Taylor as the chair, as well as Adam D’Angelo and Larry Summers.

The public release of ChatGPT in November 2022 sparked widespread interest in AI while also highlighting its potential dangers and ethical challenges.

To promote responsible AI development, OpenAI, along with companies like Microsoft, Google, and Anthropic, founded the Frontier Model Forum in July, aiming to monitor the self-regulation of AI development.

In October, President Biden signed an executive order establishing new standards for AI safety and accountability, outlining requirements for developers, including transparency, security, and risk mitigation.

Before signing the executive order, President Biden convened a meeting with leaders from several AI companies, including OpenAI, to discuss their commitments to developing safe and transparent AI.

 
