OpenAI creates new board to oversee safety standards

OpenAI establishes oversight board to ensure adherence to safety standards in AI development.

OpenAI has formed a new board committee to evaluate the safety and security of its artificial intelligence models. The move comes after the resignation of one of its key safety-focused executives, Jan Leike, who left OpenAI expressing concerns about the company’s “core priorities.”

OpenAI’s stated core priorities include building safe AI for societal good, advancing the research frontier, transparency in development, collaboration, ethical deployment, and aligning AI with human values for long-term societal benefit.

Additionally, Helen Toner and Tasha McCauley, former board members of the AI firm, have weighed in on OpenAI’s governance. The two believe that developments since Altman’s reinstatement to the board, along with the departure of senior safety-focused talent, bode ill for OpenAI’s experiment in self-governance.

OpenAI announced that the new committee will spend 90 days evaluating the safety measures in its technology, after which it will issue a report detailing its findings.

“Following the full board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security,” the company stated in a blog post on Tuesday.

The committee will comprise three board members: Chairman Bret Taylor, Quora CEO Adam D’Angelo, and former Sony Entertainment executive Nicole Seligman, along with six employees, including head of alignment science John Schulman and Sam Altman.

OpenAI added that it will continue to consult other experts, such as Rob Joyce, a Homeland Security adviser under Donald Trump, and John Carlin, a former Justice Department official under President Joe Biden.


OpenAI has faced several significant challenges since launching its globally acclaimed product, ChatGPT. On November 17, 2023, the company’s board of directors removed co-founder and chief executive Sam Altman, stating that it no longer had confidence in his leadership. Days later, he was reinstated at the firm he co-founded with Elon Musk.

OpenAI is also facing a growing number of legal battles on several fronts. News outlets like The New York Times, The Intercept, Raw Story, and AlterNet have sued OpenAI, accusing it of misusing their copyrighted articles to train ChatGPT. The lawsuits allege that ChatGPT reproduces the articles verbatim or nearly verbatim.

Authors such as George R.R. Martin and Jodi Picoult have also accused OpenAI of misusing their work to train its AI models. The lawsuits seek monetary damages and court orders to stop OpenAI’s unauthorized use of copyrighted content.

One of the latest involves Hollywood actress Scarlett Johansson. Although she declined for "personal reasons" when Sam Altman approached her about using her voice for ChatGPT, OpenAI released a new voice assistant with a voice strikingly similar to hers. The voice was later withdrawn, but the episode speaks to how the company can infringe on intellectual property.

Furthermore, the SEC has also started an investigation of OpenAI following the drama that saw CEO Sam Altman ousted and rehired by the board. The FTC has opened an inquiry into whether OpenAI’s business arrangements with Microsoft violate antitrust law.
