OpenAI is launching an ‘independent’ safety board that can stop its model releases

The company’s Safety and Security Committee will become an ‘independent Board oversight committee.’

OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” that has the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee recommended creating the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”

The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says. OpenAI’s full board of directors will also receive “periodic briefings” on “safety and security matters.”

The members of OpenAI’s safety committee are also members of the company’s broader board of directors, so it’s unclear how independent the committee actually is or how that independence is structured. We’ve asked OpenAI for comment.

By establishing an independent safety board, OpenAI appears to be taking a somewhat similar approach to Meta’s Oversight Board, which reviews some of Meta’s content policy decisions and can make rulings that Meta has to follow. None of the Oversight Board’s members are on Meta’s board of directors.

The review by OpenAI’s Safety and Security Committee also helped identify “additional opportunities for industry collaboration and information sharing to advance the security of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and for “more opportunities for independent testing of our systems.”