A group of U.S. senators demand OpenAI turn over safety data

July 23, 2024

Five senators sent a letter to OpenAI CEO Sam Altman. Here’s what they asked for.

BY Chris Morris

A group of five U.S. senators is demanding that OpenAI submit data showing how it plans to meet the safety and security commitments it has made about its artificial intelligence systems, after a growing number of employees and researchers raised red flags about the technology and the company's safety protocols.

In a letter to OpenAI CEO Sam Altman, the lawmakers—four Democrats and one Independent—asked a series of questions about how the company is working to ensure its AI cannot be misused to provide potentially harmful information to members of the public, such as instructions for building weapons or assistance in coding malware. In addition, the group sought assurances that employees who raise potential safety issues would not be silenced or punished.

The warnings voiced by former employees have led to a flurry of media reports, and the senators expressed concern about how the company is addressing safety issues.

“We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats,” the five senators wrote in a letter obtained by The Washington Post.

Earlier this month, a whistleblower filed a complaint with the Securities and Exchange Commission (SEC), accusing OpenAI of preventing employees from warning officials about the risks of the company's products. The alleged restrictions included nondisclosure (and non-disparagement) agreements and a requirement that employees obtain prior consent from the company before discussing confidential information with federal regulators.

“Artificial intelligence is a transformative new technology, and we appreciate the importance it holds for U.S. competitiveness and national security,” a spokesperson for OpenAI told Fast Company in a statement. “We take our role in developing safe and secure AI very seriously and continue to work alongside policymakers to establish the appropriate safeguards going forward.”

The whistleblower complaint followed several public warnings from workers who opted to leave OpenAI, the maker of ChatGPT. Jan Leike, a co-leader of the company's superalignment group (dedicated to ensuring that AI stays aligned with the goals of its makers) who left the company in May, was harshly critical of Altman and the company, saying executives made it harder for him and his team to ensure the company's AI systems aligned with human interests.


ABOUT THE AUTHOR

Chris Morris is a contributing writer at Fast Company, covering business, technology, and entertainment, helping readers make sense of complex moves in the world of tech and finance and offering behind-the-scenes looks at everything from theme parks to the video game industry. Chris is a veteran journalist with more than 35 years of experience, more than half of which were spent with some of the Internet's biggest sites, including CNNMoney.com, where he was director of content development, and Yahoo! Finance, where he was managing editor.


Fast Company
