OpenAI employees warn of AI’s potential existential threats to humanity in letter
A coalition of current and former employees at OpenAI, the company behind ChatGPT, has issued a warning about the existential threats posed by advanced artificial intelligence, including the potential for human extinction.
In a detailed letter released on June 4, 2024, the group, consisting of 13 current and former employees from firms such as OpenAI, Anthropic, and Google’s DeepMind, outlined a series of threats associated with AI while acknowledging its potential benefits.
The letter states, “We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.” However, it also highlights concerns: “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”
Neel Nanda, the Mechanistic Interpretability lead at DeepMind and formerly of Anthropic, was among the signatories. “This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers,” he wrote on X. “But I believe AGI will be incredibly consequential and, as all labs acknowledge, could pose an existential threat. Any lab seeking to make AGI must prove itself worthy of public trust, and employees having a robust and protected right to whistleblow is a key first step.”
Lack of accountability and regulation of AI
The signatories state that while AI companies and governments worldwide recognize these dangers, current corporate and regulatory measures are insufficient to prevent them. “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” they argue.
The letter also criticizes the lack of transparency at AI companies, claiming they hold “substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.” It notes that these companies are under little obligation to disclose such critical information: “They currently have only weak obligations to share some of this information with governments, and none with civil society.”
The workers expressed a pressing need for greater government supervision and public accountability. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the group stated. They also highlighted the limitations of existing whistleblower protections, which do not fully cover the unregulated risks posed by AI technologies.
OpenAI in hot water
The open letter comes amid a turbulent period for leading AI companies, particularly OpenAI, which has been rolling out AI assistants capable of engaging in live voice conversations with humans and responding to visual inputs such as video feeds or written math problems.
Scarlett Johansson, who voiced an AI assistant in the film “Her,” has accused OpenAI of modeling one of its products after her voice despite her express refusal of such a proposal. Although OpenAI CEO Sam Altman tweeted the word “her” during the launch of the voice assistant, the company has since denied using Johansson’s voice as a model.
In May, less than a year after its creation, OpenAI also dissolved a specialized team that had been set up to investigate the long-term risks posed by AI. Last July, the company’s head of trust and safety, Dave Willner, resigned.
Featured image: Canva