OpenAI’s policy no longer explicitly bans the use of its technology for ‘military and warfare’

The company told Engadget it’s already working with DARPA on cybersecurity.


Just a few days ago, OpenAI’s usage policies page explicitly stated that the company prohibits the use of its technology for “military and warfare” purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 “to be clearer and provide more service-specific guidance,” as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to “develop or use weapons.” However, the company has removed language pertaining to “military and warfare.”

While we’ve yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. “Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication.

The explicit mention of “military and warfare” in the list of prohibited uses indicated that OpenAI couldn’t work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn’t have a product that could directly kill or cause physical harm to anybody. But as The Intercept noted, its technology could be used for tasks like writing code and processing procurement orders for things that could, in turn, be used to kill people.

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company “aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs.” Felix explained that “a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts,” adding that OpenAI “specifically cited weapons and injury to others as clear examples.” However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to “harm” others included all types of military use outside of weapons development.

In a statement to Engadget, an OpenAI spokesperson admitted that the company is already working with the US Department of Defense. “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” the spokesperson said. “There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”

Update, January 14, 2024, 10:22AM ET: This story has been updated to include a statement from OpenAI.