Some people say ChatGPT made up damaging stories about them. Now the FTC is investigating

By Chris Morris

OpenAI has drawn the attention of the Federal Trade Commission (FTC) after its chatbot, ChatGPT, generated false information about a number of people, potentially damaging their reputations.

In a civil investigative demand (essentially, the civil version of a subpoena) published by the Washington Post, the FTC says it is looking into whether OpenAI has “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm.”

It’s the latest push by FTC Chair Lina Khan to police the fast-growing technology, but it comes on the heels of a failed attempt to halt the merger of Microsoft and Activision Blizzard. (The FTC is appealing that court ruling.)

At the heart of the ChatGPT inquiry is whether the generative AI chatbot has been harmful to consumers. The FTC also asked several questions in its letter about OpenAI’s security practices and how it trains ChatGPT, specifically where and how it obtains data to do so.

In response, OpenAI CEO Sam Altman tweeted: “It is very disappointing to see the FTC’s request start with a leak and does not help build trust. That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.” He added: “We’re transparent about the limitations of our technology, especially when we fall short.”

The increased federal scrutiny comes roughly two months after Altman testified before a Senate subcommittee, where he called on the government to regulate the AI industry, including his company.

The following month, Altman and some of the AI field’s biggest names, including Microsoft’s chief scientific and technology officers, signed a statement warning that the technology could be an existential risk. 

The FTC inquiry is hardly the first legal headache for the ChatGPT creator, however. Radio talk show host Mark Walters has filed a defamation suit against OpenAI, alleging that ChatGPT reported he was accused of defrauding and embezzling funds from the Second Amendment Foundation. Walters says he is neither a plaintiff nor a defendant in the suit the chatbot cited.

And while there’s no known legal action pending, ChatGPT is also accused of saying a law professor had sexually harassed one of his students during a class trip to Alaska, citing a 2018 Washington Post article as its source. The article, though, never existed. The trip never happened. And the professor was never accused of any such misconduct.

It’s incidents like these that have led Princeton University computer science professor Arvind Narayanan to refer to ChatGPT as “the greatest bullshitter ever.”

The demand for information also follows a March data leak that potentially allowed some users to see “titles from another active user’s chat history” along with “unintentional visibility of payment-related information” for premium customers. (The bug causing that leak was patched the same day it was discovered, the company said.)

The FTC has also asked for details about how many people were affected by that incident, and it has instructed OpenAI to suspend any routine document-destruction procedures during the investigation.

OpenAI’s interactions with the U.S. government aren’t likely to end with this inquiry. Congress is looking at how best to regulate companies making AI products, though any legislation is likely months away.

The company also has its eye on Europe, where EU lawmakers have already advanced a bill that restricts the use of AI in an effort to protect consumers.

Update, July 14, 2023: This story has been updated with a tweet response from OpenAI CEO Sam Altman.

Fast Company