When generative AI lies, who is liable?
Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on the legal system’s looming challenge of determining whether a chatbot’s maker can be held liable for defamation. I’m also looking at the thorny issues corporations face when considering the use of AI in key business functions.
If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com.
ChatGPT-maker OpenAI sued for defamation in Georgia
In what appears to be the first defamation suit over an AI chatbot’s output, the nationally syndicated talk show host Mark Walters has sued OpenAI, claiming that the company’s ChatGPT tool generated false and harmful statements that he had embezzled money. He is seeking unspecified monetary damages from OpenAI.
According to the suit, filed in Georgia state court, Fred Riehl, editor of the gun publication AmmoLand, had asked ChatGPT for information on Walters’ role in another, unrelated lawsuit in Washington State. Per Walters’ filing, ChatGPT invented a fictional element of the Washington lawsuit, saying that Walters had embezzled money from a special interest group for which he’d served as a financial officer. By doing so, ChatGPT “published libelous matter regarding Walters,” his lawsuit states. “OAI knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication,” the suit reads.
There’s plenty of precedent for cases in which humans defame humans, but precious little for cases in which an AI causes the harm. “Defamation is kind of a new area,” says John Villafranco, a partner with the law firm Kelley Drye & Warren. “There are a lot of juicy issues to be worked out.” The Walters v. OpenAI suit may or may not prove to be a landmark test case for defamation-by-AI, but it will likely raise legal questions that recur in future cases involving generative AI tools.
Meta has open-sourced a new AI model that can reason and learn
Meta announced Tuesday that it is open-sourcing a new computer vision model designed to help machines better interpret the visual world. Unlike other computer vision models that break down images pixel by pixel, the Image Joint Embedding Predictive Architecture (I-JEPA) understands and compares images as abstract representations that convey an image’s meaning. (While chatbots process words, computer vision AI interprets or classifies images.) By processing and comparing millions of images in this way, it forms an internal model of how the world works, Meta says. This approach allows I-JEPA to learn much more quickly than other models while using less computing power, Meta says. The result is a model that can accomplish complex tasks and adapt more easily to unfamiliar situations. Meta says the model is already posting strong scores on a number of computer vision benchmarks.
The model was developed by Meta’s large AI research organization, led by AI pioneer and Turing Award winner Yann LeCun. Because Meta releases its foundation models as open source, they can be studied by other researchers and even used as the basis for other developers’ own models or apps.
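For readers who want a more concrete sense of what “abstract representations” means here, the sketch below illustrates the joint-embedding predictive idea in a few lines of Python: a predictor is trained to match the embedding of a hidden image region rather than its pixels. The tiny linear “encoders,” patch size, and dummy data are hypothetical simplifications for illustration only, not Meta’s actual I-JEPA architecture or code.

```python
# Illustrative sketch only: predict abstract representations, not pixels.
# The linear "encoders," patch size, and dimensions are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 256
patch_pixels = 3 * 16 * 16  # a 16x16 RGB patch, flattened

context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(patch_pixels, embed_dim))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(patch_pixels, embed_dim))
predictor = nn.Linear(embed_dim, embed_dim)

# A visible "context" patch and a hidden "target" patch from the same image (dummy data).
context_patch = torch.randn(1, 3, 16, 16)
target_patch = torch.randn(1, 3, 16, 16)

with torch.no_grad():  # the target encoder supplies the representation to be matched
    target_repr = target_encoder(target_patch)

predicted_repr = predictor(context_encoder(context_patch))

# The loss compares abstract representations of the image regions, never raw pixels.
loss = F.mse_loss(predicted_repr, target_repr)
loss.backward()
```

Because the comparison happens in representation space, the model doesn’t spend capacity predicting pixel-level detail that carries little meaning, which is broadly how Meta explains the approach’s efficiency.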
Accenture is doubling the size of its AI practice
The consulting firm Accenture says it will invest $3 billion in AI over the next three years and increase the size of its AI practice to 80,000 people. Accenture and other large consulting firms are seeing rapidly rising demand from Fortune 500 companies across a range of industries that need help understanding and implementing AI tools. These companies are trying to figure out how to ride the wave of new AI-model development. (Their curiosity is driven, in part, by fear of lagging behind competitors that might adopt AI products more quickly.) And they need the help of firms like Accenture and McKinsey to assess the risks of AI. For instance, many corporations fear that sending their proprietary data out to AI models hosted by third parties like OpenAI could create security or privacy risks, especially in heavily regulated industries such as healthcare. Firms like Accenture can help corporations weigh the pros and cons of augmenting key business functions with AI and manage the risks when they do.
Salesforce works to soothe customer worries over data security
Addressing security and privacy concerns has emerged as a major part of Salesforce’s AI pitch to large-enterprise customers. During a product event Monday, the company shared more details about how it plans to deliver the benefits of AI models to customers while mitigating the risks. CEO Marc Benioff says corporations fear losing control of their proprietary data when they send it to AI models hosted by third parties, a concern he says is creating an “AI trust gap.”
To address this, Salesforce said its customers’ data will run through a “trust layer” in its new “AI Cloud” before it reaches any third-party AI model (a corporation might want to run its product data through an Anthropic or Cohere LLM to power an AI customer-service assistant, for example). In that layer, Salesforce secures the data, anonymizes sensitive or competitive information, and masks personal data for privacy. Salesforce executives stress that the company doesn’t retain any of the data and will even flag responses coming back from the model for toxicity or harmfulness.
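Salesforce hasn’t published the internals of that layer, but the general pattern it describes is straightforward: detect and mask sensitive values before a prompt leaves your boundary, then restore them locally in the response. The sketch below is a hypothetical illustration of that pattern; the regexes, placeholder tokens, and call_third_party_llm stub are invented for the example and are not Salesforce’s API.

```python
# Hypothetical illustration of a masking "trust layer"; not Salesforce's implementation.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with placeholders; return the masked text and a map
    so the original values can be restored in the model's response."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

def call_third_party_llm(prompt: str) -> str:
    # Stand-in for a request to an externally hosted model.
    return f"Echoing masked prompt: {prompt}"

prompt = "Draft a reply to jane.doe@example.com, phone 555-123-4567."
masked, mapping = mask_pii(prompt)
response = call_third_party_llm(masked)   # only masked data leaves the boundary
print(unmask(response, mapping))          # restore values locally; retain nothing
```

In a production system the detection step would typically rely on purpose-built PII and sensitive-data classifiers rather than a couple of regexes, but the flow (mask, send, unmask, retain nothing) is the same idea.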
Amazon and Oracle are taking a similarly model-agnostic, security-centric approach. Amazon announced in April that customers can securely access a number of popular generative AI models through the AWS cloud. Oracle said this week that it will develop new generative AI services in partnership with Cohere, which its SaaS customers can access securely through the Oracle cloud.