Why Meta’s Yann LeCun isn’t buying the AI doomer narrative

By Issie Lapowsky

In late January, just two months after OpenAI unleashed ChatGPT on the world, Meta’s vice president and chief AI scientist Yann LeCun dashed out a tweet that was at once a pointed dig at a rising competitor and a gentle nudge to his own company’s higher-ups.

“By releasing public demos that, as impressive and useful as they may be, have major flaws, established companies have less to gain and more to lose than cash-hungry startups,” LeCun wrote. “If Google and Meta haven’t released chatGPT-like things, it’s not because they can’t. It’s because they won’t.”

At the time, at least, that was true. For years, corporate giants like Google and Meta had all of the technical prowess of OpenAI and a sliver of its risk appetite. But less than a year later, ChatGPT has changed all that, kicking off a race among once-cautious companies to turn the science they’d been working on behind closed doors into public-facing products. 

Now, Meta has answered LeCun’s subtle challenge by taking an arguably greater risk than anyone with the debut of Llama 2, its latest large language model. Unlike GPT-4, which is available from OpenAI for a fee, Llama 2 is freely available for commercial and research use, throwing the gates wide open to almost anyone who wants to experiment with it. (Though, as purists note, while Meta describes Llama 2 as “open source,” access to its training data is still closed off.)

To LeCun, Meta’s about-face was a welcome change: Expanding access to this technology and letting other people build stuff on top of it is, he argues, the only way to ensure that it’s not ultimately controlled by a small group of Silicon Valley engineers. “Imagine the future when everyone uses some sort of chatbot as their main interface to the digital realm. You don’t go to Google. You don’t go to Facebook. . . . You just talk to your virtual assistant,” LeCun says. “You want that AI system to be open source and to be transparent, because there’s going to be a lot riding on it.”

LeCun, of course, is one of the most prominent members of that small group of Silicon Valley engineers. Often referred to as one of the “godfathers of AI,” he pioneered, in the 1990s and early 2000s, the subset of machine learning known as deep learning, upon which large language models like GPT-3 and GPT-4 are built. In 2013, LeCun created the Facebook AI Research lab, which he said at the time would “bring about major advances in Artificial Intelligence.”

But despite the company’s investment in research, LeCun saw firsthand how often it resisted releasing this technology into the wild for fear of legal risk and public backlash. In fact, less than two weeks before ChatGPT’s debut, Meta released a demo of its own large language model, Galactica, only to pull the plug three days later, after it was widely panned for spewing nonsense—you know, kind of like ChatGPT.

With Llama 2, Meta is making no such apologies. LeCun acknowledges that giving this technology away for free comes with a risk of abuse. Facebook itself started as a social network for college kids and wound up being used to subvert elections and fuel terrorist propaganda. There will undoubtedly be unintended consequences of generative AI, too. But LeCun believes that giving more people access to the technology will also help rapidly improve it—something he says we should all want.

He likens it to a car: “You can have a car that rides three miles an hour and crashes often, which is what we currently have,” he says, describing the latest generation of large language models. “Or you can have a car that goes faster, so it’s scarier . . . but it’s got brakes and seatbelts and airbags and an emergency braking system that detects obstacles, so in many ways, it’s safer.”

Of course, there are those who believe the opposite will prove true—that as these systems improve, they’ll instead try to drive all of humanity off the proverbial cliff. Earlier this year, a slew of top AI minds issued a one-sentence warning about the need to mitigate “the risk of extinction from AI,” comparing the technology to pandemics and nuclear war. Among the signatories were Geoffrey Hinton and Yoshua Bengio, LeCun’s fellow “AI godfathers,” who shared the 2018 Turing Award with him for their advances in deep learning.

LeCun, for one, isn’t buying the doomer narrative. Large language models are prone to hallucinations and have no concept of how the world works, no capacity to plan, and no ability to complete basic tasks that a 10-year-old could learn in a matter of minutes. They have come nowhere close to achieving human or even animal intelligence, he argues, and there’s little evidence at this point that they will.

Yes, there are risks to releasing this technology, risks that giant corporations like Meta have quickly become more comfortable taking. But the risk that it will destroy humanity? “Preposterous,” LeCun says.

This story is part of AI 20, our monthlong series of profiles spotlighting the most influential people building, designing, regulating, and litigating AI today.
