These 7 experts say our fears about AI are overblown

By Chris Stokel-Walker

Watch enough TV or read enough news online and you’d be forgiven for thinking we’re on the verge of societal collapse. Artificial intelligence, the doom-mongers say, is going to upend society, perhaps for the worse. To hear those who cast AI as an existential threat tell it, we are not just sleepwalking toward our AI-fueled demise but actively running into it.

But not everyone is convinced of AI’s world-ending potential. A cadre of experts say eye-catching fears about the tech are overblown. 

Below, seven rationalists (or so we hope) whose counterarguments can allay at least some of those concerns.

Yann LeCun: AI is dumber than a dog

As Meta’s chief AI scientist, Yann LeCun should, in theory, be among the loudest champions of generative AI. He’s a long-tenured expert in the field and one of the driving forces behind the adoption of the neural networks that underpin many of the AI tools we’re so wowed by today.

And yet, LeCun said at a tech conference in June that “we are not going to have human-level intelligence, we are not going to have dog-level or cat-level [intelligence]” until AI can understand that a floating object is unusual.

LeCun also said: “A fear that has been popularized by science fictions [is] that if robots are smarter than we are, they are going to want to take over the world . . . there is no correlation between being smart and wanting to take over.”

Nick Clegg: Hype has run ahead of technology

LeCun’s colleague Nick Clegg, Meta’s president of global affairs, is similarly unimpressed by today’s AI. The “hype has somewhat run ahead of the technology,” he told the BBC. In many ways, AI models were “quite stupid,” he added.

Emily Bender: If you think AI is the equal of humans, you think little of humans

Emily Bender, a computational linguist at the University of Washington, is one of the most prominent realists about AI’s potential, and a coauthor of the landmark 2021 paper arguing that large language models are little more than “stochastic parrots” that repeat patterns in their training data.

In a March interview with New York magazine, she said those who compare AI to humanity do a disservice to humans. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do,” Bender said.

Michio Kaku: AI is a glorified tape recorder

Michio Kaku, a theoretical physicist at City College of New York, told CNN in August that generative AI is not the amazing technology people seem to think it is. “It takes snippets of what’s on the web created by a human, splices them together, and passes it off as if it created these things,” Kaku said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

Far from it, Kaku said: the technology is nothing more than a “glorified tape recorder.”

Melanie Mitchell: Not humanlike understanding

Melanie Mitchell, a professor at the Santa Fe Institute in New Mexico, is another voice cautioning against overstating AI’s capabilities. In an interview with New Scientist, she pointed out that large language models’ abilities lie in predicting the next word in a sentence, and little more.

“Simply scaling up these models is probably not going to take us to the kind of humanlike understanding that we want,” she said.

Noam Chomsky: AI shows “prehuman” intelligence

Famed linguist and philosopher Noam Chomsky cowrote a March essay for the New York Times arguing that ChatGPT represents a false promise. “OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Sydney are marvels of machine learning,” he wrote. But that’s about as far as he was willing to go in praise of the technology.

“Such programs are stuck in a prehuman or nonhuman phase of cognitive evolution,” he wrote. “True intelligence is demonstrated in the ability to think and express improbable but insightful things.”

Evgeny Morozov: Neither artificial nor intelligent

A leading author known for translating complicated concepts for a general audience, Evgeny Morozov argued in March that AI “is neither artificial nor intelligent.” “Machines cannot have a sense (rather than mere knowledge) of the past, the present and the future; of history, injury or nostalgia,” he wrote. “Without that, there’s no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in the singular formal logic. So there goes the ‘intelligence’ part.”

Fast Company
