Anthropic CEO Dario Amodei pens a smart look at our AI future
Neither a doomer nor a profiteer, Amodei talks in reasoned scenarios, not abstractions.
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
Anthropic’s Dario Amodei pens a cool-headed essay on the arrival of AGI
In the debate over artificial general intelligence, it’s often the “doomers” (Eliezer Yudkowsky) or showmen (OpenAI CEO Sam Altman, xAI founder Elon Musk) making the most noise. But many of these viewpoints—whether optimistic or pessimistic—are ultimately vague and abstract. That’s why it’s worth listening to people like Dario Amodei.
Amodei and his company, Anthropic, have spent lots of time and money erecting safeguards against the potential harms of AI. In his new essay, “Machines of Loving Grace,” Amodei explores the most likely ways that superintelligence—that is, AI that exceeds human intelligence—might bring about measurable positive change. In the essay, he describes what superintelligence, or “strong AI” as he calls it, will look like, and how it might begin to enable progress in such fields as biology and neuroscience that will “directly improve the quality of human life.”
Strong AI could show up as early as 2026, Amodei believes. This model could look similar to today’s large language models, he posits, or it might consist of a system of interacting models trained differently than the LLMs we know today. The system will be smarter than Nobel Prize winners across various fields, he says, and will have access to all the “interfaces” available to a human working in a digital domain (text, audio, video, internet, etc.). Strong AI will be able to control robots and other equipment, he says, and work through large, complex problems autonomously. The model also will be able to share its training data with other models, creating potentially thousands of superintelligent AIs within a data center.
The essay gets even more interesting when Amodei’s focus leaves the data center. He notes that, in biology, life-saving advances are hindered by a lack of reliable data about complex systems. Amodei considers, as an example, the vast complexity of the human metabolism. “[I]t’s very hard to isolate the effect of any part of this complex system, and even harder to intervene on the system in a precise or predictable way,” he writes. Modeling such biological systems involves lots of “wet lab” work by humans, and it’s a slow process. AI can do far more than just analyze or look for patterns in existing data; it can act as a “principal investigator” that plans, directs, and manages new research projects (perhaps conducted by robots).
“I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do,” he writes. This could dramatically increase the pace of research, meaning that major breakthroughs on the order of CRISPR or mRNA vaccines could come every 10 years instead of every 100. Amodei believes this acceleration could lead to the reliable prevention and treatment of nearly all infectious diseases, effective treatments for most cancers, the elimination of most genetic diseases, and the prevention of conditions such as Alzheimer’s.
An acceleration in research progress would have pronounced effects on the economies of both developing and developed nations. Amodei says AI-assisted research will improve technologies that slow or prevent climate change, and will speed up the development of food alternatives such as lab-grown meat, which “reduces reliance on carbon-intensive factory farming.” Amodei makes clear that these positive changes won’t magically appear once AI reaches superintelligence. The speed of application is limited by a variety of factors: a dearth of data or computing power, natural laws (the speed at which a cell develops, the maximum number of transistors a chip can hold), misplaced human fears (perhaps expressed in legislation), or even misinformation leading to a Luddite backlash against AI itself. Still, it’s hard to read Amodei’s meditation and not come away feeling excited, if a little nervous, about our AI future.
Yann LeCun calls BS on claims that LLMs can “reason”
One of the biggest debates in AI circles is whether current AI systems can truly “reason.” OpenAI claims that its most recent model series, o1, can “think” about various approaches to problem-solving and select the best one. But o1 is still a large language model; it uses complex math to work out the most probable next word in a sequence. How can that constitute reasoning?
Ilya Sutskever, the recently departed OpenAI cofounder, says that by predicting a lot of next words, a model can indeed “reason” through complex problems. In a recent fireside chat with Nvidia CEO Jensen Huang, Sutskever offered a thought exercise: “Say you read a detective novel, and on the last page, the detective says, ‘I am going to reveal the identity of the criminal, and that person’s name is _____.’ . . . Predict that word.” His suggestion is that by processing the meaning and order of the words in the novel, the LLM learns enough to generate deep insights.
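That prediction step is easy to make concrete. Below is a minimal sketch of next-word (next-token) prediction using the small, open GPT-2 model via the Hugging Face transformers library; the model choice and the prompt are illustrative stand-ins, not the systems discussed above:

```python
# A minimal sketch of next-token prediction, the mechanism underlying
# large language models. GPT-2 is used purely as a small, open stand-in.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# An invented prompt in the spirit of Sutskever's detective-novel example.
prompt = "The detective turned to the room and said, 'The criminal is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The logits at the final position score every token in the vocabulary as a
# candidate continuation; softmax turns those scores into probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

Whether picking the most probable continuation, refined over billions of training examples, amounts to “reasoning” is exactly the point of dispute.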
One of the pioneers of modern neural networks, Meta’s chief AI scientist Yann LeCun, told me back in 2019 that large language models (we were talking about “RoBERTa” at the time) can indeed learn about the world from processing huge amounts of training data on hundreds of servers for months at a time. An AI model might understand, for example, that when a car drives off a cliff it doesn’t just hang in the air, but plummets to the rocks below.
But an AI model capable of reasoning through complex problems would need to gain a more advanced understanding of the world, LeCun believes. With that advanced understanding, it could develop the ability to break down problems into smaller parts, as humans do. LeCun says that when planning a trip from New York to Paris, we don’t plan out the second-by-second movements of our muscles needed to get us there. Instead, we break the plan down into a hierarchy of sub-plans (e.g., how to get to the airport, where to hail a taxi, and so on).
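A toy example makes the idea concrete. The task tree below is entirely hand-written, with invented sub-tasks; LeCun’s point, as he explains next, is that no AI system yet learns such decompositions on its own:

```python
# A toy illustration of hierarchical planning: the trip is decomposed
# into sub-plans rather than second-by-second muscle movements.
# The hierarchy is hand-coded purely to show the structure.
PLAN = {
    "travel from New York to Paris": ["get to JFK", "fly JFK to CDG", "get to hotel"],
    "get to JFK": ["hail a taxi", "ride to the airport", "clear security"],
    "fly JFK to CDG": ["board the plane", "sit through the flight", "deplane"],
    "get to hotel": ["take the train into Paris", "walk to the hotel"],
}

def expand(task: str, depth: int = 0) -> None:
    """Recursively print a task and the sub-plans beneath it."""
    print("  " * depth + task)
    for subtask in PLAN.get(task, []):
        expand(subtask, depth + 1)

expand("travel from New York to Paris")
```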
“We’re doing hierarchical planning,” LeCun said during a recent keynote at Hudson Forum. “How to do this with AI systems is completely unsolved; we have no idea how to do this. And that seems like a pretty big requirement for intelligent behavior.” In his view, that more advanced model of the world is the prerequisite for any system capable of hierarchical planning.
Still, on the question of when superhuman intelligence will arrive, LeCun isn’t much more pessimistic than his peers: “Human-Level AI,” he says, will take “several years—if not a decade.”
The New York Times sends Perplexity a cease and desist
First, the New York Times sued OpenAI for using its content to train LLMs. Now the Gray Lady has sent a cease-and-desist letter to the AI search startup Perplexity, demanding that it stop using Times content in its custom answers to user queries.
The Times also gave Perplexity until the end of the month to explain how it is still accessing Times content after promising earlier that its web crawlers would stop doing so. Perplexity’s “answer engine” works by crawling large swaths of the web and building a big database (an index) of content grabbed from web pages. Pieces of that content are then used to form custom answers to user queries.
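Here’s a highly simplified sketch of that crawl-index-answer pipeline, assuming a bare keyword index over made-up pages; Perplexity’s actual retrieval stack is proprietary and far more sophisticated:

```python
# A bare-bones crawl -> index -> retrieve pipeline. The pages are invented
# stand-ins for crawled content; real answer engines use ranked retrieval.
from collections import defaultdict

# Stand-in for crawled pages (URL -> extracted text).
pages = {
    "https://example.com/a": "anthropic essay on strong ai and biology",
    "https://example.com/b": "robots exclusion protocol and web crawlers",
}

# Build an inverted index: word -> set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def retrieve(query: str) -> set[str]:
    """Return URLs whose text contains every word of the query."""
    hits = [index[w] for w in query.lower().split() if w in index]
    return set.intersection(*hits) if hits else set()

print(retrieve("web crawlers"))  # -> {'https://example.com/b'}
```

Snippets from the retrieved pages are what get woven into the generated answer, which is precisely the use the Times objects to.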
The Times is probably using the Robots Exclusion Protocol to keep web crawlers away from its content. Crawlers are supposed to honor it. It’s likely that Perplexity’s own crawler, PerplexityBot, is indeed honoring the Times’s robots.txt file, but, as we learned back in June, PerplexityBot isn’t the only crawler Perplexity uses.
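Honoring the protocol is simple enough that Python ships a parser for it in the standard library; the user-agent string and URLs below are illustrative:

```python
# How a well-behaved crawler checks the Robots Exclusion Protocol before
# fetching a page, using Python's standard-library robotparser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.nytimes.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# A compliant crawler runs this check before every fetch; a third-party
# crawler that skips it is one way content can slip through anyway.
allowed = rp.can_fetch("PerplexityBot", "https://www.nytimes.com/section/technology")
print("PerplexityBot may fetch:", allowed)
```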
“Perplexity is not ignoring the Robot Exclusions Protocol and then lying about it,” Perplexity cofounder and CEO Aravind Srinivas told me back in June. “I think there is a basic misunderstanding of the way this works,” Srinivas said at the time. “We don’t just rely on our own web crawlers, we rely on third-party web crawlers as well.”