Report warns AI could usher in a new era of bioweapons

By Chris Morris

There have been lots of warnings about AI’s impact on people’s jobs and mental health, as well as the technology’s ability to spread misinformation. But a new report from the RAND Corporation, a California research institute, might be the most disturbing of all.

One day after venture capitalist Marc Andreessen published a lengthy manifesto on techno-optimism, arguing that AI can save lives “if we let it,” the think tank cautioned that the technology’s rapid advancement could increase its potential to be used in the development of advanced biological weapons. While the report acknowledges that AI alone isn’t likely to provide step-by-step instructions for creating a bioweapon, it argues that the technology could fill knowledge gaps that have previously stymied bad actors, and that could be all the help they need.

“The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations,” the report reads. “Previous biological attacks that failed because of a lack of information might succeed in a world in which AI tools have access to all of the information needed to bridge that information gap.”

The current lack of oversight, it further argues, could be just the window of opportunity that terrorists need.

AI is already offering more help than most people would be comfortable with, even if inadvertently. In one fictional scenario the researchers ran, a large language model (LLM), the technology underpinning generative AI tools like ChatGPT and Bard, “suggested aerosol devices as a method and proposed a cover story for acquiring Clostridium botulinum while appearing to conduct legitimate research.” In another, it discussed how to cause a large number of casualties.

The report did not disclose which AI system its scenarios were run on.

Bioweapons are an especially frightening threat for officials, not only because they can spread so widely (and mutate) but because they’re a lot easier to create than to cure. RAND notes that resurrecting a virus similar to smallpox can cost as little as $100,000, while developing a vaccine can run over $1 billion.

While the report did raise a red flag about AI’s potential for harm in this space, RAND also noted that it “remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online.”

This isn’t the first time RAND has warned about the potentially catastrophic effects of AI. In 2018, the group examined how artificial intelligence could affect the risk of nuclear war. That study found that AI had “significant potential to upset the foundations of nuclear stability and undermine deterrence by the year 2040,” adding, “Some experts fear that an increased reliance on AI could lead to new types of catastrophic mistakes. There may be pressure to use it before it is technologically mature.”

RAND emphasized that its findings only hint at potential risk and do not provide a full picture of real-world impact.

However, RAND is hardly alone in its warnings about the catastrophic potential of AI. Earlier this year, a collective of AI scientists and other notable figures signed a statement from the Center for AI Safety, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Fast Company
