POV: The media has to be extra careful when covering AI

 

By Chris Stokel-Walker

Around a month ago, I stumbled into a chance conversation with a colleague, a senior tech journalist at a major media outlet. The main topic of conversation: how tired we both were. It can be exhausting covering the seemingly nonstop flow of industry news and controversies.

When I look at the media’s recent AI coverage, I worry that, collectively, our fatigue is showing. Over this past week, journalists have made a number of egregious errors and oversights that suggest we need to take a moment and breathe before rushing to report.

First came the credulous coverage of a 22-word statement put out by an organization that appeared practically out of nowhere, warning of the existential threat of AI. No outlet has yet fully uncovered how the statement came to be or what the motives behind it were. Instead, we’ve allowed its signatories to deliver schlocky science fiction, unchallenged, on television, much to the chagrin of AI ethicists who have devoted their life’s work to thinking about these problems, many of whom believe we’ve been duped by a splashy claim.

Then came Japan. The country has had a copyright exemption for text and data mining for around a decade. Yet a poorly written blog post relying on an automated English translation of a Japanese parliamentary discussion, and a credulous tweet sharing it, led to a social media outrage cycle claiming Japan had given AI companies a free pass to trawl the internet without consequences. It hadn’t, and the “pass” the parliamentarians were discussing has long been part of Japan’s approach to AI. Despite that, it was presented as a new development.

And finally, another shady blog post—this one claiming confirmation from the U.S. Air Force’s chief of AI that an AI-powered drone had killed its operator during a simulated test—was picked up by a number of mainstream news outlets. As it turns out, nobody was killed, and the simulation itself never took place. In reality, an Air Force staffer had been describing a hypothetical “thought experiment” that came from outside the military.

Many of the reporters who ran with these stories are excellent. But they (we!) are tired, and they’re expected to act fast to cover a story with—to borrow the AI terminology—endless parameters and internecine squabbles between warring factions of academics and entrepreneurs.

This work is hard. And none of us are immune to mistakes. We need to slow down. (We might also turn to the writing guides that a number of leading academics have created, which point out potential pitfalls in our coverage.)

 

AI is moving quickly, far more quickly than those who have been studying it for years thought it would. The technology that enabled ChatGPT wasn’t expected to evolve this fast. We thought we had time to figure this out, but we don’t. That’s exciting and scary and vitally important. Meanwhile, we’re all worn out because Elon Musk is still causing headaches over at Twitter, the confusion around Microsoft’s purchase of Activision Blizzard seems to know no end, and we’re expected to cover all of that, too.

But we don’t always need to be first to the story. (SEO as we know it is dying anyway, replaced by generative AI results.) It’s crucial that we in the media help to contextualize these endless developments in AI—and, when necessary, supply a healthy dose of skepticism. 

It’s important, because there’s plenty of real-world AI news that does deserve our attention. The drumbeat of AI regulation is getting louder, and hardly any of us have covered the OpenAI security fund announced in the middle of this latest “existential threat” news cycle. Surely, there’ll be some other new claim just around the corner, and it’ll require diligence and doggedness—and scrutiny.

Fast Company