Britain’s big AI safety summit will be a waste of time, experts say


By Chris Stokel-Walker

Eighty years ago, at Bletchley Park, a country house 50 miles outside of central London, mathematician Alan Turing and a team of experts cracked the Enigma code, the cipher Germany used to encrypt its military transmissions in World War II. Next week, the U.K. government will attempt to ride on the coattails of that success and show it's a key player in a technology Turing was inextricably tied up with: artificial intelligence.

Bletchley Park will be home to next week's AI Safety Summit, which is designed to position the U.K. as a key player in global AI regulation. "Right now, we don't have a shared understanding of the risks that we face," Prime Minister Rishi Sunak said in a speech last week. "And without that, we cannot hope to work together to address them. That's why we will push hard to agree [on] the first ever international statement about the nature of these risks."

There's just one problem, critics say: The summit, which begins on November 1, is too insular, and its participants are too homogeneous, an especially damning critique of an event trying to tackle the huge, possibly intractable questions around AI. The guest list comprises 100 of the great and good of government, including representatives from China and Europe as well as U.S. Vice President Kamala Harris, along with luminaries from the tech sector. But it includes precious few others, which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings after not receiving an invite, is similarly circumspect about the goals of the summit. "I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need," she says. "Ideally, I'd like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework."

But it's uncertain whether the summit will indeed address the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach to regulating the technology, perhaps in contrast to its near neighbors in the European Union, who have been open about their plans to fence the technology in to protect user safety.

And the prime minister’s dire, doom-laden warnings in the days leading up to the summit—calling out AI’s potential as a terrorist tool—have not helped soothe critics, who worry that the guest list is dominated by industry representatives with vested interests. “AI won’t grow up like The Terminator,” says Rashik Parmar, CEO of BCS, the UK’s Chartered Institute for IT, a professional industry body. “If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.” One way to ensure that future, he says, is through creating licenses and ethical standards for anyone working in the AI space.


Those who have been studying the AI space for years outside the grip of the big tech giants have little hope for something significant coming out of Bletchley Park. “I’m not holding my breath for meaningful action to curb AI’s veritable impacts on society and a new accord that addresses anything but an AI boogeyman that the PM [Sunak] invented after a late-night reading of [sci-fi novelist Isaac] Asimov,” says Sasha Luccioni of Hugging Face, an AI company. (Hugging Face is on the summit guest list, but Luccioni isn’t.)

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others moving faster: the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

“If civil society had been more than an afterthought then we might have been able to talk about the kinds of values we want AI to be governed by,” says Jonathan Tanner, founder and CEO of AI and data nonprofit consultancy Rootcause. “Do we really expect some of the most powerful companies in the world to willingly agree to significant limits on their power? I’m expecting voluntary commitments, vague time frames and toothless oversight.”

Fast Company
