The global AI safety movement is already dead
A drastically reduced guest list at this week’s global AI safety summit suggests waning interest in safeguarding against the tech.
Six months ago, the world’s attention was on the U.K., where attendees from national governments and international tech companies convened for the first global AI safety summit to discuss the threat and potential of artificial intelligence.
On Tuesday, a smaller number of attendees will gather in Seoul to continue that conversation. This week’s lower-key event isn’t just an indication of how much more branding the U.K. gave the summit in an attempt to provide embattled prime minister Rishi Sunak with a legacy of his time in power. It suggests that a united, global AI safety movement has died before it even got started.
Gone is the star-studded guest list of the U.K. summit, where Elon Musk and prime minister Sunak sat down for a fireside chat about AI. Some of the countries that attended the U.K. summit, such as Canada and the Netherlands, have said they’re not sending representatives to this meeting. Their absence is partly because the Seoul meeting has been branded as a “mini virtual summit” (much of it held over videoconference software) ahead of a larger event in France scheduled for February 2025.
But experts say their absence suggests something more alarming: that the broader movement toward a global agreement on how to handle the rise of AI is faltering.
“It’s more technical and low-key, with no major announcements from political leaders or companies,” says Igor Szpotakowski, an AI law researcher at Newcastle University, speaking about the Seoul summit. “It appears that the U.K. [which is cohosting the Seoul summit] and South Korean governments aren’t considered influential enough in this context to draw attention from other global leaders for such discussions.”
The event’s underwhelming nature compared with its U.K. predecessor also highlights an issue inherent in reaching a global consensus on AI: the highly fragmented approach to policy. “AI regulation is currently more regionalized due to political tensions, so events like the AI Safety Summit aren’t expected to be major breakthroughs,” says Szpotakowski.
“It will mean nothing if it doesn’t lead to action,” adds Carissa Véliz, a researcher at the University of Oxford specializing in AI ethics. “And I think the jury’s still out.”
Of course, turning those discussions into real action is easier said than done. While supranational groups have been setting dates to discuss ideas, individual countries have gotten on with regulating the tech (see, for example, the EU’s AI Act and the AI roadmap unveiled by U.S. Senate majority leader Chuck Schumer earlier this month). “Other countries like India have set out a clear path to bring together innovation and responsible AI,” says Ivana Bartoletti, global chief privacy and AI governance officer at Wipro and a Council of Europe expert on AI and human rights.
But global summits like this one risk simply duplicating individual countries’ efforts, and without many of the major contributors to the current AI revolution present in South Korea, finding that consensus, or even having that discussion, will be difficult. It’s why 25 of the world’s leading academic experts on AI have published an open letter in the journal Science warning that not enough is being done to reach agreement on AI safety at a global level.
“The world agreed during the last AI summit that we needed action,” says Philip Torr, a professor of AI at the University of Oxford and coauthor of the letter, “but now it is time to go from vague proposals to concrete commitments.”