Civil society groups issue a surprise open letter to the U.K.’s AI Safety Summit

By Chris Stokel-Walker

The U.K.’s landmark AI Safety Summit, a conclave of around 120 representatives from leading AI companies, academia, and civil society taking place at Bletchley Park this week, has encountered an unexpected wrinkle: an urgent letter from nearly a dozen participants who believe the summit is targeting the wrong issues.

Representatives from 11 civil society groups, all of which are included on the government’s official guestlist for the event, have signed an open letter urging the politicians gathered there to prioritize regulation addressing the full range of risks that AI systems pose, including harms already affecting the public.

“While potential harms of ‘frontier’ models may have motivated the Summit, existing AI systems are already having significant harmful impacts on people’s rights and daily lives,” the letter explains.

Focusing on those current risks would be a better use of time, the letter argues, than the existential risks of a superintelligent AI that have so far framed the summit’s schedule, a prospect that many of those studying the field question is a real likelihood.

“Governments must do better than today’s discussions suggest,” the letter reads. “It is critical that AI policy conversations bring a wider range of voices and perspectives into the room, particularly from regions outside of the Global North.” The guest list for this week’s summit includes representatives from the U.S., Chinese, Nigerian, and Indian governments but notably omits many academics and civil society campaigners.

The letter continues: “Framing a narrow section of the AI industry as the primary experts on AI risks further concentrating power in the tech industry, introducing regulatory mechanisms not fit for purpose, and excluding perspectives that will ensure AI systems work for all of us.”

In a statement issued to Fast Company, Alexandra Reeve Givens, one of the signatories and CEO of the Center for Democracy & Technology, says the letter was prepared “because we worried that the Summit’s narrow focus on long-term safety harms might distract from the urgent need for policymakers and companies to address ways that AI systems are already impacting people’s rights.” (Reeve Givens is attending the summit this week.)

Chinasa T. Okolo, a fellow at the Brookings Institution, which is participating in the summit, adds that the event has been light on discussion of the harms AI can cause to data labelers, “who are arguably the most essential to AI development.”

“There is much more work needed to understand the harms of AI development and increase labor protections for data workers, who are often left out of conversations on the responsible development of AI,” she adds.

The letter also asks participants at the AI Safety Summit to ensure that “companies cannot be allowed to assign and mark their own homework. Any research efforts designed to inform policy action around AI must be conducted with unambiguous independence from industry influence.”

Criticism has been leveled at the summit organizers for focusing on points of discussion that favor incumbent companies that have already developed successful AI tools. The list of those invited to the event, critics argue, also skews conversations in favor of maintaining the status quo.

The letter’s signatories hope the conversation will be recast on the second day of the summit to take in more of the issues that have so far been overlooked at Bletchley Park.

“Today what we’ve seen is the U.S. and the U.K. set out voluntary policies for the management of the most powerful models we have,” says Michael Birtwistle, associate director of the Ada Lovelace Institute, a research organization represented at the summit. “The last decade of voluntary commitments from tech leaders are no meaningful replacement for hard rules.”
