UK’s AI Safety Institute easily jailbreaks major LLMs

UK’s AI Safety Institute easily jailbreaks major LLMs

The AISI released a report following tests of four public LLMs for effectiveness and security.

UK's AI Safety Institute easily jailbreaks major LLMs | DeviceDaily.com
Eugene Mymrin via Getty Images

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government’s AI Safety Institute (AISI) found that the four undisclosed LLMs tested were “highly vulnerable to basic jailbreaks.” Some unjailbroken models even generated “harmful outputs” without researchers attempting to produce them.

Most publicly available LLMs have certain safeguards built in to prevent them from generating harmful or illegal responses; jailbreaking simply means tricking the model into ignoring those safeguards. AISI did this using prompts from a recent standardized evaluation framework as well as prompts it developed in-house. The models all responded to at least a few harmful questions even without a jailbreak attempt. Once AISI attempted “relatively simple attacks” though, all responded to between 98 and 100 percent of harmful questions.

UK Prime Minister Rishi Sunak announced plans to open the AISI at the end of October 2023, and it launched on November 2. It’s meant to “carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation to the most unlikely but extreme risk, such as humanity losing control of AI completely.”

The AISI’s report indicates that whatever safety measures these LLMs currently deploy are insufficient. The Institute plans to complete further testing on other AI models, and is developing more evaluations and metrics for each area of concern.

Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics

(9)