How AI could restore our faith in democracy
“Our threat is from within,” Donald Trump told his supporters at a Veterans Day campaign rally in New Hampshire. “We pledge to you that we will root out the communists, Marxists, fascists and the radical left thugs that live like vermin within the confines of our country.”
The remarks prompted outrage and comparisons to Nazi Germany. Dehumanizing political opponents erodes public trust in the democratic system by creating an “us versus them” mentality. The dangers are not mere rhetoric. Political violence is, indeed, on the rise in the United States.
That is why at the Governance Lab we are turning to artificial intelligence to unlock the experience and know-how of global experts. AI is making it faster and easier to identify innovative strategies to combat election-related violence and election subversion and strengthen our democracy.
BREAKING DOWN THE PROBLEMS WITH TECHNOLOGY
The first step in tackling any complex challenge is to break it down into smaller, more manageable problems. Election subversion, for example, comprises myriad issues from media-fueled doubt about election integrity to violence against election officials to vulnerabilities (real and perceived) with election technology.
But identifying those constituent problems typically involves weeks of research and interviews followed by additional months, if not years, of due diligence to figure out what’s been tried, whether what’s been tried has actually worked, and whether what has worked elsewhere is transferable and likely to work in additional communities.
Generative artificial intelligence, however, is radically transforming how we can solve problems together. Because large language models are trained on vast quantities of text, they are especially good at organizing and summarizing content, not just generating it.
POLICY SYNTH: USING ARTIFICIAL INTELLIGENCE TO ENHANCE COLLECTIVE INTELLIGENCE
To help speed up the process of defining the problems and coming up with solutions to election subversion, my team at the GovLab enlisted the expertise of the Icelandic civic tech entrepreneur Robert Bjarnason. Bjarnason has been designing platforms used in over 10,000 citizen engagements globally since 2008.
From his cabin alongside the desolate and beautiful White River in Southern Iceland, Robert acknowledges with cheery bluntness that “even when citizens participate, governments, in particular, often cannot make use of the feedback.”
Together, we invented Policy Synth, a toolkit to increase the speed, accuracy and scale of “smarter crowdsourcing” using a fine-tuned version of GPT-4, OpenAI’s multimodal large language model.
How does it work? Policy Synth uses AI to improve complex policymaking. It automates the creation of over a thousand different search queries, from general and scientific to data-specific and news-related, and uses them to conduct a comprehensive search for problems and their root causes. This enabled us to automatically break down the complex problem of “election subversion” into several dozen smaller, more tractable challenges.
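A minimal sketch of that query-fanning step, assuming the OpenAI Python client and an API key in the environment; the query types, prompt wording, model choice, and function names here are illustrative, not Policy Synth’s actual implementation:

```python
# Sketch: fan out one problem statement into many typed search queries.
# Assumes the OpenAI Python client (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

QUERY_TYPES = ["general", "scientific", "data-specific", "news-related"]

def generate_queries(problem: str, query_type: str, n: int = 5) -> list[str]:
    """Ask the model for n web search queries of a given type about the problem."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Write {n} {query_type} web search queries for researching the "
                f"root causes of this problem: {problem}. One query per line."
            ),
        }],
    )
    text = response.choices[0].message.content
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

all_queries = [
    query
    for query_type in QUERY_TYPES
    for query in generate_queries("election subversion in the United States", query_type)
]
print(f"{len(all_queries)} queries generated")
```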
From among the longer list of problems, we selected which topics we wanted to focus on. For example, one specific topic was the misuse of administrative and legal systems.
Election deniers have knowingly filed multiple malicious lawsuits with the goal of overturning electoral outcomes or filed frivolous public records requests with no real purpose but to gum up the works of the election system.
We rapidly convened 35 specialists for a two-hour online conference via Zoom, where they proposed 14 solutions to the legal abuse problem, such as investing in professional organizations with disciplinary authority to punish malicious lawyers and improving education about professional responsibility in law schools. AI helped us summarize and extract the learnings from two hours of simultaneous talking and typing in minutes rather than days. We repeated such online convenings for other topics.
In parallel to asking people, we also asked Policy Synth to generate its own list of solutions. GPT agents searched the web to identify solutions that are responsive to the problem. After generating hundreds of solutions, we automated the process of removing duplicates and isolating only those solutions that are relevant for a philanthropy (as opposed to a government or company).
This filtering process, which Robert calls “reaping,” produced a list of 60 solutions for each identified problem, each accompanied by a visual illustration generated with Stability AI’s image-generation tools and presented in a human-readable format with pros and cons for each solution.
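A simplified stand-in for that deduplication step is sketched below. The real pipeline more likely relies on an LLM or embeddings to judge similarity; the word-overlap measure, the 0.8 threshold, and the function names are illustrative placeholders:

```python
# Sketch: "reaping" as near-duplicate removal over solution descriptions.
def jaccard(a: str, b: str) -> float:
    """Word-level overlap between two solution descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def reap(solutions: list[str], threshold: float = 0.8) -> list[str]:
    """Keep the first occurrence of each idea; drop later near-duplicates."""
    kept: list[str] = []
    for solution in solutions:
        if all(jaccard(solution, k) < threshold for k in kept):
            kept.append(solution)
    return kept
```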
Policy Synth yielded the same 14 solutions to legal abuses as those identified by the human experts but also introduced additional solutions, such as establishing a legal defense fund for administrative officials and mental health support for election workers.
Policy Synth does not just generate solutions; it also evolves the recommendations using a genetic algorithm. The software combines recommendations and then tests how well the new version of the solution fits the stated problem to decide whether the change should be adopted or rejected. With 15 rounds of such mutation and ranking, Policy Synth produces a final list of approaches tailored to the problem.
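To make the mechanics concrete, here is a toy genetic-algorithm loop in Python. In Policy Synth the recombination, mutation, and fitness judgments are made by the language model; the string splicing and keyword-counting fitness below are placeholders standing in for those calls:

```python
# Toy genetic-algorithm loop over text recommendations (illustrative only).
import random

PROBLEM_KEYWORDS = {"election", "officials", "lawsuits", "frivolous", "records", "courts"}

def fitness(solution: str) -> float:
    """Toy fitness: how many problem keywords the solution touches."""
    return len(set(solution.lower().split()) & PROBLEM_KEYWORDS)

def crossover(a: str, b: str) -> str:
    """Combine the first half of one recommendation with the second half of another."""
    wa, wb = a.split(), b.split()
    return " ".join(wa[: len(wa) // 2] + wb[len(wb) // 2:])

def mutate(solution: str, rate: float = 0.1) -> str:
    """Randomly drop a few words (a stand-in for an LLM rewriting the idea)."""
    kept = [w for w in solution.split() if random.random() > rate]
    return " ".join(kept) or solution

def evolve(seed_solutions: list[str], rounds: int = 15, population: int = 20) -> list[str]:
    """Recombine and mutate candidate solutions, keeping those that best fit the problem."""
    pool = list(seed_solutions)
    for _ in range(rounds):
        offspring = [mutate(crossover(*random.sample(pool, 2))) for _ in range(population)]
        pool = sorted(set(pool + offspring), key=fitness, reverse=True)[:population]
    return pool
```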
Recently, Google’s DeepMind announced that it, too, was experimenting with using genetic algorithms to let AI improve its own prompt drafting.
Policy Synth also employs Elo scoring to rank the solutions. Named after the chess master Arpad Elo, the Elo system rates how skilled a player is not by counting wins alone but by weighing whether each win came against a stronger or weaker opponent, so a rating emerges from many pairwise comparisons. Similarly, the Policy Synth AI compares each solution against the others and scores them on requested criteria such as implementation speed, cost, potential for political disagreement, or impact on women or African Americans.
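The Elo update itself is a short formula. Below is a minimal Python sketch; the `judge` callable stands in for asking an LLM which of two solutions better meets a given criterion, and the K-factor of 32, the 1000-point starting rating, and the 200 comparisons are arbitrary illustrative choices, not Policy Synth’s settings:

```python
# Minimal Elo-style pairwise ranking of candidate solutions.
import random
from typing import Callable

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_wins: bool, k: float = 32) -> tuple[float, float]:
    """Update both ratings after one pairwise comparison."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_wins else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

def rank(solutions: list[str], judge: Callable[[str, str], bool],
         comparisons: int = 200) -> dict[str, float]:
    """Run many pairwise judgments and return a rating per solution."""
    ratings = {s: 1000.0 for s in solutions}
    for _ in range(comparisons):
        a, b = random.sample(solutions, 2)
        ratings[a], ratings[b] = update(ratings[a], ratings[b], judge(a, b))
    return ratings
```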
Thus, we were able to take recommendations generated by AI and by human experts and use one to rate and rank the other’s proposals.
Whereas ever-larger groups can get stuck arguing about the merits of different proposals, often based on who proposed them, generative AI can rapidly sift and rank ideas, accelerating the process of evaluating evidence.
SCALING HUMAN ENGAGEMENT IN POLICYMAKING
“Especially when taxpayer money is involved,” Robert emphasizes, “citizens should be involved in the decision-making.” But asking people for their ideas “requires substantial administrative oversight to manage, limiting the number of participants and the scope of issues that can be addressed or reducing the ability to make sense of what people share.”
“That’s what makes AI so game-changing,” he exclaims. AI’s ability to handle “back office” tasks like research and evaluation enables us to significantly increase the number of people who can participate in an online engagement. AI can extract valuable insights from ongoing conversations, evaluate contributions from human participants, refine proposals, and conduct research to fill in any gaps. By automating these administrative and analytical tasks, we are able to seamlessly combine different stages of citizen engagement, such as identifying problems and formulating solutions, even when large numbers of participants are involved.
Now we are working with the Burnes Center for Social Change, the Museum of Science (New England’s largest cultural institution), Boston Public Schools, Innovate Public Schools, and the Learning Agency to ask people nationwide about the crisis of literacy in America, where only 32% of children have basic reading proficiency.
Policy Synth has helped us identify 150 possible root causes of the problem of low literacy. Now we are going out and asking parents, students, and educators at http://unlockingliteracy.ai to evaluate those constituent problems and say which ones matter most.
After we explore the problems, Policy Synth will help us ask communities and specialists about what’s working, speeding up the process of codesigning and implementing solutions in a rapid interplay between humans and machines.
THE DANGER OF SILICON SAMPLES
With the world facing so many complex and urgent challenges, the ability to find implementable solutions faster, from climate change to income inequality to food security, could radically improve governance. Yet the introduction of generative AI into citizen engagement is not without risks.
The greatest risk will be the temptation to use generative AI to reduce, rather than increase, human participation. AI personas have been found to closely match human responses. In one experiment, a social psychologist at the University of North Carolina at Chapel Hill posed 464 ethical questions to human subjects and to “silicon samples,” AI personas standing in for real people; the responses were nearly identical. It may be tempting to have conversations only with machines.
Substituting machines for human input misses the point of participatory problem-solving. Overreliance on AI-generated recommendations that are technically sound but lack emotional intelligence may not yield solutions that human communities want, especially if they were not involved in the process of creating them.
We are still learning how to combine artificial and collective intelligence efficiently. When we can blend machine precision with human wisdom, this has the potential to accelerate how we solve problems and deepen democracy. As we navigate this new frontier, let’s not forget: technology can inform, but people decide.