These people already know how to build better AI. We just need to pay attention to them.


By Solana Larsen

One minute we’re supposed to believe that artificial intelligence is the most incredible accelerator of human progress, and the next that it’s the most dangerous technology in the world.

I am not going to tell you that the truth is somewhere in between. We have got to stop talking about AI as though “it” has any innate qualities. What we’re really talking about are people, companies, and institutions whose values and goals are embedded in data and technology.

So who do we listen to?

A new report from the digital investigation group Root Cause (commissioned by the European AI and Society Fund, of which Mozilla is a partner) argues that dominant media frames around AI are aligned with the interests of technology companies. This tracks with my experience at Mozilla.

But I believe there’s a fix—at least to this.

For the past seven years, I have been the editor of Mozilla’s Internet Health Report, a semiannual health check on the status of the internet that compiles research from different fields.

Let me ask: do you think Facebook or Amazon has your best interests at heart? Does OpenAI? As much as we celebrate all that is possible with the internet, the Big Tech behaviors that concern us most are being replicated and amplified in the new technologies surrounding AI.

Disdain for privacy? Check. Lack of transparency? Check. Consolidation of power that stifles open innovation? Unconscionable treatment of communities harmed by algorithmic bias? Lobbying to dazzle regulators into passivity? Check, check, check.

Yes, we should call for regulation, for transparency, and for support of open-source technology in service of independent research and accountability. But we should also amplify perspectives that help us rethink what technology can be, and who it should serve in this wide world. To mitigate uses of AI that pose a risk to society, we have got to start listening more to the people who are proposing trustworthy alternatives. Sure, that means reading articles like this one, but above all it means the decision makers and opinion shapers in tech who are currently being led by the nose.

My team’s recent research and podcast highlight the voices of the people behind such work. For instance, Sasha Luccioni, a researcher at Hugging Face who works on more energy-efficient AI, and Andriy Mulyar, cofounder of GPT4All, an offline, privacy-preserving alternative to ChatGPT, are part of a movement of open-source challengers to the large language models of trillion-dollar companies. They are proving that generative AI can follow more than one trajectory and serve purposes beyond market domination. Open source could help fuel competition in AI and allow for more oversight, yet the fear of existential risk is being operationalized to shut down opportunities for more people to have a say in what technologies are built and who they serve.


We also heard from content moderators in Kenya who have been chewed up and spit out by global tech companies and are fighting back with a newly formed labor union. And we heard from Karya, a startup in India with a new approach to fair compensation for the data work behind AI. They shine a light on the hidden ways AI companies contract millions of people in the global south to do data labeling and moderation, and on how that work could be done more equitably, with fair pay. Yet when it comes to designing regulation that could protect people from harm and provide mechanisms to hold companies accountable, it’s the companies with the biggest dollar signs attached to their names that can call meetings with policy makers on a whim, as Sam Altman did around the world this year. What about data workers, or the countless people who are vulnerable to the adoption of AI for decision-making at banks, governments, schools, and employers?

We also spoke with Keoni Mahelona, an Indigenous language tech developer at Te Hiku Media in New Zealand, and Kathleen Siminyu, who works with grassroots AI builders across Africa. Their community work is grounded in values that are alien to Silicon Valley and do not align with the priorities of Big AI. From them, we learn how control over the data that feeds AI can also be a precious community resource, one that uplifts rather than extracts. But as long as local innovators are unable to compete because of the concentration of power among large AI companies (funded by familiar firms like Google, Amazon, and Microsoft, no less), it will sap the incentive to build alternatives.

The conversation around AI contains multitudes that mostly go unheard. If we want technology that works for us, rather than plunging us into harm, we need to listen. On regulation, on competition, and on open source, solutions are being proposed but not heard. There are rumblings of a shift on the regulatory front with the past week’s executive order in the U.S. and the AI Safety Summit in the U.K. We can hope they lead to action. But without a values-based direction for what we wish to see from AI in the future, we will never know how to steer toward it.


Solana Larsen is the editor of Mozilla’s Internet Health Report and the Webby Award-winning podcast IRL: Online Life Is Real Life. Before joining Mozilla, Larsen was the managing editor of Global Voices for seven years.

Fast Company
