14 experts say how the net’s worst problems could be solved by 2035
In the early 21st century, the internet—and the social internet, in particular—has enabled a more connected world. But it’s also enabled and amplified some of humanity’s worst behaviors. Fringe, toxic opinions and outright disinformation proliferate. Antisocial behavior is normalized. Facts—when they can be recognized—are used to bolster preexisting opinions, not to challenge assumptions. Kids (and adults) measure their self-worth by their Instagram comments and follower count. Expecting the huge tech companies that operate the platforms to proactively fix the problems gets more unworkable as online communities grow into the billions.
When the Pew Research Center asked numerous tech opinion leaders to envision the internet of 2035 (which might end up being called the metaverse, or Web3, or something else entirely), many of them seized on the problems of poor online governance, lax content moderation, misaligned incentives, and a lack of trust as key challenges. Their ideas on how these problems might be fixed, and the progress that might be made over the next 12 years, are downright illuminating. In the interest of TL;DR, I’ve extracted nuggets from these experts’ sometimes lengthy comments.
Doc Searls, internet pioneer, coauthor of The Cluetrain Manifesto, author of The Intention Economy, and cofounder and board member of Customer Commons
“The new and improved digital realm of 2035 is one in which the Web still works but has been sidelined because there are better models on the Internet for people and organizations to get along, and better technologies than can be imagined inside the client-server model of browser-website interactions. To see what is likely by 2035, imagine having your own personal privacy policies and terms and conditions for engagement, rather than always agreeing to those of others. The Internet supports that. The Web does not. On the Web, only sites and services can have privacy policies or proffer terms and conditions. You are never the first party, but only the second—and a highly subordinate one as well.”
Zizi Papacharissi, professor of political science and professor and head of the communication department at the University of Illinois-Chicago
“In most of the spaces we inhabit, humans have developed some form of curation. For example, a closed door may signify a preference for privacy; it may also signal a desire for security (and one of heightened security if the door is locked). Doors allow us to curate what enters our spaces and what remains out. Similarly, we humans have developed ways of chastising or punishing inappropriate behavior in commonly shared spaces. For example, if a person misbehaves in a bar, they are thrown out by a bouncer. We do not have a debate in this case about whether that person’s rights to free speech were violated because they started yelling in a bar. We simply kick them out. As of yet, we have no such types of broadly adopted rules for what appropriate behavior is online and how to enforce those rules online. When we try to establish them, we spark all kinds of debates about free speech. Yet free speech is not the same as free reach.”
Lucy Bernholz, director of Stanford University’s Digital Civil Society Lab and author of How We Give Now: A Philanthropic Guide for the Rest of Us
“Designing digital spaces for safety and serendipity is a next step. Enabling people to go in, out, and between such spaces as they choose is critical. And allowing groups of people to control information they generate about them is also important. Digital spaces need to become tools of the people, not of corporations and governments. They need to be fragmented, pluralistic, multitudinous, and interconnected at the will of people, not by profit-driven lock-in. . . . We need to remember and maintain the best of our physical spaces online—our parks, libraries, sidewalks, stoops, benches, buses, trains, and town squares—and bring that multiplicity of choice and its privacy within crowds, and safe serendipity into digital spaces.”
Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure at the University of Massachusetts, Amherst
“I prefer to imagine a 2035 in which internet communities strengthen our civic and democratic muscles. Imagine a world in which most people are members of dozens of different online communities. Some are broad in membership and purpose, while others are narrowly focused and support small groups of users. These smaller groups, in particular, demand that their participants be involved in governing these spaces, acting as moderators, and the authors of the community guidelines. The larger networks use representative systems to nominate interested users to task forces that write and rewrite guidelines, and participants take shifts as moderators, using mechanisms similar to jury service. Through the rise of these community governance mechanisms, social networks not only become less toxic but become a training ground for full participation in a democracy.”
Vint Cerf, vice president and chief internet evangelist at Google and Internet Hall of Fame member
“[In 2035,] people will still do harmful things, but it will be much harder to get away with it.”
Yvette Wohn, associate professor in informatics at New Jersey Institute of Technology and director of the Social Interaction Lab
“Misinformation, online harassment, etc., will not go away, but more efforts will be made toward building resilience and inquisitive questioning of information.”
Brad Templeton, internet pioneer, futurist, and activist, and chairman emeritus of the Electronic Frontier Foundation
“People are asked to make one-time choices about what policies they want to govern their social feeds. People are more willing to make reflective choices when divorced from actual content, while they react emotionally when presented with incendiary content. People are encouraged to socially network with others of a diverse set of creeds, and steps are taken to make it less likely people would ‘unfollow’ such people by defusing confrontation.”
danah boyd, founder and president of Data & Society Research Institute and a principal researcher at Microsoft
“Imagine a world where we can ‘see’ how we’re all connected [as in an immersive 3D internet], how our lives are dependent on all of those around us. Imagine a system where we can identify vulnerabilities in our social fabric, and work collectively to repair them. Where we can ‘see’ people who aren’t like us and appreciate how we are like one another. Where we can build tools to empower the collective to solve social problems.”
Susan Price, CEO, UX strategist, and human-centered design innovator at Firecat Studio
“I’ve long had a vision for a Human API, an application programming interface that would serve as a filter layer between each individual human and any systems delivering content or services to us, or carrying/sharing information about us, such as our location, actions, decisions, identifiers, history, and so forth—our data. I want a Human API that stores and enforces the rules I set about what is allowed to come into my awareness, what takes up my time, and what information is shared about my activities. For example, the Human API would store all the ‘I Agree’ contractual agreements I’ve made with various companies and services. It would store all the subscriptions and payment commitments I’ve executed and provide a single dashboard to examine, analyze, and manage them. Rather than my preferences, settings, records, and agreements being stored in dozens or hundreds of vendor databases, the data would be stored under my control.”
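Price’s Human API is a vision rather than a specification, but its shape can be sketched in a few lines of Python. Everything below (the class names, the rule model, the dashboard) is a hypothetical illustration of the concept as she describes it, not an existing system:

```python
# Hypothetical sketch of the "Human API" idea: a user-owned filter layer
# that stores consent agreements under the user's control and enforces
# the user's rules about what content reaches them. All names here
# (HumanAPI, Agreement, blocked_topics) are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Agreement:
    """One 'I Agree' contract the user has accepted with a vendor."""
    vendor: str
    terms: str
    monthly_cost: float = 0.0


@dataclass
class HumanAPI:
    """User-controlled store of agreements plus inbound-content rules."""
    blocked_topics: set = field(default_factory=set)
    agreements: list = field(default_factory=list)

    def accept(self, agreement: Agreement) -> None:
        # Agreements live with the user, not in vendor databases.
        self.agreements.append(agreement)

    def allow(self, content: str) -> bool:
        # Enforce the user's rules about what may enter their awareness.
        return not any(t in content.lower() for t in self.blocked_topics)

    def dashboard(self) -> str:
        # A single place to examine and manage every payment commitment.
        total = sum(a.monthly_cost for a in self.agreements)
        lines = [f"{a.vendor}: ${a.monthly_cost:.2f}/mo" for a in self.agreements]
        return "\n".join(lines + [f"Total: ${total:.2f}/mo"])
```

In this sketch, a platform delivering content would have to pass through `allow()` before anything reaches the user, inverting the usual arrangement where the vendor’s terms and filters govern the exchange.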
Tony Smith, a Melbourne, Australia-based researcher of complex systems
“My hope is that in 2035, digital spaces serve as a fleet of lifeboats for those trying to navigate the terminal collapse of final-stage capitalism and nation-states and their enforcement operations. . . . The pressure to appear to be doing something will have largely disappeared as slowing down proves to be nowhere near as catastrophic as the evermore-frequent challenges of keeping life viable.”
Jay Owens, London-based research and innovation consultant, New River Insight
“It’s 2035, and the K-pop fan armies turn their prodigious social media organizing capabilities toward local democracy. Live streamers broadcast every council meeting, offering acerbic commentary and contextualization—no decision goes unscrutinized. Fan squads use a ‘cell’ structure to mobilize their neighbors on messenger apps; each young person is responsible for getting the news out to their apartment block or street and collecting messages, signatures, videos, and demands to feed to elected representatives.”
Alejandro Pisanty, Internet Hall of Fame member and professor of information society and internet governance at UNAM, National Autonomous University of Mexico
“By 2035, it is likely that there will be ‘positive’ digital spaces. In them, ideally, there will be enough trust, in general, to allow significant political discussion, diffusion of trustworthy news, and vital information (such as health-related) in which digital citizenship can be exerted and enrich society. . . . The hope we can have is that enough people and organizations (including for-profit) will push the common good so that the ‘positive’ spaces can still be useful. They may become gated, to everyone’s loss. Education and political pressure on platforms will be key for the possible improvements.”
Mei Lin Fung, chair and cofounder of the People-Centered Internet
“Digital Humanism driven by love of people and planet is urgently needed as a counterweight to what will otherwise be an inevitable Digital Colonization driven by profit. Surely, you say, we all want to prioritize Flourishing over the Fury that will result in the subsequent conflicts between the Digital Masters and the Digital Slaves? . . . But lacking understanding that this is what we are heading toward, millions already watch in horror as dystopian nightmares of Poverty, Pandemic, Prejudice, encroach. Feeling helpless, blaming others, justifying inaction, finding comfort in complaining as an alternative to acting, the silent majority is like a frog where the water is so gradually heating up that the frog does not try to jump out.”
Susan Crawford, John A. Reilly clinical professor of law at Harvard Law School and Special Assistant to the President for science, technology, and innovation policy in the Obama administration
“Someday, we’ll cease to differentiate between on- and offline, just as we have stopped talking about ‘electrified’ life. Much that we now treasure will disappear. But the human spirit is creative and playful—we’ll be up to new augmented shenanigans that we cannot now imagine.”