The 1996 law that made the web is in the crosshairs
It was 1995. Bill Clinton was in the White House. Seinfeld and Friends dominated primetime on TV. Radiohead was on the radio. The internet was still an innocent baby (except for porn, of course), but people could anticipate that it was going to be huge.
That year, the story goes, California Congressman Chris Cox was on an airplane back to Washington when he read that the early online service provider Prodigy had been sued for defamation over something that a user had posted to one of its bulletin boards. The anonymous poster had accused a Long Island securities firm (later immortalized in The Wolf of Wall Street) of fraud in connection with an initial public offering. The New York State Supreme Court ruled that because Prodigy posted user guidelines, used moderators, and scanned for indecent content, it was a “publisher” and therefore legally liable for content that users published on its site.
That troubled Cox, who thought Prodigy was being punished for actively trying to keep its own house clean and orderly. Cox imagined a future where budding internet companies were sued out of existence because of their liability for harmful content posted by users. So he decided to do something about it.
Back in D.C., Cox worked with Oregon Congressman Ron Wyden to co-sponsor a bill that would insulate internet companies from much of the legal exposure created by user content. The bill the two men wrote eventually became an add-on to the Communications Decency Act, itself part of the Telecommunications Act of 1996.
Wyden told me last month that § 230 opened the doors for trillions of dollars of investment in the internet. And, for a variety of reasons, the web flourished, and changed our personal and business lives in profound ways. Fortunes were made.
But the internet has changed a lot since 1996. Tech platforms have grown larger than anyone imagined they would, and they’re being used in different ways than anyone could have foreseen back then–some good, some bad. And the problem of harmful content posted by users never went away–instead it got much worse and arguably out of hand.
In the face of that toxic content’s intractability, and the futility of the tech giants’ attempts to deal with it, it has become a mainstream belief in Washington, D.C. (and a growing realization in Silicon Valley) that the question is no longer whether to regulate companies like Google, Twitter, and Facebook to hold them accountable for the content on their platforms, but how. One of the most likely ways for Congress to do that would be to revise Section 230.
Understanding Section 230
Section 230 remains a misunderstood part of the law. As Wyden explained it to me, the statute provides both a “shield” and a “sword” to internet companies. The “shield” protects tech companies from liability for harmful content posted on their platforms by users. To wit:
(c) (1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Specifically, it relieves web platform operators of liability when their users post content that violates state law by defaming another person or group, or painting someone or something in a false light, or publicly disclosing private facts. Section 230 does not protect tech companies from federal criminal liability or from intellectual property claims.
“Because content is posted on their platforms so rapidly there’s just no way they can possibly police everything,” Senator Wyden told me.
The “sword” refers to Section 230’s “Good Samaritan” clause, which gives tech companies legal cover for the choices they make when moderating user content. Before § 230, tech companies were hesitant to moderate content for fear of being branded “publishers” and thus made liable for toxic user content on their sites. Per the clause:
(c) (2) (A) No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
“I wanted to make sure that internet companies could moderate their websites without getting clobbered by lawsuits,” Wyden said on the Senate floor back in March. “I think everybody can agree that’s a better scenario than the alternative, which means websites hiding their heads in the sand out of fear of being weighed down with liability.”
Many lawmakers, including Wyden, feel the tech giants have been slow to detect and remove harmful user content, and that they’ve used the legal cover provided by § 230 to avoid taking active responsibility for user content on their platforms.
And by 2016 the harmful content wasn’t just hurting individuals or businesses, but whole societies. Social sites like YouTube became unwitting recruiting platforms for violent terrorist groups. Russian hackers weaponized Facebook to spread disinformation, which caused division and rancor among voters, and eroded confidence in the outcome of the 2016 U.S. presidential election.
As Wyden pointed out on the floor of the Senate in March, the tech giants have even profited from the toxic content.
“Section 230 means they [tech companies] are not required to fact-check or scrub every video, post, or tweet,” Wyden said. “But there have been far too many alarming examples of algorithms driving vile, hateful, or conspiratorial content to the top of the sites millions of people click onto every day–companies seeming to aid in the spread of this content as a direct function of their business models.”
And the harm may get a lot worse. Future bad actors may use machine learning, natural language processing, and computer vision technology to create convincing video or audio footage depicting a person doing or saying something provocative that they never really did or said. Such “deepfake” content, skillfully created and deployed with the right subject matter at the right time, could cause serious harm to individuals, or even calamitous damage to whole nations. Imagine a deepfaked president taking to Twitter to declare war on North Korea.
It’s a growing belief in Washington in 2018 that tech companies might become more focused on keeping such harmful user content off their platforms if the legal protections provided in § 230 were taken away.
Shielding giants
There’s a real question over whether Wyden’s “shield” still fits. Section 230 says web companies won’t be treated as publishers, but they look a lot more like publishers in 2018 than they did in 1996.
In 1996 websites and services often looked like digital versions of real-world things. Craigslist was essentially a digital version of the classifieds. Prodigy offered an internet on-ramp and some bulletin boards. GeoCities let “homesteaders” build pages that were organized (by content type) in “neighborhoods” or “cities.”
Over time the dominant business models changed. Many internet businesses and publishers came to rely on interactive advertising for income, a business model that relied on browser tracking and the collection of users’ personal data to target ads.
To increase engagement, internet companies began “personalizing” their sites so that each user would have a different and unique experience, tailor-made to their interests. Websites became highly curated experiences served up by algorithms. And the algorithms were fed by the personal data and browsing histories of users.
Facebook came along in 2004 and soon took user data collection to the next level. The company provided a free social network, but harvested users’ personal data to target ads to them on Facebook and elsewhere on the web. And the data was very good. Not only could Facebook capture all kinds of data about a user’s tastes, but it could capture the user’s friends’ tastes too. This was catnip to advertisers because the social data proved to be a powerful indicator of what sorts of ads the user might click on.
Facebook also leveraged its copious user data, including data on users’ clicks, likes, and shares, to inform the complex algorithms that curate the content in users’ news feeds. It began showing each user the posts, news, and other content they were most likely to respond to, based on their personal tastes. This put more attention-grabbing stuff in front of users’ eyeballs, which pumped up engagement and created more opportunities to show ads.
This sounds a lot like the work of a publisher. “Our goal is to build the perfect personalized newspaper for every person in the world,” Facebook CEO Mark Zuckerberg said in 2014.
But Facebook has always been quick to insist that it’s not a publisher, just a neutral technology platform. There’s a very good reason for that: Publishers are liable for the content they publish on their websites; neutral platforms enjoy the protections in Section 230.
An “outdated loophole that Google and Facebook can exploit”
But in reality, Facebook isn’t so different from a publisher, argues former White House technology staffer Mikey Dickerson. Once companies put their hands (or their algorithms) into the process of making decisions on what content to show individual users, Dickerson says, they’re acting like publishers.
Dickerson, who spent a good chunk of his career at Google, says big web platforms have evolved so far since 1996 that Section 230 no longer fits. “I think it’s an outdated loophole that Google and Facebook can exploit,” he told me.
Electronic Frontier Foundation staff attorney Aaron Mackey doesn’t believe that tech companies’ site curation necessarily makes them less deserving of Section 230’s protections. Mackey said the language in § 230 was written with the intention of giving websites the legal protection to exert control over the user content at their sites. That is, to moderate.
Is there an important difference between curation and moderation? Internet companies use curation to maximize user engagement. They moderate to keep their platforms clean of toxic user content. Curation, arguably, is a process of (algorithmically) selecting content to show to a user, while moderation is a process of de-selecting content–either proactively via a set of content rules or reactively via search-and-removal methods.
How to avoid hurting the little guys
Revising Section 230 would be an easy thing to screw up. Laws are only as good as their practical effects. Good laws cause more good than harm.
There’s no neat fix to the language in § 230 that would suddenly retrofit it to the needs of 2018. Any change seems destined to help some interests and hurt others.
Completely removing § 230’s protections might actually help big internet companies like Google and Facebook, and hurt the small innovative companies the statute was originally meant to help. “Section 230 has always been about the little guy,” Wyden told me, “the smaller companies that didn’t have big lobbies and didn’t have deep pockets, so that they could someday grow.” Wyden says he fears that leaving small internet companies exposed to lawsuits from individuals and states might run them out of business. They’d be spending all their time in court, and all their money paying court costs, legal fees, and damages.
Meanwhile, the big, established internet companies can afford the legal costs. And they might even benefit: the new legal exposure could create a high barrier to entry for young companies trying to get into the market.
“There is a danger of changing the law in a way that doesn’t improve the situation for small companies, but rather causes the big companies to become further and further entrenched,” EFF’s Mackey adds.
Dickerson says there may be ways of dealing with that problem. Lawmakers could reserve § 230’s protections for small companies, but not huge ones. “If you have earnings of more than $100 million a year, you are liable for the content, for example,” Dickerson said. “If you are smaller, you get the protections of § 230.” Dickerson says setting such thresholds in the law is more common in Europe, where regulators are not so squeamish about regulating big business.
There’s also a real possibility that making tech companies more liable for user content would have a chilling effect on free speech online. Web companies facing the new threat of lawsuits might err on the side of safety, restricting or removing all but the safest kinds of user content.
They’d be put in the position of making judgments on whether a piece of user content is, for example, harmful disinformation or legitimate satire. On the other hand, as Democratic Senator Mark Warner has argued, the site operators, not the government and not the courts, may be in the best position to make those judgments and take down the content that violates community guidelines.
The risk of taking action for the wrong reasons
In the current political environment there’s a risk that lawmakers might attack § 230 for political reasons, or because they don’t fully understand Wyden and Cox’s statute and its intent.
Anger against tech companies isn’t a good enough reason to take action, Mackey says. “There is a lot of backlash against online platforms right now, and especially social media companies.” But acting on that as a reason to remove the protections in Section 230 would only make matters worse, Mackey said.
Many Republicans believe that Silicon Valley tech companies are determined to suppress conservative content on their platforms.
One of them is Senator Ted Cruz. “Right now, big tech enjoys an immunity from liability on the assumption they would be neutral and fair,” Cruz said during an October debate with Beto O’Rourke. “If they’re not going to be neutral and fair, if they’re going to be biased, we should repeal the immunity from liability so they should be liable like the rest of us.”
Cruz suggested the same thing when questioning Facebook’s Mark Zuckerberg during a Senate hearing in April: “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum,” Cruz said. “Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?”
Actually, the language in Section 230 puts no expectation of neutrality or fairness on websites, nor does it say that tech platforms must choose between Section 230’s protections and the right to free speech, as Cruz seemed to suggest. Cruz, an accomplished lawyer, probably knows this, but he seems willing to twist the facts for partisan ends.
Chipping away
The fact that the term Section 230 is being mentioned on the floor of the Senate, in hearings, and even on the campaign trail suggests that changing the law is on lawmakers’ minds–for the first time in years. The EFF’s Mackey believes members of Congress have already telegraphed their intention.
“We know this is the case because Congress just passed a bill that undercut the liability protections with respect to content about adult material and sex work online,” he said.
He’s referring to the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which Congress passed in March and which was signed into law in April. The law creates another exception to the broad legal immunity provided to website operators by Section 230. FOSTA added language to Section 230 saying that websites that host sex trafficking ads are now subject to lawsuits filed by sex trafficking victims and their families, or brought by state attorneys general.
It was the website Backpage.com that set the stage for FOSTA. The site was found to be hosting thousands of ads posted by sex traffickers. Backpage, in effect, was providing the conduit between johns and prostitutes, many of them underage. And some of those prostitutes ended up dead. But since the site’s operators were shielded by Section 230, the families of the victims could not sue Backpage.com. Nor could state attorneys general bring actions on behalf of victims and their families. Only after a federal investigation found that Backpage was actively soliciting the sex trafficking ads were the principals brought to justice. FOSTA exposed any site dealing in sex trafficking content to lawsuits.
FOSTA has big implications for the future of Section 230. Sex trafficking is probably not the last type of user content that lawmakers will decide to exempt from 230’s protections. Future revisions could use FOSTA as a model to carve out content like ads designed to undermine U.S. elections, online hate speech, or revenge porn.
Here’s Wyden talking on the Senate floor about FOSTA’s long-term effects before the bill passed back in March: “Ultimately, I fear this bill will set off a chain reaction that leads Congress to cut away more categories of behavior from Section 230, and dismantle the legal framework that’s given the United States the position it holds as a tech-economy superpower.”
It was notable that Sheryl Sandberg and Facebook came out in favor of FOSTA. It was a highly calculated political move. The big tech companies sense that new regulation, in some form, is coming, and the main job of their growing legions of lobbyists in D.C. is to influence new legislation in their own interests. Supporting a relatively harmless law might head off the advancement of more painful ones in the future, the thinking goes. Supporting FOSTA was a low-cost way of presenting Facebook as a non-hostile participant in the process.
“Involvement and connection”
If sex trafficking content disqualifies the site operator from Section 230’s protections, what other “categories of behavior” might be carved out? What separates content that’s deserving of Section 230’s immunity from content that isn’t?
The answer lies not just in the nature of the content but in the site operator’s relationship to the content.
“The short answer is that Section 230’s limits on liability are when the platform ‘is responsible, in whole or in part, for the creation or development’ of the offending content,” points out EFF’s Mackey. The language he cites comes from the “definitions” section of Section 230. Mackey believes the definitions in the statute are already sufficient.
But the language in FOSTA seems to set the bar a bit lower. It stipulates that in order to be stripped of the immunity provided by Section 230, a website operator has to have “promoted” or “enabled” prostitution, or acted with “reckless disregard” of the fact that its platform was creating those effects. In the case of Backpage, federal investigators found ample evidence that the website operators were working with third parties to actively solicit sex trafficking ads.
Chris Cox believes Section 230 as a whole could draw a clearer line between web platform and content creator. He told NPR in March that Congress should revisit Section 230, and add language explaining that when the site operator actively solicits unlawful content, or is otherwise “connected to unlawful activity,” it should no longer enjoy the legal immunity provided by the statute. Section 230’s protections are meant for sites that act as “pure intermediaries,” Cox suggested.
After all the election meddling, disinformation, terrorist recruiting, and hate speech we’ve seen in the last few years, that raises the question of whether we can look at Facebook, Google, Twitter, and others and truthfully call them “pure” intermediaries.
Were Facebook and Google “involved or connected” with the ads purchased on their platforms by Russian trolls attempting to disrupt the 2016 presidential election? Very possibly. What if the party buying the ads claims to be somebody they’re not?
Was Google involved or connected with the ISIS groups that posted recruitment videos on YouTube? After all, those videos were served up by YouTube’s suggestion engine to increase engagement and ad views. Similarly, the algorithms that run Facebook’s news feed were tuned to bring wide exposure to the controversial and divisive fake news stories and graphics favored by Russian trolls. Does that constitute “involvement” and “connection”? Can tech platforms be seen as “pure intermediaries” if it’s their algorithms, acting as proxies, doing the curating, moderating, and promoting of user content?
Lawmakers may have to wrestle with those distinctions when choosing the next types of content to carve out from Section 230, or when looking at the statute’s relevancy in general.
Wyden, and most of the people I interviewed for this article, think the best solution to the user content problem is for tech companies to voluntarily do what it takes to find and remove the toxic stuff from their platforms. Section 230, in fact, was meant to give them the legal cover to do just that: to wield Wyden’s sword, early and often. The problem is that Section 230 doesn’t contain the force of law to compel them to do it. There’s no language saying, “Get that garbage off your site within 24 hours or else!” So, for the tech companies, it remains largely a PR and public policy issue, not something that directly affects their bottom line. The big platform companies have been doing just enough content takedowns and bad-actor ejections to keep new regulations at bay.
The only real leverage that Congress has is the threat of taking away the protections in Section 230 that tech companies have enjoyed for so long. Wyden was sure to remind Facebook’s Sheryl Sandberg and Twitter’s Jack Dorsey of that when he questioned them during a hearing in September.
“I told both of them, ‘If you’re not willing to use the sword, there are those who may try to take away the shield.’”