By Andrew Jacobsohn.
Amidst the constant barrage of unprecedented and breaking news in the post-2020 election and pre-inauguration season, it was easy to miss a less-notable first: on January 1, 2021, Congress overrode a veto by then-President Trump for the first (and only) time in his term. Trump vetoed the bill because it did not include changes to the Communications Decency Act § 230 (“CDA 230”). He claimed CDA 230 unfairly permitted large technology platforms to censor him and other conservative figures. But Trump wasn’t the only one clamoring for change to CDA 230. President Biden and Speaker Pelosi, along with other congressional Democrats, have expressed support for removing or reforming the provision.
This bipartisan support indicates that CDA 230 reform (or perhaps even revocation) seems inevitable. But given CDA 230’s immense impact on our online lives, any changes to it will likely have a similar widespread impact.
What is CDA 230?
CDA 230’s key provision, § 230(c)(1), is only twenty-six words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It was first passed as part of the Communications Decency Act of 1996, and was conceived to ensure internet companies could self-regulate without fear of legal liability. In other words, Congress intended the Act to allow internet companies to remove offensive or illegal materials posted by their users, without punishing a company if it failed to remove this material. Since the Act’s passage, online forums have been shielded from culpability for material posted by their users.
But CDA 230 has been called “the twenty-six words that created the Internet,” despite being passed when the Internet as we now know it was still in its infancy. In 1996, the Internet was nifty, newfangled, and limited. Although it seems ludicrous now, many industry leaders suggested at the time that the Internet would never creep into our lives in any significant fashion. It was against this backdrop that CDA 230 was conceived.
These predictions never materialized. The Internet today is not just pervasive; it is arguably monopolistic. This is true for better and for worse. On one hand, the Internet has allowed some students and employees to continue their work in the face of the global COVID-19 pandemic. On the other hand, social media has facilitated misinformation and foreign and domestic terrorism.
What does CDA 230 do today?
The massive impact of the Internet has led to a proportionally massive reach for CDA 230. CDA 230’s safe harbor has only a few hard limits: federal crimes, intellectual property claims, and sex trafficking violations are the most significant. But outside of those areas, CDA 230’s protection is broad. For example, Harvard and MIT invoked CDA 230 in an attempt to escape liability for third-party content hosted on their websites that did not comply with the Americans with Disabilities Act (“ADA”). The U.S. District Court for the District of Massachusetts agreed that the universities could not be held responsible for noncompliant third-party content.
The court probably got this one right. Although ADA noncompliance is a far cry from the defamatory or offensive content originally envisioned by CDA 230, it’s hard to imagine a university (or, for that matter, any platform whose offerings include educational videos made by users, such as YouTube) being held liable under the ADA for content it had no hand in creating. And although the court ruled that the universities could escape liability here, it also correctly held that any content created by the universities (or “someone associated” with the universities) would not receive CDA 230 protection.
A New Example: Google’s CDA 230 Gamble
On February 10, 2021, a federal district judge in the Ninth Circuit dismissed a proposed class action alleging that Google had illegally enticed children to gamble by allowing lootbox-based games in its Play store. Google employed CDA 230 as its defense, and the judge found Google immune from liability, even if the lootboxes were considered gambling.
Google did not create either the lootboxes or the games that contain them, just as Harvard did not create the ADA-noncompliant videos. But Google’s relationship to its Play store content exemplifies the strange behaviors that platforms have protected using CDA 230.
Lootbox systems are akin to virtual slot machines: players pay real money to receive an unknown virtual item within the game, in the hopes that the result is a rare in-game item. Each “spin of the wheel” is called a microtransaction, and can cost as little as $1 or more than $100, depending on the game. One of the most successful profiteers of this system, Electronic Arts, grossed $1.49 billion last year from the lootboxes in its sports games alone. The incredible efficiency with which these games have persuaded players (or, more often, their unwitting parents) to part with their money has prompted numerous lawsuits like the one against Google. Numerous countries have considered or implemented restrictions on this quasi-gambling model.
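The slot-machine economics described above can be made concrete with a short simulation. This is an illustrative sketch only: the drop rates and box price below are invented for the example, not taken from any real game, which rarely disclose their exact odds.

```python
import random

# Hypothetical drop table -- real games rarely publish exact odds.
DROP_RATES = {"common": 0.90, "rare": 0.09, "ultra_rare": 0.01}
BOX_PRICE = 2.99  # illustrative price per box, in dollars


def open_box(rng: random.Random) -> str:
    """Draw one item according to the weighted drop table."""
    roll = rng.random()
    cumulative = 0.0
    for rarity, rate in DROP_RATES.items():
        cumulative += rate
        if roll < cumulative:
            return rarity
    return "common"  # guard against floating-point rounding


def cost_to_get(target: str, rng: random.Random) -> float:
    """Simulate buying boxes until the target rarity drops; return total spend."""
    spent = 0.0
    while True:
        spent += BOX_PRICE
        if open_box(rng) == target:
            return spent


rng = random.Random(42)
trials = [cost_to_get("ultra_rare", rng) for _ in range(1000)]
average = sum(trials) / len(trials)
```

With a 1% drop rate, the expected number of boxes to obtain the rare item is 100, so the average spend converges toward roughly $299 per player; a minority of unlucky players spend far more, which is the dynamic driving the lawsuits the paragraph describes.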
Google hosts countless games on its Play store which thrive on this model (popular examples include Clash of Clans and Puzzle & Dragons). Google facilitates these transactions by processing each one through the Play store’s uniform payment system and taking 30% of the revenue from every microtransaction. So although Google did not create the content at issue, arguments persist that it has a much closer beneficial relationship to a particular lootbox game than (for example) Twitter has to a particular Twitter user. To that end, the judge granted plaintiffs leave to amend their claims to demonstrate how Google’s conduct went beyond mere “publishing” of third-party content.
How will CDA 230 survive, if at all?
Attempts at CDA 230 reform (or repeal) have begun to escalate under the new presidential administration. One proposed reform is the SAFE TECH Act, which would carve out a sweeping exception to the safe harbor for cases in which the online platform has been paid to promote the content.
While this change would easily resolve Google’s case in favor of the plaintiffs, its “exception” is cut so broadly that it would effectively act as a repeal. Internet companies all receive payment for the speech on their platforms—whether that payment is direct payment for promoting certain content, indirect payment from embedded advertisements, or something in the middle, as in Google’s case. And this reform does not address Republican quarrels with Twitter or other platforms arguably censoring conservative opinion (even if these particular quarrels may ultimately have more to do with the First Amendment than with CDA 230).
SAFE TECH’s language is too broad to be feasible, but it probably gets the spirit right. CDA 230 has not grown with the Internet. Drafters in 1996 could not have predicted lootboxes, Twitter-based presidential campaigns, or Reddit-fueled stock market irrationality. That is not to say that these things are inherently bad: there is a certain beauty to the memetic cultures that have developed online in the pandemic’s wake. But these examples hint at the social stakes at play in our online world. It is unclear if there is a targeted reform that can square the social stakes and immense monetary influence. But if a reform can do that, it might just lead to a better Internet.