Musk wants Twitter to stop making contentious decisions about speech. “[G]oing beyond the law is contrary to the will of the people,” he declares. Just following the First Amendment, he imagines, is what the people want. Is it, though? The First Amendment is far, far more absolutist than Musk realizes.
Remember the neo-Nazis marching with burning torches and screaming “the Jews will not replace us!”? The First Amendment required Charlottesville to allow that demonstration. Some of the marchers were arrested and prosecuted for committing acts of violence; one even killed a bystander with his car. The First Amendment permits the government to punish violent conduct but—contrary to what Musk believes—almost none of the speech associated with it.
The Constitution protects “freedom for the thought that we hate,” as Justice Oliver Wendell Holmes declared in a 1929 dissent that has become the bedrock of modern First Amendment jurisprudence. In most of the places where we speak, the First Amendment does not set limits on what speech the host, platform, proprietor, station, or publication may block or reject. The exceptions are few: actual town squares, company-owned towns, and the like—but not social media, as every court to decide the issue has held.
Musk wants to treat Twitter as if it were legally a public forum. A laudable impulse—and of course Musk has every legal right to do that. But does he really want to? His own statements indicate not. And on a practical level, it would not make much sense. Allowing anyone to say anything lawful, or even almost anything lawful, would make Twitter a less useful, less vibrant virtual town square than it is today. It might even set the site on a downward spiral from which it never recovers.
Can Musk have it both ways? Can Twitter help ensure that everyone has a soapbox, however appalling their speech, without alienating both users and the advertisers who sustain the site? Twitter is already working on a way to do just that—by funding Bluesky—but Musk doesn’t seem interested. Nor does he seem interested in other technical and institutional improvements Twitter could make to address concerns about arbitrary content moderation. None of these reforms would achieve what seems to be Musk’s real goal: politically neutral outcomes. We’ll discuss all this in Part II.
How Much Might Twitter’s Business Model Change?
A decade ago, a Twitter executive famously described the company as “the free speech wing of the free speech party.” Musk may imagine returning to some purer, freer version of Twitter when he says “I don’t care about the economics at all.” But in fact, increasing Twitter’s value as a “town square” will require Twitter to keep striking a careful balance between letting individual users say what they want and fostering an environment that many people want to use regularly.
User Growth. A traditional public forum (like Lee Park in Charlottesville) is indifferent to whether people choose to use it. Its function is simply to provide a space for people to speak. But if Musk didn’t care how many people used Twitter, he’d buy an existing site like Parler or build a new one. He values Twitter for the same reason any network is valuable: network effects. Digital markets have always been ruled by Metcalfe’s Law: the value of a network is proportional to the square of the number of its nodes.
No, not all “nodes” are equal. Twitter is especially popular among journalists, politicians and certain influencers. Yet the site has only 39.6 million active daily U.S. users. That may make Twitter something like ten times larger than Parler, but it’s only one-seventh the size of Facebook—and only the world’s fifteenth-largest social network. To some in the “very online” set, Twitter may seem like everything, but 240 million Americans age 13+ don’t use Twitter every day. Quadrupling Twitter’s user base would make the site still only a little more than half as large as Facebook, but Metcalfe’s law suggests that would make Twitter roughly sixteen times more impactful than it is today.
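To make the arithmetic explicit, here is a rough, back-of-the-envelope application of Metcalfe’s Law, treating daily users as the network’s nodes (a simplifying assumption, since, as noted above, not all nodes are equal):

$$V \propto n^2 \quad\Rightarrow\quad \frac{V(4n)}{V(n)} = \frac{(4n)^2}{n^2} = 16$$

The point is only that, on this heuristic, a network’s value grows much faster than linearly with its user base, which is why user growth matters so much to whatever Musk hopes to build.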
Of course, trying to maximize user growth is exactly what Twitter has been doing since 2006. It’s a much harder challenge than for Facebook or other sites premised on existing connections. Getting more people engaged on Twitter requires making them comfortable with content from people they don’t know offline. Twitter moderates harmful content primarily to cultivate a community where the timid can express themselves, where moms and grandpas feel comfortable, too. Very few Americans want to be anywhere near anything like the Charlottesville rally—whether offline or online.
User Engagement. Twitter’s critics allege the site highlights the most polarizing, sensationalist content because it drives engagement on the site. It’s certainly possible that a company less focused on its bottom line might change its algorithms to focus on more boring content. Whether that would make the site more or less useful as a town square is the kind of subjective value judgment that would be difficult to justify under the First Amendment if the government attempted to legislate it.
But maximizing Twitter’s “town squareness” means more than maximizing “time on site”—the gold standard for most sites. Musk will need to account for users’ willingness to actually engage in dialogue on the site.
Short of leaving Twitter altogether, overwhelmed and disgusted users may turn off notifications for “mentions” of them, or limit who can reply to their tweets. As Aaron Ross Powell notes, such a response “effectively turns Twitter from an open conversation to a set of private group chats the public can eavesdrop on.” It might be enough, if Musk truly doesn’t care about the economics, for Twitter to be a place where anything lawful goes and users who don’t like it can go elsewhere. But the realities of running a business are obviously different from those of traditional, government-owned public fora. If Musk wants to keep or grow Twitter’s user base and maintain high engagement, he’ll need to account for a plethora of considerations.
Revenue. Twitter makes money by making users comfortable with using the site—and advertisers comfortable being associated with what users say. This is much like the traditional model of any newspaper. No reputable company would buy ads in a newspaper willing to publish everything lawful. These risks are much, much greater online. Newspapers carefully screen both writers before they’re hired and content before it’s published. Digital publishers generally can’t do likewise without ruining the user experience. Instead, users help a mixture of algorithms and human content moderators flag content potentially toxic to users and advertisers.
Even without going as far as Musk says he wants to, alternative “free speech” platforms like Gab and Parler have failed to attract any mainstream advertisers. By taking Twitter private, Musk could relieve pressure to maximize quarterly earnings. He might be willing to lose money, but the lenders financing roughly half the deal definitely aren’t. The interest payments on their loans could exceed Twitter’s 2021 earnings before interest, taxes, depreciation, and amortization. How will Twitter support itself?
Protected Speech That Musk Already Wants To Moderate
As Musk’s analysts examine whether the purchase is really worth doing, the key question they’ll face is just what it would mean to cut back on content moderation. Ultimately, Musk will find that the First Amendment just doesn’t offer the roadmap he thinks it does. Indeed, he’s already implicitly conceded that by saying he wants to moderate certain kinds of content in ways the First Amendment wouldn’t allow.
Spam. “If our twitter bid succeeds,” declared Musk in announcing his takeover plans, “we will defeat the spam bots or die trying!” The First Amendment, if he were using it as a guide for moderation, would largely thwart him.
Far from banning spam, as Musk proposes, the 2003 CAN-SPAM Act merely requires email senders to, most notably, include unsubscribe options, honor unsubscribe requests, and accurately label both subject and sender. Moreover, the law defines spam narrowly: “the commercial advertisement or promotion of a commercial product or service.” Why such a narrow approach?
Even unsolicited commercial messages are protected by the First Amendment so long as they’re truthful. Because truthful commercial speech receives only “intermediate scrutiny,” it’s easier for the government to justify regulating it. Thus, courts have also upheld the authority of public universities to block commercial solicitations.
But, as courts have noted, “the more general meaning” of “spam” “does not (1) imply anything about the veracity of the information contained in the email, (2) require that the entity sending it be properly identified or authenticated, or (3) require that the email, even if true, be commercial in character.” Check any spam folder and you’ll find plenty of messages that don’t obviously qualify as commercial speech, which the Supreme Court has defined as speech which does “no more than propose a commercial transaction.”
Some emails in your spam folder come from non-profits, political organizations, or other groups. Such non-commercial speech is fully protected by the First Amendment. Some messages you signed up for may inadvertently wind up in your spam filter; plaintiffs regularly sue when their emails get flagged as spam. When it’s private companies like ISPs and email providers making such judgments, the case is easy: the First Amendment broadly protects their exercise of editorial judgment. Challenges to public universities’ email filters have been brought by commercial spammers, so the courts have dodged deciding whether email servers constituted public fora. These courts have implied, however, that if such taxpayer-funded email servers were public fora, email filtering of non-commercial speech would have to be content- and viewpoint-neutral, which may be impossible.
Anonymity. After declaring his intention to “defeat the spam bots,” Musk added a second objective of his plan for Twitter: “And authenticate all real humans.” After an outpouring of concern, Musk qualified his position, promising to strike a “balance.”
Whatever “balance” Musk has in mind, the First Amendment doesn’t tell him how to strike it. Authentication might seem like a content- and viewpoint-neutral way to fight tweet-spam, but it implicates a well-established First Amendment right to anonymous and pseudonymous speech.
Fake accounts plague most social media sites, but they’re a bigger problem for Twitter since, unlike Facebook, it’s not built around existing offline connections and Twitter doesn’t even try to require users to use their real names. A 2021 study estimated that “between 9% and 15% of active Twitter accounts are bots” controlled by software rather than individual humans. Bots can have a hugely disproportionate impact online. They’re more active than humans and can coordinate their behavior, as that study noted, to “manufacture fake grassroots political support, promote terrorist propaganda and recruitment, manipulate the stock market, and disseminate rumors and conspiracy theories.” Given Musk’s concerns about “cancel culture,” he should recognize online harassment, especially harassment targeting employers and intimate personal connections, as a way that lawful speech can be wielded against lawful speech.
When Musk talks about “authenticating” humans, it’s not clear what he means. Clearly, “authentication” means more than simply requiring CAPTCHAs to make it harder for machines to create Twitter accounts; those have been shown to be defeatable by spambots. Surely, he doesn’t mean making real names publicly visible, as on Facebook. After all, pseudonymous publications have always been a part of American political discourse. Presumably, Musk means Twitter would, instead of merely requiring an email address, somehow verify and log the real identity behind each account. This isn’t really a “middle ground”: pseudonyms alone won’t protect vulnerable users from governments, Twitter employees, or anyone else who might be able to access Twitter’s logs. However such logs are protected, the mere fact of collecting such information would necessarily chill speech by those concerned about being persecuted for what they say. Such authentication would clearly be unconstitutional if a government were to do it.
“Anonymity is a shield from the tyranny of the majority,” ruled the Supreme Court in McIntyre v. Ohio Elections Comm’n (1995). “It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.” As one lower court put it, “the free exchange of ideas on the Internet is driven in large part by the ability of Internet users to communicate anonymously.”
We know how these principles apply to the Internet because Congress has already tried to require websites to “authenticate” users. The Child Online Protection Act (COPA) of 1998 required websites to age-verify users before they could access material that could be “harmful to minors.” In practice, this meant providing a credit card, which supposedly proved the user was likely an adult. Courts blocked the law and, after a decade of litigation, the U.S. Court of Appeals for the Third Circuit finally struck it down in 2008. The court held that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” The Supreme Court let that decision stand. The United Kingdom now plans to implement its own version of COPA, but First Amendment scholars broadly agree: age verification and user authentication are constitutional non-starters in the United States.
What kind of “balance” might the First Amendment allow Twitter to strike? Clearly, requiring all users to identify themselves wouldn’t pass muster. But suppose Twitter required authentication only for those users who exhibit spambot-like behavior—say, coordinating tweets with other accounts that behave like spambots. This would be different from COPA, but would it be constitutional? Probably not. Courts have explicitly recognized a right to send non-commercial spam (unsolicited messages). For example: “were the Federalist Papers just being published today via e-mail,” warned the Virginia Supreme Court in striking down a Virginia anti-spam law, “that transmission by Publius would violate the statute.”
Incitement. In his TED interview, Musk readily agreed with Anderson that “crying fire in a movie theater” “would be a crime.” No metaphor has done more to sow confusion about the First Amendment. It comes from the Supreme Court’s 1919 Schenck decision, which upheld the conviction of Charles Schenck, general secretary of the Socialist Party of America, for distributing pamphlets criticizing the military draft. Advocating obstruction of military recruiting, held the Court, constituted a “clear and present danger.” Justice Oliver Wendell Holmes mentioned “falsely shouting fire in a theatre” as a rhetorical flourish to drive the point home.
But Holmes revised his position just months later when he dissented in a similar case, Abrams v. United States. “[T]he best test of truth,” he wrote, “is the power of the thought to get itself accepted in the competition of the market.” That concept guides First Amendment decisions to this day—not Schenck’s vivid metaphor. Musk wants the open marketplace of ideas Holmes lauded in Abrams—yet also, somehow, Schenck’s much lower standard.
In Brandenburg v. Ohio (1969), the Court finally repudiated Schenck’s approach: the First Amendment does not “permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Thus, a Klansman’s openly racist speech and calls for a march on Washington were protected by the First Amendment. The Brandenburg standard has proven almost impossible to satisfy when speakers are separated from their listeners in both space and time. Even the Unabomber Manifesto wouldn’t qualify—which is why The New York Times and The Washington Post faced no legal liability when they agreed to publish the essay back in 1995 (to help law enforcement stop the serial mail-bomber).
Demands that Twitter and other social media remove “harmful” speech—such as COVID misinformation—frequently invoke Schenck. Indeed, while many expect Musk will reinstate Trump on Twitter, his embrace of Schenck suggests the opposite: Trump could easily have been convicted of incitement under Schenck’s “clear and present danger” standard.
Self-Harm. Musk’s confusion over incitement may also extend to its close cousin: speech encouraging, or about, self-harm. Like incitement, “speech integral to criminal conduct” isn’t constitutionally protected, but, also like incitement, courts have defined that term so narrowly that the vast majority of content that Twitter currently moderates under its suicide and self-harm policy is protected by the First Amendment.
William Francis Melchert-Dinkel, a veteran nurse with a suicide fetish, claimed to have encouraged dozens of strangers to kill themselves and to have succeeded at least five times. Using fake profiles, Melchert-Dinkel entered into fake suicide pacts (“i wish [we both] could die now while we are quietly in our homes tonite:)”), invoked his medical experience to advise hanging over other methods (“in 7 years ive never seen a failed hanging that is why i chose that”), and asked to watch his victims hang themselves. He was convicted of violating Minnesota’s assisted suicide law in two cases, but the Minnesota Supreme Court voided the statute’s prohibitions on “advis[ing]” and “encourag[ing]” suicide. Only for providing “step-by-step instructions” on hanging could Melchert-Dinkel ultimately be convicted.
In another case, the Massachusetts Supreme Judicial Court upheld the manslaughter conviction of Michelle Carter: “she did not merely encourage the victim,” her boyfriend, “but coerced him to get back into the truck, causing his death” from carbon monoxide poisoning. Like Melchert-Dinkel, Carter provided specific instructions on completing suicide; moreover, “knowing the victim was inside the truck and that the water pump was operating — … she could hear the sound of the pump and the victim’s coughing — [she] took no steps to save him.”
Such cases are the tiniest tip of a very large iceberg of self-harm content. With nearly one in six teens intentionally hurting themselves annually, researchers found 1.2 million Instagram posts in 2018 containing “one of five popular hashtags related to self-injury: #cutting, #selfharm, #selfharmmm, #hatemyself and #selfharmawareness.” More troubling, the rate of such posts nearly doubled across that year. Unlike suicide or assisted suicide, self-harm, even by teenagers, isn’t illegal, so even supplying direct instructions about how to do it would be constitutionally protected speech. With the possible exception of direct user-to-user instructions about suicide, the First Amendment would require a traditional public forum to allow all this speech. It wouldn’t even allow Twitter to restrict access to self-harm content to adults—for the same reasons COPA’s age-gating requirement for “harmful-to-minors” content was unconstitutional.
Trade-Offs in Moderating Other Forms of Constitutionally Protected Content
So it’s clear that Musk doesn’t literally mean Twitter users should be able to “speak freely within the bounds of the law.” He clearly wants to restrict some speech in ways that the government could not in a traditional public forum. His invocation of the First Amendment likely refers primarily to moderation of speech considered by some to be harmful—which the government has very limited authority to regulate. Such speech presents one of the most challenging content moderation issues: how a business should balance a desire for free discourse with the need to foster an environment that the most people will want to use for discourse. That has to matter to Musk, however much money he’s willing to lose on supporting a Twitter that alienates advertisers.
Hateful & Offensive Speech. Two leading “free speech” networks moderate, or even ban, hateful or otherwise offensive speech. “GETTR defends free speech,” the company said in January after banning former Blaze TV host Jon Miller, “but there is no room for racial slurs on our platform.” Likewise, Gab bans “doxing,” the exposure of someone’s private information with the intent to encourage others to harass them. These policies clearly aren’t consistent with the First Amendment: hate speech is fully protected by the First Amendment, and so is most speech that might colloquially be considered “harassment” or “bullying.”
In Texas v. Johnson (1989), the Supreme Court struck down a ban on flag burning: “if there is a bedrock principle underlying the First Amendment, it is simply that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” In Matal v. Tam (2017), the Supreme Court reaffirmed this principle and struck down a prohibition on offensive trademark registrations: “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate.”
Most famously, in 1978, the National Socialist Party of America won the right to march down the streets of Skokie, Illinois, a majority-Jewish town where ten percent of the population had survived the Holocaust. The town had refused to issue a permit to march. Displaying the swastika, Skokie’s lawyers argued, amounted to “fighting words”—which the Supreme Court had ruled, in 1942, could be forbidden if they had a “direct tendency to cause acts of violence by the persons to whom, individually, the remark is addressed.” The Illinois Supreme Court disagreed: “The display of the swastika, as offensive to the principles of a free nation as the memories it recalls may be, is symbolic political speech intended to convey to the public the beliefs of those who display it”—not “fighting words.” Even the revulsion of “the survivors of the Nazi persecutions, tormented by their recollections … does not justify enjoining defendants’ speech.”
Protection of “freedom for the thought we hate” in the literal town square is sacrosanct. The American Civil Liberties Union lawyers who defended the Nazis’ right to march in Skokie were Jews as passionately committed to the First Amendment as was Justice Holmes (post-Schenck). But they certainly wouldn’t have insisted the Nazis be invited to join in a Jewish community day parade. Indeed, the Court has since upheld the right of parade organizers to exclude messages they find abhorrent.
Does Musk really intend Twitter to host Nazis and white supremacists? Perhaps. There are, after all, principled reasons for not banning speech, even in a private forum, just because it is hateful. But there are unavoidable trade-offs. Musk will have to decide what balance will optimize user engagement and keep advertisers (and those financing his purchase) satisfied. It’s unlikely that those lines will be drawn entirely consistent with the First Amendment; at most, it can provide a very general guide.
Harassment & Threats. Often, users are banned by social media platforms for “threatening behavior” or “targeted abuse” (e.g., harassment, doxing). The first category may be easier to apply, but even then, a true public forum would be sharply limited in which threats it could restrict. “True threats,” explained the Court in Virginia v. Black (2003), “encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” But courts are split on whether the First Amendment requires that a speaker have the subjective intent to threaten the target, or whether it suffices that a reasonable recipient would have felt threatened. Maximal protection for free speech means a subjective requirement, lest the law punish protected speech merely because it might be interpreted as a threat. But in most cases, it would be difficult—if not impossible—to establish subjective intent without the kind of access to witnesses and testimony courts have. These are difficult enough issues even for courts; content moderators will likely find it impossible to adhere strictly, or perhaps even approximately, to First Amendment standards.
Targeted abuse and harassment policies present even thornier issues; what is (or should be) prohibited in this area remains among the most contentious aspects of content moderation. While social media sites vary in how they draw lines, all the major sites “[go] far beyond,” as Musk put it, what the First Amendment would permit a public forum to proscribe.
Mere offensiveness does not suffice to justify restricting speech as harassment; such content-based regulation is generally unconstitutional. Many courts have upheld harassment laws insofar as they target not speech but conduct, such as placing repeated telephone calls to a person in the middle of the night or physically stalking someone. Some scholars argue instead that the consistent principle across cases is that proscribable harassment involves an unwanted physical intrusion into a listener’s private space (whether their home or a physical radius around the person) for the purposes of unwanted one-on-one communication. Either way, neatly and consistently applying legal standards of harassment to content moderation would be no small lift.
Some lines are clear. Ranting about a group hatefully is not itself harassment, while sending repeated unwanted direct messages to an individual user might well be. But Twitter isn’t the telephone network. Line-drawing is more difficult when speech is merely about a person, or occurs in the context of a public, multi-party discussion. Is it harassment to be the “reply guy” who always has to have the last word on everything? What about tagging a person in a tweet about them, or even simply mentioning them by name? What if tweets about another user are filled with pornography or violent imagery? First Amendment standards protect similar real-world speech, but how many users want to be party to such conversations?
Again, Musk may well want to err on the side of more permissiveness when it comes to moderation of “targeted abuse” or “harassment.” We all want words to keep their power to motivate; that remains their most important function. As the Supreme Court said in 1949: “free speech… may indeed best serve its high purpose when it induces a condition of unrest … or even stirs people to anger. Speech is often provocative and challenging. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for the acceptance of an idea.”
But Musk’s goal is ultimately, in part, to attract users and keep them engaged. To do that, Twitter will have to moderate some content that the First Amendment would not allow the government to punish. Content moderators have long struggled with how to balance these competing interests. The only certainty is that this is, and will continue to be, an extremely difficult tightrope to walk—especially for Musk.
Obscenity & Pornography. Twitter already allows pornography involving consenting adults. Yet even this is more complicated than simply following the First Amendment. On the one hand, child sexual abuse material (CSAM), like obscenity, falls entirely outside the First Amendment’s protection. All social media sites ban CSAM (and all mainstream sites proactively filter for, and block, it). On the other hand, nonconsensual pornography involving adults generally isn’t obscene, and therefore is protected by the First Amendment. Some courts have nonetheless upheld state “revenge porn” laws, but those laws are actually much narrower than Twitter’s flat ban (“You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.”)
Critical to the Vermont Supreme Court’s decision to uphold the state’s revenge porn law were two features that made the law “narrowly tailored.” First, it required intent to “harm, harass, intimidate, threaten, or coerce the person depicted.” Such an intent standard is a common limiting feature of speech restrictions upheld by courts. Yet none of Twitter’s policies turn on intent. Again, it would be impossible to meaningfully apply intent-based standards at the scale of the Internet and outside the established procedures of courtrooms. Intent is a complex inquiry unto itself; content moderators would find it nearly impossible to make these decisions with meaningful accuracy. Second, the Vermont law excluded “[d]isclosures of materials that constitute a matter of public concern,” and those “made in the public interest.” Twitter does have a public-interest exception to its policies, yet, Twitter notes:
At present, we limit exceptions to one critical type of public-interest content—Tweets from elected and government officials—given the significant public interest in knowing and being able to discuss their actions and statements.
It’s unlikely that Twitter would actually allow public officials to post pornographic images of others without consent today, simply because they were public officials. But to “follow the First Amendment,” Twitter would have to go much further than this: it would have to allow anyone to post such images, in the name of the “public interest.” Is that really what Musk means?
Gratuitous Gore. Twitter bans depictions of “dismembered or mutilated humans; charred or burned human remains; exposed internal organs or bones; and animal torture or killing.” All of these are protected speech. Violence is not obscenity, the Supreme Court ruled in Brown v. Entertainment Merchants Association (2011), and neither is animal cruelty, ruled the Court in U.S. v. Stevens (2010). Thus, the Court struck down a California law barring the sale of “violent” video games to minors and requiring that they be labeled “18,” and a federal law criminalizing “crush videos” and other depictions of the torture and killing of animals.
The Illusion of Constitutionalizing Content Moderation
The problem isn’t just that the “bounds of the law” aren’t where Musk may think they are. For many kinds of speech, identifying those bounds and applying them to particular facts is a far more complicated task than any social media site is really capable of.
It’s not as simple as whether “the First Amendment protects” certain kinds of speech. Only three things we’ve discussed fall outside the protection of the First Amendment altogether: CSAM, non-expressive conduct, and speech integral to criminal conduct. In other cases, speech may be protected in some circumstances, and unprotected in others.
Musk is far from the only person who thinks the First Amendment can provide clear, easy answers to content moderation questions. But invoking First Amendment concepts without doing the kind of careful analysis courts do in applying complex legal doctrines to facts means hiding the ball: it conceals subjective value judgments behind an illusion of faux-constitutional objectivity.
This doesn’t mean Twitter couldn’t improve how it makes content moderation decisions, or that it couldn’t come closer to doing something like what courts do in sussing out the “bounds of the law.” Musk would want to start by considering Facebook’s initial efforts to create a quasi-judicial review of the company’s most controversial, or precedent-setting, moderation decisions. Beginning in 2018, Facebook funded the creation of an independent Oversight Board, a diverse panel of outside stakeholders appointed to assess complaints. The Board has issued 23 decisions in little more than a year, including one on Facebook’s suspension of Donald Trump for posts he made during the January 6 storming of the Capitol, expressing support for the rioters.
Trump’s lawyers argued the Board should “defer to the legal principles of the nation state in which the leader is, or was governing.” The Board responded that its “decisions do not concern the human rights obligations of states or application of national laws, but focus on Facebook’s content policies, its values and its human rights responsibilities as a business.” The Oversight Board’s charter makes this point very clear. Twitter could, of course, tie its policies to the First Amendment and create its own oversight board, chartered with enforcing the company’s adherence to First Amendment principles. But by now, it should be clear how much more complicated that would be than it might seem. While constitutional protection of speech is clearly established in some areas, new law is constantly being created on the margins—by applying complex legal standards to a never-ending kaleidoscope of new fact patterns. The complexities of these cases keep many lawyers busy for years; it would be naïve to presume that an extra-judicial board will be able to meaningfully implement First Amendment standards.
At a minimum, any serious attempt at constitutionalizing content moderation would require hiring vastly more humans to process complaints, make decisions, and issue meaningful reports—even if Twitter did less content moderation overall. And Twitter’s oversight board would have to be composed of bona fide First Amendment experts. Even then, such a board’s decisions might later be undercut by actual court decisions involving similar facts. This doesn’t mean that attempting to hew to the First Amendment is a bad idea; in some areas, it might make sense, but it will be far more difficult than Musk imagines.
In Part II, we’ll ask what principles, if not the First Amendment, should guide content moderation, and what Musk could do to make Twitter more of a “de facto town square.”
Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.