Hate Speech & The First Amendment
Edited by Arya Kumar, Rishi Chandra, Amelia Cantwell, and Owen Andrews.
The Case for Protecting Hate Speech
“The conduct for which the defendant stands charged [...] is a serious affront to the values of a civilised society.” The defendant in question is Alexander Phillip David Keating, a 37-year-old Australian citizen, who made over 40 posts on X between February and April 2025 that were racist, Islamophobic, homophobic, and violent. He was charged with using a “carriage service to offend” and pleaded guilty in late 2025. Deputy Chief Magistrate Theo Tsavdaridis, who presided over the case, warned that “any gains we have painstakingly made as a multicultural society over many years are quickly eroded and cast into oblivion” in the face of posts like Keating’s. There is no question that Keating’s speech was despicable. The deeper issue, however, is whether the government should have a broader power to criminalize expression that is deemed hateful or offensive. If so, who should have the authority to make that determination? Condemning speech socially is not the same as criminalizing it, and keeping the two responses distinct is crucial to understanding hate speech’s place in speech protections worldwide.
The question of who gets to classify hate speech is not hypothetical in Australia. Under the country’s Racial Discrimination Act of 1975, it is unlawful to engage in conduct that is reasonably likely to “offend, insult, humiliate or intimidate” a person or a group on the basis of race, color, or national or ethnic origin. While intended to protect vulnerable communities, this standard relies on the broad and subjective concepts of offense and humiliation. In cases like Keating’s, it gives the government the authority to determine when speech crosses the line into criminal activity.
Failing to protect hate speech opens the door to viewpoint-based censorship, since such censorship requires the government to decide which ideas are too offensive to be expressed. That power is especially dangerous because concepts like offense, harm, and hate are highly elastic and lack a clear or objective boundary. Once free speech standards become subjective, speech that is merely controversial or politically charged can be reclassified as hateful and punished accordingly. This danger isn’t theoretical, either. Keating told the court that he attributed his conduct to the political environment and to what he perceived as the deterioration of Australian society, showing that his speech carried serious political weight. Although Keating never faced jail time, his case marks a troubling moment in which the Australian government assumed the authority to decide what speech was offensive and thus what political expression was unacceptable. When governments are allowed to decide what is offensive, so-called “harm prevention” can quickly slide into suppressing political expression.
American constitutional law attempts to stop that slide. Nothing in the First Amendment requires protecting true threats or fighting words (words likely to provoke immediate violence), but time and time again, the Court has held that the hateful or offensive nature of speech is not grounds to punish it. In 1969, Brandenburg v. Ohio came before the Supreme Court, challenging an Ohio law that prohibited public speech advocating illegal activity. Justice Douglas wrote in his concurrence that the “government has no power to invade [one’s] sanctuary of belief and conscience.” It doesn’t matter what that belief is: Brandenburg tells us that the government has no right to limit its expression except when it is directed to inciting “imminent lawless action” and likely to produce it, a standard now known as the Brandenburg test. Under this standard, the government must prove a direct causal link between speech and imminent harm to justify limiting it. The test appears in cases such as 1973’s Hess v. Indiana, where the Court held that an antiwar protester’s statement about taking the street “later” was protected, as it was neither directed at a particular person or group nor intended and likely to incite imminent lawless action. As it stands in the US, the government’s power is limited to the specific harms that speech causes rather than to any broad offense it could cause. The point is not to deny that hate speech can be damaging but to prevent the government from using subjective criteria as a basis for criminal punishment. When “offense” becomes the legal line, as it is in Australia, people may self-censor not only hateful speech but also any potentially controversial dissent, fearing where the offensive boundary might be drawn next.
The complexity of this issue stems from the intense public pressure it faces. A 2024 Notre Dame study found that a majority of Americans surveyed wanted to remove online hate posts targeting multiple groups, including Black people (60%+) and Jewish people (58%). The study also found that the desire to remove hate speech online isn’t strictly partisan: both Democrats and Republicans supported censoring hateful posts, though Democrats were more likely to advocate deactivating the accounts sharing them, particularly those held by elected officials.
The First Amendment, like the rest of the Bill of Rights, is meant to protect citizens from the government. Social media companies are private actors, and their moderation decisions are not beholden to the First Amendment. In 2024, in Moody v. NetChoice, the Supreme Court recognized platforms’ curation and moderation of content as a form of expression, meaning the government cannot force platforms to promote or suppress particular speech. This creates a difficult dichotomy: the government cannot discriminate based on viewpoint, but companies like Meta and X can do so without constitutional repercussions. We must balance the speech rights of both parties by ensuring that individuals have the right to express hateful opinions while platforms retain the right to choose not to promote them. Giving the government the power to restrict hate speech unilaterally would inevitably produce content-based distinctions.
This is where Australia becomes a warning for the United States. If the US were to grant its government a power similar to Australia’s, or adopt any similar policy, the line between protecting citizens and punishing viewpoints would become dangerously unstable. “Offense”-based standards are inherently elastic: while “true threats” and “imminent incitement” are narrowly defined legal categories, “offense,” “humiliation,” and “intimidation” are considerably more flexible. In a polarized political environment, that flexibility is not a neutral feature; it invites the definition of hate to shift with each administration.
The real choice at hand is not between endorsing hateful expression and banning it. The American system draws a clear line: speech can be punished when it crosses into concrete, legally defined harms. What the state cannot do is punish a viewpoint merely for being offensive. That constraint is a safeguard against giving the state a tool that will inevitably be used selectively and to further political aims. What Keating’s case ultimately highlights is a division of responsibilities. Citizens can condemn, boycott, and answer hateful speech with counterspeech. Platforms can draw their own lines and enforce them. But the government should not be able to declare that speech illegal; that power is too close to tyranny. A democratic society ought to fight bad ideas with better ones, not with criminal law. The temptation to solve hatred by prohibiting it is understandable, but outlawing ideas changes the nature of public life. A healthy democracy cannot survive if the state becomes the arbiter of what citizens are allowed to hear.