Paying Attention to Legal Battles to Combat the Far Right
In the US and Germany. Also cute dogs.
There are two court cases, one in the US and one in Germany, that could have big repercussions for those who want to combat the far right online. (And some cute dogs…)

First, the US case. Gonzalez v. Google, currently before the Supreme Court, could shake the foundations of Section 230, the obscure legal statute that keeps tech companies from being held accountable for the content on their platforms. Section 230 of the Communications Decency Act states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (47 U.S.C. § 230(c)(1)).
So, the argument goes: because platforms aren’t “publishers” of content but merely conduits for it, they can’t be held accountable for harmful content on those platforms. The best analogy I’ve heard for understanding this is that current social media platforms want to be regarded like the old telephone company, the one some called Ma Bell. The law held that if someone called and made a death threat against someone else, the individual person was responsible for making that threat, not Ma Bell. Lots of people equate this “individual responsibility” argument with the argument for an “individual right to free expression.” The latter is where advocates for keeping Section 230 usually weigh in.
Many, many people in the tech world think that Section 230 should not be repealed (or even revised) because, they argue, it protects free expression online. One of the leading civil society organizations seeking to influence the tech industry, the Electronic Frontier Foundation, is among those who advocate in favor of Section 230. In an article on their website, “Section 230 is Good, Actually,” Jason Kelley writes:
And Section 230 doesn’t only allow sites that host speech, including controversial views, to exist. It allows them to exist without putting their thumbs on the scale by censoring controversial or potentially problematic content. And because what is considered controversial is often shifting, and context- and viewpoint-dependent, it’s important that these views are able to be shared. “Defund the police” may be considered controversial speech today, but that doesn’t mean it should be censored. “Drain the Swamp,” “Black Lives Matter,” or even “All Lives Matter” may all be controversial views, but censoring them would not be beneficial.
This is the classic argument made in the American tech landscape (and beyond): that “censorship” of views is a greater evil than, well, just about anything else, even death, even terrorism. People from the platform companies have said as much. Andrew Bosworth, a top Facebook executive, wrote in a 2016 memo (leaked in 2018):
“We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
“So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies.
“Maybe someone dies in a terrorist attack coordinated on our tools.”
It’s this exact point that is at the heart of the current case before the US Supreme Court: can social media platforms be held accountable for terrorist violence facilitated by these tools?
The Gonzalez v. Google case stems from the killing of 23-year-old Nohemi Gonzalez, a college exchange student, by gunmen associated with Islamic terrorists in Paris in 2015. The Gonzalez family contends that YouTube (owned by Google) acted as a “recruiting platform” through its algorithm for recommending the next video (this pattern of radicalization via YouTube has been documented by researchers). Therefore, the Gonzalez family asserts that YouTube’s algorithm is a violation of U.S. laws against aiding and abetting terrorists. It’s Section 230 that Google and the other tech companies want to use as a shield.

Another relevant point in Gonzalez v. Google is that the case involves “Islamic terrorists,” and I don’t think the case would have gone as far without all the racialized Otherness that term carries with it.
There’s no decision on this case yet, but there is some really good reporting, like in this podcast episode of Strict Scrutiny, which features an interview and conversation between Melissa Murray, a legal scholar at NYU (and style goddess from MSNBC), and Danielle Citron, a legal scholar at the University of Virginia. Theirs is an excellent discussion that really gets into the weeds of Section 230. I’m not gonna do that here, but I do appreciate this recap of the arguments from Washington Post reporters:
During the nearly three-hour session, Google lawyer Lisa Blatt told the justices that a law known as Section 230 protects the company from legal responsibility for the third-party videos that its recommendation algorithms surface; such immunity is essential to tech companies’ ability to provide useful and safe content to their users, she said. Gonzalez family lawyer Eric Schnapper argued that applying Section 230 to algorithmic recommendations provides an incentive to promote harmful content; he urged the court to narrow those protections.
I wouldn’t expect anything too dramatic from this court on this issue. The reality is that this very conservative court is much more likely to side with the financial interests of Google than with the loss the Gonzalez family suffered. In addition, they’re just not very well versed in the slew of digital technologies we call “the Internet.” As Justice Elena Kagan acknowledged during oral arguments, she and her colleagues on the high court are “not the greatest experts” on all things Internet. Ok, then. Could I see some puppies?

Now to the German case. In October 2016, a blogger posted a picture of Renate Künast, a well-known member of parliament from the German Green Party, on the Internet. The picture included a doctored quote attributed to Künast that said, “Come on, if there is no violence, sex with minors is okay. Give it a rest.” (The further background is that the blog post was a response to the publication of a report from a commission set up by the Green Party about the criminalization of sex with minors.)
Künast sued the blogger for damages. Then, in 2019, that same blogger complained about the proceedings on his Facebook page. Twenty other users posted replies that included sexist comments, calling Künast, among other things, a “bitch.” Künast asked the Regional Court of Berlin to order Facebook to provide her with the account details of the users in question so she could hold them accountable in civil court. She could make this request because the law in Germany is written such that social media platforms must provide the account details of users who post unlawful content to the injured individual. Under this law, content is considered unlawful “when it amounts to an insult.” In order to obtain the information, the injured individual has to file an application with the local district court, which then triggers an order for the social media platform to provide the data when the requirements are met.
However, the regional court in this case ruled that the comments did not rise to the level of “insults” under German law. Künast then appealed that ruling to the high court, and earlier this month, they handed down their decision.
In a ruling on February 2, 2022, the German Federal Constitutional Court (Bundesverfassungsgericht, BVerfG) said that protecting politicians from harassment (in this case violent sexist harassment) is within the public interest and that, in this context, the right to freedom of expression has limits.
The German approach to free speech is one that is situated in a human rights framework. Within that kind of framework, “free expression” is an important human right but one that is always weighed in the balance against the equal right to “freedom from annihilation.” I think this is a much more generative way to think about these issues than the cul-de-sac of absolutist free speech.

This idea that “the right to freedom of expression has limits” is one that inevitably comes up in discussions of how to combat the far right online.
We, in the US, have a colloquial, everyday, taken-for-granted notion about free speech that is a kind of loopy fantasy. It’s often referred to as the “absolutist” view of free speech: that any speech is allowable and should be defended. This is nonsense. It’s also not what even a conservative US Supreme Court has ruled. In fact, in a famous (and complicated) case, Virginia v. Black, the court ruled in a 5-4 decision that state laws against cross-burning do not violate First Amendment protections…as long as the “intent to intimidate” can be proven in a court of law. We still haven’t figured out what the online equivalent of a cross-burning is.
Unfortunately, the loopy fantasy of “absolute” free speech is the prevailing view in the tech world and among the CEOs of most of the platforms. This comes from those wild early Internet days, when characters like John Perry Barlow wrote manifestos such as “A Declaration of the Independence of Cyberspace,” in 1996.
Barlow was a for-real character (former lyricist for The Grateful Dead, sometime Wyoming rancher, and early Internet dude), and this is the opening salvo of what he wrote:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
Barlow’s vision of “cyberspace, the new home of Mind” was captured in a thirty-second television ad by a now defunct telecom company. Barlow’s brand of cyberlibertarianism, the idea that “governments of the industrial world” have “no sovereignty” online, is one that pervades the tech policy world. Indeed, the Electronic Frontier Foundation, the organization that Barlow helped found, continues to advocate for a world where government regulation is often equated with “censorship” and the violation of individual rights is the greatest evil.
The thing is, the Internet is global, and there’s no going back on the way that information and networks flowing through fiber optic cables allow us to transcend national borders. White supremacy is also global (always has been), and now the spread of this violent ideology is being amplified and accelerated through global, digital networks powered by algorithms. The question becomes: how do nation-states grapple with globally networked white supremacy?

In an earlier book, I speculated about how human rights frameworks around free expression might clash with Silicon Valley-based tech firms. There I talked about the LICRA v. Yahoo case, in which French courts ultimately got the company Yahoo to restrict sales of Nazi memorabilia within France’s borders by imposing a steep fine against the tech firm. This, I argued, was an interesting case in which another country’s laws were able to curb (in a modest way) a US firm’s stake in the spread of far right propaganda. In keeping with our core values here in the US, a financial fine was much more effective than a human rights argument in landing this ding against the far right.
Today, I’m skeptical that either the German court’s narrow ruling protecting a politician’s rights or the pending Gonzalez v. Google decision here in the US will make much of a difference in combatting the far right. I am cautiously optimistic that what legal scholar Julie Cohen calls the “practical inevitability of the law” will eventually triumph over the loopy fantasy of Barlow’s manifesto.