Substack Admits To Sweeping Ignorance About Content Moderation
The problem isn't "broken discourse." It's genocide, insurrection, and harassment.
Image: Tyler Merbler, CC
Substack released an official statement on hate speech yesterday in an effort to mitigate the damage caused by CEO Chris Best face-planting in an interview on the subject.
Substack co-founder Hamish McKenzie spews his usual earnest word burble, and people have been parsing it ever since. The most disturbing part for me was not any specific policy, but that Substack seems entirely ignorant of the history and purpose of content moderation.
In Substack’s view, most platforms have attempted aggressive content moderation from the beginning, and that failed to heal our “broken discourse.”
“Facebook, Instagram, Twitter and others have tens of thousands of engineers, lawyers, and trust & safety employees working on content moderation, and they have poured hundreds of millions of dollars into their efforts,” McKenzie writes, and you can hear the dollar signs clicking in his head. “But how is it all working out? Is there less concern about misinformation? Has polarization decreased? Has fake news gone away? Is there less bigotry?”
This is all incredibly confused. Most social media platforms are run by the same type of libertarian-ish tech bros who run Substack. They mostly resisted aggressive content moderation for the same reasons Substack is resisting it: they hold simplistic free-speech utopian views, don't want to spend money on moderation, and hope to monetize content from bigots and fascists.
Elon Musk literally purchased Twitter promising to do away with content moderation; former CEO Jack Dorsey said Musk was the best person to run the place, based on Musk's claim that he would end aggressive moderation of hate speech and reinstate Donald Trump. Dorsey himself was extremely reluctant to ban Trump for repeated violations of Twitter's rules. Mark Zuckerberg at Facebook similarly describes himself as a libertarian, and when he talks about free speech, he sounds a good bit like Hamish McKenzie.
Dorsey and Zuckerberg are wealthy white guy libertarians who don't want to spend money on content moderation. They and Hamish McKenzie are all basically the same person. So why do Facebook and (pre-Musk) Twitter have extensive content moderation policies?
The answer is that the laissez-faire attitude McKenzie prefers was tried, and it resulted in catastrophic violence (which the owners of the platforms probably don't care about that much) and massive damage to corporate reputations (which for them is more salient).
Facebook’s reluctance to police hate speech led to a genocide in Burma, where political figures used the platform to inflame violent animosity towards Rohingya Muslims. Twitter’s reluctance to police hate speech and enforce its TOS led to a violent insurrection. Trump and allies used Twitter to spread election misinformation and to organize and plan the insurrection itself.
Genocide and insurrection are, to put it mildly, bad outcomes. And to get to the point where genocide and insurrection are possible, you of course have to first enable a lot of other awful things—like, say, death threats to children’s hospitals. Talking vaguely about “broken discourse” doesn’t really capture the possible downsides of allowing hate speech to flourish on your platforms.
In response to these sorts of violent acts, and the potential for more, social media platforms stepped up content moderation. McKenzie asks, as if no one knows, whether that was really effective.
But the answer is clear: it was. The spread of misinformation on Twitter declined dramatically after Trump was banned. Or, looking at it the other way around, Musk's unbanning of Nazis and encouragement of the worst people on earth has led to a dramatic rise in hate speech on Twitter. Musk's disastrous policies pushed many to flee his site, which is part of the reason Substack started Notes as a competitor in the first place.
I’m not saying that Substack is Twitter or Facebook, or that genocide and insurrection will inevitably follow. Among other things, Substack remains fairly small. I also appreciate some of the controls McKenzie says he’s putting in place.
But it’s clear from Best’s interview and from McKenzie’s post that Substack’s leaders know little about content moderation and are determined not to talk to anyone who actually does. They seem focused mainly on the up-front hit that spending on content moderation would put on their bottom line, and secondarily on the fact that right-wing assholes will continue to whine about moderation policies no matter what they do.
A lot of boilerplate gush about how Substack is different from everyone else is not encouraging when it’s clear that the people in charge have only the dimmest idea of what everyone else has done in the past or how it’s worked out. Best’s response to the interview was not serious; McKenzie’s note is not serious. But genocide, insurrection, harassment, and death threats are serious problems, and Substack had best address them quickly, rather than waiting until the worst happens.