[Top photo: Dublin Pride]

Facebook And Twitter Have A History Of Protecting Misogynists And White Supremacists

by Rachele Merliss

If you’ve been anywhere near the internet for the past fifteen years, you might have noticed that sometimes people are assholes. They say inflammatory things just to get people fired up, mercilessly insult others, and use the anonymity of a screen as protection. They also do worse things: engage in hate speech, make threats, and share violent content. Sharing your opinions is fair game. But that other stuff ranges from against-the-rules to flat-out illegal. That’s why social media platforms like Facebook and Twitter have reporting options and moderators who go through flagged content and judge whether or not it (and the user who posted it) should be allowed to stay online.

But sometimes the rules feel pretty damn arbitrary, or like they’re being applied to some people more than others.

Like when, as The Daily Beast reports, a man uses Facebook to threaten to find a woman’s house and attack her, she posts “Men continue to be the worst” in response, and she’s the one who gets banned from the site, before she can even report him for his threats.

Or when this happens:

[Image: tweet by Marcia Belsky, via Twitter]

Social media platforms’ content rules seem to protect some users more than others. For example, Facebook’s attempts to distinguish between “serious” and “humorous” speech can mean that jokes about rape and intimate partner violence are allowed to stay up because they’re apparently not “serious.”

But jokes about TERFs (trans-exclusionary radical feminists) can get you banned—like when journalist Danielle Corcione was permanently suspended from Twitter after they posted a tongue-in-cheek comment about people who claim to fight for women’s rights, but don’t include trans people in their activism. According to Jezebel, Corcione’s account was later reinstated, but they were never given an explanation for why it was suspended in the first place.

Another example of this occurred when comedian Marcia Belsky’s account was suspended for 30 days after she posted the phrase “men are scum.” According to The Guardian, Facebook considers that phrase to be a threat. A threat, you guys. You guys, a threat.  

In response, Belsky got nearly 500 women to post some variation of “men are scum” on Facebook at the same time—and nearly all of them had their accounts suspended.

Literally any variation of the phrase “men are scum” or “men are trash” is deemed inappropriate.

[Image: a Facebook post reading “Boyz II Men are trash”] This post appeared on Facebook Jailed, a site dedicated to “exposing Facebook’s double standard when it comes to monitoring hate speech.”

I mean…that post is not hate speech. (Though I strongly disagree with the sentiment. Boyz II Men are amazing.)

Over at Twitter, the Lez Hang Out podcast was deemed ineligible for a Twitter ad because it used “inappropriate language,” according to the podcast’s Instagram. The language in question? “Lesbian,” “queer,” “bisexual,” and “gay.”

And while images and comments depicting violence against women often circulate unchecked, Facebook’s rules against nudity swiftly removed a cake that somebody (?!?!) thought looked like a nipple. (I have as many questions as you do.) So what was that about social media platforms understanding the difference between “humorous” and “serious”? Whatever Facebook’s logic is in enforcing that rule, it’s not working.

And according to the Independent, Facebook moderators are instructed to take down content that represents or supports white supremacy, but “white nationalism” is fine. Facebook says this is an important distinction because white supremacists want to dominate non-white peoples, while white nationalists just want segregation. What?!

Facebook and Twitter have historically failed to truly understand hate speech and the power it holds. Facebook’s rules state that any language that targets a specific group is unacceptable—even if that group is a historically privileged and powerful one. Yet somehow, when people posted “women are scum” as an experiment, they weren’t met with punishment. Facebook’s rules claim to protect everyone equally, but in reality, that’s not what happens. 

The fact is, moderating content on social media is complicated and nuanced. But our boys Zuckerberg and Dorsey don’t seem to have a good understanding of that. In a recent Senate hearing, Mark Zuckerberg said that in a few years, artificial intelligence will be able to take care of Facebook’s hate speech problem. But experts in the field say that because of the way hate speech rapidly evolves and changes on the internet, this complicated issue needs humans behind it.

Right now, the humans behind Facebook and Twitter content moderation are not being given the training and support they need. Being a social media moderator is an extremely emotionally and psychologically taxing job. One former moderator for Facebook told The Guardian, “There was literally nothing enjoyable about the job… We were underpaid and undervalued.” Another said that the two-week training course and options for support from Facebook were “absolutely not sufficient.” In addition to the hateful content that any one of us might witness on a daily basis, moderators also have to go through extremely violent images and videos that can include anything from child pornography to gun violence to murder. “Every day people would have to visit psychologists,” the former employee continued. “Some couldn’t sleep or they had nightmares.”

According to The Guardian, psychologists say that looking at this type of violent imagery can take a serious toll, and the people whose job it is to view it need real support from their employers. At other organizations where moderators are exposed to violent and graphic content, like the UK’s Internet Watch Foundation and the US’s National Center for Missing and Exploited Children, training lasts six months, potential analysts are assessed for suitability by a psychologist, and care for employees’ mental health begins during the interview process and continues even after they leave the organization. Over at Facebook, training is two weeks, and former employees say the company simply doesn’t offer sufficient support for the traumatic work they do.

So hey, Facebook and Twitter—I get that this is an enormous problem, and you’re working hard to fix it with robots. But until then, I have some pointers: Words like “lesbian” and “gay” are not inappropriate. Using a racial slur or telling someone to kill themselves is. And please treat your employees better. We know you can afford it.

Top photo: Giuseppe Milo/Wikimedia Commons.  
