How to Recognize and Report Hate Speech on Social Media
Social media is great for connecting people, sharing news and information, and empowering your grandmother to freely express her political views. Unfortunately, its anonymity and broad reach make it easy for people to spread hate, and since Donald Trump’s election in 2016, social media users have noticed an alarming rise in online hate speech.
Islamophobic, anti-Semitic, racist, and misogynistic trolls are rampant on Twitter. On Facebook, users report violent memes, groups, and comments targeting people of color, women, and members of the LGBT community. “You are dealing with the dark side of technology,” says Lisa Laplace, senior attorney for the New York Civil Liberties Union. “[Biased behavior] can occur in many contexts – cyberbullying, cyber harassment – and in these situations you often feel powerless.”
Platforms like Twitter and Facebook have tools that allow users to report harassment and hate speech, though limitations on the platforms’ end mean that too many reports go unheeded. Still, it is worth the effort if you see threatening posts online. There are also outside resources you can turn to if you come across disturbing acts of bias on social media. But first:
What is hate speech?
Hate speech is a somewhat amorphous concept. As Susan Benesch, founder and director of the Dangerous Speech Project, wrote in an email: “There is no single definition of hate speech. For this reason – and also because any definition of hate speech is highly contextual – there is no consistent or reliable way to identify it on the Internet.”
However, hate speech “often looks like an attack on people because of their perceived race, color, religion, ethnicity, gender or sexual orientation,” says Evan Feeney, campaign director of the Media, Democracy and Economic Justice campaign at the civil rights advocacy group Color of Change. “Anything that attacks someone because of these immutable characteristics, calls for violence against them, attempts to intimidate or harass them, or shares imagery that perpetuates harmful stereotypes.”
There are varying degrees of hate speech, and rhetoric can escalate from biased expression to threats of violence. As Zahra Billoo, executive director of the San Francisco Bay Area chapter of the Council on American-Islamic Relations (CAIR), explains, “I hate all Muslims” is one form of hate speech; “I hate all Muslims and I want to kill them” is an escalation. When there is a specific threat against a specific person, the stakes go up, Billoo said. “‘I hate Zahra, I’m going to her house, I will wait for her outside and beat her with an ax’ – that is beyond hate. That is a threat,” she said. Note that while you may be able to get Twitter or Facebook to suspend a user who claims to “want to kill” a group of people without naming anyone in particular, law enforcement usually will not act on such a generalized threat.
Other examples of hate speech, especially on social media, include ethnic slurs, “coded” language (for example, some anti-Semitic groups place the ((( echo ))) symbol around the names of Jewish users or organizations), and violent imagery. “Our members report that people are sharing memes and photos from the Jim Crow era of Black people being lynched,” Feeney said. “They are often used to intimidate people, even when there is no verbal statement. Dredging up historical terror still conveys a clear threat to someone.”
Is hate speech illegal?
No. Thanks to free speech protections, policing non-threatening hate speech is virtually impossible. “In the United States, hate speech is not illegal – in fact, it is constitutionally protected speech under the First Amendment,” Benesch wrote. Law enforcement can tackle specific threats of violence that go beyond hate speech – “I’m going to kill you,” “I have your address, I’m coming to your house,” and so on. But non-violent racist or otherwise biased statements, however odious they may be, fall outside the scope of the law.
However, because platforms like Facebook and Twitter are operated by private companies, they can deem certain forms of hate speech a violation of their terms of service. Even so, there are limits. Twitter’s enforcement policies are notoriously permissive, though they do exist: “Twitter reflects real conversations happening in the world, and that sometimes includes perspectives that may be offensive, controversial, and/or bigoted to others. While we welcome everyone to express themselves on our service, we will not tolerate behavior that harasses, threatens, or uses fear to silence the voices of others,” said spokeswoman Katy Rosborough.
Facebook (and Instagram, which shares Facebook’s policies) defines hate speech “as a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation,” according to the platform’s Community Standards.
But, as a Facebook spokesperson told us, it’s hard to know exactly what constitutes hate speech without context. Per the Community Standards: “Our systems can recognize certain words that are commonly used as hate speech, but not the intent of the people who use them.” As such, Facebook relies on a combination of technology that detects hate speech, user reports, and teams that review those reports to determine whether something violates its terms of service.
In fact, Catherine Lo, a researcher at the University of California, Irvine who specializes in online moderation, says most platforms rely on user reports to mitigate hate speech. “I can’t think of a single platform that can moderate all of its content,” she said. “Many of these systems rely on reporting.”
Which brings us to:
How do you report hate speech directly to platforms?
Reporting hate speech on mainstream social media is pretty easy. Getting a platform to actually take action on reported hate speech is not. But we’ll come back to that in a second.
On Twitter, you can report tweets by clicking the small arrow in the upper right corner of the tweet itself. Click “Report Tweet,” then give Twitter some information about the tweet’s offending content, such as whether it is abusive or harmful. Twitter may also ask you to report additional tweets from the offending user’s timeline.
On Facebook, you can report posts by clicking the three dots in the upper right corner of the post itself and then clicking “Give feedback on this post.” From there, you can report the post as hate speech.
If Twitter or Facebook reviews a reported post and decides the content does in fact violate the platform’s terms of use, it may suspend or permanently ban the person who posted it. Sometimes that happens quickly. Mordechai Lightstone, social media editor at Chabad.org, says he frequently reports anti-Semitic tweets from trolls.
“I’ve found that for hardcore Nazi content – photos of Hitler and the gas chambers – it’s pretty easy and fast,” he wrote in an email. “The problem has more to do with the gray zone – people accusing me of being a fake Jew trying to take control of all of Brooklyn – where it often starts to get complicated.”
This process can also be frustratingly slow. “I just got a message that Twitter has deleted several hateful tweets that I reported more than a month ago,” says Lightstone.
In the meantime, you can block or mute a user you find offensive. Sometimes that is the only thing you can do. “If you get a message [from the platform] about your first report that says, ‘We didn’t remove them,’ muting or blocking the user is basically your only recourse,” Lo said.
Note that when it comes to reporting, there is often strength in numbers. “If the platform doesn’t take any action, you can file multiple reports,” Billoo said. “If one user is reported multiple times, or if multiple users report the same account, public awareness may prompt the platform to act where a single report did not. These are companies that respond to customers.”
Last year, for example, Twitter finally banned Alex Jones and his InfoWars account after a massive outcry over the right-wing conspiracy theories his accounts circulated, including posts deemed violent and offensive. Facebook and YouTube had restricted Jones’ accounts earlier, but Twitter moved late, reportedly prompted by numerous user reports after Jones posted a video of himself confronting a reporter outside a Capitol Hill hearing with social media executives, according to Bloomberg.
Should you involve law enforcement?
When it comes to online hate speech, it is difficult to know when to seek outside help. Sometimes blocking a user or getting them booted from the platform is enough, but if you feel genuinely threatened, you may have to weigh your options.
“I always hesitate to advise targeted communities to call law enforcement. But a threat is the kind of thing where someone might decide to involve the local police department,” Billoo said. “While most online trolling does stay online, sometimes it shows up in real life.”
Notably, Pittsburgh synagogue shooter Robert Bowers posted a series of threatening racist and anti-Semitic comments on Gab – the now-shuttered right-wing alternative social media platform – before allegedly killing 11 people at the Tree of Life synagogue last fall. And Cesar Sayoc, accused of mailing homemade bombs to high-profile critics of Trump last fall, had repeatedly (and sometimes violently) threatened Democrats on social media.
Billoo points out that threats must be specific if you plan to involve law enforcement. “General threats of violence are more difficult to act on than specific ones,” she said. “‘I want to kill all Muslims’ isn’t actionable. But where I know there is a specific, targeted threat, it’s not just a random egg on Twitter. That is something I would definitely treat as an escalation and report.”
If, for example, someone specifically threatens to kill or harm you or the people you love, or publishes your address, it is worth filing a report with the police. Note that it can still be difficult to get action taken even when law enforcement is involved. “If someone is directly threatened, you definitely need to contact law enforcement. But there are very few legal remedies for offensive content on the Internet,” Feeney said.
If you do report something to the police, they will most likely ask you to fill out a report and then assign detectives to determine whether the reported harassment actually constitutes a crime. Police departments vary, but some have units that deal specifically with acts of bias – the NYPD, for example, has a Hate Crime Task Force that investigates alleged hate crimes. Again, prosecution is only likely if there is a real, specific threat of violence; if you’re just dealing with harassment, there isn’t much the police can do.
Are there places to report hate speech outside of the platforms themselves?
If Facebook or Twitter doesn’t respond to your report – or even if it does – you can also contact other organizations. Civil rights groups such as the Southern Poverty Law Center, in addition to the aforementioned CAIR and Color of Change, can provide you with resources and support.
Local groups like the NYCLU have also created tools and portals people can use to report bias. The NYCLU’s Equality Watch website, for example, encourages New Yorkers to report discrimination and harassment they have experienced or witnessed, including online. “After Trump was elected, even before he was sworn in, we received many calls reporting bias-motivated bullying,” Laplace said. “If you feel that online services are not responding to slurs, if you are blocked on a law enforcement Facebook page or an official’s Facebook page and you think it is due to some kind of bias against you … the Equality Watch website will share your information with the appropriate organization.” What happens after you file with these organizations varies; some of them (including the NYCLU) will put you in touch with an attorney if you want to pursue civil or criminal action.
And if you feel your story deserves media attention, ProPublica’s Documenting Hate project collects information about bias incidents that journalists can use to find and report stories. “We decided to launch the project after the 2016 election to better understand where hate arises, who responds to it, and how they respond to it,” said Rachel Glickhouse, partner relations manager at Documenting Hate. “Our tips include crimes such as vandalism or assault, as well as gray-area incidents where a person may not know whether it is a crime or not, such as hate speech or harassment.”
Note that Documenting Hate cannot take action itself. “With our project, if someone is looking for recourse, a journalist will potentially respond to tell your story, but it is not a place where someone walks into a Facebook office to demand a change in their policies,” Glickhouse said. However, greater media attention to online abuse can help nudge platforms to clarify their policies or respond to incidents more quickly; on your own, it is often difficult to get platforms to move fast. “I have personally seen on Twitter that if I report people using very clear anti-Semitic language or memes, it often takes weeks before I see a response,” Glickhouse said. “Purely anecdotally, when I have reported this kind of thing, sometimes it worked and sometimes it didn’t.”
Remember to report responsibly
It is true that social media platforms seem to enforce unclear policies when it comes to harassment and hate speech. But excessive or irresponsible reporting can make it difficult for a platform to distinguish real threats from false ones. “This is a really big trust and safety issue with hate speech,” Lo said. “Reports are not a good signal, because many people abuse reporting systems.”
For example, freelance journalist Danielle Corcione was suspended from Twitter last year over a joke they tweeted about trans-exclusionary radical feminists (TERFs). Corcione tweeted a joke about their pronouns and then replied to it, “If any TERFs like or retweet this, I’ll stick my foot in your ass.” Corcione later received an email from Twitter stating that their account had been suspended. They suspect the tweet made its way to a TERF forum on Reddit, making it easy to report in bulk.
“I didn’t get much backlash at all on [the tweet]. It didn’t even get much attention – it got about 70 likes or so,” Corcione said. “It was an indirect, empty threat and a nod to That ’70s Show.”
In fact, as Feeney pointed out, it is not uncommon for the wrong people to be punished. “So many people have shared [with Color of Change] screenshots and stories of the full range of hate speech – from direct personal threats and doxing to more general hateful statements about people’s race, religion, and sexual orientation – where Facebook has not removed the content even though it clearly violated Facebook’s standards,” Feeney said. “In fact, it is often the more marginalized people, Black and LGBTQ people, who end up having their content deleted.”
So make sure that what you are reporting really qualifies as hate speech (“Shut up, snowflake” and “You libtard asshole” don’t count, although it is fair to block or mute accounts that post such tweets), and, to protect yourself, think twice before posting anything that might be misread. Even self-referential or ironic language can get you in trouble. As Billoo warned, “People using social media need to be careful about what they post.” In the end, she said, “Once it leaves your lips or your fingertips, it is forever.”