Eliminate Hate Speech

How To Build a Healthy Social Product


Why is it so important to keep hate speech out of your online community?

In his book The Harm in Hate Speech, philosopher Jeremy Waldron makes a compelling ethical and legal case for limiting hate speech. He’s worth quoting at length:

“We are diverse in our ethnicity, our race, our appearance, and our religion. And we are embarked on a grand experiment of living and working together despite these sorts of differences… And each person, each member of each group, should be able to go about his or her business, with the assurance that there will be no need to face hostility, violence, discrimination, or exclusion by others… Hate speech undermines this public good.”

There are levels and layers of racism and hate. Hate speech isn’t just limited to sites that deny the Holocaust, or that promote the worst kinds of discrimination. Hate speech can also simply be the use of racial epithets and slurs designed to denigrate a particular group of people based on ethnicity, sexual orientation, or religion.

Preventing hate speech goes well beyond ensuring that all users feel welcome and safe on your platform. Aside from the ethical implications of permitting racism, homophobia, and misogyny in your community, there are a few obvious issues with allowing hate speech to remain on your platform:

  • Hate is contagious. Human beings are inclined to follow the crowd. If users perceive that racial slurs are acceptable, they will be more inclined to use them.
  • It’s connected to cyberbullying. Online bullying and abuse often take the form of hate speech and insults based on race, sexual orientation, or religion.
  • Hate speech can lead to hate crimes. Hateful rhetoric stokes the flames of violence. The Anti-Defamation League illustrates this in its Pyramid of Hate, in which bias and individual acts of discrimination (which include hate speech) lead directly to bias-motivated violence. As the actions on the lower levels become normalized, the behaviors on the next level become more acceptable.

When you allow racism, rape threats, and other forms of abusive language in your community, there is also the very real chance that your brand will become associated with hate speech. Twitter recently came under fire for allowing virulently racist and sexist tweets against actress Leslie Jones to continue unabated for at least 48 hours before the worst of her abusers were banned. When hate speech becomes a trend, it results in bad publicity, an unfriendly social climate, and the loss of influential members of the community. Studies show that it’s five times more expensive to attract a new user than to keep an existing one. It just makes sense — keep your community hate-free, and your best users will stay. When you protect your community, you protect your brand and your bottom line.

The big players recognize this. Hate speech has become such an urgent issue that Facebook, Google, Microsoft, and Twitter recently committed to reviewing and removing hate speech from their platforms within 24 hours.

What if you could keep your community hate-free but still allow users to express themselves in meaningful ways? And what if it could be done, not in 24 hours, but in real-time? It can.

At Community Sift we provide social networks and online games with the tools to prevent the proliferation of hate speech, without confining users to a restrictive whitelist that stifles creativity and expression. We’ve done the research and studied the varieties of hate speech and online harassment to identify the unique linguistic patterns that denote hate. We understand the important distinction between a discussion about the Holocaust and the use of anti-Semitic slurs. Our system is built to understand context — key when it comes to hate speech — and classify entire sentences based on levels of risk.

In conjunction with the powerful machine learning that underlies everything we do, our Language & Culture team engages in ongoing research to identify language trends, including emerging slurs and the ever-changing idioms of internet hate. This proactive approach allows us to update our system in real-time and prevent hate speech from infiltrating your social platform before it even reaches your users.

Our powerful analytics identify your most toxic users, and our moderation tools give you the option to take action on those accounts as you see fit. Our staff have years of experience in the field as community managers and moderators, and provide individual recommendations for dealing with hate speech, based on your community’s demographics and unique userbase.

Eliminating hatred from social networks and online communities matters to us. It’s the heart of what we do, and we are passionate about sharing it with you. We believe that everyone should have the power to share without fear of harassment or abuse. We believe that your users have that right, and that your business will benefit from it.

Send us your details, and we’ll be in touch. We’d love to talk about what matters most — preventing hate speech in your social product, and helping you grow a happy, healthy, hate-free community.
