Stop Cyber Bullying
How to Keep Harassment off Your Social Platform
On September 7th, 2012, 15-year-old Amanda Todd posted a video on YouTube in which she used a series of flash cards to tell her heartbreaking story of persistent, years-long cyber bullying and harassment. A month later she killed herself.
On April 19th, 2014, 18-year-old Jessica Cleland committed suicide after her Facebook account was inundated with messages from friends saying they hated her.
On October 27th, 2012, 13-year-old Erin Gallagher took her life after experiencing severe online abuse on a popular social media site. She posted a suicide note on the same site the day before her death.
The effects of cyber bullying are well-documented. Studies find that victims are at a much higher risk for low self-esteem, depression, self-harm, drug and alcohol abuse, and – chillingly, as we see above – suicide.
The numbers speak for themselves. In 2014, the Pew Research Center reported that 73% of adult internet users had witnessed harassment online, and 40% experienced it themselves.
To put that in perspective: four in ten users could experience harassment and abuse on any platform.
But it’s not just any platform. In the same report, researchers identified the most common online spaces where harassment occurs:
66% of users who experienced harassment said it occurred on a social network. Those are telling numbers, but they’re not surprising. We design social networks for social sharing, and we’re vulnerable when we share, whether it’s an opinion, a selfie, or a poem. That vulnerability leaves users uniquely open to abuse.
At 16%, online games also have a reputation as hotbeds of extreme bullying. Again, it’s no surprise. People are far more likely to say or do things online that they would never do in real life. There’s even a name for it: the online disinhibition effect. Anonymity, invisibility, lack of eye contact, and lack of real-time consequences all contribute to an online gaming culture of potentially toxic disinhibition. You can expect to see more bullying language and behavior in competitive spaces where tension and emotions run high.
So, we know that cyber bullying is more often seen in social networks where users are vulnerable, and in battle games where antagonism is built into the core of the experience. We also know that online bullying can lead to depression, self-harm, and, tragically, suicide. Cyber bullying can even affect a victim’s life well into their future. According to the Pew Research Center, “Online harassment can have long-term effects. In a time when everyone from future employers to future romantic partners can potentially find personal information on others with a simple Google search, online harassment can cast a long shadow.”
Left unchecked, online harassment is a dangerous and seemingly inevitable by-product of social platforms. Is it worth allowing online bullying to happen in your product and your community?
There are other advantages to keeping cyber bullying out of your community. According to a 2010 Lee Resources report, it costs five times more to attract a new user than to keep an existing one. A Riot Games study tells us that users who experience toxicity (harassment and abuse) are 320% more likely to quit. Combine those two statistics and it’s clear that providing a safe and healthy environment for users is not only good for the community – it’s also crucial to your business.
"Users who experience toxicity are 320% more likely to quit."- Riot Games
If we acknowledge all of these things to be true, do we then have a responsibility to protect users in social networks and online games from language and behavior that has been proven to do long-term, irreparable harm?
We believe the answer is yes. And we can protect your users from cyber bullying.
As a product manager, you understand the complexities of community interactions more than most. You know that bullying is more than a single insult and that the vocabulary of bullies can range from the obvious (you’re a stupid whore, you should drink bleach) to the subtle (do you even have a purpose on earth). We get it too, and we’ve designed our product to serve that important distinction.

Community Sift is the world’s smartest content filter, and we stand by that statement. Our Language & Culture team spends hours researching and reviewing online bullying examples, identifying common abusive keywords and phrases, and building a linguistic framework that effectively recognizes both moderate and high-risk bullying content, and classifies it accordingly.
In her YouTube video, Amanda Todd detailed her struggles with depression, cutting, and multiple suicide attempts, two of which resulted in hospitalization.
After drinking bleach in an attempt to take her life, she was taken to the hospital and had her stomach pumped. On her flash cards, she wrote:
After I got home all I saw was on Facebook /
she deserved it, did you wash the mud out of your hair? /
i hope shes dead
We’ve studied cyber bullying patterns like this, and we’ve taught our system to recognize and filter them in real time.
Underlying all this human work is our powerful machine intelligence. Our engineers have built an engine that supports this human labor and research, and that learns as it grows, identifying new language trends with remarkable accuracy and keeping us ahead of the curve.
With Community Sift’s powerful technology as your base, you choose what you filter and what you allow, based on your unique audience. A social network marketed towards under-12 users will likely have a zero-tolerance threshold for bullying, while a combat game will often allow more aggressive content.
Based on our extensive research, we’ve built a system that takes two complementary approaches to online bullying:
- Identifying the linguistic patterns unique to online bullying, and filtering those phrases before they ever reach their intended victim.
- Identifying the abusers, and taking action in real time.
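To make the first approach concrete, here is a deliberately simplified sketch of pattern-based severity classification with a per-community threshold. Community Sift’s actual engine and taxonomy are proprietary; the patterns, severity levels, and function names below are invented for illustration only.

```python
import re

# Hypothetical severity levels -- a real taxonomy is far more granular.
SEVERITY = {"moderate": 2, "high": 3}

# Illustrative phrase patterns only; a production system learns and
# maintains these at scale, across languages and evolving slang.
PATTERNS = [
    (re.compile(r"\bdrink bleach\b", re.I), "high"),
    (re.compile(r"\bstupid whore\b", re.I), "high"),
    (re.compile(r"\bdo you even have a purpose\b", re.I), "moderate"),
]

def classify(message: str) -> int:
    """Return the highest severity level matched, or 0 if the message is clean."""
    return max(
        (SEVERITY[level] for pattern, level in PATTERNS if pattern.search(message)),
        default=0,
    )

def allow_message(message: str, community_threshold: int) -> bool:
    """True if the message may be shown. An under-12 social network might set
    community_threshold=1 (block everything flagged); a combat game might set
    it to 3 (block only high-risk content)."""
    return classify(message) < community_threshold
```

The same classification output can drive the second approach: repeated high-severity hits from one account become a signal for real-time moderator action.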
These approaches serve a dual purpose. Not only do they prevent bullying from happening on your social platform, but they also help to educate and potentially reform those who would seek to harm vulnerable users.
We’ve done the hard work for you. All you have to do is decide on your community tolerance. Contact us, and our dedicated team of experts will guide you through the rest to ensure that your users are protected from abusive language.
If you don’t think about how you will deal with bullying on your platform, it will happen – just look at the numbers. And look at the effects – on your users, and on your bottom line.
What if Amanda Todd’s abusers never had the opportunity to post their cruel messages on her Facebook account? What if they were told, in real time, that their posts were directed at a real human being? Would they have persisted? Would Amanda have felt so intensely that her peers, and indeed the world, believed that she deserved to die? We can’t know for sure.
But we can work with you and your team to do everything in our power to stop it from ever happening in your social product.
You have a choice. You can give your users a safe space to share without fear of abuse. You can offer them a refuge from offline bullying. You can stop cyber bullying before it ever happens. And you can ensure that your users know where you stand on bullying by providing real-time consequences.
Send us your details, and we’ll be in touch. We would love to talk about the work we’ve done, and how we can partner with you to shape a healthy, happy, and safe community in your social product.