Frequently Asked Questions

Understand what Community Sift can do for your social product

What is Community Sift?

Community Sift is a Software as a Service (SaaS) that easily integrates with social platforms, games, or online communities. Our clients are enterprise-level organizations seeking a customizable and scalable solution to their community management needs.

Executives, engineers, moderators, and community professionals all sign on to our platform to learn about and improve their product’s community dynamics. With data-driven insights at their fingertips, our clients can grow a more active and engaged user base.

Our service will help clients answer questions like:

  • What are my users talking about?
  • What's trending in my community?
  • How can my team moderate better and faster without hiring more people?
  • How can I prevent trolling, bullying, and harassment without censoring innocent users?
  • How can I ensure the protection of children's online privacy?

Are you a blacklist or a whitelist?

Although our founders helped pioneer whitelist filtering and have worked extensively with blacklists, we believe that model is too restrictive and doesn't allow for freedom of expression. Black/whitelist filters operate on the assumption that a word is either good or bad, creating endless false positives and negatives. The result is an inefficient system and a frustrating user experience. We place phrases on a spectrum somewhere between good and bad, allowing for a much greater level of nuance.

Can this help me with COPPA compliance?

Yes. We designed a set of rules for personally identifiable information specifically with COPPA requirements in mind. During setup, we will interview you about your COPPA goals and help you tune the rules to achieve compliance. We still recommend a third-party COPPA review of your privacy statement, account creation process, and internal data storage.

Is this safe for kids?

Absolutely. We believe this is the most reliable solution short of unplugging the computer.

How do you protect freedom of speech?

Consider the movie theatre. While an artist is free to create any type of movie they want, the movie theatre reserves the right to label that content so that patrons know what to expect and can choose not to see it. Our goal is not to prevent people from saying things so much as empower product owners with a greater understanding of the content appearing on their platform. How they decide to manage that content is entirely up to them.

For sites with younger audiences, the simplest explanation is that if someone shows up at a playground wanting to express themselves in a way that threatens the kids, action will be taken to manage this behaviour. There are two conflicting human rights at play: that of free speech, and that of children's safety. Most will agree that the latter deserves priority because there is more at stake.

We can also provide the freedom to listen. With our system, it is possible to let everything through and then let your guests choose what they want to hear. If some are going to be driven away by vulgarity or sexting, then why not let them choose a more refined experience?

Can you filter images?

Yes. Inquire for more info.

What is 'toxicity'?

'Toxicity' is a term used to describe online behavior with a profoundly damaging impact on the community and the well-being of its members. Toxic users demonstrate a lack of self-control more commonly seen online than in real life. Examples include flooding chat rooms with hate speech, deliberately sabotaging the experience of other users, using language that incites violence, or excessive bullying.

Why is it important to deal with toxic users?

Toxic users that find there is little to no accountability for their actions will take up residence on any platform that allows them to continue harassing others without consequence. Unless measures are taken to discourage bad conduct, they are unlikely to reform their behavior. Since users who experience toxicity are 320% more likely to quit, the question becomes one of retention.

Typically only 1-2% of users cause problems, yet almost all of the filtering technology on the market is geared towards blocking them, at the expense of genuine engagement from the other 98%. We realized that if we could identify that 2%, we could give them the stricter filter while allowing more leeway for everyone else. The result? Instead of growing toxic, communities grow to be full of positive interactions that are rewarded and esteemed.

How do you stop toxic users from abusing others?

We build a reputation for everyone based on the risk associated with their choice of language. Users who consistently trigger the system become 'untrusted,' while those who use language responsibly earn a 'trusted' status.

Each client chooses the level of risk they want to tolerate within their community. They are also able to consider a range of topics such as bullying, sexting, hate speech, and profanity separately. For untrusted users, clients may choose to tolerate a lower level of risk, knowing that these are the users most likely to cause problems when afforded greater freedoms. The logic reverses for trusted users.
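As a rough sketch of how per-topic thresholds and trust levels could combine, consider the Python illustration below. The topic names, the risk scale, and the threshold numbers are all hypothetical, chosen for the example rather than taken from our actual configuration.

    # Hypothetical per-topic risk thresholds by trust level. Topic names,
    # the risk scale, and the numbers below are illustrative only.
    RISK_THRESHOLDS = {
        "bullying":    {"trusted": 5, "untrusted": 2},
        "sexting":     {"trusted": 4, "untrusted": 1},
        "hate_speech": {"trusted": 2, "untrusted": 0},
        "profanity":   {"trusted": 6, "untrusted": 3},
    }

    def allow_message(topic_risks: dict, trust: str) -> bool:
        """Allow a message only if every detected topic's risk falls
        within the threshold for the sender's trust level."""
        return all(
            risk <= RISK_THRESHOLDS[topic][trust]
            for topic, risk in topic_risks.items()
        )

    # The same message gets more leeway from a trusted sender:
    print(allow_message({"profanity": 4}, "trusted"))    # True
    print(allow_message({"profanity": 4}, "untrusted"))  # False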

How do you prevent people from gaming the system by earning trust and then being abusive?

Hundreds of thousands of rules back our system, and at this point there are very few tricks we have not seen before. When users start pushing the boundaries, they receive a warning and eventually lose trust and privileges. We fully expect deliberate attempts to game the system, which is why trust can be lost just as readily as it is earned.

We are careful not to reward negative behavior with shocking messages and “easter egg”-like experiences. Instead, users quickly learn that if they are negative, it is harder to say things and get attention, while positive behavior results in engagement and social reward. Most users just want a response from their peers, which they find on their trusted account. Ideally with time, they start making more friends and become the defenders and champions of the community.

Can you handle usernames?

Yes. Usernames are different from chat, so we designed a filter specifically for them which looks at patterns in words.

Can you handle l33t speak and filter bypass tricks?

Yes. Our system automatically looks for hidden letters and numbers and even characters from other languages. We have also built out thousands of unique rules to account for subversive manipulations like the ones below.

For example:

[Image: examples of filter bypass tricks]
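To give a flavour of the approach, here is a much-simplified normalization pass in Python that maps common substitutions and lookalike characters back to plain letters before matching. This sketch is illustrative only; our production coverage involves thousands of rules.

    # Simplified sketch: map l33t substitutions and lookalike characters
    # back to plain letters, then strip separators that hide words.
    # The mapping below is a tiny illustrative subset.
    SUBSTITUTIONS = str.maketrans({
        "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
        "@": "a", "$": "s", "!": "i",
        "а": "a", "е": "e", "о": "o",  # Cyrillic lookalikes
    })

    def normalize(text: str) -> str:
        """Lowercase, apply substitutions, and keep only letters/spaces."""
        text = text.lower().translate(SUBSTITUTIONS)
        return "".join(ch for ch in text if ch.isalpha() or ch.isspace())

    print(normalize("h4t3 sp-33ch"))  # "hate speech"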

Can you handle words that need context?

Yes, this is what makes us so different from standard text-filtration systems. Our AI engine is designed to analyze and classify messages in the context they are sent and determine the level of associated risk.

For example:

[Image: four messages containing the same swear word in different contexts]

All four lines contain the same swear word, but the intent behind them is different. Our system would tag the first pair as vulgar and the second pair as both vulgar and bullying, so clients who want to reduce the amount of bullying in their community without having to restrict vulgar language can do so.
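Conceptually, clients consume per-topic classifications rather than word matches. The Python sketch below, with hypothetical field names and risk values, shows how a policy could block bullying while leaving vulgar-but-friendly banter alone.

    # Hypothetical classification results: each message carries per-topic
    # risk values rather than a simple good/bad flag. Field names and
    # numbers are illustrative only.
    results = [
        {"text": "that game was ****ing awesome",
         "topics": {"vulgarity": 4}},
        {"text": "you are a ****ing idiot",
         "topics": {"vulgarity": 4, "bullying": 6}},
    ]

    # A client that tolerates vulgarity but not bullying blocks only
    # the second message:
    BLOCKED_TOPICS = {"bullying"}
    for r in results:
        blocked = bool(BLOCKED_TOPICS & r["topics"].keys())
        print("blocked" if blocked else "allowed", "->", r["text"])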

Can you handle chat across multiple lines?

y

e

s
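To spell that out: the word was split across three messages and still caught. One simplified way to do this is to stitch each user's recent messages together and scan the result, as in the Python sketch below. The window size is hypothetical, and "yes" stands in for a filtered word.

    from collections import defaultdict, deque

    WINDOW = 5  # hypothetical number of recent messages to stitch together
    recent = defaultdict(lambda: deque(maxlen=WINDOW))

    def completes_banned_word(user: str, message: str, banned: set) -> bool:
        """Return True if this message, stitched with the user's recent
        ones, completes a banned word."""
        recent[user].append(message.strip().lower())
        stitched = "".join(recent[user])
        return any(word in stitched for word in banned)

    for line in ["y", "e", "s"]:
        print(completes_banned_word("user42", line, banned={"yes"}))
    # False, False, True -- the third message completes the word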

Can you detect flooding?

Yes. We can handle both duplicate message flooding (sending the exact same message multiple times in a row) and velocity-based flooding (sending messages really fast).
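A simplified Python sketch of both checks follows; the thresholds are hypothetical, not our actual defaults.

    import time
    from collections import defaultdict, deque

    MAX_REPEATS = 3       # identical messages allowed in a row
    MAX_PER_WINDOW = 5    # messages allowed per WINDOW_SECONDS
    WINDOW_SECONDS = 10.0

    last_message = defaultdict(lambda: ("", 0))  # user -> (text, repeats)
    timestamps = defaultdict(deque)              # user -> recent send times

    def is_flooding(user: str, message: str) -> bool:
        now = time.monotonic()

        # Duplicate flooding: the same message too many times in a row.
        text, repeats = last_message[user]
        repeats = repeats + 1 if message == text else 1
        last_message[user] = (message, repeats)
        if repeats > MAX_REPEATS:
            return True

        # Velocity flooding: too many messages inside the time window.
        ts = timestamps[user]
        ts.append(now)
        while ts and now - ts[0] > WINDOW_SECONDS:
            ts.popleft()
        return len(ts) > MAX_PER_WINDOW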

Can you filter open chat differently from one-on-one chat?

Yes. We help you set up these rules to meet your goals.

Can you filter users 13-and-under differently from 13+?

Yes, using the same technique as above.
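Conceptually, each combination of channel type and age group carries its own policy, as in the purely illustrative Python sketch below. The policy names and groupings are hypothetical.

    # Hypothetical routing table: pick a rule set by channel and age group.
    POLICIES = {
        ("open", "under_13"):   "strict",
        ("open", "13_plus"):    "moderate",
        ("direct", "under_13"): "strict",
        ("direct", "13_plus"):  "relaxed",
    }

    def policy_for(channel: str, age_group: str) -> str:
        return POLICIES[(channel, age_group)]

    print(policy_for("open", "under_13"))  # strict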

Can you prioritize work queues?

Yes. We perform triage on all incoming content so your team can deal with the most pressing issues first.
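Conceptually, triage behaves like a priority queue: the highest-risk items surface first. A minimal Python sketch, with hypothetical risk scores:

    import heapq

    queue = []

    def report(risk: int, item: str) -> None:
        # heapq is a min-heap, so negate risk to pop the highest risk first.
        heapq.heappush(queue, (-risk, item))

    report(2, "mild profanity")
    report(8, "threat of violence")
    report(5, "bullying report")

    while queue:
        risk, item = heapq.heappop(queue)
        print(-risk, item)  # 8, then 5, then 2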

What languages do you support?

At present, English is our strongest focus, followed by French, Spanish, Portuguese, Russian, German, Italian, and Japanese. We prioritize additional languages based on client demand.

We can build support for a new language on request with a two-week lead time. If you can provide a corpus of around 10 million messages in that language, we can create an efficient workflow to tune roughly 95% of the vocabulary to your context.

Can I suggest new features?

Yes. We want our tool to empower you to reach your goals. If there is a feature you need, we want to hear about it.

Can I get custom features and development?

Yes. Often this is included in your package. Contact our sales team to schedule a demo and determine what custom features might work best for you.

What kind of support do you offer?

We pride ourselves on offering premium support to all our clients. Each customer has a support representative who serves as their primary point of contact. This rep understands their goals and helps them tune the system to meet those aims. If you wish, they can meet weekly (or more often when necessary) to go over training, new ideas, questions, best practices, and more. We want you to succeed.



Have more questions?

Didn't find the answer you were looking for? Please contact us.

Request A Demo

Want to see what Community Sift can do for your social product? Request a demo.
