Anyone who plans to have a serious go at online community management faces a seemingly innocuous question pretty close to the start: Do we censor?
Depending on the nature of the online community you'll be managing, this question becomes much more complicated the more you think about it.
For starters, how will you differentiate between censorship and filtering? Many online community members (management included) use these two terms interchangeably when, in fact, they have two different meanings.
To censor means to remove only the offensive parts of the text. In theory, that doesn't sound so bad, but removing pieces of a text, however small, can change the nature of what's written. In the end, you're telling people what they can and can't say, and by the time all is said and done, how much does that interfere with what they had to say? If they can't say or write what they want, is there a point to doing it at all?
On the other hand, you've got filtering, which generally involves monitoring content and comments and simply not publishing anything that could be offensive. It's certainly not without merits, but it's by no means a perfect solution, either. For example, if you decide not to publish someone who feels very strongly about a subject and makes good points, and your decision is based solely on the fact that the person uses a few profanities, then you could be accused of taking that person's voice away. Why should others get a say when that person doesn't?
So you see the problem, hopefully. Each of these solutions has some worthy pros, but also some pretty strong cons.
That’s easy, you might think. I just won’t filter or censor. Problem solved.
Well, err… not really.
What will you do with the spammers and trolls? Every online community's got 'em. Doing nothing to manage those posts could be detrimental to participation in your community. No one's going to leave comments on your blog or participate in your discussions if they have to wade through a bunch of garbage to find the info they're looking for.
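To make that concrete, here's a minimal sketch, in Python, of the kind of automated first pass many communities run before a human ever reads a comment. Everything in it (the keyword list, MAX_LINKS, the function name looks_like_spam) is a hypothetical illustration, not any particular platform's tool or API.

```python
# A minimal, hypothetical first-pass spam check. Real communities usually
# lean on platform tools for this; the point here is only the idea, and
# every keyword and threshold below is made up for illustration.

SPAM_KEYWORDS = {"buy now", "free money", "click here"}  # illustrative list
MAX_LINKS = 3  # more links than this sends the comment to a review queue

def looks_like_spam(comment_text: str) -> bool:
    """Return True if the comment should be held for human review."""
    text = comment_text.lower()
    if any(keyword in text for keyword in SPAM_KEYWORDS):
        return True
    if text.count("http://") + text.count("https://") > MAX_LINKS:
        return True
    return False

# Example: hold the obvious junk, publish everything else for people to read.
for comment in ["Great point about filtering!", "click here for FREE MONEY"]:
    print(comment, "->", "hold for review" if looks_like_spam(comment) else "publish")
```

The design choice matters here: a check like this only flags the obvious garbage for review; it doesn't silently delete anything, which keeps you on the filtering side of the line rather than the censorship side.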
Next, consider where you want these terms to apply. That is, if your community offers members the ability to interact across several forums, know what you expect from members and make those expectations clear. Personally, I believe in uniformity – if you're going to filter comments on your blog, you should filter them on Facebook and anywhere else your community gathers.
Finally, consider how you will reply to those who question why they've been censored or filtered. Be prepared to defend yourself against accusations that you're infringing on their freedom of speech.
Of course, there’s always this option: State your terms of use very clearly in numerous places so that users know what is expected of them and their behavior. Then monitor your online community’s comments. Filter out only the spam and serious trolls. Allow conversation and debate to take place naturally, and step in if things get out of hand. If users are being offensive or belligerent, give them a warning. If it doesn’t stop, then ban them. Repercussions should follow from the terms they agreed to when they decided to use the site.
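If you go that route, the policy above boils down to a small per-member ledger: strikes accumulate, a warning comes first, a ban follows, and everything traces back to the terms of use. The sketch below is one hypothetical way to model that in Python; the class, thresholds, and messages are my own illustration, not a reference to any real moderation tool.

```python
# A hypothetical sketch of the warn-then-ban policy described above.
# The thresholds and names are illustrative assumptions, not a real API.

from dataclasses import dataclass

WARN_AT = 1  # first offense earns a warning
BAN_AT = 2   # continued offenses earn a ban

@dataclass
class Member:
    name: str
    strikes: int = 0
    banned: bool = False

def record_offense(member: Member) -> str:
    """Apply the policy from the terms of use: warn first, then ban."""
    if member.banned:
        return f"{member.name} is already banned"
    member.strikes += 1
    if member.strikes >= BAN_AT:
        member.banned = True
        return f"{member.name} banned per the terms of use"
    if member.strikes >= WARN_AT:
        return f"{member.name} warned for offensive or belligerent behavior"
    return f"{member.name}: no action"

# Example: one warning, then a ban if the behavior doesn't stop.
troll = Member("example_user")
print(record_offense(troll))  # warning
print(record_offense(troll))  # ban
```

However you implement it, the important part is the same as in the prose: the repercussions are spelled out in the terms your users agreed to, so nobody can say the rules were invented after the fact.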
So while the question may seem unimportant at first, it's actually one of the most important questions you need to answer in your role as an online community manager. Spend time with your team assessing how you want to address this issue and how you plan to stay consistent about it. Really consider all of the possible outcomes. And don't forget to make your terms clear to your users.
How does your online community deal with monitoring comments? Do you have a process in place? Play it by ear? We’d love to hear your thoughts!