GTFO: Moderation in online communities

· 06.12.2014 · etc


Disruptive behavior threatens both offline and online communities and participatory systems. It is maybe the single greatest source of apprehension in these discussions – what are the chances that someone, or a group of anonymous someones, hijacks the system and ruins it for everyone? Or, flip the issue on its head: how can we ensure that participation in a community is productive and positive[1]? It is an especially pressing issue in digital communities – social networks, comments sections, internet forums, online multiplayer video games, etc – where anonymity exacerbates these worries tenfold. The barbaric "solution" here has been simply to disallow anonymity.

Even without anonymity, vast geographical distances, disconnected social circles, the safe perch of one's own home and many other factors still contribute to this online (toxic) disinhibition effect, known less generously as the "Greater Internet Fuckwad Theory": people behave worse in cyberspace than they would in meatspace (obvi). Real names and identities don't stop respected journalists from engaging in petty Twitter fights.

Clay Shirky explains: When online, "[t]here's a large crowd and you can act out in front of it without paying any personal price to your reputation," which "creates conditions most likely to draw out the typical Internet user's worst impulses."

Behaving Badly

Online communities typically thrive on highly dynamic memberships that anyone can easily join. This lets communities grow quickly, sometimes allowing them to operate at enormous global scales. These qualities create communities ripe for experimentation, which, like science in meatspace, can lead to great good, great evil, and great nothing. On the evil end of the spectrum, some communities become breeding grounds for abusive behaviors.

Without getting into the complex psychosocial roots of abusive behavior, it is clearly a major detriment to communities. It prohibits constructive discussion, intimidates new members and fosters a harmful culture which becomes exclusionary and close-minded. In the popular video game Dota 2, it was found that most new players quit not because they lose games, but because other players are abusive towards them.

So some system of curtailing abusive behavior – often referred to as "toxic" to capture its infectious nature – is necessary in digital communities in order to provide as safe a space as possible for their members. One where opinions, discussion, content, etc can flow freely without fear of persecution. One where anonymity is a tool for safety and expression and not something to be feared.

The most popular approach to curbing abusive behavior is a process that combines moderation and punishment. First, content is "moderated" – deemed abusive (or merely unwanted) and then deleted – and second, the offending user is punished, for example by being temporarily or permanently banned from the service. Conceptually, this process is the simplest and thus the easiest to implement and understand.
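As a rough sketch, that two-step process might look like the following. Everything here – the names, the flag threshold, the strike count – is invented for illustration and not the logic of any real platform:

```python
from dataclasses import dataclass

# Hypothetical sketch of the moderate-then-punish pipeline.
# Thresholds are invented for illustration.
FLAG_THRESHOLD = 3      # user flags needed before content gets reviewed
STRIKES_BEFORE_BAN = 2  # removals before the author is banned

@dataclass
class Post:
    author: str
    text: str
    flags: int = 0

class Moderator:
    def __init__(self):
        self.strikes: dict[str, int] = {}
        self.banned: set[str] = set()

    def review(self, post: Post, is_abusive: bool) -> str:
        """Step 1: moderate (delete abusive content); step 2: punish its author."""
        if post.flags < FLAG_THRESHOLD or not is_abusive:
            return "kept"
        self.strikes[post.author] = self.strikes.get(post.author, 0) + 1
        if self.strikes[post.author] >= STRIKES_BEFORE_BAN:
            self.banned.add(post.author)
            return "deleted; author banned"
        return "deleted"
```

The design choice worth noticing is how little judgment lives in the code: the whole system hinges on whoever supplies `is_abusive`, which is exactly where the disputes discussed below arise.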

Moderation

The moderation step is typically realized through a small group of appointed moderators (or even a single moderator) who scan for "inappropriate" content or respond to content flagged as such by users. A moderator then decides whether or not to punish the user, and executes that decision – with or without discussion with fellow moderators.

Naturally, a justice process which does not directly involve members of its community raises suspicion. Nor does it function particularly well. There is a legacy of moderator abuse, favoritism, and corruption where the very system meant to maintain the quality of a group leads to its own demise. Users feel persecuted or unfairly judged, and there is seldom ever a formal process for appeal. In large communities – Reddit's r/technology, which has had its share of mod drama, has over 5 million users – an appeal process may seem impractical to implement. Such systems offer about the same assurance of success as any in which authority is concentrated in one or a few people: you are hoping for a kind despot, a benevolent dictator who happens to have your interests at heart.

One major issue with the appointed moderator system is that the consolidation of moderation power is often damaging to a community. A mod can harm the very discourse they are moderating simply by moderating it as they see fit, which might not represent the interests of the community's members.

When designing infrastructure for any community, whether it be a multiplayer video game or an internet forum, the power of moderation must be distributed amongst the users, so that they themselves are able to dictate how the community evolves and grows. In this way, judgements of abusive behavior reflect the actual sentiment of the community as a whole, as opposed to the idiosyncrasies of a stranger, as it often is in far-flung and large digital communities.

Here are two moderation systems which have novel and effective approaches.


Slashdot

Slashdot takes an interesting approach with a distributed moderation system. In its halcyon days, Slashdot relied on a group of 25 mods who oversaw a proportionally small community that created a manageable amount of abuse. When the site exceeded the capacity of this small team, the number of moderators swelled to 400, and "[i]mmediately several dozen of these new moderators had their access revoked for being abusive."

Bizarrely, the solution was to expand moderation to all of the site's users. Not all at once, but now any participant who satisfies a few very basic criteria can be drafted for a term of moderator duty. Thus each member of the community is given the opportunity to assert his or her vision for its growth. And over time–the assumption goes–the moderation decisions reflect the common will of the group.

In this mass moderation system, a new concern arises: what if a citizen moderator uses his tenure to abuse his privileges? This echoes more general fears that there is a certain kind of person best suited for positions of power: one who has the moral aptitude necessary for navigating potentially compromising and difficult decisions, who possesses an understanding of the implications of that power, and who has the self-restraint to forgo it when needed.

To curb the abusive potential inherent in this new system, Slashdot introduced a "metamoderation" system, which operates on principles similar to their mass moderation system: anyone satisfying a few more basic criteria can serve as a metamoderator. Metamods judge the fairness or accuracy of the decisions of other moderators, and these decisions are used to calibrate the selection of moderators. Moderators whose decisions are consistently contested have less of a chance of being selected to moderate next time. Conversely, moderators whose decisions are more representative of the community's values have their moderation prospects elevated.
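The calibration loop described above can be sketched as follows. The weights, multipliers, and floor are all invented for illustration – Slashdot's actual karma mechanics are more involved – but the core idea is the same: metamoderation feedback adjusts each user's probability of being drafted again.

```python
import random

# Hypothetical sketch of metamoderation feedback: moderators whose
# decisions metamods rate as fair become more likely to be drafted
# for future moderation duty; contested moderators become less likely.
# All numbers here are invented for illustration.

class ModeratorPool:
    def __init__(self, users):
        self.weight = {u: 1.0 for u in users}  # selection weight per user

    def metamoderate(self, moderator: str, fair: bool):
        # Fair decisions raise a moderator's future selection weight;
        # contested ones lower it (floored so no one is excluded outright).
        factor = 1.25 if fair else 0.5
        self.weight[moderator] = max(0.1, self.weight[moderator] * factor)

    def draft(self, k: int, rng=random):
        # Draw k users for a term of moderator duty,
        # weighted by their metamoderation track record.
        users = list(self.weight)
        return rng.choices(users, weights=[self.weight[u] for u in users], k=k)
```

Note that the floor on the weight reflects the system's spirit: a contested moderator's influence decays, but the draft never hardens into a permanent elite.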

A big question here: Slashdot is a relatively homogeneous group when compared to something as massive as Twitter. Does such a system translate to these massive social networks, on which many communities with varying value systems coexist?


League of Legends

League of Legends (LoL) is a massively popular Multiplayer Online Battle Arena (MOBA) game with over 27 million daily active players. MOBAs are games based heavily on team play and cooperation amongst players; preventing abusive behavior is crucial to the enjoyment of the game because abuse undermines the fundamental mechanic of teamwork.

I'm not a LoL player myself (I prefer Dota), but LoL's studio, Riot Games, has made great efforts to improve the game's community and player experience (a talk on their behavioral approaches can be found here). Perhaps their most renowned tool has been the Tribunal[2], which allows players to pass judgement ("pardon or punish") on peers who have been repeatedly reported for bad behavior, with access to context such as chat logs as part of a "case" against the accused. These cases are assembled out of multiple instances of reported abuse so that only players who are consistently disruptive are tried, and those who have the occasional bad day are excused (unless it becomes a habit).
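A toy sketch of that case-assembly idea, with invented thresholds (Riot's real criteria and vote-weighting were never public in this form):

```python
from collections import defaultdict

# Hypothetical sketch of Tribunal-style case assembly: reports accumulate
# per player, and only repeat offenders ever go before their peers.
# Both constants below are invented for illustration.
CASE_THRESHOLD = 5     # distinct reports needed before a case is assembled
PUNISH_MAJORITY = 0.5  # fraction of "punish" votes needed to convict

class Tribunal:
    def __init__(self):
        self.reports = defaultdict(list)  # player -> chat-log excerpts

    def report(self, player: str, chat_log: str):
        self.reports[player].append(chat_log)

    def open_case(self, player: str):
        """Only consistently reported players are tried; the occasional
        bad day never accumulates enough reports to open a case."""
        logs = self.reports[player]
        return logs if len(logs) >= CASE_THRESHOLD else None

    def verdict(self, votes: list[str]) -> str:
        """Peers vote 'punish' or 'pardon'; a majority for punish convicts."""
        punish = sum(1 for v in votes if v == "punish")
        return "punish" if punish / len(votes) > PUNISH_MAJORITY else "pardon"
```

The threshold is doing the forgiving here: a single heated match generates one report at most, so only a pattern of reports ever reaches a jury.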


Riot Games has found the Tribunal program to be successful. An audit of the community's decisions showed that 80% of the time, the players' verdict aligned with the staff's. Riot has since explored additional approaches to encouraging positive participation amongst its players.


Both these moderation systems outsource the community's justice process to the community itself, providing it with the tools to determine and enforce its own norms and values. But mass moderation only helps a community come to a consensus about what is problematic. It does not directly address how such behavior, once identified, should be handled. The approaches to discipline are myriad, and the choice of method affects a community's trajectory just as much as what it considers disruptive.

1. These terms – "productive", "positive", etc – are all subjective and encode the biases of the system's creators or managers. They are often the expression of a fear which is more honestly stated as: "What if people don't behave how I *want* them to?" or "What if people don't think the same way I do?". There is always the possibility that a community is used, or develops, in ways far removed from the creators'/managers' original vision. That is an exciting possibility, but not always recognized as such.

2. On the flip side, Riot Games has also implemented an "Honor" system, which is a point-based system allowing players to recognize other players who make positive contributions to the game experience. I can't tell if these Honor points in any way influence participation in the Tribunal system (which would make the system more comparable to Slashdot's metamoderation), but it at least incentivizes good behavior as a visible signifier amongst others.