GTFO: Moderation in online communities

06.12.2014


Disruptive behavior threatens both offline and online communities and participatory systems. It is perhaps the single greatest source of apprehension in these discussions – what happens when someone, or a group of anonymous someones, hijacks the system and ruins it for everyone? Or, flipping the issue on its head: how can we ensure that participation in a community is productive and positive[1]? It is an especially pressing issue in digital communities – social networks, comments sections, internet forums, online multiplayer video games, etc – where anonymity exacerbates these worries tenfold. The barbaric "solution" here has been simply to disallow anonymity.

Even without anonymity, vast geographical distances, disconnected social circles, the safe perch of your own home, and many other factors still contribute to the online (toxic) disinhibition effect, known less generously as the "Greater Internet Fuckwad Theory": people behave worse in cyberspace than they would in meatspace (obvi). Real names and identities don’t stop respected journalists from engaging in petty Twitter fights.

Clay Shirky explains: when online, "[t]here’s a large crowd and you can act out in front of it without paying any personal price to your reputation," which "creates conditions most likely to draw out the typical Internet user’s worst impulses."

Behaving Badly

Online communities typically thrive on very dynamic memberships: anyone can easily create an account and join. This lets communities grow quickly, sometimes to enormous global scales. These qualities create communities ripe for experimentation, which, like science in meatspace, can lead to great good, great evil, and great nothing. On the evil end of the spectrum, some communities become breeding grounds for abusive behavior.

Without getting into the complex psychosocial roots of abusive behavior, it is clearly a major detriment to communities. It chills constructive discussion, intimidates new members, and fosters a harmful culture that becomes exclusionary and close-minded. In the popular video game Dota 2, it was found that most new players quit not because they lose games, but because other players are abusive towards them.

So some system of curtailing abusive behavior – often referred to as "toxic" to capture its infectious nature – is necessary in digital communities in order to provide as safe a space as possible for their members. One where opinions, discussion, content, etc can flow freely without fear of persecution. One where anonymity is a tool for safety and expression and not something to be feared.

The most popular approach to curbing abusive behavior is a process that combines moderation and punishment. First, content is "moderated" – deemed abusive (or merely unwanted) and then deleted – and second, the offending user is punished, for example by being temporarily or permanently banned from the service. Conceptually, this process is the simplest, and thus the easiest to implement and understand.
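
To make the shape of this process concrete, here's a minimal sketch in Python. Everything in it – the strike counter, the "BAN_THRESHOLD", the function names – is hypothetical, invented for illustration rather than taken from any particular service.

    from dataclasses import dataclass

    BAN_THRESHOLD = 3  # hypothetical: strikes before a ban kicks in

    @dataclass
    class User:
        name: str
        strikes: int = 0
        banned: bool = False

    def delete_content(content_id: str) -> None:
        print(f"deleted {content_id}")  # stand-in for actually removing the post

    def resolve(content_id: str, author: User, is_abusive: bool) -> None:
        """Step 1: moderate (delete the content); step 2: punish the author."""
        if not is_abusive:
            return                          # deemed acceptable: nothing happens
        delete_content(content_id)          # moderation
        author.strikes += 1                 # punishment accumulates...
        if author.strikes >= BAN_THRESHOLD:
            author.banned = True            # ...into a temporary or permanent ban

    troll = User("troll")
    for post_id in ("p1", "p2", "p3"):
        resolve(post_id, troll, is_abusive=True)
    print(troll.banned)  # True: three strikes and they're out

The simplicity is the point: one verdict, one escalating penalty, and nothing else to configure – which is exactly why this design is so common.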

Moderation

The moderation step is typically carried out by a small group of appointed moderators (or even a single moderator) who scan for "inappropriate" content or respond to content flagged as such by users. A moderator then decides whether or not to punish the user and executes that decision – with or without consulting fellow moderators.

Naturally, a justice process which does not directly involve members of its community raises suspicion. Nor does it function particularly well. There is a legacy of moderator abuse, favoritism, and corruption in which the very system meant to maintain the quality of a group leads to its demise. Users feel persecuted or unfairly judged, and there is seldom a formal process for appeal. In large communities – Reddit's r/technology, which has over 5 million users, has had its share of mod drama – an appeal process may seem impractical to implement. The odds that such a system succeeds are about the same as in any system where authority is concentrated in one or a few – it amounts to hoping for a kind despot or benevolent dictator, one who happens to have your interests at heart.

One major issue with the appointed moderator system is that the consolidation of moderation power is often damaging to a community. A mod can harm the very discourse they are moderating simply by moderating it as they see fit, which may not represent the interests of the community’s members.

When designing infrastructure for any community, whether it be a multiplayer video game or an internet forum, the power of moderation must be distributed amongst the users, so that they themselves are able to dictate how the community evolves and grows. In this way, judgements of abusive behavior reflect the actual sentiment of the community as a whole, rather than the idiosyncrasies of a stranger – as is so often the case in large, far-flung digital communities.

Here are two moderation systems which have novel and effective approaches.


Slashdot

Slashdot takes an interesting approach with a distributed moderation system. In its halcyon days, Slashdot relied on a group of 25 mods who oversaw a proportionally small community that created a manageable amount of abuse. When the site exceeded the capacity of this small team, the number of moderators swelled to 400, and "[i]mmediately several dozen of these new moderators had their access revoked for being abusive."

Counterintuitively, the solution was to expand moderation to all of the site's users. Not all at once, but now any participant who satisfies a few very basic criteria can be drafted for a term of moderator duty. Thus each member of the community is given the opportunity to assert his or her vision for its growth. And over time – the assumption goes – moderation decisions come to reflect the common will of the group.

In this mass moderation system, a new concern arises: what if a citizen moderator uses their tenure to abuse their privileges? This echoes a more general fear that only a certain kind of person is suited for positions of power: those with the moral aptitude necessary for navigating potentially compromising and difficult decisions, who possess an understanding of the implications of that power and the self-restraint to forgo it when needed.

To curb the abusive potential inherent in this new system, Slashdot introduced a "metamoderation" system, which operates on principles similar to their mass moderation system: anyone satisfying a few more basic criteria can serve as a metamoderator. Metamods judge the fairness or accuracy of the decisions of other moderators, and these decisions are used to calibrate the selection of moderators. Moderators whose decisions are consistently contested have less of a chance of being selected to moderate next time. Conversely, moderators whose decisions are more representative of the community's values have their moderation prospects elevated.
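
As a rough illustration, the calibration loop might look something like the sketch below. The weighting scheme is my own invention, not Slashdot's actual algorithm; only the general idea – consistently contested decisions lower your odds of being drafted again – comes from their system.

    import random

    def selection_weight(fair_votes: int, unfair_votes: int) -> float:
        """Moderators whose decisions metamods consistently contest
        get a lower chance of being drafted for the next term."""
        total = fair_votes + unfair_votes
        if total == 0:
            return 1.0                      # no track record yet: baseline odds
        return max(0.0, (fair_votes - unfair_votes) / total)

    def draft_moderators(records: dict[str, tuple[int, int]], k: int) -> list[str]:
        """Draw k users for moderator duty, weighted by their metamod record."""
        names = list(records)
        weights = [selection_weight(*records[name]) for name in names]
        return random.choices(names, weights=weights, k=k)

    # e.g. "alice" has 9 fair / 1 unfair metamod votes, "mallory" has 2 / 8:
    print(draft_moderators({"alice": (9, 1), "mallory": (2, 8)}, k=3))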

A big question here: Slashdot is a relatively homogeneous group when compared to something as massive as Twitter. Does such a system translate to these massive social networks, on which many communities with differing value systems coexist?


League of Legends

League of Legends (LoL) is a massively popular Multiplayer Online Battle Arena (MOBA) game with over 27 million daily active players. MOBAs are games based heavily on team play and cooperation amongst players; preventing abusive behavior is crucial to the enjoyment of the game because abuse undermines the fundamental mechanic of teamwork.

I'm not a LoL player myself (I prefer Dota) but LoL's studio, Riot Games, has made great efforts at improving the game's community and player experience (a talk on their behavioral approaches can be found here). Perhaps their most renowned tool has been the Tribunal[2], which allows players to pass judgement ("pardon or punish") on peers who have been repeatedly reported for bad behavior, providing access to context such as chat logs as part of a "case" against the accused. These cases are assembled out of multiple instances of reported abuse so that only players who are consistently disruptive are tried, and those who have the occasional bad day are excused (unless it becomes a habit).
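
In spirit, the case-assembly and verdict logic works something like the sketch below – with the caveat that the threshold and the simple majority vote here are hypothetical stand-ins, not Riot's real mechanics.

    from collections import Counter

    CASE_THRESHOLD = 5  # hypothetical: reports needed before a case is assembled

    def should_try(report_log: list[str], player: str) -> bool:
        """Only repeatedly reported players go to trial;
        the occasional bad day never reaches the Tribunal."""
        return report_log.count(player) >= CASE_THRESHOLD

    def verdict(votes: list[str]) -> str:
        """Peers vote to pardon or punish; a simple majority decides here."""
        tally = Counter(votes)
        return "punish" if tally["punish"] > tally["pardon"] else "pardon"

    reports = ["p1", "p2", "p1", "p1", "p1", "p3", "p1"]  # p1 reported 5 times
    if should_try(reports, "p1"):
        print(verdict(["punish", "punish", "pardon"]))    # -> punish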


Riot Games has found the Tribunal program to be successful. An audit of the community's decisions showed that 80% of the time, the players' verdict aligned with the staff's. Riot has since explored other approaches to encouraging positive participation amongst its players.


Both of these moderation systems outsource the community's justice process to the community itself, providing it with the tools to determine and enforce its own norms and values. But mass moderation only helps in coming to a consensus about what is problematic. It does not directly deal with how such behavior, once identified, should be handled. The approaches to discipline are myriad, and the choice of method affects a community's trajectory just as much as what it considers disruptive.

1. These terms – "productive", "positive", etc – are all subjective and encode the biases of the system's creators or managers. They often express a fear more honestly stated as: "What if people don't behave how I *want* them to?" or "What if people don't think the same way I do?". There is always the possibility that a community is used or develops in ways far removed from the creators'/managers' original vision. That is an exciting possibility, but not always recognized as such.

2. On the flip side, Riot Games has also implemented an "Honor" system, a point-based system allowing players to recognize others who make positive contributions to the game experience. I can't tell if these Honor points influence participation in the Tribunal system in any way (which would make it more comparable to Slashdot's metamoderation), but at the least they incentivize good behavior as a visible signifier amongst players.


Let them be Orks

06.11.2014

The Orks in W40K: Dawn of War II

Within the Warhammer 40K universe is a prominent alien species known as the "Orks", notorious for their infinite dim-wittedness, reflexively aggressive nature, and staggeringly large numbers.

For the upcoming Warhammer 40K MMO, Warhammer 40K: Eternal Crusade, the producers were faced with an issue: a plurality of players (40.7%) wanted to play as one of the other factions, the Space Marines. There are supposed to be far more Orks than Space Marines in the Warhammer universe, and this imbalance would throw the dynamics of the game out of sync with canon.

The game's solution: the MMO, like many, is pay-to-play – with one exception. The game is free-to-play if you play as an Ork.

The expected result is that this design choice will bring the game's universe more in line with the canonical Warhammer 40K universe. There will be staggeringly large numbers of Orks, and their ranks will be filled with cheap, reprehensible players who exploit the no-cost system by abusing other players. That is, they'll also reflect the behavior of Orks in the canonical universe.

The idea of leveraging a fictional world's narrative need for abusive player behavior is brilliant. Will players be more accepting of it?


BitTorrent Chat

06.10.2014

BitTorrent has been making some great spin-offs of the BitTorrent protocol – spin-offs which are more socially sanctioned than the protocol's most ubiquitous use.

BitTorrent Sync, for instance, is a decentralized file-syncing alternative to services like Dropbox – instead of your files going to a central server owned by another organization, files are synced across your various devices using the BitTorrent protocol. It's quite fast, and your files only ever exist on devices you control.


The one I'm most excited about is BitTorrent Chat. It's more experimental, but it uses the protocol for decentralized, anonymous, and encrypted communication.

Once upon a time, BitTorrent required a central server of some kind – a tracker, which keeps track of the other peers in the "swarm" so the client knows whom to connect to. Pretty much all other chat protocols likewise require a central server to coordinate the messaging.

But since then an alternative has taken over: distributed hash tables (DHT), where peers are located via other peers (that is, in a decentralized manner). BT Chat uses DHT (updated to support encryption) to altogether remove the need for a central server.
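
To give a feel for how that works, here's a toy, Kademlia-flavored lookup: each peer knows only a few neighbors, and you walk toward the target ID by repeatedly hopping to whichever known peer is "closest" by XOR distance. It's a drastic simplification of a real DHT (no routing tables, no parallel queries, no actual networking), just enough to show why no central server is needed.

    class Peer:
        def __init__(self, peer_id: int):
            self.id = peer_id
            self.neighbors: list["Peer"] = []  # the handful of peers this one knows

    def xor_distance(a: int, b: int) -> int:
        return a ^ b  # Kademlia measures "closeness" between IDs as their XOR

    def lookup(target: int, start: Peer) -> Peer:
        """Hop from peer to peer, always toward the ID closest to the target."""
        current = start
        while True:
            closest = min(current.neighbors,
                          key=lambda p: xor_distance(p.id, target),
                          default=current)
            if xor_distance(closest.id, target) >= xor_distance(current.id, target):
                return current  # nobody we know is closer: best match found
            current = closest   # one hop closer, no central tracker involved

    a, b, c = Peer(0b100), Peer(0b110), Peer(0b111)
    a.neighbors, b.neighbors = [b], [c]
    print(lookup(0b111, a).id)  # 7, reached by hopping a -> b -> c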

Users are at most identified by their public encryption keys, and the service uses forward secrecy: for each conversation, a short-term session key is derived via an ephemeral key exchange, so that a future compromise of the chatters' long-term private keys does not compromise past chats.

Basically what this amounts to is:

  • you communicate directly with whoever you're talking to, without going through any machine you don't control (aside from DHT to locate them in the first place).
  • all your communications are encrypted so that no one can snoop your messages as they go.
  • even if an attacker later gets hold of your long-term private key, forward secrecy means they can't decrypt any of your past communications (see the sketch below).
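
That last point is worth a sketch. Below is the forward-secrecy idea in miniature, using ephemeral X25519 Diffie-Hellman via Python's "cryptography" package – a concept demo, not BitTorrent Chat's actual handshake.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey,
    )
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def session_key(mine: X25519PrivateKey, theirs: X25519PublicKey) -> bytes:
        """Derive a short-term key from throwaway (ephemeral) keypairs."""
        shared = mine.exchange(theirs)
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"chat session").derive(shared)

    # For each conversation, both parties generate fresh, disposable keypairs.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Both sides derive the same per-session key from the exchange.
    assert session_key(alice, bob.public_key()) == session_key(bob, alice.public_key())

    # Once the ephemeral private keys are deleted, recorded traffic can't be
    # decrypted later - even if the long-term identity keys eventually leak.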

BitTorrent Chat is only in alpha at the moment, but it has the potential to be a great new, secure system for online discussions. Really excited to see where it goes.

I'm on board with BitTorrent's mission and hope to see more applications of their decentralized approach. Given the uneasy and increasingly cynical (and resigned) atmosphere around surveillance, these kinds of technologies are really intriguing and valuable.

Although...while such applications provide us alternatives to untrustworthy intermediary servers, we'll still worry about who's on the other end.


The Interlocking Public

06.02.2014

interlocking tori, not related

In The Elements of Journalism (Bill Kovach & Tom Rosenstiel, fantastic book btw) there's a bit about what they call the "theory of the interlocking public". The theory challenges the old-guard journalism assumption that "the people need to know about the important issues", which presupposes that there is such a thing as a universally important issue. This outmoded approach underlies the apprehension journalists feel about technologies which increasingly cater to esoteric interests, and their despair that the public doesn't care about anything important anymore.

But the interlocking public theory posits that when it comes to a particular topic/story/interest, there are three possible levels of engagement for an individual:

  • I don't care at all
  • I'm interested/intrigued
  • I'm very passionate about it

The beauty of the interlocking public is that, in the aggregate, all our interests cover the "important" issues of the day. I care very strongly about one thing which you don't have any interest in, and you care strongly about something I care little about. Thus our interests are complementary and things are ok in the end. That's the theory, anyways.

In the authors' words:

The notion that people are simply ignorant, or that other people are interested in everything, is a myth. ... There is an involved public, with a personal stake in an issue and a strong understanding. There is an interested public, with no direct role in the issue but that is affected and responds with some firsthand experience. And there is an uninterested public, which pays little attention and will join, if at all, after the contours of the discourse have been laid out by others. In the interlocking public, we are all members of all three groups, depending on the issue. ... The sheer magnitude and diversity of the people is its strength. – The Elements of Journalism, pp. 24-25


Neuromancer Network Visualizer

06.01.2014

A year here and he still dreamed of cyberspace, hope fading nightly. All the speed he took, all the turns he'd taken and the corners he cut in Night City, and he'd still see the matrix in his dreams, bright lattices of logic unfolding across that colourless void... – Neuromancer, William Gibson

This weekend I played around in Unity and built an experimental 3D network traffic visualizer. It's very simple now, but eventually I'd like to transform network topology into a virtual reality space which can be explored. Packets could be inspected as one would physical objects in the real world; servers would be enormous fortresses safeguarding precious data.

You'd be able to turn your local network topology into a game level for "hacking" and fighting against ICE. It would be awesome to see the Kuang Grade Mark Eleven icebreaker grow around you:

...that content of shipment is Kuang Grade Mark Eleven penetration program. Bockris further advises that interface with Ono-Sendai Cyberspace 7 is entirely compatible and yields optimal penetration capabilities, particularly with regard to existing military systems...

He slotted the Chinese virus, paused, then drove it home.

"Okay," he said, "we're on..."

"Christ on a crutch," the Flatline said, "take a look at this."

The Chinese virus was unfolding around them. Polychrome shadow, countless translucent layers shifting and recombining. Protean, enormous, it towered above them, blotting out the void.

"Big mother," the Flatline said.

The source is available on GitHub along with instructions for running it. The server code is based on Jonathan Dahan's pagesounds project.
