Threat Modeling in Digital Communities

04.08.2015 21:07

It’s trolls all the way down

As part of my OpenNews fellowship I’ve recently started working on the Coral Project, a Knight-funded joint venture between the New York Times, the Washington Post, and Mozilla which, broadly, is focused on improving the experience of digital community online. That mission is a catch-all for lots and lots of subproblems; the set I’m particularly drawn to are those issues around creating inclusive and civil spaces for discussion.

Any attempt at this must contend with a variety of problems which undermine and degrade online communities. To make the problem more explicit, it’s helpful to have a taxonomy of these “threats”. I’ll try to avoid speculating on solutions and save that for another post.

Trolls/flamers/cyberbullying

The most visible barriers to discussion spaces are deliberately toxic actors - generally lumped together under the term “trolls”*.

I think most people are familiar with what a troll is, but for the sake of completeness: trolls are the users who go out and deliberately attack, harass, or offend individuals or groups of people.

If you’re interested in hearing more about what might motivate a troll, this piece provides some insight.

Astroturfing

Any mass conglomeration of spending power or social capital soon becomes a resource to be mined by brands. Many companies (and other organizations) have thus adopted the practice of astroturfing: simulating a grassroots movement.

For instance, a company gets a lot of people to rave about their products until you too, just by sheer exposure (i.e. attrition), adopt a similar attitude as your baseline. This is a much more devious form of spam because it deliberately tries to misshape our perception of reality.

This can increase the amount of noise in the network and reduce the visibility/voice of legitimate members.

Sockpuppeting/Sybil attacks

A common problem in ban-based moderation systems is that barriers to entry on the site may be low enough that malicious actors can create endless new accounts with which to continue their harassment. This type of attack is called a Sybil attack (named after Sybil, the dissociative identity disorder patient).

Similarly, a user may preemptively create separate accounts to carry out malicious activity, keeping deplorable behavior distinct from their primary account. In this case, the non-primary accounts are sockpuppets.

The problem with Sybil attacks seems to be the ease of account creation, but I don’t think the solution is to raise barriers to entry. Rather, we should ask whether banning is the best strategy in the first place. Ideally, we should seek to forgive and reform users rather than exclude them (I’ll expand on this in another post). Of course, that approach depends on whether or not the user is actually trying to participate in good faith.

Witch hunts

This is the madness of crowds that can spawn on social networks. An infraction, whether it exists or not, whether it is big or small, goes viral to the point that the response is disproportionate by several orders of magnitude. Gamergate, which began last year and now seems to be a permanent part of the background radiation of the internet, is an entire movement that blew up from a perceived offense - that is, one that was non-existent, and not particularly problematic even if it had been real. In these cases, the target often becomes a symbol for some broader issue, and it’s too quickly forgotten that this is a person we’re talking about.

Eternal September

“Eternal September” refers to September 1993, when AOL’s expansion of access to Usenet caused a large influx of new users who were not socialized into the norms of existing Usenet communities. The event is credited with the decline in quality of those communities, and the term now generally refers to anxiety about a similar event: new users who know nothing about what a group values or how it communicates come in and overwhelm the existing members.

Appeals to “Eternal September”-like problems may themselves be a problem - they can be used to rally existing community members to suppress a diversifying membership, in which case they’re really no different from any other kind of status quo bias.

To me this is more a question of socialization and plasticity - that is, how should new members be integrated into the community and its norms? How does the community smoothly adapt as its membership changes?

Brigading

Brigading is the practice in which organized groups seek out targets - individuals, articles, etc. - that criticize their associated ideas or people, then descend en masse to flood the comments in an incendiary way (or otherwise act harmfully).

This is similar to astroturfing, but I tend to see brigading as being more of a bottom-up movement (i.e. genuinely grassroots and self-organized).

Doxxing

Doxxing - the practice of uncovering and releasing personally identifying information without consent - is by now notorious and is no less terrible than when it first became a thing. Doxxing is made possible by continuity in online identity - the attacker needs to connect one particular account to others, which can be accomplished through linking the same or similar usernames, email addresses, or even personal anecdotes posted across various locations. This is a reason why pseudonyms are so important.

Swatting

Swatting is a social engineering (i.e. manipulative) “prank” in which police are called in to investigate a possible threat where there is none. It isn’t new but seems to have had a resurgence in popularity recently. What was once an act of revenge (i.e. you might “swat” someone you didn’t like) now seems to be done purely for the spectacle (i.e. without consideration of who the target is, just for lulz) - for instance, someone may get swatted while streaming themselves on Twitch.tv.

The Fluff Principle

The “Fluff Principle” (as it was named by Paul Graham) is where a vote-driven social network eventually comes to be dominated by “low-investment material” (or, in Paul’s own words, “the links that are easiest to judge”).

The general idea is that if a piece of content takes one second to consume and judge, more people will be able to upvote it in a given amount of time. Thus knee-jerk or image macro-type content comes to dominate. Long-form essays and other content which takes time to consume, digest, and judge just can’t compete.

Over time, the increased visibility of the low-investment material causes it to become the norm, so more of it gets submitted, the site’s demographic comes to expect it, and thus goes the positive feedback loop.
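To make the dynamic concrete, here’s a toy simulation (my own sketch with made-up numbers, not anything from Paul Graham): two equally likable items compete for votes, and each visitor has a random attention budget. The quick-to-judge item wins on votes by roughly an order of magnitude.

```python
import random

random.seed(0)

# Two equally likable items; only the time to consume and judge differs.
items = {
    "image macro":     {"judge_time": 1,   "votes": 0},  # one second to judge
    "long-form essay": {"judge_time": 300, "votes": 0},  # five minutes to judge
}

for _ in range(10_000):  # 10k visitors, each landing on one item at random
    item = items[random.choice(list(items))]
    budget = random.expovariate(1 / 120)  # attention budget, mean 120s (assumed)
    # A visitor can only vote on what they actually finish judging.
    if budget >= item["judge_time"] and random.random() < 0.5:
        item["votes"] += 1

for name, item in items.items():
    print(name, item["votes"])
# Typical run: the macro collects roughly ten times the votes of the essay,
# despite identical quality and identical traffic.
```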

Power-user oligarchies

In order to improve the quality of content or user contributions, many sites rely on voting systems or user reputation systems (or both). Often these systems confer greater influence, or access to control features, in accordance with social rank - which can spiral into an oligarchy, where a small number of powerful users controls the majority of content and discussion on the site.
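This feedback loop is easy to simulate. Here’s an illustrative preferential-attachment sketch (my own assumption about the dynamics, not any particular site’s mechanics): if rank buys visibility and visibility earns more rank, even identical users end up with a heavily skewed distribution.

```python
import random

random.seed(0)

# 100 users start with identical reputation. Each round, one unit of
# reputation (an upvote, a badge, etc.) lands on a user with probability
# proportional to their current reputation - standing in for the assumption
# that rank buys visibility, and visibility buys more rank.
reputation = [1] * 100

for _ in range(10_000):
    winner = random.choices(range(100), weights=reputation, k=1)[0]
    reputation[winner] += 1

reputation.sort(reverse=True)
share = sum(reputation[:10]) / sum(reputation)
print(f"top 10% of users hold {share:.0%} of all reputation")
# Typical run: the top ten users end up holding roughly a third of all
# reputation - about triple their proportional share - despite starting equal.
```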

Gaming the system

Attempts to solve any of the above typically involve creating some kind of technological system (as opposed to a social or cultural one) to muffle undesirable behavior and/or encourage positive contribution.

Especially clever users often find ways to turn these systems against their purpose. We should never underestimate the creativity of users under constrained conditions (in both bad and good ways!).


Whether or not some of these are problems really depends on the community in question. For instance, maybe a site’s purpose is to deliver quick-hit content and not cerebral long-form essays. And the exact nature of these problems - their nuances and idiosyncrasies to a particular community - are critical in determining what an appropriate and effective solution might be. Unfortunately, there are no free lunches :\

(Did I miss any?)


* The term “troll” used to have a much more nuanced meaning. “Troll” used to refer to subtle social manipulators, engaging in a kind of aikido in which they caused people to trip on their own words and fall by the force of their own arguments. They were adept at playing dumb to cull out our own inconsistencies, hypocrisies, failures in thinking, or inappropriate emotional reactions. But you don’t see that much anymore…just the real brutish, nasty stuff.


Assume Good Faith

04.06.2015 14:16

Throughout the many interviews we’ve been conducting for the Coral Project, the one that has stuck out the most to me was our talk with Jeffrey Lin, Lead Designer of Social Systems at Riot Games. At Riot, he built up a team explicitly designed to address the social and community problems which were common in League of Legends, Riot’s flagship game.

As in most online games, players regularly had to deal with hostility and toxicity from other players. For most of video gaming history, developers would typically just dismiss these social frictions as a problem beyond their control.

Generally, the impression is that this kind of toxicity comes from a relatively small portion of dedicated malicious actors. One of the key insights the social team uncovered was that - at least for League of Legends, but I suspect elsewhere as well - this was not the case. Yes, there were some consistently bad actors. But by and large regular players ended up accounting for most of the toxicity. Toxicity is distributed in the sense that a lot of it comes from people who are just having a bad day, but otherwise socialize well.

One of the social team’s principles is to acknowledge that players have a good moral compass. The challenge is in designing systems which allow them to express it. If players have to contend with toxic behavior day in and day out, then their general impression will be that toxic behavior is the norm. There is no space for them to assert their own morality, and so they stay quiet.

In group dynamics, this phenomenon is known as pluralistic ignorance - when members of a community privately feel one way about something, but never express that feeling because they perceive the norm of the community to be the opposite. Not only do they not express it, but in some cases they may be excessively vocal in their support for the perceived community norm.

A classic example is the story of The Emperor’s New Clothes - the emperor is tricked into wearing “clothes” which are invisible to the unworthy (in reality, he is wearing nothing). No one is willing to admit they do not see any clothes because they do not want to communicate to others that they are unworthy. Privately, everyone holds the belief that the emperor is not wearing any clothes. But publicly, they cannot admit it. It takes a child - ignorant of the politics behind everyone else’s silence - to point out that the emperor is naked.

A more contemporary example is drinking on college campuses. College drinking is an extremely visible part of our cultural understanding of the college experience (e.g. through movies). As a result, many students have the impression that all of their peers are aligned with this norm, while they are privately less comfortable with it. In reality, many of their peers are also less comfortable with it. This is complicated by the fact that students who do conform or buy into the norm are often very vocal about it, to the point of intimidation - and at this point the norm becomes self-enforcing because there is even more social incentive (driven by insecurity) to publicly conform to the norm (called posturing).

Wikipedia operates on a similar principle, which they call “Assume good faith”:

Assuming good faith is a fundamental principle on Wikipedia. It is the assumption that editors’ edits and comments are made in good faith. Most people try to help the project, not hurt it. If this were untrue, a project like Wikipedia would be doomed from the beginning. This guideline does not require that editors continue to assume good faith in the presence of obvious evidence to the contrary (vandalism). Assuming good faith does not prohibit discussion and criticism. Rather, editors should not attribute the actions being criticized to malice unless there is specific evidence of malice.

Or to put it more succinctly, “give people the benefit of the doubt”.

The key insight to draw from all of this is that moderation systems should be geared towards reforming users rather than punishing them. Once we acknowledge that people typically have a decent moral compass, we should reconsider the entire moderator-user relationship. It does not have to be an antagonistic one. Most users are not consistently bad and may just need a nudge or a reminder about the effects of their behavior. Moderation systems should instead be opportunities for a community to express its values and for a user to gain a better understanding of them. And they should be designed so that the community’s values reflect the aggregate of its members’ private values rather than a dominant norm which no one really believes in.
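As a purely hypothetical illustration (not Riot’s or the Coral Project’s actual system - the steps below are my own), a reform-oriented flow might look like an escalation ladder that leads with feedback and reserves punishment for repeated, unambiguous bad faith:

```python
from dataclasses import dataclass

# A hypothetical escalation ladder - these steps are illustrative,
# not any real product's moderation pipeline.
LADDER = [
    "remind the user of the specific community guideline involved",
    "show the user their flagged content and how others reacted to it",
    "apply a temporary restriction (e.g. a posting cooldown) with a clear path back",
    "escalate to human moderator review before any lasting ban",
]

@dataclass
class UserRecord:
    prior_flags: int = 0

def respond(user: UserRecord) -> str:
    """Pick the mildest intervention consistent with the user's history."""
    step = min(user.prior_flags, len(LADDER) - 1)
    user.prior_flags += 1
    return LADDER[step]

user = UserRecord()
print(respond(user))  # first flag: assume good faith, just remind
print(respond(user))  # second flag: concrete feedback, still no punishment
```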

This attitude of good faith is refreshing well beyond the scope of the Coral Project. So many arguments about important issues seem to devolve into unfair characterizations of “human nature”, which have never held much water for me. The behaviors we observe are only one possible manifestation of a person, guided by the systems in which they operate, and we cannot confidently extrapolate claims about some immutable “human nature” from them.


For further reading, The emperor’s dilemma: A computational model of self-enforcing norms (unfortunately I cannot find a PDF) develops a computational model of pluralistic ignorance.
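In that spirit, here’s a toy version of a self-enforcing norm (my own illustrative setup - a ring network and a simple conformity threshold - not the paper’s actual model): a small clustered minority genuinely endorses a norm, and everyone else conforms whenever at least half of their visible neighbors appear to.

```python
N = 100                                # agents arranged on a ring
believers = set(range(5))              # a small, clustered vocal minority
private = [i in believers for i in range(N)]  # True = genuinely endorses the norm
public = list(private)                 # everyone starts out expressing honestly

for _ in range(5):                     # a few sweeps; it settles quickly
    for i in range(N):
        if private[i]:
            continue                   # true believers always comply
        neighbors = [(i + d) % N for d in (-2, -1, 1, 2)]
        complying = sum(public[j] for j in neighbors)
        # A disbeliever conforms when at least half of their visible
        # neighborhood appears to endorse the norm.
        public[i] = complying >= 2

print(f"privately endorse the norm: {sum(private)}/{N}")
print(f"publicly comply:            {sum(public)}/{N}")
# Output: 5/100 privately endorse, 100/100 publicly comply - a norm almost
# no one believes in becomes the one everyone expresses.
```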



Apartment-Office

02.22.2015 00:00

I started working on the office layouts recently. There will be three office levels, going from Apartment to Office to Campus (there may be another intermediary between Office and Campus). Here’s the “apartment-office” so far:

I have been extremely frugal with the textures - the texture atlas for this scene only needs a size of about 64x64. I allocated a 256x256 texture for this so I will probably make some of the textures more detailed.

The area is too cramped at the moment - there’s no room for the cone people to move about, so I’ll have to make the space bigger. And since the perks you purchase can manifest in the office environment, there will need to be extra space for that too!


New onboarding and UI

02.02.2015 22:27

This weekend some PubSci friends came over and took a look at the current state of The Founder. There was a lot of really great feedback about improving the onboarding (there wasn’t much of one to speak of) and the UI (which was almost entirely in menus, not very “game-like”).

So the past couple days I’ve taken their suggestions and started implementing them. So far I’m really happy with how they’re turning out.

The onboarding prior to these changes was really just a screen where you could select your co-founder. After that, there were a bunch of text boxes introducing all of the game’s mechanics and concepts - of which there are a lot.

The onboarding now (below) provides players more flexibility in how they begin the game - they can now select the starting vertical (Information or Hardware) and starting location (Boston/NYC/SF) in addition to their co-founder. So the concepts of vertical and location are more naturally introduced as part of this early game configuration.

For the UI, the general idea was to take it out from these menus and integrate it more directly into the office environment. I went through a few iterations of this today:

Too claustrophobic and disorienting. You lose the sense of the office as a complete space. The perspective limited navigation options on mobile too much.

This is basically the route I ended up taking. It keeps the “god-view” (which is important to the critical aspect of the game) and preserves the player’s freedom to pan/zoom around at will. Office objects can be interacted with directly to bring up relevant menus. You can’t really see it in this gif, but interactable objects have pulsing colors.

And this is the most recent build, which is more polished and adds in purchasable expansions to the office. It was important to present these purchasable expansions as noticeable gaps in the space so it feels like your office is “filling up” - i.e. real growth is happening :)
