The Dream of the Internet

06.21.2014

Follow Your Dreams

It is upsetting though unsurprising that, amidst our current internet-driven company bonanza, much of the original spirit of the internet has been lost. Yes, most internet companies are founded on networking ideals of connection and communication - the most universally lauded (read: marketed) aspect of internet services - but these manipulatively uplifting appeals overshadow, perhaps intentionally, an equally powerful promise of the internet's architecture.

The internet's architecture is decentralized by nature: it is about individual computers communicating with one another[1]. But the dominant model is completely antithetical to that: communication between users on the internet is now almost entirely mediated through corporate-controlled and centralized servers (e.g. the servers of social networking services).

The internet services through which most of us find value are typically centralized. When I access Gmail, I am gathering my mail from servers concentrated under Google. When I'm syncing files from Dropbox, those files are coming from servers concentrated under Dropbox. When I send messages to friends on Facebook, those messages are routed through servers concentrated under Facebook.

Centralized internet

If you've been around for the past couple of decades, then you are well aware that the no-cost distribution and infinite replicability of digital information has undermined many industries founded on the scarcity of their products (see: piracy). Anyone appreciative of this unique quality of digital information is likely to wonder: why is this model of infinite replicability and distribution paradoxically absent from services - Google, Dropbox, Facebook, et al - which are digital, born and bred?

It is because the business models of these companies are not about providing digital services. They are about consolidation. Their value is directly derived from being centrally positioned within the network and extracting data (and data == value) from all communication that must pass through that position.

This consolidation is created by controlling access. When discussing internet services, we tend to gloss over their material foundations. But these companies are about creating a scarcity of service, which is rooted in the concentration of the hardware running the service. Control over the software - that is, the prevention of its distribution and replication - is necessary because it enables control over the hardware as well. Only Facebook, Inc. can administer the Facebook software; thus it will run only on their hardware. If I want to access the service, I must access it on their hardware. Thus my communication must go through their hardware, the wellspring of their value. And because that hardware belongs to them, they control access to it - even if their user policies say otherwise.


The principles of open source software (OSS) are meant to counteract this balkanization of the internet (or the formation of the "Splinternet"). OSS is fundamental to supporting the decentralized spirit of the internet. Anyone can run OSS on their own servers and provide the service to their own community, or simply to themselves. I don't have to access the service on an untrusted party's hardware: I can access it on a friend's server or even on my own computer. Open source software enables the freedom to access services on hardware that you or someone you trust controls[2].

Decentralized internet

Diaspora is an example of an open source, decentralized alternative to conventional social networking services (such decentralized services are collectively known as "the federated web"). Individuals or organizations can host their own "pods" (servers running the Diaspora software) so that the service is physically distributed across computers that are not concentrated under any one group. However, your identity on the network is portable, so the experience is similar to that of a centralized service: you can access any Diaspora pod without really noticing the difference.

For example, say a group of friends decides to host a Diaspora server (pod) and we all sign up to the service through it. We're free to interact with each other on it, and our personal data remains on that server, under our jurisdiction. We control access to it.

If you meet someone who is part of a different pod, that's no problem - you can still communicate with them because the service functions as a cohesive whole.


We could even go a step deeper than the software. Where needed, we should look to collectively defining protocols, or to wider adoption of those that already exist. A protocol is a set of standardized rules - a "language" - which developers can implement in their own software. Provided that everyone adheres to the protocol, different systems can communicate reliably. Thus individuals and communities can run OSS on their own servers, and these servers can communicate amongst each other. Though the hardware is distributed amongst independent hosts, the standardization at the software layer forms a cohesive whole in the final user experience. You can achieve the sensation of a centralized service, with the crucial difference that the data is not concentrated under the control of a single entity.
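
To make this concrete, here is a minimal sketch of what "adhering to a protocol" means in practice. The message format below is invented for illustration; the point is only that any two independently hosted implementations which agree on the rules can interoperate:

```python
import json

def encode_message(sender, recipient, body):
    """Serialize a message into the agreed-upon wire format."""
    return json.dumps({"version": 1, "from": sender, "to": recipient, "body": body})

def decode_message(raw):
    """Parse a message; reject anything that violates the rules."""
    msg = json.loads(raw)
    if msg.get("version") != 1 or not {"from", "to", "body"} <= msg.keys():
        raise ValueError("message does not conform to the protocol")
    return msg

# Two independently hosted servers can exchange this and agree on its meaning:
wire = encode_message("alice@pod-a.example", "bob@pod-b.example", "hi!")
print(decode_message(wire)["body"])
```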

Email protocols are a familiar example. There are a few which you may have seen when digging deep into your Gmail settings: SMTP (for outgoing mail), and IMAP and POP3 (for incoming mail). There are many, many different email services - Gmail, Hotmail, Yahoo, Fastmail, etc - yet they are all able to communicate with each other because they adhere to these common sets of rules. I can send an email from Gmail and a Hotmail user will receive it without issue. The added benefit here is that the user is not locked into any particular software experience - they can choose from many, or even roll their own, and they will all work so long as the software sticks to the protocol.
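
Sending mail over SMTP takes only a few lines with Python's standard smtplib; the addresses and credentials below are placeholders, but the same code works against any provider's SMTP server:

```python
import smtplib
from email.message import EmailMessage

# Compose a message (addresses are placeholders).
msg = EmailMessage()
msg["From"] = "alice@gmail.com"
msg["To"] = "bob@hotmail.com"
msg["Subject"] = "Hello across providers"
msg.set_content("Same protocol, different servers.")

# Speak SMTP to Gmail's outgoing mail server; Hotmail's servers
# will accept the message because they speak the same protocol.
with smtplib.SMTP("smtp.gmail.com", 587) as server:
    server.starttls()
    server.login("alice@gmail.com", "app-password")  # placeholder credentials
    server.send_message(msg)
```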

For example: imagine if you could favorite a tweet through Facebook. You can't right now, because the two services do not share a common standard for how a "favorite" is registered in the software. A favorite on Twitter is not equivalent to a like on Facebook, although what the user is trying to communicate through each action may be. For the user this is inflexible: your activity on one network is not at all portable to another network.

OStatus is an open protocol which attempts to standardize these social interactions. I could host my own social networking service adhering to the OStatus protocol, and someone else could run their own service completely independent of mine. So long as their service also implements the OStatus protocol, users will be able to interact across the platforms, and are thus afforded a mobility not present in the social networking ecosystem today.
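
As a rough sketch of what such standardization buys you, here is a hypothetical Activity Streams-flavored representation of a "favorite" (OStatus builds on vocabularies like Activity Streams, though the exact fields here are illustrative, not a verbatim OStatus payload):

```python
import json

# A hypothetical "favorite" activity in a shared vocabulary. Any service
# that understands the vocabulary can interpret it, whether the favorite
# originated on its own servers or on someone else's.
activity = {
    "verb": "favorite",
    "actor": {"objectType": "person", "id": "acct:alice@pod-a.example"},
    "object": {
        "objectType": "note",
        "id": "https://pod-b.example/notes/123",
        "content": "Decentralize all the things!",
    },
}

print(json.dumps(activity, indent=2))
```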


Here's a short list of alternatives to popular, centralized services:

Some of these services still require work and effort; certainly they are at a disadvantage when set against capital-laden organizations. But they are worthwhile projects and needed alternatives. The effort is worth it.


Of course, a major appeal of centralized services is their ease of use for non-technical folk. Someone with no experience provisioning servers will have a very hard time deploying a service on their own. It's often a pain even for me. And it's hard to fully appreciate the decentralization potential of the internet if you have no idea how it works. But these are problems which can be solved with some education and discussion.

It's clear that, with the tightening grip over the flow of users and their information across networks, the original dream of the internet has been lost - but it has also become more crucial than ever. The vision of communities running their own servers with open source software, so they have control over access to their own data, is still within reach.

1. It's worth noting that the physical connection between two computers on the internet is typically routed through other hardware controlled by others, such as your ISP. This is where strong encryption practices come into play: while your data travels through many other devices, practically speaking only you and your intended recipient have access to it. Furthermore, initiatives around mesh networking are trying to replace centralized ISP hardware with a distributed model of independently-run nodes.

2. OSS has the additional crucial quality of transparency: anyone can independently audit the code and influence its development (in theory at least: sometimes you have projects lorded over by a single owner). Its development is more likely to reflect the demands of the community which uses and depends on it, as opposed to an external party in what is inevitably an asymmetric relationship of service controller and end user. Not every user, of course, will be directly involved in its development, but OSS at least allows the possibility.


The Overmind vs the Hivemind

06.17.2014

I was introduced to the Overmind last week, a Starcraft (Brood War) AI developed at Berkeley. Starcraft is a "real-time strategy" (RTS) game in which two or more players face off, competing for resources and building troops to fight and ultimately destroy each other. The last one standing is the victor.

It's an interesting game to develop AI for because it requires a great deal of intricate management, both at the macro level (managing your economy; that is, resource gathering and allocation) and at the micro level (positioning and controlling your individual troops in battle). There's enough complexity that a game can go in many unexpected ways; it is an ideal environment for humans to creatively respond and adapt to - something which computers traditionally have a great deal of trouble with.

The Overmind AI controlling a fleet of Mutalisks (the orange and green flying alien creatures) with terrifying precision

When I think of "AI vs human", I tend to think of it in a sort of Deep Blue vs Garry Kasparov sense. A solitary expert against an expertly programmed machine. The machine proves its superiority by consistently beating the human. The Overmind fits that narrative (though it still had trouble with human professional Starcraft players).

It's interesting to think of the Overmind in contrast to Twitch Plays Pokemon, where masses of participants essentially button-mashed their way through Pokemon Red (the stream has since moved on to the other titles). Anyone can join the game's chat room and submit a button to press. The system had two modes, "anarchy" and "democracy": in the former, every command someone submitted was executed; in the latter, the next command was decided by consensus. It took 16 days of non-stop play to complete a game which typically takes a bit more than a day. TPP is the complete antithesis of unitary control by a program.

Twitch Plays Pokemon (via Wikipedia)
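
For the curious, here is a toy sketch of the difference between the two modes; the valid button set and the idea of a voting window are simplifications of how TPP actually worked:

```python
from collections import Counter

VALID_BUTTONS = {"up", "down", "left", "right", "a", "b", "start", "select"}

def anarchy(commands):
    """Anarchy mode: every valid submitted command is executed, in order."""
    return [c for c in commands if c in VALID_BUTTONS]

def democracy(commands):
    """Democracy mode: only the most popular command in the window runs."""
    votes = Counter(c for c in commands if c in VALID_BUTTONS)
    return [votes.most_common(1)[0][0]] if votes else []

chat = ["up", "up", "left", "a", "up", "start"]  # one window of chat submissions
print(anarchy(chat))    # ['up', 'up', 'left', 'a', 'up', 'start']
print(democracy(chat))  # ['up']
```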

Sci-fi AI narratives (and the AI field's ultimate aspirations) are far more ambitious than these video game bots. In Iain M. Banks' The Culture, general artificial intelligences are organizing galactic societies, thus liberating organic beings from the complexities of deciding policy and the stress of providing for and governing themselves.

This is a huge leap from video game bots - impressive as they are - but it was explored long before the conception of these bots, as early as the mid-20th century, in Soviet Russia. Francis Spufford's Red Plenty details an attempt at integrating computers into the central planning model:

Soviet planners, economists, physicists and mathematicians...persuaded the Soviet leadership that, using cybernetic principles and the newly developed computers, the centralised, planned Soviet economy could at last be made efficient.

Francis Spufford's Red Plenty

These ambitions for AI are far removed from one-on-one video game contests. Rather than substituting for an individual, these programs are meant to replace the collective decision-making capacity of entire societies of people. Naturally, at this scale the evaluation of success is much murkier than in a game with simple victory conditions. And when we begin discussing decisions which impact people's lives, the particular strategies the AI uses become matters of ethics rather than merely mechanics.

It feels as if we are still quite a long way off from a managerial AI becoming a reality. But with the frenzy around big data, where companies like Palantir aggregate massive amounts of disparate data to draw and act upon high-level conclusions, the technical feasibility of a computational caretaker (/overlord) will only grow. When will we see organizations managed by "Computer Executive Officers" instead of by people? In The Culture, such artificial intelligences are the foundation for utopia, but they are equally the fodder of nightmarish futures - Skynet, ARIIA, etc. Will we be willing to use them?


Virtual Reality Workspaces & Pop Holo-stars

06.16.2014

Johann showed me a cool demo of Motorcar Compositor, a virtual reality workspace combining the Oculus Rift and Razer's Hydra controller:

I like the idea of a digital working environment feeling more like a physical workspace.


I had first encountered the Oculus and Hydra pairing in this great VR demo with holographic pop star Hatsune Miku:

It shows a pretty ingenious use of the Hydra as a way of emulating finger positioning in binary terms: is a particular finger folded or extended? While a finger is pressing its button, that finger is read as folded in; otherwise, it is read as extended:

Hydra controller
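
A toy sketch of that binary mapping follows; the button names are invented for illustration and don't correspond to the Hydra's actual API:

```python
# Hypothetical button-to-finger assignments (pressed button == folded finger).
BUTTON_TO_FINGER = {
    "trigger": "index",
    "bumper": "middle",
    "button_3": "ring",
    "button_4": "pinky",
}

def finger_pose(pressed_buttons):
    """Return each finger's binary state from the set of held buttons."""
    return {
        finger: ("folded" if button in pressed_buttons else "extended")
        for button, finger in BUTTON_TO_FINGER.items()
    }

print(finger_pose({"trigger", "bumper"}))
# {'index': 'folded', 'middle': 'folded', 'ring': 'extended', 'pinky': 'extended'}
```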


GTFO: Moderation in online communities

06.12.2014


Disruptive behavior threatens both offline and online communities and participatory systems. It is maybe the single greatest source of apprehension in these discussions – what are the chances that someone, or a group of anonymous someones, hijacks the system and ruins it for everyone? Or, to flip the issue on its head: how can we ensure that participation in a community is productive and positive[1]? This is especially an issue in digital communities – social networks, comments sections, internet forums, online multiplayer video games, etc – where the question of anonymity exacerbates these worries tenfold. The barbaric "solution" here has been simply to disallow anonymity.

Even without anonymity, vast geographical distances, disconnected social circles, communicating from the safe perch of your own home, and many other factors still contribute to the online (toxic) disinhibition effect, known less generously as the "Greater Internet Fuckwad Theory": people behave worse in cyberspace than they would in meatspace (obvi). Real names and identities don't stop respected journalists from engaging in petty Twitter fights.

Clay Shirky explains: when online, "[t]here's a large crowd and you can act out in front of it without paying any personal price to your reputation," which "creates conditions most likely to draw out the typical Internet user's worst impulses."

Behaving Badly

Online communities typically thrive by building very dynamic memberships that anyone can easily join. This lets communities grow quickly, sometimes allowing them to operate at enormous global scales. These qualities create communities ripe for experimentation, which, like science in meatspace, can lead to great good, great evil, and great nothing. On the evil end of the spectrum, some communities become breeding grounds for abusive behaviors.

Without getting into the complex psychosocial roots of abusive behavior, it is clearly a major detriment to communities. It prohibits constructive discussion, intimidates new members, and fosters a harmful culture which becomes exclusionary and close-minded. In the popular video game Dota 2, it was found that most new players quit not because they lose games, but because other players are abusive towards them.

So some system of curtailing abusive behavior – often referred to as "toxic" to capture its infectious nature – is necessary in digital communities in order to provide as safe a space as possible for their members: one where opinions, discussion, content, etc can flow freely without fear of persecution; one where anonymity is a tool for safety and expression and not something to be feared.

The most popular approach to curbing abusive behavior is a process that combines moderation and punishment. First, content is "moderated" – deemed abusive (or merely unwanted) and then deleted – and second, the offending user is punished/disciplined, for example by being temporarily or permanently banned from using that service. Conceptually, this process is the simplest, and thus the easiest to implement and understand.
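
A minimal sketch of this moderate-then-punish flow is below; the offense threshold and ban lengths are invented for illustration:

```python
from datetime import datetime, timedelta

bans = {}  # username -> ban expiry datetime (None means permanent)

def handle_report(post, is_abusive, prior_offenses):
    """Step 1: moderate the content; step 2: discipline the user."""
    if not is_abusive:
        return post  # the content stands
    # Moderation: the offending content is deleted (we return None).
    # Punishment: repeat offenders face escalating bans.
    if prior_offenses >= 3:
        bans[post["author"]] = None  # permanent ban
    else:
        bans[post["author"]] = datetime.now() + timedelta(days=1)  # temporary ban
    return None
```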

Moderation

This moderation step is typically carried out by a small group of appointed moderators (or even a single moderator) who scan for "inappropriate" content or respond to content flagged as such by users. The moderator then decides whether or not to punish the user, and executes that decision – with or without discussion with fellow moderators.

Naturally, a justice process which does not directly involve members of its community raises suspicion. Nor does it function particularly well. There is a legacy of moderator abuse, favoritism, and corruption, where the very system meant to maintain the quality of a group leads to its demise. Users feel persecuted or unfairly judged, and there is seldom a formal process for appeal. In large communities – Reddit's r/technology has over 5 million users, and has had its share of mod drama – an appeal process may seem impractical to implement. The assurance of success in such a system is about the same as in any where authority is concentrated in one or a few – it's the same as hoping for a kind despot or benevolent dictator, one who happens to have your interests at heart.

One major issue with the appointed moderator system is that the consolidation of moderation power is often damaging to a community. A mod can harm the very discourse they are moderating simply by moderating it as they see fit, which might not represent the interests of the community's members.

When designing infrastructure for any community, whether it be a multiplayer video game or an internet forum, the power of moderation must be distributed amongst the users, so that they themselves are able to dictate how the community evolves and grows. In this way, judgements of abusive behavior reflect the actual sentiment of the community as a whole, as opposed to the idiosyncrasies of a stranger, as it often is in far-flung and large digital communities.

Here are two moderation systems which have novel and effective approaches.


Slashdot

Slashdot takes an interesting approach with a distributed moderation system. In its halcyon days, Slashdot relied on a group of 25 mods who oversaw a proportionally small community that created a manageable amount of abuse. When the site exceeded the capacity of this small team, the number of moderators swelled to 400, and "[i]mmediately several dozen of these new moderators had their access revoked for being abusive."

Bizarrely, the solution was to expand moderation to all of the site's users. Not all at once: any participant who satisfies a few very basic criteria can now be drafted for a term of moderator duty. Thus each member of the community is given the opportunity to assert his or her vision for its growth. And over time – the assumption goes – moderation decisions come to reflect the common will of the group.

In this mass moderation system, a new concern arises: what if a citizen moderator uses their tenure to abuse their privileges? This echoes more general fears that there is a certain kind of person best suited for positions of power: someone with the moral aptitude necessary for navigating potentially compromising and difficult decisions, an understanding of the implications of that power, and the self-restraint to forgo that power when needed.

To curb the abusive potential inherent in this new system, Slashdot introduced a "metamoderation" system, which operates on principles similar to their mass moderation system: anyone satisfying a few more basic criteria can serve as a metamoderator. Metamods judge the fairness or accuracy of the decisions of other moderators, and these decisions are used to calibrate the selection of moderators. Moderators whose decisions are consistently contested have less of a chance of being selected to moderate next time. Conversely, moderators whose decisions are more representative of the community's values have their moderation prospects elevated.
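
A back-of-the-envelope sketch of how that calibration could work is below. The weighting formula and numbers are invented for illustration; Slashdot's actual karma and selection rules are more involved:

```python
import random

def selection_weight(fair_votes, unfair_votes):
    """More contested decisions -> lower chance of future moderator duty."""
    total = fair_votes + unfair_votes
    return 1.0 if total == 0 else fair_votes / total

# (fair, unfair) metamoderation votes on each moderator's past decisions:
moderators = {"alice": (18, 2), "bob": (5, 15), "carol": (10, 10)}
weights = {name: selection_weight(f, u) for name, (f, u) in moderators.items()}

# Draft the next moderator in proportion to their track record:
names = list(weights)
drafted = random.choices(names, weights=[weights[n] for n in names], k=1)[0]
print(weights, "->", drafted)
```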

A big question here: Slashdot is a relatively homogenous group when compared to something as massive as Twitter. Does such a system translate to these massive social networks, on which many communities with varying value systems exist?


League of Legends

League of Legends (LoL) is a massively popular Multiplayer Online Battle Arena (MOBA) game with over 27 million daily active players. MOBAs are games based heavily on team play and cooperation amongst players; preventing abusive behavior is crucial to the enjoyment of the game because such behavior undermines the fundamental mechanic of teamwork.

I'm not a LoL player myself (I prefer Dota) but LoL's studio, Riot Games, has made great efforts at improving the game's community and player experience (a talk on their behavioral approaches can be found here). Perhaps their most renowned tool has been the Tribunal[2], which allows players to pass judgement ("pardon or punish") on peers who have been repeatedly reported for bad behavior, providing access to context such as chat logs as part of a "case" against the accused. These cases are assembled out of multiple instances of reported abuse so that only players who are consistently disruptive are tried, and those who have the occasional bad day are excused (unless it becomes a habit).
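
Here's a rough sketch of that case-assembly logic, with an invented report threshold:

```python
from collections import defaultdict

CASE_THRESHOLD = 5  # reports required before a case goes to review (invented)

reports = defaultdict(list)  # accused player -> list of chat-log excerpts

def report(player, chat_log):
    """File a report; a case is assembled once enough reports accumulate."""
    reports[player].append(chat_log)
    if len(reports[player]) >= CASE_THRESHOLD:
        return {"accused": player, "evidence": list(reports[player])}
    return None  # the occasional bad day doesn't go to trial

def verdict(votes):
    """Reviewers vote "pardon" or "punish"; the majority decides."""
    return "punish" if votes.count("punish") > votes.count("pardon") else "pardon"
```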


Riot Games has found the Tribunal program to be successful: an audit of the community's decisions showed that 80% of the time, the players' verdict aligned with the staff's. Riot has since explored additional approaches to encouraging positive participation amongst its players.


Both of these moderation systems outsource the community's justice process to the community itself, providing members with the tools to determine and enforce their own norms and values. But mass moderation only helps in coming to a consensus about what is problematic. It does not directly address how such behavior, once identified, should be handled. The approaches to discipline are myriad, and the choice of method affects a community's trajectory just as much as what it considers disruptive.

1. These terms – "productive", "positive", etc – are all subjective and encase the biases of the system's creators or managers. They often express a fear more honestly stated as: "What if people don't behave how I *want* them to?" or "What if people don't think the same way I do?". There is always the possibility that a community is used, or develops, in ways far removed from the creators'/managers' original vision. That is an exciting possibility, but not always recognized as such.

2. On the flip side, Riot Games has also implemented an "Honor" system, which is a point-based system allowing players to recognize other players who make positive contributions to the game experience. I can't tell if these Honor points in any way influence participation in the Tribunal system (which would make the system more comparable to Slashdot's metamoderation), but it at least incentivizes good behavior as a visible signifier amongst others.


Let them be Orks

06.11.2014

The Orks in W40K: Dawn of War II

Within the Warhammer 40K universe is a prominent alien species known as the "Orks", notorious for their infinite dim-wittedness, reflexively aggressive nature, and staggeringly large numbers.

For the upcoming Warhammer 40K MMO, Warhammer 40K: Eternal Crusade, the producers were faced with an issue: the largest share of players (40.7%) wanted to play as a single faction, the Space Marines. But there are supposed to be far more Orks than Space Marines in the Warhammer universe; this imbalance would throw the dynamics of the game out of sync with canon.

The game's solution: while the MMO is, like many, pay-to-play, there is one exception - it is free-to-play if you play as an Ork.

The expected result is that this design choice will bring the game's universe more closely in line with the canonical Warhammer 40K universe. There will be staggeringly large numbers of Orks, and their ranks will be filled by cheap, reprehensible players who abuse the no-cost system by abusing the other players. That is, they'll also reflect the behavior of Orks in the canonical universe.

The idea of leveraging a fictional world's narrative need for abusive player behavior is brilliant. Will players be more accepting of it?
