No Monoliths

  • 9th Oct, 2014
  • Francis
  • etc
Image from Tim Jarvis.

The dust has mostly settled around Ello, the new social network that promises no tracking of user information and no selling of user data to advertisers. Ello has been popular and refreshing because of these policies. Of course, there is concern and criticism around whether the folks behind Ello can be taken at their word, and around the long-term business viability of this strategy – their current plan (as of 10/08/2014) for revenue is selling premium features:

We occasionally offer special features to our users. If we create a special feature that you really like, you may choose to support Ello by paying a very small amount of money to add that feature to your Ello account.

And since Ello, it seems, blew up prematurely, its privacy policy has also drawn fire, though the team has promised to tighten things up.

Aside from these problems, which are understandable for any nascent social network, Ello has been a great opportunity to evaluate what we want and expect from social networks, and where the current dominant services fall short. Ello has frequently been contrasted against Facebook (naturally), whose policies at least some subset of its users invariably finds problematic. Facebook's selling of personal data and excessive tracking are the main things Ello positions itself against, and a big part of what has drawn people to the platform.

I don't think Ello is a solution to these problems. Ello only challenges the symptoms of social networks like Facebook – that is, of a centrally-controlled social network that, whether intentionally or not, is perceived to be the one platform everyone needs to be on. For social networking services structured in this way, I believe it is inevitable that user data eventually gets collected (and on Ello, it does, ostensibly to improve the site) and sold. It is a feature of this structure that you, as a user, must entrust your data to a third party you do not know and with which you have no personal relationship. What happens with that data is then at the discretion of that third party.

To really resolve these problems, we must challenge the fundamental structure of these services. An ideal solution is one similar to that which the ill-fated Diaspora pursued, but perhaps taken a step further. I would love to see a service which allows users to self-host their own social network for their friends or community/communities (or for the less technically-oriented, spawn a cloud-hosted version at the click of a button).

Each community then has the opportunity to manage its own data, implement its own policies, make its own decisions about financial support for the network (e.g. have users pay membership fees, run on donations, or even sell data for ads if the users are ok with it).

Each network can run independently on its own social norms, but all the networks are technically interoperable. So if I wanted to I could join multiple networks, post across networks, and so on, all with the same identity.

But this kind of plurality of networks better acknowledges that people have multiple identities for different social contexts, something which monolithic social networks (i.e. one platform for all things social) are not well-suited for. With the latter, all activity is tied to one identity, and a great deal of manual identity management is needed to keep each social sphere properly contained. With this parallel network design, users can – if they want – share an identity across multiple networks, or they can have different identities for different networks, all linked (for the user) to a private master identity, making management a bit easier. To others, each identity appears as a distinct user.

It's something worth trying.


The Dream of the Internet

  • 21st Jun, 2014
  • Francis
  • etc

Follow Your Dreams

It is upsetting though unsurprising that, amidst our current internet-driven company bonanza, much of the original spirit of the internet has been lost. Yes, most internet companies are founded on networking ideals of connection and communication – the most universally lauded (read: marketed) aspect of internet services – but these manipulatively uplifting appeals overshadow, perhaps intentionally, an equally powerful promise of the internet's architecture.

The internet's architecture is decentralized by nature: it is about individual computers communicating with one another[1]. But the dominant model is completely antithetical to that: communication between users on the internet is now almost entirely mediated through corporate-controlled, centralized servers (e.g. the servers of social networking services).

The internet services through which most of us find value are typically centralized. When I access Gmail, I am gathering my mail from servers concentrated under Google. When I'm syncing files from Dropbox, those files are coming from servers concentrated under Dropbox. When I send messages to friends on Facebook, those messages are routed through servers concentrated under Facebook.

Centralized internet

If you've been around for the past couple of decades, then you are well aware that the no-cost distribution and infinite replicability of digital information has undermined many industries founded on the scarcity of their products (i.e. piracy). Anyone appreciative of this unique quality of digital information is likely to wonder: why is this model of infinite replicability and distribution paradoxically absent from services – Google, Dropbox, Facebook, et al – which are digital, born and bred?

It is because the business models of these companies are not about providing digital services. They are about consolidation. Their value derives directly from being centrally positioned within the network and extracting data (and data == value) from all the communication that must pass through that position.

This consolidation is created by controlling access. When discussing internet services, we tend to gloss over their material foundations. But these companies are about creating a scarcity of service, which is rooted in the concentration of the hardware running the service. Control over the software – that is, the prevention of its distribution and replication – is necessary because it enables control over the hardware as well. Only Facebook, Inc. can administer the Facebook software; thus it will run only on their hardware. If I want to access the service, I must access it on their hardware. Thus my communication must go through their hardware, the wellspring of their value. And because that hardware belongs to them, they control access to it – even if their user policies say otherwise.

The principles of open source software (OSS)[2] are meant to counteract this balkanization of the internet (or the formation of the "Splinternet"). OSS is fundamental to supporting the decentralized spirit of the internet. Anyone can run OSS on their own servers and provide the service to their own community, or simply to themselves. I don't have to access the service on an untrusted party's hardware: I can access it on a friend's server or even on my own computer. Open source software enables the freedom to access services on hardware that you, or someone you trust, controls.

Decentralized internet

Diaspora is an example of an open source, decentralized alternative to conventional social networking services (such alternatives are collectively known as "the federated web"). Individuals or organizations can host their own "pods" (servers running the Diaspora software) so that the service is physically distributed across computers that are not concentrated under any one group. However, your identity on the network is portable, so the experience is similar to that of a centralized service: you can access any Diaspora pod without really noticing the difference.

For example, say a group of friends decides to host a Diaspora server (pod) and you all sign up to the service through it. You're free to interact with each other on it, and your personal data remains on that server, under your jurisdiction. You control access to it.

If you meet someone who is part of a different pod, that's no problem – you can still communicate with them because the service functions as a cohesive whole.

We could even go a step deeper than the software. Where needed, we should look to the level of collectively defining protocols, or more widespread adoption of those that already exist. A protocol is a set of standardized rules or a "language" which developers can implement in their own software. Provided that everyone adheres to the protocol, different systems can communicate reliably. Thus individuals and communities can run OSS on their own servers, and these servers can communicate amongst each other. Though the hardware is distributed amongst independent hosts, the standardization at the software layer forms a cohesive whole in the final user experience. Thus you can achieve the sensation of a centralized service with the crucial feature that the data is not concentrated under the control of a single entity.

A familiar example of such protocols are the email protocols. There are a few which you may have seen when digging deep into your Gmail settings: SMTP (for outgoing mail), and IMAP and POP3 (for incoming mail). There are many, many different email services – Gmail, Hotmail, Yahoo, Fastmail, etc – yet they are all able to communicate with each other because they adhere to these common sets of rules. I can send an email from Gmail and a Hotmail user will receive it without issue. The added benefit here is that the user is not locked into any particular software experience – they can choose from many, or even roll their own, and they will all work so long as they stick to the protocol.
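As a minimal sketch of that interoperability from a program's point of view, here is how a standard email message is built using Python's standard library (the addresses and the commented-out server details are hypothetical). The point is that the message format and delivery rules are standardized, so no single provider owns them:

```python
from email.message import EmailMessage

# Build a standard RFC 5322 message. Any SMTP server can relay it, and
# any IMAP/POP3 client can fetch and parse it, because every party
# agrees on the same protocols rather than one vendor's format.
msg = EmailMessage()
msg["From"] = "alice@gmail.com"    # hypothetical sender
msg["To"] = "bob@hotmail.com"      # hypothetical recipient, different provider
msg["Subject"] = "Protocols in action"
msg.set_content("This message crosses providers because both speak SMTP.")

# Delivery would be a single handoff to the sender's outgoing server, e.g.:
#   import smtplib
#   with smtplib.SMTP("smtp.gmail.com", 587) as s:  # hypothetical server
#       s.starttls()
#       s.login("alice@gmail.com", "password")
#       s.send_message(msg)
print(msg["To"])
```

Nothing in the message itself cares which client or provider handles it next – that neutrality is exactly what the centralized services of the previous section lack.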

For example: imagine if you could favorite a tweet through Facebook. You can't right now because they do not share a common standard on how a "favorite" is registered in the software. A favorite on Twitter is not equivalent to a like on Facebook, although what the user is trying to communicate through each action may be equivalent. As a user this becomes inflexible: your activity on one network is not portable at all to another network.

OStatus is an open protocol which attempts to standardize these social interactions. I could host my own social networking service adhering to the OStatus protocol and someone else could have their own service completely independent to mine. So long as their service also implements the OStatus protocol, users will be able to interact across the platforms, and are thus afforded a unique mobility not present in the social networking ecosystem today.

Here's a short list of alternatives to popular, centralized services:

Some of these services still require work and effort; certainly they are at a disadvantage when against capital-laden organizations. But they are worthwhile projects and needed alternatives. The effort is worth it.

Of course, a major appeal of centralized services is their ease of use for non-technical folk. Someone with no experience provisioning servers will have a very hard time deploying a service on their own. It's often a pain even for me. And it's hard to fully appreciate the decentralization potential of the internet if you have no idea how it works. But these are problems which can be solved with some education and discussion.

It's clear that, with a tightening grip over the flow of users and their information across networks, the original dream of the internet has been lost, but it has also become more crucial than ever. The vision of communities running their own servers with open source software, so that they control access to their own data, is still within reach.

1. It's worth noting that the physical connection between two computers on the internet is typically routed through other hardware controlled by others, such as your ISP. This is where strong encryption practices come into play: while your data travels through many other devices, practically speaking only you and your intended recipient have access to it. Furthermore, initiatives around mesh networking are trying to replace centralized ISP hardware with a distributed model of independently-run nodes.

2. OSS has the additional crucial quality of transparency: anyone can independently audit the code and influence its development (in theory at least: sometimes projects are lorded over by a single owner). Its development is more likely to reflect the demands of the community which uses and depends on it, as opposed to an external party in what is inevitably an asymmetric relationship of service controller and end user. Not every user, of course, will be directly involved in its development, but OSS at least allows the possibility.


Virtual Reality Workspaces & Pop Holo-stars

  • 16th Jun, 2014
  • Francis
  • etc

Johann showed me a cool demo of Motorcar Compositor, a virtual reality workspace combining the Oculus Rift and Razer's Hydra controller:

I like the idea of a digital working environment feeling more like a physical workspace.

I had first encountered the Oculus and Hydra pairing in this great VR demo with holographic pop star Hatsune Miku:

It shows a pretty ingenious use of the Hydra to emulate finger positioning in binary terms: is a particular finger folded or extended? While a finger is pressing its button, that finger is rendered as folded in; otherwise, it is extended:

Hydra controller


GTFO: Moderation in online communities

  • 12th Jun, 2014
  • Francis
  • etc

༼ つ ◕_◕ ༽つ

Disruptive behavior threatens both offline and online communities and participatory systems. It is maybe the single greatest source of apprehension in these discussions – what are the possibilities that someone, or a group of anonymous someones, hijacks the system and ruins it for everyone? Or, flip the issue on its head: how can we ensure that participation in a community is productive and positive[1]? It is especially an issue in digital communities – social networks, comments sections, internet forums, online multiplayer video games, etc – where the question of anonymity exacerbates these worries tenfold. The barbaric "solution" here has been simply to disallow anonymity.

Even without anonymity, vast geographical distances, disconnected social circles, communicating from the safe perch of your own home and many other factors still contribute to this online (toxic) disinhibition effect, known less generously as the "Greater Internet Fuckwad Theory": people behave worse in cyberspace than they would in meatspace (obvi). Real names and identities don’t stop respected journalists from engaging in petty Twitter fights.

Clay Shirky explains: When online, "[t]here’s a large crowd and you can act out in front of it without paying any personal price to your reputation," which "creates conditions most likely to draw out the typical Internet user’s worst impulses."

Behaving Badly

Online communities typically thrive by building very dynamic memberships that anyone can easily join. This lets communities grow quickly, sometimes allowing them to operate at enormous global scales. These qualities create communities ripe for experimentation, which, like science in meatspace, can lead to great good, great evil, and great nothing. On the evil spectrum, some communities become breeding grounds for abusive behaviors.

Without getting into the complex psychosocial roots of abusive behavior, it is clearly a major detriment to communities. It prohibits constructive discussion, intimidates new members and fosters a harmful culture which becomes exclusionary and close-minded. In the popular video game Dota 2, it was found that most new players quit not because they lost games, but because other players were abusive towards them.

So some system of curtailing abusive behavior–often referred to as "toxic" to capture its infectious nature–is necessary in digital communities in order to provide as safe a space as possible for its members. One where opinions, discussion, content, etc can flow freely without fear of persecution. One where anonymity is a tool for safety and expression and not something to be feared.

The most popular approach to curbing abusive behavior is a process that combines moderation and punishment. First, content is "moderated" – deemed abusive (or merely unwanted) and then deleted – and second, the offending user is punished/disciplined, for example by being temporarily or permanently banned from using that service. Conceptually, this process is the simplest and thus the easiest to implement and understand.


This moderation step is typically realized through a small group of appointed moderators (or even a single moderator) who scan for "inappropriate" content or respond to content flagged as such by users. A moderator then decides whether or not to punish the user, and executes that decision – with or without discussing it with fellow moderators.

Naturally, a justice process which does not directly involve members of its community raises suspicion. Nor does it function particularly well. There is a legacy of moderator abuse, favoritism, and corruption in which the very system meant to maintain the quality of a group leads to its demise. Users feel persecuted or unfairly judged, and there is seldom a formal process for appeal. In large communities – Reddit's r/technology has over 5 million users, and has had its share of mod drama – an appeal process may seem impractical to implement. The odds of such a system succeeding are about the same as in any arrangement where authority is concentrated in one or a few – it amounts to hoping for a kind despot or benevolent dictator, one who happens to have your interests at heart.

One clear issue with the appointed moderator system is that the consolidation of moderation power is often damaging to a community. A mod can harm the very discourse they are moderating simply by moderating it as they see fit, which may not represent the interests of the community's members.

When designing infrastructure for any community, whether it be a multiplayer video game or an internet forum, the power of moderation must be distributed amongst the users, so that they themselves are able to dictate how the community evolves and grows. In this way, judgements of abusive behavior reflect the actual sentiment of the community as a whole, as opposed to the idiosyncrasies of a stranger, as it often is in far-flung and large digital communities.

Here are two moderation systems which have novel and effective approaches.

༼ つ ◕_◕ ༽つ


Slashdot takes an interesting approach with a distributed moderation system. In its halcyon days, Slashdot relied on a group of 25 mods who oversaw a proportionally small community that created a manageable amount of abuse. When the site exceeded the capacity of this small team, the number of moderators swelled to 400, and "[i]mmediately several dozen of these new moderators had their access revoked for being abusive."

Counterintuitively, the solution was to expand moderation to all of the site's users. Not all at once: any participant who satisfies a few very basic criteria can now be drafted for a term of moderator duty. Thus each member of the community is given the opportunity to assert his or her vision for its growth. And over time – the assumption goes – the moderation decisions come to reflect the common will of the group.

In this mass moderation system, a new concern arises: what if a citizen moderator uses his tenure to abuse his privileges? This echoes a more general fear that there is a certain kind of person best suited for positions of power: those who have the moral aptitude necessary for navigating potentially compromising and difficult decisions, who possess an understanding of the implications of that power, and the self-restraint to forgo it when needed.

To curb the abusive potential inherent in this new system, Slashdot introduced a "metamoderation" system, which operates on principles similar to their mass moderation system: anyone satisfying a few more basic criteria can serve as a metamoderator. Metamods judge the fairness or accuracy of the decisions of other moderators, and these decisions are used to calibrate the selection of moderators. Moderators whose decisions are consistently contested have less of a chance of being selected to moderate next time. Conversely, moderators whose decisions are more representative of the community's values have their moderation prospects elevated.
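The feedback loop between metamoderation verdicts and moderator selection can be sketched in a few lines of Python. This is an illustration of the principle only, not Slashdot's actual algorithm or weights, and all the names and numbers are made up:

```python
import random

# Each moderator's past decisions are judged "fair" or "unfair" by
# metamoderators; the resulting score weights their chance of being
# drafted for the next round of moderator duty.
def fairness_score(fair_votes, unfair_votes):
    total = fair_votes + unfair_votes
    if total == 0:
        return 0.5  # no history yet: average standing
    return fair_votes / total

def draft_moderators(candidates, k, seed=None):
    """candidates: dict of name -> (fair_votes, unfair_votes).
    Fills k moderation slots, weighted by fairness score. A real
    system would also avoid drafting the same person twice."""
    rng = random.Random(seed)
    names = list(candidates)
    weights = [fairness_score(*candidates[n]) for n in names]
    # Consistently contested moderators get low weights and are rarely
    # selected; representative ones are favored.
    return rng.choices(names, weights=weights, k=k)

pool = {"alice": (40, 2), "bob": (5, 45), "carol": (0, 0)}
print(draft_moderators(pool, k=3, seed=1))
```

The key property is that no one's moderation power is permanent: it is continuously re-earned through the community's own judgement of past decisions.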

A big question here: Slashdot is a relatively homogeneous group compared to something as massive as Twitter. Does such a system translate to these massive social networks, on which many communities with varying value systems coexist?

༼ つ ◕_◕ ༽つ

League of Legends

League of Legends (LoL) is a massively popular Multiplayer Online Battle Arena (MOBA) game with over 27 million daily active players. MOBAs are games based heavily on team play and cooperation amongst players; prevention of abusive behavior is crucial to the enjoyment of the game because it undermines the fundamental mechanic of teamwork.

I'm not a LoL player myself (I prefer Dota) but LoL's studio, Riot Games, has made great efforts at improving the game's community and player experience (a talk on their behavioral approaches can be found here). Perhaps their most renowned tool has been the Tribunal[2], which allows players to pass judgement ("pardon or punish") on peers who have been repeatedly reported for bad behavior, providing access to context such as chat logs as part of a "case" against the accused. These cases are assembled out of multiple instances of reported abuse so that only players who are consistently disruptive are tried, and those who have the occasional bad day are excused (unless it becomes a habit).

༼ つ ◕_◕ ༽つ

Riot Games has found the Tribunal program to be successful. An audit of the community's decisions showed that 80% of the time, the players' verdict was aligned with the staff's. Riot has since explored other additional approaches to encouraging positive participation amongst its players.

Both of these moderation systems outsource the community's justice process to the community itself, providing it with the tools to determine and enforce its own norms and values. But mass moderation only helps in coming to a consensus about what is problematic. It does not directly address how, once identified, such behavior should be handled. The approaches to discipline are myriad, and the choice of method affects a community's trajectory just as much as what it considers disruptive.

1. These terms – "productive", "positive", etc – are all subjective and encode the biases of the system's creators or managers. It is often the expression of a fear more honestly stated as: "What if people don't behave how I *want* them to?" or "What if people don't think the same way I do?". There is always the possibility that a community is used, or develops, in ways far removed from the creators'/managers' original vision. That is an exciting possibility, but not always recognized as such.

2. On the flip side, Riot Games has also implemented an "Honor" system, which is a point-based system allowing players to recognize other players who make positive contributions to the game experience. I can't tell if these Honor points in any way influence participation in the Tribunal system (which would make the system more comparable to Slashdot's metamoderation), but it at least incentivizes good behavior as a visible signifier amongst others.


Let them be Orks

  • 11th Jun, 2014
  • Francis
  • etc

The Orks in W40K: Dawn of War II


Within the Warhammer 40K universe is a prominent alien species known as the "Orks", notorious for their infinite dim-wittedness, reflexively aggressive nature, and staggeringly large numbers.

For the upcoming Warhammer 40K MMO, Warhammer 40K: Eternal Crusade, the producers were faced with an issue: a plurality of players (40.7%) wanted to play as one of the other factions, the Space Marines. But there are supposed to be far more Orks than Space Marines in the Warhammer universe. This imbalance would throw the dynamics of the game out of sync with canon.

The game's solution: the MMO, like many, is pay-to-play – with one exception: it's free-to-play if you play as an Ork.

The expected result is that this design choice will cause the game's universe to more accurately reflect the canonical Warhammer 40K universe. There will be staggeringly large numbers of Orks, and they'll be populated by cheap, reprehensible players who will abuse the no-cost system by abusing the other players. That is, they'll also reflect the behavior of Orks in the canonical universe.

The idea of leveraging a fictional world's narrative need for abusive player behavior is brilliant. Will players be more accepting of it?


BitTorrent Chat

  • 10th Jun, 2014
  • Francis
  • etc

BitTorrent has been making some great spin-offs of the BitTorrent protocol, ones which are more socially sanctioned than the protocol's most ubiquitous use.

BitTorrent Sync, for instance, is a decentralized file sharing alternative to services like Dropbox – instead of your files going to a central server owned by another organization, files are synced using the BitTorrent protocol across your various devices. It's quite fast and your files only ever exist on devices you control.

bittorrent chat

The one I'm most excited about is BitTorrent Chat. It's more experimental, but it uses the protocol for decentralized, anonymous, and encrypted communication.

Once upon a time, BitTorrent did require a central server of some kind – a tracker, which keeps track of the other peers in the "swarm" so the client knows whom to connect to. Pretty much all other chat protocols likewise require a central server to coordinate the messaging.

But since then an alternative has taken over: distributed hash tables (DHTs), where peers are located via other peers (that is, in a decentralized manner). BT Chat uses a DHT (updated to support encryption) to remove the need for a central server altogether.
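To give a feel for how peers find each other without a tracker, here is a toy Python sketch of the Kademlia-style XOR metric behind BitTorrent's Mainline DHT. The peer names and lookup target are made up, and a real implementation involves routing tables and iterative network queries rather than a flat list:

```python
import hashlib

# In a Kademlia-style DHT, every node and every lookup target gets an
# ID in the same space, and "closeness" is XOR distance. To find a
# peer, you repeatedly ask the closest nodes you know about for nodes
# even closer to the target ID -- no central tracker required.
def node_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_nodes(known_ids, target, k=3):
    """Return the k known node IDs nearest the target (one lookup step)."""
    return sorted(known_ids, key=lambda n: xor_distance(n, target))[:k]

known = [node_id(f"peer-{i}") for i in range(100)]   # hypothetical peers
target = node_id("some-chat-partner")                # hypothetical target
print(closest_nodes(known, target))
```

Each round of asking the current closest nodes for even closer ones converges on the target quickly, which is why the lookup scales without any central coordinator.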

Users are identified at most by their public encryption keys, and the service uses forward secrecy; that is, for each conversation, a short-term session key is derived, so that a future compromise of the participants' long-term keys does not compromise past chats.

Basically what this amounts to is:

  • you communicate directly with whoever you're talking to, without going through any machine you don't control (aside from DHT to locate them in the first place).
  • all your communications are encrypted so that no one can snoop your messages as they go.
  • even if some attacker gets a hold of your long-term keys, the use of forward secrecy means that they can't decrypt any of your past communications.
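For the curious, the ephemeral-key idea behind forward secrecy can be shown with a toy Diffie-Hellman exchange in Python. This is illustrative only: the prime here is far too small for real use, real systems use vetted curves and authenticated exchanges, and BitTorrent hasn't published Chat's exact scheme:

```python
import secrets

# Toy Diffie-Hellman sketch of forward secrecy. Each conversation uses
# fresh ephemeral keys; the shared session key is computed independently
# by both sides and discarded afterwards, so compromising anyone's
# long-term keys later does not expose past sessions.
P = 0xFFFFFFFB  # a small public prime (2**32 - 5), for illustration only
G = 5           # public generator

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one session."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Alice and Bob each generate keys for this session only.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both arrive at the same session key without ever transmitting it.
alice_key = pow(b_pub, a_priv, P)
bob_key = pow(a_pub, b_priv, P)
assert alice_key == bob_key
```

Because the private halves are generated per session and thrown away, there is simply nothing left for an attacker to steal later that would unlock old conversations.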

BitTorrent Chat is only in alpha at the moment, but it has the potential to be a great new, secure system for online discussions. Really excited to see where it goes.

I'm onboard with BitTorrent's mission and hope to see more applications of their decentralized approach. With the uneasy and increasingly cynical (and resigned) atmosphere around surveillance, these kinds of technologies are really intriguing and valuable.

Although...while such applications provide us alternatives to untrustworthy intermediary servers, we'll still worry about who's on the other end.