The Dream of the Internet
It is upsetting, though unsurprising, that amidst our current internet-driven company bonanza, much of the original spirit of the internet has been lost. Yes, most internet companies are founded on networking ideals of connection and communication - the most universally lauded (read: marketed) aspect of internet services - but these manipulatively uplifting appeals overshadow, perhaps intentionally, an equally powerful promise of the internet's architecture.
The internet's architecture is decentralized by nature: it is about individual computers communicating with one another [1]. But the dominant model is completely antithetical to that: communication between users on the internet is now almost entirely mediated through corporate-controlled, centralized servers (e.g. the servers of social networking services).
The internet services through which most of us find value are typically centralized. When I access Gmail, I am gathering my mail from servers concentrated under Google. When I'm syncing files from Dropbox, those files are coming from servers concentrated under Dropbox. When I send messages to friends on Facebook, those messages are routed through servers concentrated under Facebook.
If you've been around for the past couple of decades, then you are well aware that the no-cost distribution and infinite replicability of digital information have undermined many industries founded on the scarcity of their products (hence piracy). Anyone appreciative of this unique quality of digital information is likely to wonder: why is this model of infinite replicability and distribution paradoxically absent from services - Google, Dropbox, Facebook, et al. - which are digital, born and bred?
It is because the business models of these companies are not about providing digital services. They are about consolidation. Their value is derived directly from being centrally positioned within the network and extracting data (and data == value) from all the communication that must pass through that position.
This consolidation is created by controlling access. When discussing internet services, we tend to gloss over their material foundations. But these companies are about creating a scarcity of service, which is rooted in the concentration of the hardware running the service. Control over the software - that is, the prevention of its distribution and replication - is necessary because it enables control over the hardware as well. Only Facebook, Inc. can administer the Facebook software; thus it will run only on their hardware. If I want to access the service, I must access it on their hardware. Thus my communication must go through their hardware, the wellspring of their value. And because that hardware belongs to them, they control access to it - even if their user policies say otherwise.
The principles of open source software (OSS) are meant to counteract this balkanization of the internet (or the formation of the "Splinternet"). OSS is fundamental to supporting the decentralized spirit of the internet [2]. Anyone can run OSS on their own servers and provide the service to their own community, or simply to themselves. I don't have to access the service on an untrusted party's hardware: I can access it on a friend's server or even on my own computer. Open source software enables the freedom to access services on hardware that you, or someone you trust, controls.
Diaspora is an example of an open source, decentralized alternative to conventional social networking services (such alternatives are collectively known as "the federated web"). Individuals or organizations can host their own "pods" (servers running the Diaspora software), so the service is physically distributed across computers that are not concentrated under any one group. Your identity on the network is portable, however, so the experience is similar to that of a centralized service: you can access any Diaspora pod without really noticing the difference.
For example, say a group of friends decides to host a Diaspora pod and you sign up to the service through it. You're free to interact with each other on it, and your personal data remains on that server, under your jurisdiction: you control access to it.
If you meet someone who is part of a different pod, that's no problem - you can still communicate with them because the service functions as a cohesive whole.
We could even go a step deeper than the software. Where needed, we should collectively define protocols, or more widely adopt those that already exist. A protocol is a set of standardized rules - a "language" - which developers can implement in their own software. Provided that everyone adheres to the protocol, different systems can communicate reliably. Thus individuals and communities can run OSS on their own servers, and these servers can communicate amongst each other. Though the hardware is distributed amongst independent hosts, the standardization at the software layer forms a cohesive whole in the final user experience. You get the feel of a centralized service, with the crucial difference that the data is not concentrated under the control of a single entity.
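To make the idea concrete, here is a minimal sketch in Python of a made-up protocol - the field names, pod names, and addresses are all illustrative, not any real standard. The point is only that two independently written server implementations can interoperate because they both honor the same rules:

```python
import json

# A hypothetical, minimal "protocol": every message on the wire is one
# JSON object with three required fields. Purely illustrative.
PROTOCOL_FIELDS = {"from", "to", "body"}

def encode(sender, recipient, body):
    """Serialize a message according to the (hypothetical) protocol."""
    return json.dumps({"from": sender, "to": recipient, "body": body})

class PodA:
    """One independently run and independently written server."""
    def receive(self, raw):
        msg = json.loads(raw)
        assert PROTOCOL_FIELDS <= msg.keys(), "protocol violation"
        return f"{msg['from']} -> {msg['to']}: {msg['body']}"

class PodB:
    """A second, separately written implementation of the same rules."""
    def receive(self, raw):
        msg = json.loads(raw)
        assert PROTOCOL_FIELDS <= msg.keys(), "protocol violation"
        return "{from} -> {to}: {body}".format_map(msg)

# The hardware and the code differ, but the "language" is shared, so a
# message produced for one pod is understood by the other.
wire = encode("alice@pod-a", "bob@pod-b", "hello")
assert PodA().receive(wire) == PodB().receive(wire)
```

Swap out either implementation - rewrite it in another language, run it on other hardware - and nothing breaks, so long as the protocol is respected.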
The email protocols are a familiar example. There are a few which you may have seen when digging deep into your Gmail settings: SMTP (for outgoing mail) and IMAP and POP3 (for incoming mail). There are many, many different email services - Gmail, Hotmail, Yahoo, Fastmail, etc. - yet they are all able to communicate with each other because they adhere to these common sets of rules. I can send an email from Gmail and a Hotmail user will receive it without issue. The added benefit is that the user is not locked into any particular software experience - they can choose from many clients, or even roll their own, and they will all work so long as they stick to the protocol.
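This openness is visible even from Python's standard library, which speaks SMTP directly via `smtplib`. The sketch below composes a standard message and shows how it would be handed to an outgoing server; the addresses and hostname are placeholders, and the `send` function is defined but not called, since it needs a real SMTP server:

```python
import smtplib
from email.message import EmailMessage

# Compose a standard email message; any client or server that follows
# the email standards can parse it, regardless of who wrote the software.
msg = EmailMessage()
msg["From"] = "alice@example.com"   # placeholder addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Interoperability"
msg.set_content("Sent from one provider, received by another.")

def send(message, host="smtp.example.com"):
    """Hand the message to an outgoing SMTP server (host is a placeholder)."""
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls()
        # smtp.login(user, password)  # most providers require auth
        smtp.send_message(message)

# The receiving side could be Gmail, Fastmail, or a self-hosted server:
# SMTP carries the message between providers, and IMAP or POP3 lets the
# recipient's own choice of client fetch it - no shared vendor required.
```

Nothing in that code cares which company runs the server at either end; that indifference is exactly what the protocol buys you.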
For example: imagine if you could favorite a tweet through Facebook. You can't right now, because the two services do not share a common standard for how a "favorite" is represented in the software. A favorite on Twitter is not equivalent to a like on Facebook, even though what the user is trying to communicate through each action may be. For the user this is inflexible: your activity on one network is not portable to another.
OStatus is an open protocol which attempts to standardize these social interactions. I could host my own social networking service adhering to the OStatus protocol, and someone else could have their own service completely independent of mine. So long as their service also implements the OStatus protocol, users will be able to interact across the platforms, and are thus afforded a mobility not present in today's social networking ecosystem.
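The core idea can be sketched loosely in Python. OStatus itself builds on existing standards such as Atom and ActivityStreams; the JSON shape and service names below are illustrative only, not the real wire format. Two services with different local vocabularies ("favorite" vs. "like") interoperate by agreeing on a shared verb:

```python
import json

def make_activity(actor, verb, target):
    """Serialize a social action in a shared (hypothetical) vocabulary."""
    return json.dumps({"actor": actor, "verb": verb, "object": target})

class ServiceA:
    """One service; locally it calls the action a 'favorite'."""
    def favorite(self, user, post):
        return make_activity(user, "favorite", post)

class ServiceB:
    """An independent service; locally it calls the action a 'like',
    but it understands the shared verb from the wire format."""
    def apply(self, raw):
        act = json.loads(raw)
        if act["verb"] == "favorite":
            return f"{act['actor']} liked {act['object']}"
        raise ValueError("unknown verb")

# A favorite made on one service is understood by the other:
wire = ServiceA().favorite("alice@service-a", "post/42")
result = ServiceB().apply(wire)  # "alice@service-a liked post/42"
```

The user's intent survives the crossing; only the local presentation differs, which is precisely the portability missing between Twitter and Facebook today.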
Here's a short list of alternatives to popular, centralized services:
- Google Docs: Etherpad/Etherdraw/Ethersheet combined with plugins
- Gchat: IRC with a "bouncer", such as Tapchat and eventually BitTorrent Chat
- Google Voice: Mumble
- Twitter: services built on the OStatus protocol
- Dropbox: BitTorrent Sync
- Facebook: Diaspora
Some of these projects still require work and polish; certainly they are at a disadvantage when pitted against capital-laden organizations. But they are worthwhile projects and needed alternatives. The effort is worth it.
Of course, a major appeal of centralized services is their ease of use for non-technical folk. Someone with no experience provisioning servers will have a very hard time deploying a service on their own. It's often a pain even for me. And it's hard to fully appreciate the decentralization potential of the internet if you have no idea how it works. But these are problems which can be solved with some education and discussion.
It's clear that, with this tightening grip over the flow of users and their information across networks, the original dream of the internet has receded - but it has also become more crucial than ever. The vision of communities running their own servers with open source software, so that they control access to their own data, is still within reach.
1. It's worth noting that the physical connection between two computers on the internet is typically routed through hardware controlled by others, such as your ISP. This is where strong encryption practices come into play: while your data travels through many other devices, practically speaking only you and your intended recipient have access to it. Furthermore, initiatives around mesh networking are trying to replace centralized ISP hardware with a distributed model of independently run nodes.
2. OSS has the additional crucial quality of transparency: anyone can independently audit the code and influence its development (in theory, at least: some projects are lorded over by a single owner). Its development is more likely to reflect the demands of the community which uses and depends on it, as opposed to an external party in what is inevitably an asymmetric relationship between service controller and end user. Not every user will be directly involved in its development, of course, but OSS at least allows the possibility.