Salty Bet, or "Salty's Dream Cast Casino!", is an entertainment spectacle in which an enormous collection of pop-culture-and-then-some computer-controlled fighters brawl it out to the mad shouting of the Twitch chat. You can bet on which character will win using the valueless "Salty Bucks" and accumulate a fortune of them, or gamble your way to destitution and fight your way back up from the "salt mines" (i.e. you get a 10 Salty Buck bailout when you go broke).
The system is built on the MUGEN fighting game engine and the characters are contributed by the community. I like to imagine that it's a training ground for the perfect fighting AI.
I would love to see more open-ended community-driven games like this. Inside jokes and memes will be immortalized as characters within the game. I'm sure they'll all end up as ridiculous pop-culture mush, but it would be refreshing to see this in other gaming genres.
Reductionism — the foundational (Western) scientific idea that, if you want to understand a complex system, break it down (reduce it) to its component parts. When you understand its individual parts, you will be able to understand the system. Implicit here is the assumption that the parts combine additively and linearly — that you add the parts together, their complexity increases in a linear manner, and that is what produces the complex system.
With this approach, you could see the starting state of a system and predict its final, mature state/consequence (or work backwards from its final state to its starting state), without having to go step-by-step. That is, extrapolation — using rules to predict similar cases beyond just the present case (where the rules were first observed).
There's also the concept of a "blueprint" — some expectation of what you should see when you extrapolate.
Another important aspect of reductionism is how it handles variability. For instance, amongst people, there's an average body temperature but some variability in the specific values across individuals. What do you make of that variability? Reductionism treats variability as noise — something to be gotten rid of or avoided: "instrument" error, where "instrument" ranges from a person's observation to machinery. To mitigate this, reductionism holds that further reduction will reduce the noise. The closer you look, the more you'll be able to see what's actually going on. So, the better you get at getting down to the details — through new techniques or equipment — the less variability there will be.
"At the bottom of all these reductive processes, there is an iconic and absolute and idealized norm as to what the answer is. If you see anybody not having [a body temperature of] 98.6, it's because there's noise in your measurement systems — variability is noise, variability is something to get rid of, and the way to get rid of variability is to become more reductive. Variability is discrepancy from seeing what the actual true measure is."
Reductionism has been the driving force behind science — coming up with new techniques or equipment to measure things more closely so we can see how things "truly" work.
For instance — if you want to understand how the body works, you need to understand how organs work, and to understand how organs work, you need to understand how cells work, etc, until you get all the way to the bottom. And then once you understand the bottom, you add everything back up and now you understand the body. But this approach is problematic — biology, and certainly plenty of other systems, don't work that way.
In neurobiology, two major researchers (Hubel and Wiesel) came up with a reductionist explanation of how the cortex works; they found that there were individual cells in the retina which connected directly to neurons in the cortex, and that you could trace the starting state (which retina cell has been stimulated) to the ending state (which neuron fires) directly (a point-for-point relationship). In this most basic layer of the cortex, each neuron could recognize one dot and one dot only, and it was the only neuron that could recognize that dot. In the next layer, neurons recognize a particular line at a certain angle, and other neurons recognize lines at different angles, etc. So using the same reductionism behind the retina=>neuron=>dot, you can extrapolate to retina=>neuron=>dot=>line. And then in the next layer, you have neurons that can recognize a particular curve. If you know what's going on at any one of these levels, you can trace that state to the states at the other levels. And it was believed that you could continue to extrapolate in this way — that there's a particular neuron responsible for a particular arrangement of sensory information. Above that curve layer would be neurons that recognize particular sets of curves, and so on. Eventually, towards the top, you'd have a neuron that would recognize your grandmother's face from a particular perspective/at a particular angle.
But no one has ever been able to demonstrate the widespread existence of the "grandmother neuron". There are very few "sparse coding" neurons ("sparse coding" meaning you only need a few neurons to recognize some complex thing) which are similar, i.e. a single neuron that responds to a face, and only a specific type of face — there was a rather bizarre study where these upper levels were studied in Rhesus monkeys, and they found a single neuron which would only respond to pictures of Jennifer Aniston, and if that wasn't strange enough, the only other thing it would respond to was an image of the Sydney Opera House.
If you think about it, the number of necessary neurons increases quite a bit at each level. At the lowest level, you have 1:1 neurons for retina cells (to recognize dots). But then, if you have a neuron for each line at each angle formed by these dots — that's a lot more neurons ("orientation selectivity"; there are many ways for lines to be oriented). And this happens at every level — so there simply aren't enough neurons in the visual cortex (or the brain) to represent everything. So reductionism falls apart here.
In this case, the new approach involves the idea of neural networks. Complex information isn't encoded in single neurons, but rather in patterns of activation across multiple neurons — i.e. networks of neurons.
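A quick back-of-the-envelope sketch (mine, not from the lecture, and assuming an idealized neuron that is simply "on" or "off") of why this matters: a dedicated-neuron scheme can represent only as many things as there are neurons, while patterns across a population can represent exponentially more.

```python
# Back-of-the-envelope comparison (idealized "on/off" neurons assumed).

def grandmother_capacity(n_neurons: int) -> int:
    """One dedicated neuron per recognizable thing."""
    return n_neurons

def network_capacity(n_neurons: int) -> int:
    """Distinct on/off activation patterns across the whole population."""
    return 2 ** n_neurons

n = 100
print(grandmother_capacity(n))  # 100 recognizable things
print(network_capacity(n))      # 2**100 ≈ 1.27e30 distinct patterns
```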
Bifurcating Systems
A bifurcating system is a branching system — each branch keeps splitting off into more branches — which is "scale free": viewing the system at one level looks just like viewing it at any other level. It has the nature of a fractal.
The circulation system is an example of this, so is the pulmonary system, and so are neurons. How does the body code for the creation of a bifurcating system? The reductionist approach might come to theorize that there's some gene that codes, for example, when an aorta bifurcates (splits into two), and then a different gene that specifies the next bifurcation, etc. But again, there aren't enough genes to code this way.
Randomness
In the development of biological systems, there is a degree of chance involved. For instance, Brownian (random) motion at the molecular level causes cells to split with uneven distributions of mitochondria, which interferes with the reductionist characteristic of being able to trace a system from its start to its final state.
There was a study trying to predict the dominance hierarchy that would emerge from a particular starting configuration of fish. First, the dominance hierarchy of a group of fish was established by using a round-robin technique; that is, pairing each fish off with one another, seeing which one dominates, and then generating the hierarchy by inferring that the fish that dominated every round is the top fish, the one that dominated the second most is next in the hierarchy, etc. Then the fish were released together to see how predictive this hierarchy was of natural conditions. The hierarchy had zero predictability for what actually happened. The fish can calculate, to some degree, logical outcomes of social interactions, and use this ability to strategize their own behavior. However, this requires that they observe those social interactions in the first place, and that is largely left to chance: they could be looking in the wrong direction, and miss out on the information to strategize with.
A Taxonomy of Systems
So there are systems which, after some level, are non-reductive. They are non-additive and non-linear. Chance and sheer numbers of possible outcomes are characteristic of these systems. These are "chaotic" systems, and they lie beyond the reductionist framework.
There are deterministic systems — those in which predictions can be made. There are periodic deterministic systems, which have rules consistent in such a way that you can directly calculate some later value without having to calculate step-wise to that value; they are linear. That consistency provides an ease of predictability; you can rest assured that the rule holds in an identical manner for the entire sequence. These systems are reductive; the reductionist approach works here. There is a pattern which repeats (i.e. it is periodic).
For example: 1, 2, 3, 4, 5 has a rule of "+1"; it is very easy to answer: what's the 15th value?

In contrast, aperiodic deterministic systems are much more difficult to project in this way. There are rules to go from one step to another, like in periodic deterministic systems, but you need to calculate predictions step-wise, figuring out one value, then applying the rule again to figure out the next, ad nauseam. There aren't repeating patterns — although there's a consistent rule that governs each step, the relationship between each step is inconsistent. An arbitrary value in the sequence, for example, cannot be derived from the rules without manually calculating each preceding value.
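Here's a minimal sketch of the difference (my own illustration, not from the lecture), using the "+1" rule for the periodic case and the logistic map as a stand-in for an aperiodic deterministic rule:

```python
# Periodic/linear deterministic: the "+1" rule. Any later value can be
# computed directly from the rule -- no stepping required.
def nth_value_direct(start: int, n: int) -> int:
    return start + (n - 1)  # jump straight to the nth value

# Aperiodic deterministic: the logistic map x_next = r * x * (1 - x).
# There is a consistent rule, but no simple closed form -- you must iterate.
def nth_value_stepwise(x0: float, n: int, r: float = 3.9) -> float:
    x = x0
    for _ in range(n - 1):
        x = r * x * (1 - x)
    return x

print(nth_value_direct(1, 15))      # 15, computed in one step
print(nth_value_stepwise(0.5, 15))  # only reachable by stepping through
```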
There are nondeterministic systems, where the steps are random; there is no consistent rule. The nature of one value does not determine the subsequent value. Chaotic systems are often taken to be nondeterministic systems, but they are not.
Chaotic systems have rules for each step, but the relationship between steps is non-linear — the steps are not identical. That is, they are aperiodic deterministic systems. There is no pattern which repeats, so the only way to figure out any value in such a sequence is to calculate step-wise to that position. You can figure out the state of the system in the future, but it's not "predictable" in the same way that periodic deterministic systems are.
Reductive, periodic deterministic systems can reach a tipping point, when enough force is applied to them, where they break down and become chaotic, aperiodic deterministic systems. Periodic systems are in an equilibrium, such that, if you disturb the system temporarily, it will eventually reset back into that periodic equilibrium. This equilibrium is called an "attractor". This point of equilibrium would be considered the "true" answer of such a system.
In this image, you can see the circle in the center is the attractor, and the system spirals back to it.
Chaotic systems, on the other hand, have no equilibrium to reset back to. They will oscillate infinitely. So they are unpredictable in this sense. This is called a "strange attractor". There is no "true" answer here because there is no stable settling point that could be considered one. Rather, the entire fluctuation, the entire variability, could be considered the "true" answer — the system itself.
Here you can see the strange attractor, which is never reached, and the system just fluctuates constantly.
But if you're measuring the system, it can be very deceiving. Say you take a measurement and it's on one of these points. Say it's (6, 3.7). And you want to try and predict what the next point will be. You calculate the next point, and so on, and then you return to your starting point, and you think, oh — it's not chaotic at all, it's periodic! But if you look closer, you see that this isn't the case. You started on (6, 3.7), and now you've ended up on (6, 3.8). It's not actually the same point.
Or, to give a better example, you actually started on (6.38791587129873918751982739172398179872147), and the second time around, you've actually landed on (6.38791587129873918751982739172398179872148). The numbers are very close, but they are not the same. They are variable. And thus the next point after each is different, and the path after the new point is completely different from the path after the starting point. The tiny little difference gets amplified step by step by step — the "butterfly effect". So these systems become practically unpredictable as a result.
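Here's a small sketch of that amplification (again my own illustration, using the logistic map as a stand-in for a chaotic system): two starting points differing in the fifteenth decimal place produce completely unrelated trajectories within a few dozen steps.

```python
# Butterfly effect: a difference of ~1e-15 in the starting value is amplified
# step by step until the two trajectories are effectively unrelated.
def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.600000000000000, 50)
b = logistic_trajectory(0.600000000000001, 50)

for step in (0, 10, 30, 50):
    print(step, abs(a[step] - b[step]))
# The gap starts at ~1e-15 and grows until the paths fully diverge --
# no amount of measurement precision gets rid of it.
```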
No matter how good your reductive tools are, no matter how accurate your measurements and techniques are, that variability is always there. In this sense, such chaotic systems are scale-free: no matter how closely you look, no matter what level you're looking at, that variability is there, and its effects will be felt. This undermines the reductionist approach which dictates that the closer you look at a system, the less noise you will have, and the better and truer understanding you will have of it. But it's not noise resulting from technique or measurement inadequacy — it is part of the phenomenon, it is a characteristic of the system itself.
A fractal is information that codes for a pattern — a line in particular — which is one-dimensional, but this line has an infinite amount of complexity, such that, if you look closer and closer at the line, you still see that complexity. So it is infinitely long, but contained in a finite space, and it starts to seem more like a two-dimensional object. But it isn't! A fractal is an object or property that is a fraction of a dimension — not quite two-dimensional, but more than one-dimensional; somewhere in between. Fractals can also be described more simply as something that is scale-free: at any level you look at it, the variability is the same.
The bifurcating systems mentioned earlier are fractals. And, rather than having their branching coded explicitly by genes, for example, they are governed by a scale-free rule.
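To make that concrete, here's a toy sketch (mine, not from the lecture) of how one scale-free rule, applied recursively, generates an entire branching tree without any per-branch instructions:

```python
def bifurcate(length: float, depth: int, ratio: float = 0.7) -> list[float]:
    """Apply the same rule at every scale: each branch splits into two
    branches, each `ratio` times as long, until a cutoff depth."""
    if depth == 0:
        return [length]
    branches = []
    for _ in range(2):  # the bifurcation
        branches.extend(bifurcate(length * ratio, depth - 1, ratio))
    return branches

tree = bifurcate(length=1.0, depth=10)
print(len(tree))  # 1024 terminal branches from a single short rule
print(tree[0])    # each is 0.7**10 ≈ 0.028 of the trunk's length
```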
Does this matter in practice?
Sapolsky (the lecturer) and a student performed a study where they took some problem in biology — the effect of testosterone on behavior — and gathered every study approaching this problem from different levels: at the level of society, the individual, the cell, testosterone, etc. For each study they took the results and calculated a coefficient of variation, which is the percentage of the result that the variation represents.
For example, if I have a measurement of 100 with +/- 50, then the coefficient of variation is 50%.
Then you can take the average coefficient of variation across each level.
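As a sketch of the arithmetic (with made-up numbers purely for illustration — not the study's actual data):

```python
from statistics import mean, stdev

def coefficient_of_variation(values: list[float]) -> float:
    """Standard deviation expressed as a percentage of the mean."""
    return 100 * stdev(values) / mean(values)

# Hypothetical results, grouped by level of analysis.
levels = {
    "society":    [[98, 120, 75], [60, 90, 110]],
    "individual": [[10, 14, 9], [22, 30, 18]],
    "cell":       [[1.1, 1.9, 1.4], [3.2, 2.1, 2.8]],
}

for level, studies in levels.items():
    avg_cv = mean(coefficient_of_variation(s) for s in studies)
    print(f"{level}: average CV ≈ {avg_cv:.1f}%")
```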
If the reductionist argument holds, then you should see a decreasing coefficient of variation — that is, decreasing noise — as you move from broad to narrow levels of analysis. But there is no such trend.
But what if there was noise in their own measurements — what if a lot of the papers they surveyed weren't very good, perhaps sloppily measured or something? So they used citation count as an indicator of a paper's quality and re-did the analysis using only the top 10% of papers by that metric. The results were the same.
It isn't that reductionism is not useful. It is still very useful. It is a lot simpler, and while it isn't completely "accurate" — that is, there is room for error — it can still paint broader strokes that are actionable and reflective of the world.
For instance, if I want to test the efficacy of a vaccine, I don't want to get down to the level of each individual and see how it works, I'd want to be a bit broader and say, ok, this group got the vaccine, this one didn't, how do they compare? And that's very useful information.
I recently finished Robert Jackall's Moral Mazes: The World of Corporate Managers, a book which focuses on the workings of the corporate environment, but has learnings that extend to the nature of bureaucracies in general. I was interested in it not only because it was one of Aaron Swartz's favorite books but also because bureaucracy is an integral part of our social order, and whatever behaviors are cultivated there almost certainly find their way into our daily lives. With the "revolving door" that is characteristic of modern politics, there's a great deal of shoulder-rubbing between the political and corporate domains, and as such the latter culture influences the former. To put it another way, for many Americans the world of corporate politics comes to define the social reality of their entire life. Understanding it, then, seems crucial to understanding our modern society.
At its broadest, Moral Mazes dismantles notions that meritocracy actually exists and dispels the mythos of hard work having any meaning in a corporation. Success in a company, according to Jackall, is accomplished almost exclusively through politics. The formation of relationships such as alliances and patronage arrangements (having a higher-up look out for your career) are critical to success. It isn't that numbers don't matter, but rather: whatever hard work an individual accomplishes, and whatever fruits (or harm) might come as a result, matter little for that individual since blame and credit are shifted and appropriated by those with more clout.
Because of this structure, managing social relationships amongst co-workers, superiors, and even subordinates (since you never know who might be your boss tomorrow) becomes of the utmost importance. The maintenance of these relationships is accomplished through a deeply nuanced etiquette - protecting those in your alliances, knowing how to be a "team player" and understanding the subtle expectations in such a role, and so on. A strange sort of information asymmetry can come into play here, where being "kept in the dark" can be an unnerving and hostile tactic, but at the same time, there is the expectation that superiors be spared from details in order to absolve them of responsibility should things go wrong (130). The result of these complex social relationships is that the strategic presentation of self is crucial to success.
There is an unceasing evaluation of peers, assessing both their capacity for the moral compromise which is often necessary in such environments ("moral fitness") and their position within the social hierarchy of the organization (13). At the same time, individuals are always looking to each other for social cues, to keep a grasp on the current form of the social order, which may arbitrarily shift according to the arrangement of superiors at the top of the company. This unstable landscape creates an environment of inexorable anxiety and uncertainty, which is further intensified by the contradictory facade of workplace harmony: "[these] ongoing conflicts are usually hidden behind the comfortable and benign social ambiance that most American corporations fashion for their white-collar personnel." (39)
Many of these aspects of corporate bureaucracy contribute to expediency being the primary mode of problem-solving — that is, quick solutions that disregard externalities or other potentially harmful long-term effects. We see this approach manifest in attitudes that have become characteristic of the large corporation, such as a flippancy towards environmental concerns. The nature of promotions exacerbates this emphasis on short-term solutions. There is a practice of "outrunning mistakes", where one is promoted to a different position before any damaging long-term effects of one's decisions come to a head (95). The onus falls on someone else, and it's hard to cultivate accountability when no one's around long enough to be held accountable.
In order to rationalize such expedient actions against criticism (or to preempt it), both within the corporation and to the public, "vocabularies of justification" are used (14). Public relations may be used to make controversial actions more palatable, which may be especially effective through the particularly devious practice of establishing "fronts" - official-sounding institutions which present themselves as legitimate, unbiased scientific authorities or representatives of small business owners, when in fact they are more often than not organized and funded by a few large corporations.
These vocabularies are extensively used within the corporation to discuss decisions, strategies, and so on. Their opaqueness and ambiguity further aggravate the social anxiety and uncertainty of the workplace. But one must present oneself in such a way that communicates a mastery of this language.
Because success in the corporate world is accomplished primarily through means of self-presentation, those who seek such success are constantly re-evaluating themselves and mutating their values and morals as necessary, so that they appear flexible enough for the expediency that bureaucracy requires. This requires the "object[ification] of the self with the same kind of calculating functional rationality that one brings to the packaging of any commodity", which Jackall refers to as "psychic asceticism" (218). And here the book concludes rather tragically, ending with a description of how the frustration of the self-compromise such objectification requires bleeds into a manager's home life:
On the other hand, over a period of time, psychic asceticism creates a curious sense of guilt, heightened as it happens by narcissistic self-preoccupation. Such guilt, a regret at sustained self-abnegation and deprivation, finds expression principally in one's private emotional life. One drinks too much; one is subject to pencil-snapping fits of alternating anxiety, depression, rage, and self-disgust for willingly submitting oneself to the knowing and not knowing, to the constant containment of anger, to the keeping quiet, to the knuckling under that are all inevitable in bureaucratic life. One experiences great tensions at home because one's spouse is unable to grasp or unable to tolerate the endless review of the social world of the workplace, the rehearsals of upcoming conversations, or the agonizing over real or imagined social slights or perceptions of shifts in power alignments. One wishes that one had spent more time with one's children when they were small so that one could grasp the meanings of their adolescent traumas. Or one withdraws emotionally from one's family and, with alternating fascination and regret, plunges ever deeper into the dense and intimate relationships of organizational circles where emotional aridity signals a kind of fraternity of expediency. Many try at times to escape the guilt with Walter Mitty-like fantasies of insouciant rebellion and vengeful retaliation; but one knows that only if and when one rises to high position in a bureaucratic hierarchy does one have the opportunity to turn the pain of self-repression against one's fellows. (218)
I have been fortunate enough to work in "flat" organizations that manage to avoid the more rigid hierarchies. But that isn't to say there are none. It seems that bureaucracy inevitably emerges, in some form, in any large organization of people - but is it equally inevitable that it be such a destructive force? Are there ways to design its workings to avoid these more harmful consequences?
Jackall, Robert. Moral Mazes: The World of Corporate Managers (Twentieth Anniversary Edition). New York: Oxford University Press, 2010.
Passing by the Brooklyn Public Library's ornate and imposing doors, I was reminded of this bit from P.D. Smith's City: A Guidebook for the Urban Age:
"In the seventeenth century, the Atlantis legend was one of the inspirations for ideal cities, such as Tommaso Campanella's The City of the Sun (1602). A free-thinking Dominican monk imprisoned and tortured for heresy by the Inquisition, Campanella's urban utopia is built on a hill with seven concentric walled circles, the middle ones rising up above the outer rings. The design was influenced by Pieter Bruegel the Elder's famous 1563 painting The Tower of Babel with its seven ascending concentric levels. Just as in Bruegel's painting and in the original ziggurats on which it was based, the City of the Sun has at its centre, on the summit of the hill, a great temple of marvellous workmanship'. The temple is round and its dome is decorated with sparkling star maps, as well as astrological verses. Indeed, the city functions as an encyclopaedia of natural and esoteric knowledge, each circle being decorated with illustrations from the sciences - trees, herbs, metals, as well as real and fantastic animals. This is the city as classroom, where the inhabitants absorb enlightenment by osmosis, as they go about their daily lives." (Smith, P.D., 2012, City: A guidebook for the urban age, Chapter 2, emphasis mine)
Everyone's always fascinated by new modes of (digital) interactions, and there are a lot of interesting and novel ideas around what might be the dominant interaction medium in the future. Touch? Gesture? Voice? Eye-tracking?
Although these modes are what interaction design seems to be trending towards, I want to revisit a hugely efficient, if largely unappreciated, mode — sibling in some ways to these new interaction modes — that has been around for ages: keyboard shortcuts.
When seeing discussions around interaction design, I seldom, if ever, see any mention of keyboard shortcuts (I'll be talking about desktop web from here on out, since that's what uses a hardware keyboard). This is maybe because interaction design by and large seems focused on web design, and keyboard shortcuts have been relegated to the realm of desktop software[1] (I'm not sure why they never fully carried over). Where they are available, they are heavily used — interaction designers rely on them all the time, I'm sure, while using Illustrator, or Photoshop, or Omnigraffle, etc. But, ironically, keyboard shortcuts always seem like an afterthought in the designs produced with that software, if they are thought about at all.
Perhaps keyboard shortcuts are not thought of because design is so focused on immediate intuitiveness and user-friendliness. And to be honest, keyboard shortcuts are not necessarily either of those (well, at first). There is almost always a learning curve to them, and their usage is often associated with only advanced or "power" users. That's a valid concern — you want to entice new users to use your product, and spare them an intimidating or hidden interface. It doesn't have to be that way, especially with the usage of convention to establish a degree of predictability when approaching one of these interfaces. But in general, I'm not advocating keyboard shortcuts as a replacement, but as a supplement to an existing interface, especially for products that people may be using for several hours a day, every day.
Physical-Metaphor Interfaces
Lately I've been captivated by the idea of invisible interfaces — interfaces that don't necessarily require visual elements. Why is that a good thing? What makes keyboard shortcuts so great? Well, a lot of interface design is still grounded in physical metaphor. You have to move your mouse cursor (or stylus) to a button, which you then press, and then something happens. This is fairly intuitive in that this is how we interact with things in the real world: I have to make a targeted motion to manipulate something.
In human-computer interaction, Fitts's law describes the inverse relationship between speed and accuracy when working with this type of interface. Smaller or more distant targets take longer to "acquire", and trying to do so more quickly means a sacrifice in accuracy.
On the left, D is the distance from the cursor to the target, and S is the width of the target. Fitts's law (in its Shannon formulation) is typically expressed as T = a + b * log2(1 + D/S), where a and b are empirically determined constants for the input device (mouse, stylus, etc.), and T is the time to acquire the target.
On the right, a keypress is a much more direct means to action.
But in the digital world, we have the benefit of much more direct routes between intent and action. I can hit a combination of keys, and immediately an action is executed. No need to waddle my cursor through space and time to get the job done. The intent-action gap is condensed dramatically, and we can effectively circumvent the constraints of Fitts's law.
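To make the contrast concrete, here's a small sketch of the Shannon formulation of Fitts's law; the constants a and b below are placeholder values — in practice they're fit empirically for each device:

```python
from math import log2

def acquisition_time(distance: float, width: float,
                     a: float = 0.1, b: float = 0.15) -> float:
    """Predicted time (seconds) to move a pointer onto a target."""
    return a + b * log2(1 + distance / width)

print(acquisition_time(distance=800, width=20))  # far, small target: slow
print(acquisition_time(distance=100, width=80))  # near, big target: fast
# A memorized keystroke sidesteps this entirely: there is no target to acquire.
```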
And furthermore, the interface doesn't necessarily need to take up any space any more. It's "invisible"; it exists in the muscle memory of the user, and actions can be executed impulsively.
Key Expressions
There is, however, something even more powerful than keyboard shortcuts: keyboard expressions.
That is, certain keys or key combinations correspond to certain actions, which can be chained together like words in a sentence, and you can express more complex actions in a few keystrokes.
Vim in action.
Vim is probably the ultimate manifestation of this approach. Vim is a text editor favored by programmers[2] for its extreme efficiency, and notorious for its difficulty to learn. Its steep learning curve can be frustrating, but once you learn it, the amount of time and effort it saves you is seemingly infinite.
In Vim, certain keys are mapped to certain actions, and you can express complex chains of action in a few keystrokes. There are really only a handful of keys and a bit of syntax you need to know, but their combinatorial power is very potent. These expressions make Vim one of the most elegant and poetic tools I've ever used.
Say, for a somewhat contrived example, that you're editing a document, you're somewhere in the middle of it, and you want to delete the first line and then return to the line you're currently on.
In a normal text editor, you'd grab your mouse, move up to that line, select it all, then hit delete, then move the mouse back to the line you were on. This requires a degree of precision, especially if you're moving quickly, to position the mouse over the correct line (if you look closely, when selecting the original line again, I accidentally select the line below at first). We have to worry about Fitts's law here.
In Vim, all you have to do is type:
ggdd``
gg jumps you to the top of the document, dd deletes the line you're on, then `` jumps you back to where you were before. The discreteness of the keystroke — that is, it's pressed or it's not — means we can't accidentally select the wrong line[3]. Here, the `` command will resolutely and absolutely bring you back to the last line you were on; the computer won't accidentally jump you to an adjacent line.
It may not seem like a big difference, but this is just scratching Vim's surface, and if this is something you're doing a lot, it saves you a great deal of time and headache.
The real power of Vim is that these keystroke combinations are a language. You "say" what you want to do. Want to delete the next 10 lines of text? You can just type:
10dd
To break it down, what you're "saying" is:
10 = "10 times..."
dd = "execute the delete line command"
The Invisible Interface
Here's a more realistic example.
Think about some sort of office software, say a presentation application. It will have a fairly complex interface due to the sheer number of actions available — certain actions for type, such as changing font size, italicizing, underlining, and other formatting options, and certain actions for a shape, such as coloring, size, position, stroke size, and so on. To mitigate this onslaught of options, actions are stuffed into menus, and a select few are surfaced as keyboard shortcuts.
What if this application had an invisible interface like Vim's? Say I'm on slide 10, and I want to move this slide's title, "Space and Times", to slide 22. In a traditional interface, I'd have to visually scan for the title, then move the cursor to select it, then hit CTRL+X to cut it out, then move over to the sidebar that lists all the slides, possibly scroll down this sidebar until I see slide 22, then select slide 22, then paste in the title.
With an expressive keystroke language, I could accomplish the same with just:
/Spac<Enter>xxg22gpp
To break this down:
/ = "start searching for an object starting with the text..."
Spac = "Spac" (matches the text object containing "Space and Times")
<Enter> = (hit Enter) "select this matching object"
xx = "and cut it"
g = "then go to slide..."
22 = "22"
g = (confirm the go to movement)
pp = "and then paste"
This might look like complicated gibberish, but in practice it's very fluid, and it's hard to go back to physical-metaphor interfaces.
Beyond the Keyboard
These ideas can be expanded beyond hardware keyboard inputs to other inputs as well. Broadly speaking, the general idea here is that, with a set of limited, distinguishable inputs, you can craft an interaction "language", expressed through meaningful combinations of input values, vastly expanding the power of the few inputs. This can decrease reliance on visual elements for input, which are often single-purpose (i.e. you click a button and it triggers a single, specific action). Gestural interfaces, in addition to other trending interfaces, might fall into this categorization.
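As a toy sketch of what such a language might look like under the hood (all names here are hypothetical, in the spirit of Vim's count-prefixed commands rather than any real application's API):

```python
import re
from dataclasses import dataclass, field

@dataclass
class Editor:
    lines: list[str] = field(default_factory=lambda: [f"line {i}" for i in range(1, 21)])
    cursor: int = 4  # zero-based position, somewhere near the top

    def delete_lines(self, n: int) -> None:
        del self.lines[self.cursor:self.cursor + n]

    def move_down(self, n: int) -> None:
        self.cursor = min(self.cursor + n, len(self.lines) - 1)

# A tiny vocabulary of primitives...
KEYMAP = {"dd": Editor.delete_lines, "j": Editor.move_down}

def run(editor: Editor, keys: str) -> None:
    """...composed into commands like '3j10dd': optional count, then an action."""
    for count, cmd in re.findall(r"(\d*)(dd|j)", keys):
        KEYMAP[cmd](editor, int(count) if count else 1)

ed = Editor()
run(ed, "3j10dd")     # move down 3 lines, then delete 10 lines
print(len(ed.lines))  # 10 of the original 20 lines remain
```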
Does this approach make sense for all interfaces? Not necessarily. There are concerns, for instance, of satisficing, where users tend to opt for suboptimal, but low-penalty, behaviors, preferring to settle for less-than-best because the best requires an investment of time and effort. Of course, if your interactions with a particular system are short and infrequent, that strategy makes sense. But even with interfaces where there is repeated and prolonged engagement, people typically continue to satisfice. The initial investment of time and effort is off-putting, and people are terrible at evaluating long-term gains against short-term costs. For example, even though the Dvorak keyboard layout is much more efficient and less damaging than the QWERTY keyboard layout (which is a vestigial pattern from typewriters), hardly anyone uses it because it's too damn inconvenient to learn.
But I believe it's at least important to consider this option. Within these interfaces is a potential for much more fluid and efficient, and even enjoyable (Vim is really fun to use), interactions. And it's interesting to move away from a reliance on visual digital interfaces and start exploring one that we carry with us, one that exists in muscle memory.
1. One exception is Google Docs, which has an extensive set of keyboard shortcuts and is arguably directly modeled off of desktop software.