Humans of Simulated New York


For our month-long DBRS Labs residency, Fei and I built the beginnings of a tool set for agent-based economic simulation. My gut feeling is that the next phase of this AI renaissance will be around simulation. AlphaGo's recent victory over the Go world champion - a landmark in AI history - resulted from a combination of deep learning and simulation techniques, and we'll see more of this kind of hybrid.

Simulation is important to planning, a common task in AI. Here "planning" refers to any task that requires producing a sequence of actions (a "plan") that leads from a starting state to a goal state. An example planning problem might be: I'm in New York (starting state) and need to get to London (goal state) - what's the best way of getting there? A good plan for this might be: book a flight to London, take a cab to the airport, get on the flight. But there are infinitely many other plans, depending on how detailed you want to get. I could walk and swim to London - it's not a very good plan, but it's a plan nonetheless!

To produce such plans, there needs to be a way of anticipating the outcome of certain actions. This is where simulation comes in. For example, in deciding how to get to the airport, I have to consider various scenarios - traffic could be bad at this particular hour, maybe there's some chance the cab breaks down, and so on. Simulation is the consideration of these scenarios.

So simulation, especially as it relates to planning, is crucial to AI's more interesting applications, such as policy and economic simulation, which seek to understand the implications of policy decisions. Much like machine learning, planning and simulation have a long history and are already used in many different contexts, from shipping logistics to spacecraft. The residency was a great opportunity to begin exploring this space.

First plan

The general idea was to use a simulation technique called agent-based modeling, in which a system is represented as individual agents - e.g. people - that have predefined behaviors. These agents all interact with one another to produce emergent phenomena - that is, outcomes which cannot be attributed to any individual but rather arise from their aggregate interactions. The whole is greater than the sum of its parts.

Originally we aimed to create a literal simulation of New York City and use it to generate narratives of its agents: simulated citizens ("simulants"). We wanted to produce a simulation heavily drawn from real world data, collating various data sources (census data, market data, employment data, whatever we could scrape together) and unpacking their abstract numbers into simulated lives. Data is ostensibly a compact way of representing very nuanced phenomena - it lossy-compresses a person's life. Using simulation to play out the rich dynamics embedded within the data seemed like a good way to (re-)vivify it. It wouldn't reach the fidelity and honesty of lived experience, but might nevertheless be better than just numbers.

Over time though we became more interested in using simulation to postulate different world dynamics and see how those played out. What would the world look like if people behaved in this way or had these values? What would happen to this group of people if the government instituted this policy? What happens to labor when technology is productive enough to replace them? What if the world had less structural inequality than it does now?


This speculative direction had many inspirations:

Past and speculated initiatives around AI and governance were one - in particular, Allende's Cybersyn, the ambitious, way-before-its-time attempt to manage an economy via networked computation under the control of the workers, and the imagined AI-managed civilization of Iain M. Banks' Culture. These reached for utopian societies in which many, if not all, aspects were managed by some form of artificial intelligence, presumably involving simulation to understand the societal impacts of its decisions.

Cybersyn's consoles (via Vanity Fair)

Another big inspiration was speculative social science fiction such as Ursula K. Le Guin's "Hainish Cycle", which explores how differences in fundamental aspects of humans lead to drastically different societies. From these we aspired to create something which similarly carves out a space to hypothesize alternative worlds and social conditions.

Dwarf Fortress (via Ars Technica)

We also referred to what could be called "generative narrative" video games. In these games, no story is fixed or preordained; rather, only the dynamics of the game world are defined. Things take on a life of their own. Bay 12 Games' Dwarf Fortress is one of the best examples of this - a meticulously detailed simulation of a colony of dwarves which almost always ends in tragedy. Dwarf Fortress has inspired other games such as RimWorld, which follows the same general structure but on a remote planet. Beyond the pragmatic applications of economic simulation, the narrative aspect produces characters and a society to be invested in and to empathize with.

In a similar vein, we looked at management simulation games, such as SimCity/Micropolis, Cities: Skylines (by Paradox Interactive, renowned for their extremely detailed sim games), Roller Coaster Tycoon, and more recently, Block'hood, from which we took direction. These games provide a lot of great design cues for making complex simulations easy to play with.

Micropolis & Block'hood

Game of Life

In both a literal and metaphorical way, these simulation games are essentially Conway's Game of Life writ large. The Game of Life is the prototypical example of how a set of simple rules can lead to emergent phenomena of much greater complexity. Its world is divided into a grid, where each cell in the grid is an agent. Each cell has two possible states: on (alive) or off (dead). The rules which govern these agents may be something like:

  • a cell dies if it has only one or no neighbors
  • a cell dies if it is surrounded by four or more neighbors
  • a dead cell becomes alive if it has three neighbors

How the system plays out can vary drastically depending on which cells start alive. Compare the following two - both use the same rules, but with different initial configurations (these gifs are produced from this demo of the Game of Life).
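As a sketch, the rules above fit in a few lines of Python. Here the world is just a set of live cell coordinates; the "blinker" pattern is a standard example, not one of the configurations from the demo:

```python
from collections import Counter

# One tick of the Game of Life, with the world stored as a set of live
# cell coordinates. The rules match the ones listed above: survive with
# two or three neighbors, be born with exactly three, die otherwise.
def step(alive):
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# A "blinker": three cells in a row, which oscillate forever between
# horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Swap in a different starting set and the same `step` function produces a completely different history - which is the whole point.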

Game of Life: Small Exploder vs Tumbler

This is characteristic of agent-based models: different starting conditions and parameters can lead to fantastically different outcomes.

Navigating complexity

Interactive and well-designed simulation can also function as an educational tool, especially for something as complex as an economy. The dissociation between our daily experience and the abstract workings of the economy is massive. Trying to think about all its moving parts induces a kind of vertigo, and there is no single position from which the whole can be seen. While our project is not there yet, perhaps it may eventually aid in the cognitive mapping Fredric Jameson calls for in Postmodernism: "a situational representation on the part of the individual subject to that vaster and properly unrepresentable totality which is the ensemble of society's structures as a whole". Bureau d'Études' An Atlas of Agendas attempts this mapping by painstakingly notating nauseatingly sprawling networks of power and influence, but it is still quite abstract, intimidating, and disconnected from our immediate experience.

Bureau d'Études' An Atlas of Agendas

Nick Srnicek calls for something similar in Accelerationism - Epistemic, Economic, Political (from Speculative Aesthetics):

So this is one thing that can help out in the current conjuncture: economic models which adopt the programme of epistemic accelerationism, which reduce the complexity of the world into aesthetic representations, which offer pragmatic purchase on manipulating the world, and which are all oriented toward the political accelerationist goals of building and expanding rational freedom. These can provide both navigational tools for the current world, and representational tools for a future world.

Yet another similar concept is "cyberlearning", described in Optimists’ Creed: Brave New Cyberlearning, Evolving Utopias (Circa 2041) (Winslow Burleson & Armanda Lewis) as:

Today’s cyberlearning–the tight coupling of cyber-technology with learning experiences offering deeply integrated and personally attentive artificial intelligence–is critical to addressing these global, seemingly intractable, challenges. Cyberlearning provides us (1) access to information and (2) the capacity to experience this information’s implications in diverse and visceral ways. It helps us understand, communicate, and engage productively with multiple perspectives, promoting inclusivity, collaborative decision-making, domain and transdisciplinary expertise, self actualization, creativity, and innovation (Burleson 2005) that has transformative societal impact.

We found some encouragement for this approach in Modeling Complex Systems for Public Policies, which was published last year and covers the current state of economic and public policy modeling. In the preface, Scott E. Page writes:

...whether we focus our lens on the forests or students of Brazil or the world writ large, we cannot help but see the inherent complexity. We see diverse, purposeful connecting people constructing lives, interacting with institutions, and responding to rules, constraints, and incentives created by policies. These activities occur within complex systems and when the activities aggregate they produce feedbacks and create emergent patterns and functionalities. By definition, complex systems are difficult to describe, explain, and predict, so we cannot expect ideal policies. But we can hope to improve, to do better. (p. 14)

Early attempts

We had a lot of anxiety in deciding on the simulation's level of detail. There were a few constraints which prevented us from going too crazy, e.g. computational feasibility (whether or not we can run the simulation in a reasonable amount of time) and sensitivity/precision issues (i.e. all the problems of modeling chaotic systems). These practical concerns were offset by a desire to represent all the facets of life we were interested in, which was too ambitious. This tension is best described by Borges' On Exactitude in Science, in which a map is so detailed that it directly overlays the terrain it is meant to represent. A large part of modeling's value is that it does not seek a one-to-one representation of its referent - not only because that is impractical, but because the point of a model is to capture the essence of a system without too much noisy detail. For us, the details were important, but we also avoided too literal a model to leave some space for the simulation to surprise us.

Planning agents

First we tried fairly sophisticated simulants, which each had their own set of utility functions. These utility functions determined, for instance, how important money was to them or how much stress bothered them. Some agents valued material wealth over mental health and were willing to work longer hours, while others valued relaxation.

We went way too granular here. Agents would make a plan for their day, hour-by-hour, involving actions such as relaxing, looking for work, seeing friends, going to work, sleeping, visiting the doctor, and so on. Agents used A* search (a powerful search algorithm) to generate a plan they believed would maximize their utility, then executed on that plan. Some actions might be impossible from their current state - for instance, they might be sick and want to visit the doctor, but not have enough money - but agents could set these desired actions as long-term goals and work towards them.
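As an illustration of the general approach (not the actual cess code - the actions, preconditions, and costs here are invented), a goal-oriented agent can be sketched as a search over states. With a zero heuristic this A* reduces to uniform-cost search, which is enough to show the idea:

```python
import heapq

# A toy goal-oriented planner: actions have preconditions and effects,
# and the agent searches for the cheapest sequence reaching its goal.
ACTIONS = {
    # name: (preconditions, facts added, facts removed)
    "go to work":   ({"at_home"}, {"at_work"}, {"at_home"}),
    "earn wage":    ({"at_work"}, {"has_money"}, set()),
    "visit doctor": ({"has_money"}, {"healthy"}, {"sick"}),
}

def plan(start, goal):
    """Cheapest action sequence from `start` to a state containing `goal`."""
    counter = 0  # tie-breaker so heapq never compares states directly
    frontier = [(0, counter, frozenset(start), [])]
    seen = set()
    while frontier:
        cost, _, state, actions = heapq.heappop(frontier)
        if set(goal) <= state:
            return actions
        if state in seen:
            continue
        seen.add(state)
        for name, (pre, add, remove) in ACTIONS.items():
            if pre <= state:
                counter += 1
                nxt = (state - remove) | add
                heapq.heappush(frontier, (cost + 1, counter, nxt, actions + [name]))
    return None
```

A sick, broke simulant at home would have to work toward a "visit doctor" goal: `plan({"at_home", "sick"}, {"healthy"})` yields the sequence go to work, earn wage, visit doctor.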

To facilitate developing these kinds of goal-oriented agents, we built the cess framework. Because there is a lot of computation happening here, cess includes support for running a simulation across a cluster. However, even with the distributed computation, modeling agents at this level of detail was too slow for our purposes (we wanted something snappy and interactive), so we abandoned this approach (development of cess will continue separately).

A simple economy

Given that our residency was for only a month, we ended up going with something simple: a conventional agent-based model of a very basic economy, with flexibility in defining the world's parameters. A lot of what we wanted to include had to be left out. A good deal of the final simulation was based on previous work (of which there is plenty) in economic modeling.

In addition to our simulants (the people), we also had firms, which included consumer good firms, capital equipment firms, raw material firms, and hospitals, and the government. The firms use Q-learning, as described in "An agent-based model of a minimal economy", to make production and pricing decisions. Q-learning is a reinforcement learning technique where the agent is "rewarded" for taking certain actions and "punished" for taking others, so that they eventually know how to act under certain conditions. Here firms use this to learn when producing more or less is a good idea and what profit margins consumers will tolerate.
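Here's a minimal sketch of that kind of learning - a single-state, tabular Q-learning loop where a firm picks a profit margin and is rewarded with a (made-up) profit. The margins and demand response below are toy numbers, not those from the paper:

```python
import random

# Single-state tabular Q-learning: the "firm" repeatedly picks a profit
# margin, observes a (made-up) profit, and nudges its value estimate for
# that margin toward the observed reward.
random.seed(0)

margins = [0.05, 0.15, 0.30]                # candidate actions
profit = {0.05: 1.0, 0.15: 2.0, 0.30: 0.5}  # hypothetical demand response
Q = {m: 0.0 for m in margins}               # learned value of each action
alpha, epsilon = 0.1, 0.2                   # learning rate, exploration rate

for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-known margin, sometimes explore.
    if random.random() < epsilon:
        m = random.choice(margins)
    else:
        m = max(Q, key=Q.get)
    # With a single state there is no successor term, so the update is
    # just a step toward the observed reward.
    Q[m] += alpha * (profit[m] - Q[m])

best = max(Q, key=Q.get)
```

After enough rounds, `best` settles on the margin consumers (as modeled here) will tolerate - which is all "learning when producing more or less is a good idea" amounts to in this framing.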

We still wanted to start from something resembling our world, at least to make the very tenuous claim that our simulation actually proves anything a tiny bit less tenuous. We gathered individual-level American Community Survey data from IPUMS USA and used that to generate "plausible" simulated New Yorkers. At first we tried trendy generative neural net methods like generative adversarial networks and variational autoencoders, but we weren't able to generate very believable simulants that way. In the end we just learned a Bayes net over the data (I hope this project's lack of neural nets doesn't detract from its appeal 😊), which turned out pretty well.

A Bayes net allows us to generate new data that reflects real-world correlations, so we can do things like: given a Chinese, middle-aged New Yorker, what neighborhood are they likely to live in, and what is their estimated income? The result we get back won't be "real" as in the data isn't connected to a real person, but it will reflect the patterns in the original data. That is to say, it could be a real person.
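As a toy illustration of this kind of conditional generation (with hand-written probability tables standing in for the ones the real model learns from ACS data - every number here is invented):

```python
import random

# A hand-written two-node fragment: ethnicity -> neighborhood -> income.
P_NEIGHBORHOOD = {
    "Chinese": {"Chinatown": 0.5, "Flushing": 0.4, "Greenwich Village": 0.1},
}
MEAN_INCOME = {  # made-up mean income per neighborhood
    "Chinatown": 32000, "Flushing": 41000, "Greenwich Village": 68000,
}

def sample_simulant(ethnicity, rng):
    """Sample (neighborhood, income) conditioned on ethnicity."""
    dist = P_NEIGHBORHOOD[ethnicity]
    neighborhoods = list(dist)
    hood = rng.choices(neighborhoods, weights=[dist[n] for n in neighborhoods])[0]
    # Income is drawn around the neighborhood's mean.
    income = rng.gauss(MEAN_INCOME[hood], 5000)
    return hood, income

rng = random.Random(42)
hood, income = sample_simulant("Chinese", rng)
```

Each draw gives a different but plausible simulant; the correlations between attributes survive even though no individual drawn this way is real.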

The code we use to generate simulants is available here. This is basically what it does:

>>> from people import generate
>>> year = 2005
>>> generate(year)
{'age': 36,
 'education': <Education.grade_12: 6>,
 'employed': <Employed.non_labor: 3>,
 'wage_income': 3236,
 'wage_income_bracket': '(1000, 5000]',
 'industry': 'Independent artists, performing arts, spectator sports, and related industries',
 'industry_code': 8560,
 'neighborhood': 'Greenwich Village',
 'occupation': 'Designer',
 'occupation_code': 2630,
 'puma': 3810,
 'race': <Race.white: 1>,
 'rent': 1155.6864868468731,
 'sex': <Sex.female: 2>,
 'year': 2005}

This is how we spawned our population of simulants. Because we were also interested in social networks, simulants could become friends with one another. We ripped out the parameters from the logistic regression model (the "confidant model" in the graphic below) described in Social Distance in the United States: Sex, Race, Religion, Age, and Education Homophily among Confidants, 1985 to 2004 (Jeffrey A. Smith, Miller McPherson, Lynn Smith-Lovin, University of Nebraska - Lincoln, 2014) and used that to build out the social graph. This social network determined how illnesses and job opportunities spread.
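The tie-formation step can be sketched as a logistic model over pairwise similarity. The coefficients below are placeholders for illustration, not the published estimates from the Smith et al. confidant model:

```python
import math

# Tie probability from a logistic model over pairwise similarity:
# homophily means similar simulants are more likely to be confidants.
# These coefficients are invented placeholders.
COEFS = {"intercept": -4.0, "same_race": 1.2, "same_sex": 0.4, "age_gap": -0.05}

def friendship_prob(a, b):
    """Probability that simulants a and b become confidants."""
    z = (COEFS["intercept"]
         + COEFS["same_race"] * (a["race"] == b["race"])
         + COEFS["same_sex"] * (a["sex"] == b["sex"])
         + COEFS["age_gap"] * abs(a["age"] - b["age"]))
    return 1 / (1 + math.exp(-z))
```

Sampling an edge for each pair with this probability yields a social graph where demographically similar simulants cluster together - which then shapes how illness and job leads travel.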

The graphic below goes into the simulation design in more detail:

An overview of the simulation design

We designed the dynamics of the world so it could model some of the questions posed earlier. For instance, we modeled production and productive technology to explore the idea of automation. Say it takes 10 labor to produce a good, each worker produces 20 labor, and equipment adds an additional 10 labor. Now say your firm wants to produce 10 goods, which requires 100 labor. If you have no equipment, you would need five workers (5*20=100). However, if you have equipment for each worker, you only need four workers (4*(20+10)=120). (In our simulation, each piece of equipment requires one worker to operate it, so you can't just buy 10 pieces of equipment and not hire anyone). To model a more advanced level of automation, we could instead say that each piece of equipment now produces 100 labor, and now to meet that product quota, we only need one worker (1*(20+100)=120). Then we just hit play and see what happens to the world.
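The arithmetic above reduces to a one-liner (the function and its names are just for illustration):

```python
import math

# The worked example as a function: how many workers does a firm need to
# hit a production quota, given the labor each good requires, each
# worker's output, and the labor added by each (worker-operated) piece
# of equipment?
def workers_needed(goods, labor_per_good, worker_labor, equipment_labor):
    required = goods * labor_per_good
    return math.ceil(required / (worker_labor + equipment_labor))
```

With no equipment the quota of 10 goods needs five workers; with ordinary equipment, four; with hyper-productive equipment (100 labor each), just one.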

An early version of the city

Steering the city

As Ava Kofman points out in "Les Simerables", these kinds of simulations embed their creators' assumptions about how the world does or should work. Discussing the dynamics of SimCity, she notes:

To succeed even within the game’s fairly broad definition of success (building a habitable city), you must enact certain government policies. An increase in the number of police stations, for instance, always correlates to a decrease in criminal activity; the game’s code directly relates crime to land value, population density, and police stations. Adding police stations isn’t optional, it’s the law.

Or take the game’s position on taxes: “Keep taxes too high for too long, and the residents may leave your town in droves. Additionally, high-wealth Sims are more averse to high taxes than low- and medium-wealth Sims.”

The player’s exploration of utopian possibility is limited by these parameters. The imagination extolled by Wright is only called on to rearrange familiar elements: massive buildings, suburban quietude, killer traffic. You start each city with a blank slate of fresh green land, yet you must industrialize.

The landscape is only good for extracting resources, or for being packaged into a park to plop down so as to increase the value of the surrounding real estate. Certain questions are raised (How much can I tax wealthy residents without them moving out?) while others (Could I expropriate their wealth entirely?) are left unexamined.

These assumptions seem inevitable - something has to glue together the interesting parts - but they can be designed to be transparent and mutable. Unlike the assumptions with which we operate daily, these assumptions must be made explicit through code. Keeping all of this in mind, we wanted to make the simulation interactive in such a way that you can alter the fundamental parameters which govern the economy's dynamics. But, while you can tweak numbers of the system, you can't yet change the rules themselves. That's something we'd like to add down the line.

An early idea for player interaction

We played around with a few ideas for making the simulation interactive. At first we thought to have individuals create their own characters, specifying attributes such as altruism and frugality. Then we would run the player's simulant through a year of their life and generate a short narrative about what happened. How the simulant behaves is dependent on the attributes the player input, as well as data-derived environmental factors. One of our goals was to model structural inequality and oppression, so depending on who you are, you may be, for instance, more or less likely to be hired.

World building was a very important component to us. By keeping all player-created individuals as part of the population for future players, the world is gradually shaped to reflect the values of all the people who have interacted with it. (Unfortunately we didn't have time to implement this yet.)

We didn't quite go that route in the end. Because we'll demo the simulation to an audience, we wanted to design for that format - simultaneous participation. In the latest version, players propose and vote on new legislation between each simulated month and try to produce the best outcome (in terms of quality of life) for themselves and/or everyone.

We found in Modeling Complex Systems for Public Policies that this approach is called "participative simulation":

Participative simulation

Visualizing disparity

The simulants in the city are meant to visualize structural inequality, as derived from American Community Survey and New York unemployment data (a full list of data sources is available in the project's GitHub repo). We borrowed from Flatland's hierarchy of shapes and made it so that polygon count correlates with economic status. So in our world, pyramids are unemployed, cubes are employed, and spheres are business owners. The simulant shapes are then colored according to Census race categories. It becomes pretty clear that certain colors have more spheres than others.

Differentiating simulants by shape and color

The city's buildings also convey some information - each rectangular slice represents a different business, and each color corresponds to a different industry (raw material firms, consumer good firms, capital equipment firms, and hospitals). The shifting colors and height of the city becomes an indicator of economic health and priority - as sickness spreads, hospitals spring up accordingly.

Changing tenants in a building

For other indices, we fall back to basic line charts. An important one is the quality of life chart, which gives a general indication about how content the citizens are. Quality of life is computed using food satisfaction (if they can buy as much food - the consumer good in our economy - as they need) and health.

In many scenarios, the city collapses under inflation and gradually slouches into destitution. One by one its simulants blink out of existence. It's really hard to strike the balance for a prosperous city. And the way the economy is organized - market-based, with the sole guiding principle of expanding firm profits - is not necessarily conducive to that kind of success.

Mere speculation

Players can choose from a few baked-in scenarios along the axes of food, technology, and disease:

  • food
    • a bioengineered super-nutritional food is available
    • "regular" food is available
    • a blight leaves only poorly nutritional food
  • technology
    • hyper-productive equipment is available
    • "regular" technology is available
    • a massive solar flare from the sun disables all electronic equipment
  • disease
    • disease has been totally eliminated
    • "regular" disease
    • an extremely infectious and severe disease lurks

A post-scarcity society is one option

So people can see how things play out in a vaguely utopian, dystopian, or "neutral" (closer to our world) setting. Sometimes these scenarios play out as you'd expect - the infectious disease scenario wipes out the population in a month or two - but not always. Hyper-productive equipment, for instance, can lead to misery, unless other parameters (such as government) are collectively adjusted by players.

Where could this go?

These simulations are promising in domains like public policy - with movements like "smart cities", it seems inevitable that this application will become ubiquitous - but their potential is soured by the reality of how policy and technological decisions are made in practice. Technological products tend to reproduce the power dynamics that produced them. As alluded to in Eden Medina's piece, Cybersyn could have easily been implemented as a top-down control system instead of something which the workers actively participated in and took some degree of ownership over.

So it could go either way, really.

To me the value of these simulations is as a means to speculate about what the world could be like, to see how much better (or worse) things might be given a few changes in our behavior or our relationships or our environment. We seem to be reaching a high-water mark of stories about dystopia (present and future), and it has been harder for me to remember that "another world is possible". Our project is not yet radical enough in the societies it can postulate, but we hope that these simulations can serve as a reminder that things could be different and provide a compelling vision of a better world to work towards.

A big thank you to Amelia Winger-Bearskin and the rest of the DBRS Labs folks for their support and to Jeff Tarakajian for answering our questions about the Census!

Learning about learning

As part of my OpenNews fellowship I have been catching up on the theory behind machine learning and artificial intelligence. My fellowship is ending soon, so today I'm releasing the notes from the past year's study to bookend my experience.

To get a good grasp of a topic, I usually went through one or two courses on it (and a lot of supplemental explanations and overviews). Some courses were fantastic, some were real slogs to get through - teaching styles differed significantly.

The best courses have a few things in common:

  1. introduce intuition before formalism
  2. provide convincing motivation before each concept
  3. aren't shy about providing concrete examples
  4. provide an explicit structure of how different concepts are hierarchically related (e.g. "these techniques belong to the same family of techniques, because they all conceptualize things this way")
  5. provide clear roadmaps through each unit, almost in a narrative-like way

The worst courses, on the other hand:

  1. lean too much (or exclusively) on formalism
  2. seem allergic to concrete examples
  3. sometimes take inexplicit/unannounced detours into separate topics
  4. just march through concepts without explaining how they are connected or categorized

Intuition before formalism

In machine learning, how you represent a problem or data is hugely important for successfully teaching the machine what you want it to learn. The same is true, of course, for people. How we encode concepts into language, images, or other sensory modalities is critical to effective learning. Likewise, how we represent problems - that is, how we frame or describe them - can make all the difference in how easily we can solve them. It can even change whether or not we can solve them.

One of the courses I enjoyed the most was Patrick Winston's MIT 6.034 (Fall 2010): Artificial Intelligence. He structures the entire course around the importance of the right representation, and for people, the right representation is often stories (he has a short series of videos, How to Speak, where he explains his lecturing methods - worth checking out).

Many technical fields benefit from presenting difficult concepts as stories and analogies - for example, many concepts in cryptography are taught with "Alice and Bob" stories, and introductory economics courses involve a lot of hypothetical beer and pizza. For some reason concepts are easier to grasp if we anthropomorphize them or otherwise put them into "human" terms, anointing them with volition and motivations (though not without its shortcomings). Presenting concepts as stories is especially useful because the concept's motivation is often clearly presented as part of the story itself, and they function as concrete examples.

Introducing concepts as stories also allows us to leverage existing intuitions and experiences, in some sense "bootstrapping" our understanding of these concepts. George Polya, in his Mathematics and Plausible Reasoning, argues that mathematical breakthroughs begin outside of formalism, in the fuzzy space of intuition and what he calls "plausible reasoning", and I would say that understanding in general also follows this same process.

On the other hand, when concepts are introduced as a dense mathematical formalism, it might make internal sense (e.g. "$a$ has this property so naturally it follows that $b$ has this property"), but it makes no sense in the grander scheme of life and the problems I care about. Then I start to lose interest.

Concept hierarchies

Leaning on existing intuition is much easier when a clear hierarchy of concepts is presented. Many classes felt like, to paraphrase, "one fucking concept after another", and it was hard to know where understanding from another subject or idea could be applied. However, once the hierarchy is made clear, I can build off existing understanding. For instance: if I'm told that this method and that method belong to the family of Bayesian learning methods, then I know that I can apply Bayesian model selection techniques and can build off of my existing understanding of Bayesian statistics. Then I know to look at the method or problem in Bayesian terms, i.e. with the assumptions made in that framework, and suddenly things start to make sense.


Some learning tips

Aside from patience and persistence (which are really important), two techniques I found helpful for better grasping the material were explaining things to others and reading many sources on the same concept.

Explain things to other people

When trying to explain something, you often reveal your own conceptual leaps and gaps and assumptions and can then work on filling them in. Sometimes it takes a layperson to ask "But why is that?" for you to realize that you don't know either.

Most ideas are intuitive if they are explained well. They do not necessarily need to be convoluted and dense (though they are often taught this way!). If you can explain a concept well, you understand it well.

Read a lot

People conceptualize ideas in different ways, relating them to their own knowledge and experiences. A concept really clicks if someone with a similar background to yours explains it because the connections will be more intuitive.

The same concepts are often present in many different fields. They may be referred to by different names or just have different preferred explanations. Seeing a concept in different contexts and introduced in different ways can provide a really rich understanding of it. If you can explain a concept in many different terms, you probably have a strong multi-perspective understanding of it.

Just add water

No Man's Sky (Rock Paper Shotgun)

The go-to rebuttal to increasing automation tends to be something about how creativity could not be emulated by computers, at least not for a while. There's some truth to that statement depending on how you define "creativity", I suppose. The least charitable definition might be synonymous with "content generation", a domain typically exclusive to humans - artists, musicians, writers, and so on - but computers have made some claim to this territory.

The poster child of the automated (i.e. procedural) content generation beachhead is No Man's Sky, which generates a literal universe, complete with stars and beaches and creatures, using interweaving mathematical equations. The universe it generates will reportedly take 5 billion years to explore, and that's just one of those universes. In theory, an infinite number of universes can be generated by setting different seed values (the seed is the value used to determine all other values in the game). This interview with Sean Murray goes a bit more into depth on their approach.

No Man's Sky procedural creatures (via)

These procedurally-generated universes aren't random, though they are designed to give that appearance. They are completely deterministic given the seed value - a critical property, since there needs to be consistency. If you leave a planet, you should not only be able to come back to that planet, but it should be in a similar state to when you left it.
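This determinism is easy to demonstrate: seed a pseudo-random generator and the same "planet" falls out every time, so nothing needs to be stored. The attributes below are invented for illustration:

```python
import random

# Same seed in, same planet out: the world never needs to be saved,
# because the generator can always be replayed from the seed.
def generate_planet(seed):
    rng = random.Random(seed)
    return {
        "radius_km": rng.uniform(2000, 9000),
        "ocean_fraction": rng.random(),
        "n_moons": rng.randint(0, 5),
    }
```

Revisiting a planet is just re-running `generate_planet` with the same seed.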

A big challenge in procedural content generation is figuring out a way of creating things which feel organic and natural - for No Man's Sky, the solution is the impressive-sounding Superformula, developed by Johan Gielis in 2003. In polar coordinates, with radius $r$ and angle $\varphi$, parameterized by $a, b, m, n_1, n_2, n_3$, the Superformula is:

$$ r\left(\varphi\right) = \left( \left| \frac{\cos\left(\frac{m\varphi}{4}\right)}{a} \right| ^{n_2} + \left| \frac{\sin\left(\frac{m\varphi}{4}\right)}{b} \right| ^{n_3} \right) ^{-\frac{1}{n_{1}}}. $$

The above is the formula for two dimensions, but it is easily generalized to higher dimensions as well by using spherical products of superformulas (cf. Wikipedia).
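To make the 2D formula concrete, here's a minimal Python sketch; the parameter values are arbitrary, chosen just to produce a shape:

```python
import numpy as np

def superformula(phi, a=1.0, b=1.0, m=6.0, n1=1.0, n2=7.0, n3=8.0):
    """Radius r(phi) of the 2D Superformula at angle(s) phi."""
    t1 = np.abs(np.cos(m * phi / 4) / a) ** n2
    t2 = np.abs(np.sin(m * phi / 4) / b) ** n3
    return (t1 + t2) ** (-1 / n1)

# sample the resulting shape at 360 angles and convert to Cartesian points
phi = np.linspace(0, 2 * np.pi, 360)
r = superformula(phi)
x, y = r * np.cos(phi), r * np.sin(phi)
```

Varying $m$ changes the number of lobes, while the $n$ exponents control how pinched or bulbous the shape is.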

Some forms resulting from the 2D formula:

2D Superformula-generated shapes (Wikipedia)

Procedural generation is a really beautiful approach to game design. It's not so much about creating a specific experience as about defining the conditions for limitless experiences. No Man's Sky is far from the first to do this, but it is the first (as far as I know) to have all of its content procedurally generated. The game Elite (from 1984!) also had a procedurally-generated universe. Elite's universe was much simpler, of course, but relied on a clever use of the Fibonacci sequence to simulate randomness:

$$ \begin{aligned} x_0 &= \text{seed}_0 \\ x_1 &= \text{seed}_1 \\ x_n &= x_{n-1} + x_{n-2} \end{aligned} $$

And then taking the last few digits of values generated from this sequence.

For example, take the Fibonacci sequence:

$$ 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, \dots $$

Let's jump ahead in the sequence:

$$ 1597, 2584, 4181, 6765, 10946, 17711, 28657, \dots $$

If we just look at the last three digits, the resulting sequence looks quite random:

$$ 597, 584, 181, 765, 946, 711, 657, \dots $$
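The whole trick fits in a few lines of Python (seeding with 0 and 1 and skipping ahead to 1597, as above):

```python
def last_digits_sequence(seed_a, seed_b, skip, count):
    """Pseudo-random-looking values: the last three digits of a
    Fibonacci-style sequence started from the given seeds."""
    a, b = seed_a, seed_b
    for _ in range(skip):        # jump ahead in the sequence
        a, b = b, a + b
    values = []
    for _ in range(count):
        values.append(b % 1000)  # keep only the last three digits
        a, b = b, a + b
    return values

print(last_digits_sequence(0, 1, 16, 7))  # [597, 584, 181, 765, 946, 711, 657]
```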

The great thing about this is that these universes can be shared by sharing seed values.

Elite's procedurally-generated universe (Gamasutra)

Procedural content generation is interesting, but other games focus on "procedural" stories. These games have (some) manually generated content - creatures, buildings, etc - but focus on designing complex systems which form the bedrock of the game's ability to spawn wild and fantastic stories. Dwarf Fortress and RimWorld are two great examples - essentially fantasy and sci-fi world simulators (respectively) that model things like each individual's mental health, weather patterns, crop growth, and so on. No one writes the stories ahead of time; no one puts a premeditated experience in place for you - it's all off-the-cuff, emerging from the dynamics of the game's rules.

RimWorld (via)

The stories that come out of these games are amazing. Dwarf Fortress has an especially vibrant community based around the often absurd things that happen in game (for example, see r/dwarffortress, or for a more academic treatment, see Josh Diaz's thesis).

With the independent game industry continually expanding, I think we'll see more of these kinds of games. They can be developed by relatively small teams yet have the richness and depth (often more, in fact) of a big-studio game.

coral metrics sketch

As part of the Coral Project, we're trying to come up with some interesting and useful metrics about community members and discussion on news sites.

It's an interesting exercise to develop metrics which embody an organization's principles. For instance - perhaps we see our content as the catalyst for conversations, so we'd measure an article's success by how much discussion it generates.

Generally, there are two groups of metrics that I have been focusing on:

  • Asset-level metrics, computed for individual articles or whatever else may be commented on
  • User-level metrics, computed for individual users

For the past couple of weeks I've been sketching out a few ideas for these metrics:

  • For assets, the principles that these metrics aspire to capture are around quantity and diversity of discussion.
  • For users, I look at organizational approval, community approval, how much discussion this user tends to generate, and how likely they are to be moderated.

Here I'll walk through my thought process for these initial ideas.

Asset-level metrics

For assets, I wanted to value not only the amount of discussion generated but also the diversity of discussions. A good discussion is one in which there's a lot of high-quality exchange (something else to be measured, but not captured in this first iteration) from many different people.

There are two scores to start:

  • a discussion score, which quantifies how much discussion an asset generated. This looks at how much people are talking to each other as opposed to just counting up the number of comments. For instance, a comments section in which all comments are top-level should not have a high discussion score. A comments section in which there are some really deep back-and-forths should have a higher discussion score.
  • a diversity score, which quantifies how many different people are involved in the discussions. Again, we don't want to look at diversity in the comments section as a whole because we are looking for diversity within discussions, i.e. within threads.

The current sketch for computing the discussion score is via two values:

  • maximum thread depth: how long is the longest thread?
  • maximum thread width: what is the highest number of replies for a comment?

These are pretty rough approximations of "how much discussion" there is. The idea is that for sites which only allow one level of replies, a lot of replies to a comment can signal a discussion, and that a very deep thread signals the same for sites which allow more nesting.
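In code, assuming each thread is represented as a nested dict where every comment has a 'replies' list (a hypothetical structure, not necessarily the project's actual one), these two values could be computed as:

```python
def max_thread_depth(thread):
    """Length of the longest chain of replies, counting the root comment."""
    replies = thread.get('replies', [])
    if not replies:
        return 1
    return 1 + max(max_thread_depth(r) for r in replies)

def max_thread_width(thread):
    """Highest number of direct replies to any single comment in the thread."""
    replies = thread.get('replies', [])
    width = len(replies)
    for r in replies:
        width = max(width, max_thread_width(r))
    return width
```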

The discussion score of a top-level thread is the product of these two intermediary metrics:

$$ \text{discussion score}_{\text{thread}} = \max(\text{thread}_{\text{depth}}) \max(\text{thread}_{\text{width}}) $$

The discussion score for the entire asset is the value that answers this question: if a new thread were to start in this asset, what discussion score would it have?

The idea is that if a section is generating a lot of discussion, a new thread would likely also involve a lot of discussion.

The nice thing about this approach (which is similar to the one used throughout all these sketches) is that we can capture uncertainty. When a new article is posted, we have no idea how good of a discussion a new thread might be. When we have one or two threads - maybe one is long and one is short - we're still not too sure, so we still have a fairly conservative score. But as more and more people comment, we begin to narrow down on the "true" score for the article.

More concretely (skip ahead to be spared of the gory details), we assume that this discussion score is drawn from a Poisson distribution. This makes things a bit easier to model because we can use the gamma distribution as a conjugate prior.

By default, the gamma prior is parameterized with $k=1, \theta=2$ since it is a fairly conservative estimate to start. That is, we begin with the assumption that any new thread is unlikely to generate a lot of discussion, so it will take a lot of discussion to really convince us otherwise.

Since this gamma-Poisson model will be reused elsewhere, it is defined as its own function:

import numpy as np
from scipy import stats

def gamma_poisson_model(X, n, k, theta, quantile):
    # posterior shape: observed values added to the prior shape
    k = np.sum(X) + k
    # posterior scale
    t = theta/(theta*n + 1)
    # conservative point estimate: the given quantile of the posterior
    return stats.gamma.ppf(quantile, k, scale=t)

Since the gamma is a conjugate prior here, the posterior is also a gamma distribution with easily-computed parameters based on the observed data (i.e. the "actual" top-level threads in the discussion).

We need an actual value to work with, however, so we need some point estimate of the expected discussion score. We don't want to use the posterior mean, since that may be too optimistic a value, especially if we only have a couple of top-level threads to look at. So instead we take the lower bound of the 90% credible interval (the 0.05 quantile) as a more conservative estimate.

So the final function for computing an asset's discussion score is:

def asset_discussion_score(threads, k=1, theta=2):
    X = np.array([max_thread_width(t) * max_thread_depth(t) for t in threads])
    n = len(X)

    return {'discussion_score': gamma_poisson_model(X, n, k, theta, 0.05)}

A similar approach is used for an asset's diversity score. Here we ask the question: if a new comment is posted, how likely is it to be posted by someone new to the discussion?

We can model this with a beta-binomial model; again, the beta distribution is a conjugate prior for the binomial distribution, so we can compute the posterior's parameters very easily:

def beta_binomial_model(y, n, alpha, beta, quantile):
    # posterior parameters: successes and failures added to the prior
    alpha_ = y + alpha
    beta_ = n - y + beta
    # conservative point estimate: the given quantile of the posterior
    return stats.beta.ppf(quantile, alpha_, beta_)

Again we start with conservative parameters for the prior, $\alpha=2, \beta=2$, and then compute using threads as evidence:

def asset_diversity_score(threads, alpha=2, beta=2):
    X = set()
    n = 0
    for t in threads:
        users, n_comments = unique_participants(t)
        X = X | users
        n += n_comments
    y = len(X)

    return {'diversity_score': beta_binomial_model(y, n, alpha, beta, 0.05)}
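The unique_participants helper isn't shown here; a minimal sketch, again assuming a hypothetical nested-dict thread in which each comment carries a 'user' id and a 'replies' list:

```python
def unique_participants(thread):
    """Return (set of user ids, total comment count) for a thread."""
    users = {thread['user']}
    n = 1
    for reply in thread.get('replies', []):
        reply_users, reply_count = unique_participants(reply)
        users |= reply_users
        n += reply_count
    return users, n
```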

Then averages for these scores are computed across the entire sample of assets in order to give some context as to what good and bad scores are.

User-level metrics

User-level metrics are computed in a similar fashion. For each user, four metrics are computed:

  • a community score, which quantifies how much the community approves of them. This is computed by trying to predict the number of likes a new post by this user will get.
  • an organization score, which quantifies how much the organization approves of them. This is the probability that a post by this user will get "editor's pick" or some equivalent (in the case of Reddit, "gilded", which isn't "organizational" but holds a similar revered status).
  • a discussion score, which quantifies how much discussion this user tends to generate. This answers the question: if this user starts a new thread, how many replies do we expect it to have?
  • a moderation probability, which is the probability that a post by this user will be moderated.

The community score and discussion score are both modeled as gamma-Poisson models using the same function as above. The organization score and moderation probability are both modeled as beta-binomial models using the same function as above.
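For example, the moderation probability could look like this (a sketch with the beta-binomial computation inlined so it stands alone; the counts passed in are hypothetical):

```python
from scipy import stats

def user_moderation_prob(n_moderated, n_posts, alpha=2, beta=2):
    """Conservative estimate (0.05 quantile of the beta posterior) of the
    probability that a new post by this user will be moderated."""
    return stats.beta.ppf(0.05, n_moderated + alpha, n_posts - n_moderated + beta)
```

A user with 8 of their last 10 posts moderated scores far higher than one with 1 of 10, but neither estimate is pinned to the raw ratio until there's enough evidence.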

Time for more refinement

These metrics are just a few starting points to shape into more sophisticated and nuanced scoring systems. There are some desirable properties missing, and of course, every organization has different principles and values, and so the ideas presented here are not one-size-fits-all, by any means. The challenge is to create some more general framework that allows people to easily define these metrics according to what they value.

Machine Learning 101

  • 11.10.2015
  • etc

This post is an adaptation from a talk given at artsec and a workshop given at a few other places (Buenos Aires Media Party 2015, MozFest 2015). The goal is to provide an intuition on machine learning for people without advanced math backgrounds - given that machine learning increasingly rules everything around us, everyone should probably have some mental model of how it works.

The materials for the workshop are available on GitHub.

What is "machine learning" anyways?

There are plenty of introductory machine learning tutorials and courses available, but they tend to focus on cursory tours of algorithms - here's linear regression, here's SVMs, here's neural networks, etc - without building an intuition for what machine learning is actually doing under the hood. This intuition is obvious to practitioners but not to people new to the field. To get the full value of the other machine learning resources out there, that intuition really helps.

But first, to clarify what machine learning is: a basic working definition could be "algorithms for computers to learn patterns". Pattern recognition is something people are very good at but that computers find difficult.

Here we'll walk through a very simple machine learning task which is prototypical of many real-world machine learning problems. First we'll go through it by hand, noting where our human superpower of pattern recognition comes into play, and then think about how we can translate what we did into something a computer is capable of executing.

Learning functions by hand

A common machine learning goal is to take a dataset and learn the pattern (i.e. relationship) between different variables of the data. Then that pattern can be used to predict values of those variables for new data.

Consider the following data:

Some data

If I were to ask you to describe a pattern you see in that data, you'd likely draw a line. It's quite obvious to us that, even though the data points don't fall exactly on a line, a line seems to satisfactorily represent the data's pattern.

But a drawn line is no good - what are we supposed to do with it? It's hard to make use of that in a program.

A better way of describing a pattern is as a mathematical equation (i.e. a function). In this form, we can easily plug in new inputs to get predicted outputs. So we can restate our goal of learning a pattern as learning a function.

You may remember that lines are typically expressed in the form:

$$ y = mx + b $$

As a refresher: $y$ is the output, $x$ is the input, $m$ is the slope, and $b$ is the intercept.

Lines are uniquely identified by values of $m$ and $b$:

Some lines

Variables like $m$ and $b$ which uniquely identify a function are called parameters (we also say that $m$ and $b$ parameterize the line). So finding a particular line means finding particular values of $m$ and $b$. So when we say we want to learn a function, we are really saying that we want to learn the parameters of a function, since they effectively define the function.

So how can we find the right values of $m, b$ for the line that fits our data?

Trial-and-error seems like a reasonable approach. Let's start with $m=12, b=2$ (these values were picked arbitrarily, you could start with different values too).

Trying $m=12, b=2$

The line is still quite far from the data, so let's try lowering the slope to $m=8$.

Trying $m=8, b=2$

The line's closer, but still far from the data, so we can try lowering the slope again. Then we can check the resulting line, and continue adjusting until we feel satisfied with the result. Let's say we carry out this trial-and-error and end up with $m=4.6, b=40$.

Trying $m=4.6, b=40$

So now we have a function, $y = 4.6x + 40$, and if we got new inputs we could predict outputs that would follow the pattern of this dataset.

But this was quite a laborious process. Can we get a computer to do this for us?

Learning functions by computer

The basic approach we took by hand was just to:

  1. Pick some starting values for $m$ and $b$
  2. Compare the resulting line to the data to see if it's a good fit
  3. If it's not a good fit, adjust $m$ and $b$ and go back to 2

This is a fairly repetitive process and so it seems like a good candidate for a computer program. There's one snag though - we could easily eyeball whether or not a line was a "good" fit. A computer can't eyeball a line like that.

So we have to get a bit more explicit about what a "good" fit is, in terms that a computer can make use of. To put it another way, we want the computer to be able to calculate how "wrong" its current guesses for $m$ and $b$ are. We can define a function to do so.

Let's think for a moment about what we would consider a bad-fitting line. The further the line was from the dataset, the worse we consider it. So we want lines that are close to our datapoints. We can restate this in more concrete terms.

First, let's notate our line function guess as $f(x)$. Each datapoint we have is represented as $(x, y)$, where $x$ is the input and $y$ is the output. For a datapoint $(x, y)$, we predict an output $\hat y$ based on our line function, i.e. $\hat y = f(x)$.

We can look at each datapoint and calculate how far it is from our line guess with $y - \hat y$. Then we can combine all these distances (which are called errors) in some way to get a final "wrongness" value.

There are many ways we can do this, but a common way is to square all of these errors (only the magnitude of the error is important) and then take their mean:

$$ \frac{\sum (y - \hat y)^2 }{n} $$

Where $n$ is the number of datapoints we have.

It also helps to think about this as:

$$ \frac{\sum (y - f(x))^2 }{n} $$

To make it a bit clearer that the important part here is our guess at the function $f(x)$.
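In code, this loss is only a few lines (a sketch using NumPy; the sample points are made up to lie exactly on $y = 2x$):

```python
import numpy as np

def mse_loss(m, b, x, y):
    """Mean squared error of the line y = m*x + b against data points (x, y)."""
    y_hat = m * x + b                 # predicted outputs
    return np.mean((y - y_hat) ** 2)  # mean of the squared errors

# points lying exactly on the line y = 2x
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 2.0, 4.0])
```

A perfect guess ($m=2, b=0$) gives a loss of 0; nudging $b$ up to 1 shifts every prediction off by 1 and gives a loss of 1.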

A function like this which calculates the "wrongness" of our current line guess is called a loss function (or a cost function). Clearly, we want to find a line (that is, values of $m$ and $b$) which are the "least wrong".

Another way of saying this is that we want to find parameters which minimize this loss function. This is basically what we're doing when we eyeball a "good" line.

When we guessed the line by hand, we just iteratively tried different $m$ and $b$ values until it looked good enough. We could use a similar approach with a computer, but when we did it, we again could eyeball how we should change $m$ and $b$ to get closer (e.g. if the line is going above the datapoints, we know we need to lower $b$ or lower $m$ or both). How then can a computer know in which direction to change $m$ and/or $b$?

Changing $m$ and $b$ to get a line that is closer to the data is the same as changing $m$ and $b$ to lower the loss. It's just a matter of having the computer figure out in what direction changes to $m$ and/or $b$ will lower the loss.

Remember that a derivative of a function tells us the rate of change at a specific point in that function. We could compute the derivative of the loss function with respect to our current guesses for $m$ and $b$. This will inform us as to which direction(s) we need to move in order to lower the loss, and gives us new guesses for $m$ and $b$.

Then it's just a matter of repeating this procedure - with our new guesses for $m$ and $b$, use the derivative of the loss function to figure out what direction(s) to move in and get new guesses, and so on.

To summarize, we took our trial-and-error-by-hand approach and turned it into the following computer-suited approach:

  1. Pick some random values for $m$ and $b$
  2. Use a loss function to compare our guess $f(x)$ to the data
  3. Determine how to change $m$ and $b$ by computing the derivative of the loss function with respect to them
  4. Go back to 2 with the new values and repeat until the loss can't get any lower (or until it's low enough for our purposes)
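This loop can be sketched end-to-end (synthetic data on the $y = 4.6x + 40$ line from earlier; the starting guesses and learning rate are arbitrary, and the gradients of the mean squared error are derived by hand):

```python
import numpy as np

def fit_line(x, y, lr=0.02, steps=5000):
    m, b = 0.0, 0.0  # arbitrary starting guesses
    n = len(x)
    for _ in range(steps):
        y_hat = m * x + b
        # derivatives of the mean squared error with respect to m and b
        dm = (-2 / n) * np.sum(x * (y - y_hat))
        db = (-2 / n) * np.sum(y - y_hat)
        # step both parameters in the direction that lowers the loss
        m -= lr * dm
        b -= lr * db
    return m, b

# synthetic data drawn from the line in the earlier example
x = np.linspace(0, 10, 50)
y = 4.6 * x + 40
m, b = fit_line(x, y)  # recovers m ≈ 4.6, b ≈ 40
```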

A lot of machine learning is just different takes on this basic procedure. If it had to be summarized in one sentence: an algorithm learns some function by figuring out parameters which minimize some loss function.

Different problems and methods involve variations on these pieces - different loss functions, different ways of changing the parameters, etc - or include additional flourishes to help get around the problems that can crop up.

Beyond lines

Here we worked with lines, i.e. functions of the form $y = mx + b$, but not all data fits neatly onto lines. The good news is that this general approach applies to sundry functions, both linear and nonlinear. For example, neural networks can approximate any function, from lines to really crazy-looking and complicated functions.

To be honest, there is quite a bit more to machine learning than just this - figuring out how to best represent data is another major concern, for example. But hopefully this intuition will help provide some direction in a field which can feel like a disconnected parade of algorithms.