Culture: A Social Network Simulator

This is a proposal for Culture, a social network simulator designed to teach students about bot development.

This proposal was originally developed for a class on "news bots" I was scheduled to teach in the fall of 2017 (I ended up having a conflict and was unable to teach it). I wanted students to not only explore the impact of bots from a theory perspective, but also engage hands-on to see just how radically influential these bots are on social media platforms.

And not only bots. Ideally students would take on the roles of other actors in social media ecosystems, such as a "traditional" media publication, an advertiser, a political candidate, an influencer, or even the platform itself, making decisions around aspects such as the newsfeed algorithm.

Unfortunately, there are a number of challenges that make hands-on experience infeasible with live social networks:

  • Ethical concerns. For example, many bots are meant to deceive and manipulate, and we'd be working with real user data.
  • Issues of access. For example, rate-limiting and limited access to data. For privacy reasons APIs generally don't provide sensitive user data to developers, though some such data may be provided to advertisers. And of course, with live social networks there isn't a way for students to change the newsfeed algorithms for the entire network.
  • Limits of reality. For example, a student can't magically become an influencer on Twitter, but in a simulated setting, they can.

There are also some technical obstacles, namely that students taking the class weren't required to have any programming background and I didn't want to spend too much time on introductory programming lessons. Even if students were fairly experienced in programming, working with bots has a lot of advanced challenges, such as dealing with natural language. A simulated social network can be simplified so that these problems are easier to deal with.

This proposal doesn't really have a strong advertising component. After speaking with Irwin Chen about it, I realized it's a pretty big omission. So an updated proposal will include all of that: selling ads, ad targeting, ad exchanges, etc. It's not an area I know well, so I'd have to speak with some people and do some research before sketching that out.

Overview

Culture will be an agent-based simulation of a simple social network modeled off of Twitter. As such, the simulation will consist of the following (each part is elaborated further below):

  • users communicate in a rudimentary language
  • users have different personalities
  • each user will have a feed of messages from people they follow, which may also include promoted/ad messages
    • here students can potentially design their own news feed algorithms and see how that affects individual/public opinion
  • users can message, post media, block, be blocked, be banned, follow, unfollow
  • messages and media influence users
  • the network responds to and affects outside events

Motivation

So much of our exposure to and understanding of the world beyond our immediate experience is mediated by social networks, which is to say by newsfeed algorithms and other individual users of these networks. Students should develop a stronger literacy in these dynamics if they are to adequately navigate this information ecology.

This literacy is best developed by direct interaction with these social networks, such as Twitter or Facebook, rather than through theory alone. However, working directly with these networks may be impractical in that they are massive, closed-source, and limited in access. For instance, due to API limits it is impossible to survey or conduct analyses of the entire population of the network, or to examine in detail its inner operations.

Furthermore, there is no room for counterfactual speculation in these existing social networks. For example, we can't intervene and change the behaviors of all users and see how information propagation changes as a result. This limits the pedagogical value of working directly with, for example, Twitter or Facebook.

A simulated social network addresses these concerns. It can be designed to model the dynamics of its real counterparts, it can be entirely open in that students have access to all the network's data, and its parameters can be tweaked to see how information propagation evolves under different circumstances. Students can develop bots on this network without worrying about API limits, spam protection, and so on. In contrast to the black-box nature of a real social network, a simulated social network functions more like a sandbox.

Agents

The simulated agents are individual users of the social network. They are randomly generated to have particular personalities and interests (see below). Their generation is part of the simulation's initialization. Students do not directly interact with these agents, but can indirectly interact with them via, for example, ads and bots they create (see below).

Language

Dealing with natural language is difficult even for experienced developers and advanced researchers in the topic. Broadly, the problem of natural language in the context of bots can be described in two parts: understanding and generation. Both are very difficult and beyond the scope of the courses this simulation is designed for, which include introductory classes.

To avoid dealing with natural language, the simulation will use a very basic grammar and a relatively small vocabulary which can be easily expanded as needed. Because of its relative simplicity, the same natural language processing techniques that are currently used for "real" languages can also be applied, but with greater success; better yet, simpler heuristics will go a long way. Thus students will not need to have a deep understanding of, for example, word vectors or TF-IDF, but may develop their own simpler techniques that will still be effective.

This simpler language will consist of verbs, nouns, and modifiers (adjectives and adverbs) (collectively, "terms"). Because the courses are assumed to be taught in English, this language will be reflective of English.

These terms are combined into formal propositional statements, e.g. single-payer-healthcare + country -> < freedom, which expresses the opinion that implementing single payer health care in this country will cause (->) a loss (<) of freedom. (This is just a sketch of the syntax; it's subject to change).

This is a bit limiting - there is no room for poetics, for instance - but it will provide a strong starting point that can be expanded on later.
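
To make this concrete, here's a minimal sketch of how such a statement might be represented internally, assuming a Python implementation (the field names are placeholders, since the syntax itself is still in flux):

from dataclasses import dataclass

@dataclass
class Proposition:
    subject: str   # e.g. 'single-payer-healthcare'
    context: str   # e.g. 'country'
    effect: str    # '<' for a loss, '>' for a gain
    target: str    # e.g. 'freedom'

statement = Proposition('single-payer-healthcare', 'country', '<', 'freedom')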

The design of this language will involve developing a network of terms (i.e. defining term associations), such that terms represent mixtures of other terms and values in the simulation (e.g. individuality/collectivism, see "Personalities" below). This term association network is opaque to the students; they do not get to see what these terms mean to the agents in the simulation. As with the real world, they must use algorithms or their own intuition from observing the network to determine what language best communicates their messages.

For example: the term "car" may be connected to the terms "individuality" and "freedom" to establish that "car" symbolically evokes these two ideas. We could then imagine that ads for "cars" appeal more to agents whose personalities align with those concepts, relative to agents who align more with, for example, "collectivity".

Terms also have sentiment valences, e.g. "bad" may have a valence of -0.5 to express a negative opinion, whereas "terrible" may have a stronger valence of -0.8, and so on.
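
Here's a rough sketch of how the term association network and valences might be structured (the data shapes and all the numbers are placeholder assumptions):

TERMS = {
    'car':      {'evokes': {'individuality': 0.8, 'freedom': 0.6}, 'valence': 0.0},
    'bad':      {'evokes': {}, 'valence': -0.5},
    'terrible': {'evokes': {}, 'valence': -0.8},
}

def resonance(message_terms, agent_values):
    # rough score of how well a message's terms align with an agent's values
    score = 0.0
    for term in message_terms:
        for value, weight in TERMS.get(term, {'evokes': {}})['evokes'].items():
            score += weight * agent_values.get(value, 0.0)
    return score

# an individualist-leaning agent responds more to "car" messaging:
print(resonance(['car'], {'individuality': 1.0, 'collectivism': -1.0}))   # 0.8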

Ideally, this term association network is not objective but rather subjective; i.e. differs depending on the particular agent. For example, the term "freedom" may be associated with different values for one agent than for another. However, it is likely that this will be computationally infeasible (though some kind of heuristics could be developed to simplify it).

This term association network also changes over time as terms are used in slightly different contexts. This provides a way for the meaning of terms to change or be entirely inverted, e.g. a negative term being co-opted as a positive identifying term for a group.

The language is the part of the simulation that will require the most care in designing - it needs to represent important aspects of how language is used in social networks (e.g. to express opinion/judgement, to harass/abuse, to make propositional statements, etc).

Personalities

Simulated agents will have what could loosely be described as "personalities"; that is, a set of parameters that determines how the agent interacts with others (e.g. aggressiveness/friendliness, within-bubble/outside-bubble, etc) and what their values are (e.g. conservative/progressive, individualist/collectivist, etc). These personalities will be generated randomly, via a Bayes Net (or some similar probabilistic model) that will be editable in some way. A model like a Bayes Net lets us describe assumed relationships between values (e.g. more collectivist agents are more likely to be friendly).
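
As a sketch of the idea - not a real Bayes Net, just conditional sampling in its spirit, with made-up numbers:

import random

def sample_personality():
    # a two-node "Bayes Net": collectivism is a root variable, and
    # friendliness is conditioned on it (the example dependency above)
    collectivism = random.uniform(-1, 1)    # -1 individualist .. +1 collectivist
    friendliness = max(-1.0, min(1.0, random.gauss(0.3 * collectivism, 0.5)))
    return {'collectivism': collectivism, 'friendliness': friendliness}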

These personalities also determine who agents tend to interact with (under principles of homophily, i.e. like attracts like) and also what kind of messaging resonates with them (e.g. messages about rugged individuality will resonate more with individualist agents).

Messages

"Messages" are the equivalent of Twitter's tweets. Agents compose their own messages based on their personalities and who they are interacting with. Messages may affect an agent's mood and also their personality (see below).

Media

Text is not the only important part of a social network - memes and other media (news stories, videos, etc) form a crucial part of their information flow.

States

Agents' states include their personalities, in addition to other attributes like mood and use frequency (how often they visit the social network) and post frequency (how often they post messages). Mood may affect, for example, how agents interact with other agents (e.g. with more or less hostility). This can be used to model emotion contagion.

Influence

Based on who they interact with and what other messaging they are exposed to (e.g. targeted ads), the personalities (traits/opinions) of an agent may shift over time. Various social phenomena, e.g. bipolarization, can be modeled here.

Events

Social networks are not closed systems; they do not exist in isolation. The "outside" world affects what goes on in the network, just as what goes on in the network can spill out and affect the outside world.

Part of the simulation will support external events (also simulated) that affect and can be affected by the social network, such as an election. These outside events affect which topics are popular (i.e. relevant topics that agents are more likely to talk about and respond to), and they can be defined to have some relationship to the shape of discourse in the network.

Social Network

In this section, "social network" is used not to refer to the platform itself, but to the actual network of relationships between users (expressed by "following" relationships). Some users may be highly connected (many followers), and students may, for example, as part of their strategy (whether for ads or opinion influence) try to target these opinion leaders.

Visualization

It will likely be too computationally taxing to display all activity on the social network, but students will have access to various views that provide summaries (i.e. mean sentiment towards some topic, number of users talking about a topic, etc). Ideally an API can be provided like a real social network, so that students can build their own visualizations as part of their bot development process, but this may be limited by the size of the simulation.

Bot API

A simple API will be provided for students to develop their own bots that interact with this network. These bots can follow, be followed, message, etc like agents can and will be the primary way students interact with the social network.

What distinguishes bots from simulated agents is that bots are designed and controlled by students, whereas the simulated agents represent "real" users of the network.
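
As a purely hypothetical sketch, student-facing bot code might look something like this (none of these names or methods are final):

class Bot:
    # hypothetical base class provided by the simulator
    def follow(self, user): ...
    def post(self, terms): ...
    def on_message(self, msg):
        # called by the simulation whenever a message appears in the feed
        pass

class FreedomBot(Bot):
    def on_message(self, msg):
        # follow anyone whose message mentions "freedom" and echo it back
        if 'freedom' in msg.terms:
            self.follow(msg.author)
            self.post(msg.terms)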

Learning Objectives

The network functions as a simplified social landscape for students to understand how ads, bots, and news feed algorithms affect opinion, trends, and discussion on a social network, and how that links up with broader spheres of discourse outside of the network. Some students may, for example, design bots that influence opinion in a certain direction, while others may design bots to influence opinion in a different direction, while still others may design bots that root out these interfering bots. Depending on how the network is designed, some students can be the managers of the social network platform.

The goal is for students to develop a comprehensive mental model about the dynamics of social media and communication in the internet age, to peek "behind the curtain" and develop a critical perspective when using social media and reading the news (i.e. develop social media literacy).

Extensions

In theory this simulated social network can be extended with features that could be present on any social network, such as anonymous accounts, different kinds of blocking and muting functionality, and so on. Thus it can also be a place where students can experiment with new features to see how that affects dynamics on the network.

Variants

Ideally the simulator accommodates students who are comfortable with programming and those who aren't.

For students who aren't, bot templates could be provided which require little to no programming experience, or another layer can be developed where they "purchase" bot, marketing, and other services that run automatically.

If there are multiple classes going on, they can all work from the same simulation and take on different roles. If one class is focused on advertising, they can take on roles of the advertising ecosystem, while in another class perhaps they collectively take on the role of the platform. The potential for cross-class interactivity is exciting.


7x7 Cutting Room Floor

I was fortunate enough to participate in this year's edition of Rhizome's Seven on Seven with Sean Raspet. The event pairs an artist and a technologist and gives some limited time for the pair to come up with and implement a concept or project. In previous editions pairs only had a day or so; this time we had about a month.

It was enough time to churn through several ideas that never made it to the final presentation. We landed on producing a white paper proposing the use of blockchain-based distributed computing to collectively simulate a complete human cell at the atomic level, starting with something relatively simple like a red blood cell. A human cell might have hundreds of trillions of atoms, so simulating one at atomic resolution is basically infeasible with existing computational resources. But it is more feasible now than it was maybe a decade ago.

Through our research we came across some staggering statistics about the computing power of the Bitcoin network, namely that it is estimated to have an aggregate computing power of 80.7 zettaFLOPS (80.7 million petaFLOPS) as of May 2018. The world's reigning supercomputer, the Sunway TaihuLight, has a theoretical peak of 125 petaFLOPS. The Folding@Home network, which enables people to donate spare computing power for protein folding simulations, had an aggregate power of about 100 petaFLOPS in January 2018. Not bad for a volunteer distributed network, but still far off from the Bitcoin network. There are more details in the white paper, but those numbers stuck out.

Anyways, we went through a few ideas before we landed on this white paper. Our first focus was on the phosphorus commodity market in relation to "peak phosphorus". This was something Sean had been researching for some time, and for the past few months I'd been poking around the agri-tech scene, so I was naturally drawn to it as a topic. The gist is that phosphorus is a mineral crucial to agriculture, a key component in fertilizers (along with nitrogen and potassium; the history of nitrogen fertilizer is a very interesting and troubling one), and is basically a non-renewable resource (some can be recovered from waste, but I'm not sure what percentage is recoverable). At some point in the relatively near future phosphorus extraction may become too expensive or difficult, and that could lead to some serious food security crises. So we were thinking of various ways to represent this issue. Here are a few ideas we played around with.

Global phosphorus simulation

Phosphorus, like any resource-extractive industry, is global. We wanted to be able to convey geopolitical issues like Morocco's occupation of Western Sahara, which is where Morocco mines its phosphorus. The most straightforward way to do something like that is a 4X-style global simulation, so we played around with that first.

I designed a little framework for laying out a hex-based map (similar to the cartog library I created for my Simulation & Cybernetics class, but I wanted to support 3D):

3D hex-based maps

That's about as far as we got in terms of implementation. But the general idea was that we'd model the dynamics of the global phosphorus market, with some shocks and random events, and projections of changes in relevant indicators like growth rates, meat consumption rates, and so on. And somehow you'd see these effects on this map and through changes in the price of commodity phosphorus.

I didn't want this hex map to be the only "output" of the simulation. We wanted to show that the macro-level dynamics of the phosphorus market are intimately connected to the health of individual plants, and so I wanted to set up an automated growing system as a more material visualization. The system would be hydroponic or aeroponic, with a phosphorus nutrient pump that releases more or less phosphorus depending on its simulated price. As peak phosphorus approaches, the plant's health starts to deteriorate as it manifests symptoms of phosphorus deficiency. There were a few issues here, namely that 1) it's a pretty big task to set up such a growing system, and 2) the changes in the plant's health would happen over long time scales relative to the simulation (e.g. one simulation year might run in one real minute, and the impacts on the plant's health might not be visible for a few real days).

Phosphorus deficiency in corn

An extension of this idea we considered: we'd reserve some set amount of funds for the plant, and it would actually have to "purchase" phosphorus from the nutrient reservoir on its own.

Commodity traders vs food consumers

For a while I've wanted to make an asymmetric game which consists of two separate games that are at first glance unrelated. For example, on one side of the room is a relatively innocuous-looking life simulator game where you have to e.g. buy a house and care for your family. On the other side of the room is a stock market game where you just try to earn the highest return on your investments. What isn't apparent at first is that the actions of the player in the stock market game directly affect how difficult the life-simulator game is, for example by triggering financial crises or affecting house prices.

We briefly considered doing something along these lines. The idea was that when we presented, we'd direct audience members to a website where they could join our phosphorus game. Some audience members would be redirected to the "commodity trader" version of the game, while others would instead be redirected to the "food consumer" version.

The commodity trader game is basically the same as the stock market game, except just for phosphorus trading.

Commodity trader interface

The food consumer game is built around a "basket", like a simplified version of a consumer price index focused on products especially affected by phosphorus prices. As a player you'd have some nutritional requirements to meet or some other purchasing obligations and some weekly budget with which to buy food. We didn't really get far enough to thoroughly think through the mechanics.

Food consumer interface

I did have a really fun time modeling the food:

Food models

Plant care Tamagotchi

Riffing off the plant-as-visualization idea, we also toyed around with the idea of some kind of plant-tamagotchi. You'd have to manage its water and phosphorus needs by doing some sort of trading or other gameplay. I can't really remember how far we got with the design.

Plant care

I did enjoy making this wilting animation though:

Plant wilting

Physics-based food thing

I honestly can't remember what the concept was for this. The most I can recall is that we discussed a system where you could rapidly click on some food or raw material objects to create derivative objects (such as beef and milk from a cow) and that somehow we'd connect that to the relative use of phosphorus in these products. For example, a cow requires a lot of feed which requires a lot of phosphorus, which results in a less efficient phosphorus-to-calorie ratio than if you had just eaten the feed grains yourself. I think I was really just excited about making something physics-based.

Picking up objects
Tapping on objects for derivatives
Bouncing around

I'll definitely use this again for a different project.


Scripts I Have Known And Loved

I've been using Linux as my main driver (Ubuntu 14.04, recently and catastrophically upgraded to 16.04, with no desktop environment; I'm using bspwm as my window manager) for about two years now. It's been challenging and frustrating, but ultimately rewarding -- the granular control is totally worth it.

Over these two years I've gradually accumulated a series of Bash and Python scripts to help me work quickly and smoothly. They generally operate by two principles: accessible from anywhere and usable with as few keystrokes as possible.

All the scripts are available in my dotfiles repo in the bin folder.

Screenshots and recordings

A few of the scripts are devoted to taking screenshots (shot) or screen recordings (rec). This is a basic feature in OSX (and many Linux desktop environments, afaik), but something I had to implement manually for my system. The benefit is being able to customize the functionality quite a bit. For example, the rec script will automatically convert the screen recording into an optimized gif (using another script, vid2gif).

The shot script lets me directly copy the image or the path to the screenshot immediately after it's taken, but sometimes I need to refer to an old screenshot. It's a pain to navigate to the screenshot folder and find the one I'm looking for, so I have another script, shots, which lets me browse and search through my screenshots and screengifs with dmenu (which is a menu that's basically accessible from anywhere).

shots

Passwords and security

Entering passwords and managing sensitive information is often a really inconvenient process, but some scripts make it easier.

I use KeePassX to manage my passwords, which means when I want to enter a password, I have to open up KeePassX, unlock the database, search for the password, and then copy and paste it into the password input.

This is a lot of steps, but my keepass script does all of this in far fewer keystrokes.

I open it with super+p, enter my master password, search directly for my password, and select it. Then it pastes the password into the input form automatically.

It can also create and save new passwords as needed.

keepass

I use 2FA on sites that support it, which means there's another step after entering a password - opening up the authenticator to get the auth code. I have another script, 2fa, which I open with super+a, that copies the appropriate auth code into my clipboard so I can paste it in straight away.

These two scripts make logging in and good account security way easier to manage.

For local data that I want to encrypt, I have a script called crypt that lets me easily encrypt/decrypt individual files or directories with my GPG key. I use this script in another script, vault, which makes it easy to encrypt/decrypt a particular directory (~/docs/vault) of sensitive information.

Finally, I have a lock script for when I'm away from my computer that pixelates the screen contents and requires my password to unlock (this was snagged from r/unixporn).

lock

Working with hubble

My main driver is a relatively low-powered chromebook (an Acer C720), so for heavy processing I have a beefy personal server ("hubble") that I access remotely. It's not publicly accessible - as in, it's not a box provided by a service like Digital Ocean but a literal computer under my desk. This introduces some challenges in reliably connecting to it from anywhere, so I have a few scripts to help out with that.

hubble can consume quite a bit of power so I don't like to leave it running when it's not in use. It's easy enough to shut off a server remotely (shutdown now) but turning it on remotely is trickier.

There's a really useful program called wakeonlan that lets you send a special packet to a network interface (specifying its MAC address) that will tell its machine to boot up. However, you still need a computer running on the same network to send that packet from.
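
The magic packet itself is simple: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent over UDP broadcast. A minimal Python sketch (the MAC below is a placeholder):

import socket

def wake(mac='aa:bb:cc:dd:ee:ff'):   # placeholder MAC address
    # magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times
    payload = bytes.fromhex('ff' * 6 + mac.replace(':', '') * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ('255.255.255.255', 9))   # UDP broadcast, port 9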

I keep a Banana Pi running at all times on that network. Its power consumption is much lower, so I don't feel as bad having it run all the time. When I need to access hubble, I ssh into the Pi and then run wakeonlan to boot hubble up.

This Pi isn't publicly accessible either - there's no public IP I can ssh directly into. Fortunately, using the script tunnel, I can create an ssh tunnel between the Pi and my hosting server (where this website and my other personal sites are kept), which does have a public IP, such that my hosting server acts as a bridge that the Pi piggybacks off of.

Finally, sometimes I'll run a web service on hubble but want to access it through my laptop's browser. I can use portfwd to connect a local laptop port to one of hubble's ports, so that my laptop treats it as its own. It makes doing web development on hubble way easier.

Misc.

I have several other scripts that do little things here and there. Some highlights:

  • q: quickly searches my file system, with previews for images.
  • sms: lets me send an arbitrary notification over Signal, so I can, for example, run a long-running job and get texted when it's finished: ./slow_script && sms "done!" || sms "failed!". This also accepts attachments!
  • twitch: lets me immediately start streaming to Twitch.
  • caffeine: prevents the computer from falling asleep. I have it bound to super+c with an eye indicator in my bar.
  • phonesync: remotely sync photos from my phone to my laptop and media from my laptop to my phone (they must be on the same LAN though).
  • emo: emoji support on Linux is still not very good; this script lets me search for emoji by name to paste into an input. Still thinking of a better solution for this...
  • bkup: this isn't in my dotfiles repo but I use it quite a bit; it lets me specify a backup system in YAML (example) that is run with bkup <backup name>.
  • office: unfortunately this repo is not yet public (need to clean out some sensitive info) but this is a suite of scripts that automates a lot of freelance paperwork-ish stuff that I used to do manually in InDesign or Illustrator:
    • generate invoices from a YAML file
    • generate contracts from a YAML file
    • generate a prefilled W9

caffeine indicator
q

Conspiracy Generator

I recently wrote a conspiracy-generating bot for The New Inquiry's Conspiracy issue. The basic premise is that far-fetched conspiracy theories emerge from the human tendency for apophenia - finding meaning in random patterns - and that the problematic algorithm-level aspects of machine learning are instances of a similar phenomenon. The bot leverages how computer models can misidentify faces and objects as similar and presents these perceptual missteps as significant discoveries, encouraging humans to read additional layers of absent meaning.

The bot consists of a few core components: sourcing raw images, detecting and linking the faces and objects within them, and laying out, annotating, and degrading the final composite image.

In this post I'll give a high-level explanation of the basic structure of the bot. If you're interested in details, the full source code is available here.

Sourcing images, the material of conspiracy

For this bot the ideal conspiracy is one that connects seemingly unrelated and distant events, people, and so on, not only to each other but across time. If that connection is made to something relevant to the present, all the better. The bot's source material then should combine obscure images with a lot of breadth (encompassing paintings, personal photos, technical drawings, and the like) with immediately recognizable ones, such as those from recent news stories.

Wikimedia Commons is perfect for the former. This is where all of Wikipedia's (and other Wiki sites') images are hosted, so it captures the massive variety of all those platforms. Wikimedia regularly makes full database dumps available, including one of all of the Commons' image links (commonswiki-latest-image.sql.gz). Included in the repo is a script that parses these links out of the SQL. The full set of images is massive and too large to fit on typical commodity hard drives, so the program provides a way to download a sample of those images. As each image is downloaded, the program runs object recognition (using YOLO; "people" are included as objects) and face detection (using dlib), saving the bounding boxes and crops of any detected entities so that they are easily retrieved later.
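
Here's a simplified sketch of the face-extraction step (the real pipeline is in the repo; this shows just the dlib part):

import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()

def extract_faces(path):
    img = Image.open(path).convert('RGB')
    faces = detector(np.array(img), 1)   # upsample once to catch small faces
    for i, r in enumerate(faces):
        crop = img.crop((r.left(), r.top(), r.right(), r.bottom()))
        crop.save('{}.face{}.jpg'.format(path, i))   # saved for later retrieval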

Extracted faces and objects

For recent news images the bot uses reality, a simple system that polls several RSS feeds and saves new articles along with their main images and extracted named entities (people, places, organizations, and so on). When new articles are retrieved, reality updates a FIFO queue that the bot listens to. On new articles, the bot runs object recognition and face detection and adds that data to its source material, expanding its conspiratorial repertoire.

Each of these sources is sampled from separately so that every generated conspiracy includes images from both.

At time of writing, the bot has about 60,000 images to choose from.

Generating conspiracies

To generate a conspiracy, 650 images are sampled and their entities (faces and objects) are retrieved. The program establishes links between these entities based on similarity metrics; if two entities are similar enough, they are considered related and included in the conspiracy.

Face and object distance matrices

The FaceNet model is used to compute face similarity. For objects, a very naive approach is used: the perceptual hashes of object crops are directly compared (perceptual hashes are a way of "fingerprinting" images such that images that look similar will produce similar hashes).
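
The object comparison is short enough to sketch with the imagehash library (the distance threshold here is made up):

import imagehash
from PIL import Image

def similar(path_a, path_b, max_distance=8):
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance   # Hamming distance between hashes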

Links generated by similarity

Because reality saves a news article's text along with its image, we can also search through that text and its extracted entities to pull out text-based conspiracy material. The program starts by looking for a few simple patterns in the text, e.g. ENTITY_A is ... ENTITY_B, which would match something like Trump is mad at Comey. If any matches are found, a screenshot of the article's page is generated. Then optical character recognition (OCR) is run on the screenshot to locate one of those extracted phrases. If one is found, a crop of it is saved to be included in the final output.
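
A rough sketch of that pattern-matching step (the real patterns live in the repo):

import re

def find_claims(text, entities):
    # look for "ENTITY_A is ... ENTITY_B" within a single sentence
    claims = []
    for a in entities:
        for b in entities:
            if a == b:
                continue
            m = re.search(re.escape(a) + r' is [^.]*?' + re.escape(b), text)
            if m:
                claims.append(m.group(0))
    return claims

print(find_claims('Trump is mad at Comey.', ['Trump', 'Comey']))
# ['Trump is mad at Comey']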

This network of entity relationships forms the conspiracy. The rest of the program is focused on presentation.

Image layout and annotation

Once relationships between faces and objects are established, the next challenge is to present the implicated images in a convincingly conspiratorial way. This problem breaks down into two parts: 1) layout and 2) annotation.

1) Layout

Layout is tricky because it's a bin packing problem, which is NP-hard, but fortunately there exist good implementations of approximate solutions, such as rectpack, which is used here. There is no guarantee that all of the selected images will fit, so there is some additional processing to ensure that the images that do get included have a maximal amount of connectivity between them.
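
Basic rectpack usage looks like this (a sketch per its documentation; the sizes here are made up):

from rectpack import newPacker

packer = newPacker()
for width, height, image_id in [(200, 150, 'a.jpg'), (300, 100, 'b.jpg')]:
    packer.add_rect(width, height, image_id)
packer.add_bin(1200, 1200)   # the canvas
packer.pack()
for (_, x, y, w, h, image_id) in packer.rect_list():
    print(image_id, x, y, w, h)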

The generated network of entity relationships forms a graph of the selected images. Two images have an edge between them if at least one pair of their entities is linked. We start with the image that has the highest degree (i.e. the image that is connected to the most other images). We look at the images it's connected to, pick the one with the highest degree out of those, and repeat. The result is a sequence of connected images, descending by degree.
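
Here's a sketch of that degree-descending walk over a simple adjacency-set graph (not the bot's exact code):

def order_by_degree(graph):
    # graph: {image: set of images it shares an entity link with}
    ordered = []
    node = max(graph, key=lambda n: len(graph[n]))   # highest-degree start
    while node is not None:
        ordered.append(node)
        candidates = [n for n in graph[node] if n not in ordered]
        node = max(candidates, key=lambda n: len(graph[n]), default=None)
    return ordered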

Ordering images based on links

Here's where some style is added. Images are not placed exactly as prescribed by the bin packing algorithm; there is some "shakiness" where they are placed with some margin of error. This gives the layout a rushed, haphazard look, enhancing the conspiracy vibe.

2) Annotation

A conspiracy's entities need to be highlighted and the links between them need to be drawn. While Pillow, Python's de facto image processing library, provides ways to draw ellipses and lines, they are too neat and precise. So the program includes several annotation methods to nervously encircle these entities, occasionally scrawl arrows pointing to them, and hastily link them.

These were fun to design. The ellipse drawing method uses an ellipse function where the shape parameters a, b are the width and height of the entity's bounding box. The function is rotated to a random angle (within constraints) and then the ellipse is drawn point by point, with smooth noise added to give it an organic hand-drawn appearance. There are additional parameters for thickness and how many times the ellipse should be looped.
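
Here's a condensed sketch of the idea (the noise and parameters are simplified stand-ins for the actual implementation):

import math
import random
from PIL import Image, ImageDraw

def scrawl_ellipse(draw, cx, cy, a, b, loops=2, jitter=4.0):
    angle = random.uniform(-0.3, 0.3)        # random rotation, within constraints
    phase = random.uniform(0, math.tau)
    points = []
    for i in range(loops * 120):
        t = i / 120 * math.tau
        n = jitter * math.sin(2.7 * t + phase)   # smooth wobble on the radius
        x, y = (a + n) * math.cos(t), (b + n) * math.sin(t)
        points.append((cx + x * math.cos(angle) - y * math.sin(angle),
                       cy + x * math.sin(angle) + y * math.cos(angle)))
    draw.line(points, fill='red', width=3)

img = Image.new('RGB', (400, 300), 'white')
scrawl_ellipse(ImageDraw.Draw(img), 200, 150, 120, 60)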

"Hand-drawn" annotations
"Hand-drawn" annotations

This part of the program also adds computer text, randomly sampled from a set of conspiracy clichés, and writes entity ids on the images.

Text annotations

Further manipulation

Images are randomly "mangled", i.e. scaled down then back up, unsharpened, contrast-adjusted, then JPEG-crushed, giving them the worn look of an image that has been circulating the internet for ages, degraded by repeat encodings.
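
Roughly, in Pillow (the degradation amounts here are illustrative):

from PIL import Image, ImageEnhance

def mangle(img, generations=3):
    w, h = img.size
    for _ in range(generations):
        img = img.resize((w // 2, h // 2)).resize((w, h))   # scale down, back up
        img = ImageEnhance.Contrast(img).enhance(1.2)       # push the contrast
    return img

# JPEG-crush on save:
# mangle(Image.open('in.jpg')).save('out.jpg', quality=20)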

Image mangling

Assembling the image

Once all the images are prepared, placed, and annotated, the final image is saved and added to the webpage.

There you have it - an algorithm for conspiracies.


Friends of Tarok, Lord of Guac Chapter I: Party Fortress: Technical Post-Mortem

The highrise interface

Last weekend Fei, Dan, and I put on our first Party Fortress party.

Fei and I have been working with social simulation for awhile now, starting with our Humans of Simulated New York project from a year ago.

Over the past month or two we've been working on a web system, "highrise", to simulate the social dynamics of buildings.

The goal of the tool is to be able to lay out buildings and specify the behaviors of their occupants, so as to see how they interact with each other and the environment, and how each of those interactions influences the others.

Beyond its practical functionality (it still has a ways to go), highrise is part of an ongoing interest in simulation and cybernetics. Simulation is an existing practice that does not receive as much visibility as AI but can be just as problematic. It seems inevitable that it will become the next contested space of technological power.

highrise is partly a continuation of our work using simulation for speculation, but whereas our last project looked at the scale of a city economy, here we're working at the scale of a gathering of 10-20 people. The inspiration for the project was hearing Dan's ideas for parties, which are in many ways interesting social games. By arranging a party in a peculiar way, either spatially or socially, what new kinds of interactions or relationships emerge? Better yet, which interactions or relationships that we've forgotten start to return? What relationships that we've mythologized can (re)emerge in earnest?

highrise was the engine for us to start exploring this, which manifested in Party Fortress.

I'll talk a bit about how highrise was designed and implemented, and then about Party Fortress, its living prototype.

Buildings: Floors and Stairs

First we needed a way to specify a building. We started by reducing a "building" to just floors and stairs, so we needed to develop a way to lay out a building by specifying floor plans and linking them up with stairs.

Early sketches of highrise

We wanted floor plans to be easily specified without code, so developing some simple text structure seemed like a good approach. The first version of this was to simply use numbers:

[
    [[2,2,2,2,2,2,2],
     [2,1,1,1,1,1,2],
     [2,1,1,1,1,1,2],
     [2,2,1,1,1,2,2]],
    [[2,2,2,2,2,2,2],
     [2,1,1,1,1,1,2],
     [2,1,1,1,1,1,2],
     [2,2,0,0,0,2,2]]
]

Here 0 is empty space, 1 is walkable, and 2 is an obstacle. In the example above, each 2D array is a floor, so the complete 3D array represents the building. Beyond one floor it gets a tad confusing to see them stacked up like that, but this may be an unavoidable limitation of trying to represent a 3D structure in text.

Note that even though we can specify multiple floors, we don't have any way to specify how they connect. We haven't yet figured out a good way of representing staircases in this text format, so for now they are explicitly added and positioned in code.

Objects

A floor plan isn't enough to properly represent a building's interior - we also needed a system for specifying and placing arbitrary objects with arbitrary properties. To this end we put together an "object designer", depicted below in the upper-left hand corner.

The object designer

The object designer is used to specify the footprint of an object, which can then be placed in the building. When an object is clicked on, you can specify any tags and/or key-value pairs as properties for the object (in the upper-right hand corner), which agents can later query to make decisions (e.g. find all objects tagged food or toilet).

Objects can be moved around and their properties can be edited while the simulation runs, so you can experiment with different layouts and object configurations on-the-fly.

Expanding the floor plan syntax

It gets annoying to need to create objects by hand in the UI when it's likely you'd want to specify them along with the floor plan. We expanded the floor plan syntax so that, in addition to specifying empty/walkable/obstacle spaces (in the new syntax, these are '-', ' ', and '#' respectively), you can also specify other arbitrary values, e.g. A, B, etc, and these values can be associated with object properties.

Now a floor plan might look like this:

[
    "#.#.#. .#.#.#",
    "#. .#.   . .#",
    "#. .#.#.#. .#",
    "#.A. . . . .#",
    "#. .#.#.#.#.#",
    "-.-.-.-.-.-.-"
]

And when initializing the simulation world, you can specify what A is with an object like this:

{
    'A': {
        'tags': ['food'],
        'props': {
            'tastiness': 10
        }
    }
}

This still isn't as ergonomic as it could be, so it's something we're looking to improve. We'd like it so that these object ids can be re-used throughout the floor plan, e.g:

[
    "#.#.#. .#.#.#",
    "#. .#.   .A.#",
    "#. .#.#.#. .#",
    "#.A. . .A. .#",
    "#. .#.#.#.#.#",
    "-.-.-.-.-.-.-"
]

so that if there are identical objects they don't need to be repeated. Where this becomes tricky is if we have objects of footprints larger than one space, e.g.:

[
    "#.#.#. .#.#.#",
    "#. .#.   . .#",
    "#. .#.#.#. .#",
    "#.A.A. . . .#",
    "#.A.#.#.#.#.#",
    "-.-.-.-.-.-.-"
]

Do we have three adjacent but distinct A objects or one contiguous one?

This ergonomics problem, in addition to the stair problem mentioned earlier, means there's still a bit of work needed on this part.

(Spatial) Agents

The building functionality is pretty straightforward. Where things start to teeter (and get more interesting) is with designing the agent framework, which is used to specify the inhabitants of the building and how they behave.

It's hard to anticipate what behaviors one might want to model, so the design of the framework has flexibility as its first priority.

There was the additional challenge of these agents being spatial; my experience with agent-based models has always hand-waved physical constraints out of the problem. Agents would decide on an action and it would be taken for granted that they'd execute it immediately. But when dealing with what is essentially an architectural simulation, we needed to consider that an agent may decide to do something and need to travel to a target before they can act on their decision.

So we needed to design the base Agent class so that when a user implements it, they can easily implement whatever behavior they want to simulate.

Decision making

The first key component is how agents make decisions.

The base agent code is here, but here's a quick overview of this structure (a skeletal sketch follows the list):

  • There are two state update methods:
    • entropy, which represents the constant state changes that occur every frame, regardless of what action an agent takes. For example, every frame agents get a bit more hungry, a bit more tired, a bit more thirsty, etc.
    • successor, which returns the new state resulting from taking a specific action. This is applied only when the agent reaches its target. For example, if my action is eat, I can't actually eat and decrease my hunger state until I get to the food.
  • actions, which returns possible actions given a state. E.g. if there's no food at the party, then I can't eat.
  • utility, which computes the utility for a new state given an old state. For example, if I'm really hungry now and I eat, the resulting state has lower hunger, which is a good thing, so some positive utility results.
    • Agents use this utility function to decide what action to take. They can either deterministically choose the action which maximizes their utility, or sample a distribution of actions with probabilities derived from their utilities (i.e. such that the highest-utility action is most likely, but not a sure bet).
    • This method also takes an optional expected parameter to distinguish the use of this method for deciding on the action and for actually computing the action's resulting utility. In the former (deciding), the agent's expected utility from an action may not actually reflect the true result of the action. If I'm hungry, I may decide to eat a sandwich thinking it will satisfy my hunger. But upon eating it, I might find that it actually wasn't that filling.
  • execute, which executes an action, returning the modified state and other side effects, e.g. doing something to the environment.
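
Here's the skeletal sketch promised above; the method bodies are illustrative stand-ins, not the real base class:

class Agent:
    def entropy(self, state):
        # per-frame drift, regardless of action (hunger, thirst, fatigue...)
        return {**state, 'hunger': state['hunger'] + 0.01}

    def actions(self, state):
        # possible actions given a state
        return ['eat'] if state.get('food_available') else []

    def successor(self, state, action):
        # the new state from taking an action, applied on reaching the target
        return {**state, 'hunger': 0} if action == 'eat' else state

    def utility(self, old_state, new_state, expected=False):
        # how much better the new state is; `expected` marks the pre-action
        # estimate, which may differ from the realized value
        return old_state['hunger'] - new_state['hunger']

    def execute(self, state, action):
        # carry out the action; the real system also returns side effects
        return self.successor(state, action)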

Movement

Agents can also have an associated Avatar, which is their representation in the 3D world. You can hook into this to move the agent and know when it's reached its destination.

Agent movement

Multi-floor movement was handled by standard A* pathfinding:

A* pathfinding (Wikipedia)

Each floor is represented as a grid, and the layout of the building is represented as a network where each node is a floor and edges are staircases. When an agent wants to move to a position that's on another floor, they first generate a route through this building network to figure out which floors they need to go through, trying to minimize overall distance. Then, for each floor, they find the path to the closest stairs and go to the next floor until they reach their target.
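
Here's a sketch of that two-level routing, using networkx for the building graph (floor names and stair positions are made up; the per-floor grid search is ordinary A*):

import networkx as nx

building = nx.Graph()
building.add_edge('floor1', 'floor2', stairs=(3, 5))   # stair position on floor1
building.add_edge('floor2', 'floor3', stairs=(1, 2))

def route_floors(start_floor, target_floor):
    # which floors to pass through; each hop means "walk to these stairs"
    return nx.shortest_path(building, start_floor, target_floor)

print(route_floors('floor1', 'floor3'))   # ['floor1', 'floor2', 'floor3']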

Building network example

There are some improvements I'd really like to make to the pathfinding system. Currently each position is weighted the same, but it'd be great if we held different position weights for each individual agent. With this we'd be able to represent, for instance, subjective social costs of spaces. For example, say I need to go to the bathroom. Normally I'd take the quickest path there, but now there's someone along that path I don't want to talk to. The movement cost of the positions around that person is higher for me than for others (assuming everyone else doesn't mind them), so I'd take a path which is longer in terms of physical distance, but less imposing in terms of overall cost when considering this social aspect.

Subjective path weight example

Those are the important bits of the agent part. When we used highrise for Party Fortress (more on that below), this was enough to support all the behaviors we needed.

Party Fortress

Since the original inspiration for highrise was parties we wanted to throw a party to prototype the tool. This culminated in a small gathering, "Party Fortress" (named after Dwarf Fortress), where we ran a simulated party in parallel to the actual party, projected onto a wall.

Behaviors

We wanted to start by simulating a "minimum viable party" (MVP), so the set of actions in Party Fortress are limited, but essential for partying. This includes: going to the bathroom, talking, drinking alcohol, drinking water, and eating.

The key to generating plausible agent behavior is the design of utility functions. Generally you want your utility functions to capture the details of whatever phenomena you're describing (this polynomial designer tool was developed to help us with this).

For example, consider hunger: when hunger is 0, utility should be pretty high. As you get hungry, utility starts to decrease. If you get too hungry, you die. So, assuming that our agents don't want to die (every simulation involves assumptions), we'd want our hunger utility function to asymptote to negative infinity as hunger increases. Since agents use this utility to decide what to do, if they are really, really hungry they will prioritize eating above all else since that will have the largest positive impact on their utility.
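
Something with this shape does the trick (an illustrative function, not the calibrated one we actually used):

import math

def hunger_utility(hunger, death_point=1.0):
    # high utility at zero hunger, diving toward negative infinity
    # as hunger approaches the death point
    return 1.0 - hunger + math.log(max(1e-9, death_point - hunger))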

Hunger utility function

So we spent a lot of time calibrating these functions. The more actions and state variables you add, the more complex this gets, which makes calibration much harder. We're still trying to figure out a way to make this a more streamlined process involving less trial-and-error, but one helpful feature was visualizing agents' states over time:

Agent state charts

Commitment

One challenge with spatial agents is that as they are moving to their destination, they may suddenly decide to do something else. Then, on the way to that new target, they again may decide to do something else. So agents can get stuck in this fickleness and never actually accomplish anything.

To work around this we incorporated a commitment variable for each agent. It feels a bit hacky, but basically when an agent decides to do something, they have some resolve to stick with it unless some other action becomes overwhelmingly more important. Technically this works out to mean that whatever action an agent does has its utility artificially inflated (so it's more appealing to continue doing it) until they finally execute it or the commitment wears off. This could also be called stubbornness.
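
In miniature, the hack looks something like this (names and numbers are illustrative):

def effective_utility(action, base_utility, current_action, commitment):
    # the in-progress action gets a bonus until it's executed
    # or the commitment decays (each frame: commitment -= decay)
    return base_utility + (commitment if action == current_action else 0.0)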

Conversation

Since conversation is such an important part of parties we wanted to model it in higher fidelity than the other actions. This took the form of having varying topics of conversation and bestowing agents with preferences for particular topics.

We defined a 2D "topic space" or "topic matrix", where one axis is how "technical" the topic is and the other is how "personal" the topic is. For instance, a low technical, low personal topic might be the weather. A high technical but low personal topic might be the blockchain.

Conversation topic space

Agents don't know what topic to talk about with an agent they don't know, but they have a really basic conversation model which allows them to learn (kind of, this needs work). They'll try different things, try to gauge how the other person responds, and try to remember this.

Social Network

As specified so far, our implementation of agents doesn't capture, explicitly at least, the relationships between individual agents. In the context of a social simulation this is obviously pretty important.

For Party Fortress we implemented a really simple social network so we could represent pre-existing friendships and capture forming ones as well. The social network is modified through conversation, and the valence and strength of the modification are based on what topics people like. For example, if we talk about a topic we both like, our affinity increases in the social graph.
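
A sketch of that update, with networkx for the graph (the agents and numbers are made up):

import networkx as nx

social = nx.Graph()
prefs = {'ana': {'blockchain': 1.0}, 'ben': {'blockchain': 0.5}}   # made-up agents

def converse(a, b, topic):
    # mutual liking of a topic raises affinity; a mismatch lowers it
    delta = prefs[a].get(topic, 0.0) * prefs[b].get(topic, 0.0)
    current = social.get_edge_data(a, b, default={'affinity': 0.0})['affinity']
    social.add_edge(a, b, affinity=current + 0.1 * delta)

converse('ana', 'ben', 'blockchain')   # shared interest: affinity rises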

Narrative

It's not very interesting to watch the simulation with no other indicators of what's happening. These are people we're supposed to be simulating and so we have some curiosity and expectations about their internal states.

We implemented a narrative system where agents report exactly what they're doing, in colorful ways.

Agents talking and thinking

Closing the loop

Our plan for the party was to project the simulation on the wall as the party went on. But that introduces an anomaly where our viewing of the simulation may influence our behavior. We needed the simulation itself to capture this possibility - so we integrated a webcam stream into the simulation and introduced a new action for agents: "gawk". Now they are free to come and watch us, the "real" world, just as we can watch them.

Agents talking and thinking

We have a few other ideas for "closing the loop" that we weren't able to implement in time for Party Fortress I, such as more direct communication with simulants (e.g. via SMS).

TThhEe PPaARRtTYY

We hosted Party Fortress at Prime Produce, a space that Dan has been working on for some time.

We had guests fill out a questionnaire as they arrived, designed to extract some important personality features. When they submitted the questionnaire, a version of themselves would appear in the simulation and carry on partying.

Questionnaire

There were, surprisingly, several moments of synchronization between the "real" party and the simulated one. For instance, people talking or eating when the simulation "predicted" it. Some of the topics that were part of the simulation came up independently in conversation (most notably "blockchain", but that was sort of a given with the crowd at the party). And of course seeing certain topics come up in the simulation spurred those topics coming up outside of it too.

The Party

Afterwards our attendees had a lot of good feedback on the experience. Maybe the most important bit of feedback was that the two parties felt too independent; we need to incorporate more ways for party-goers to feel like they can influence the simulated party and vice versa.

It was a good first step - we're looking to host more of these parties in the future and expand highrise so that it can encompass weirder and more bizarre parties.