Three.js + Blender Game Dev

Yesterday I started working on a new game, tentatively titled "Trolley Problems: A Eulogy for Social Norms". This will be my second game with Three.js and Blender (though the first, The Founder, is still not totally finished) and I thought it'd be helpful to document this process.

The game isn't very far along at this point. This video shows its current state:

The basic process was: 1) create the chair model in Blender and 2) load the chair into Three.js.

Blender

I won't give a Blender modeling tutorial here (this was suggested as a good resource to start), but the theater chair model was quickly made by extruding a bunch from a cube.

Theater chair in Blender

A lot of the model's character comes out in the texturing. I like to keep textures really simple, just combinations of solid colors. The way I do it is with a single texture file that I gradually add to throughout a project. Each pixel is a color, so the texture is basically a palette.

In Blender I just select the faces I want to have a particular color, use the default UV unwrapping (while in Edit Mode, hit U and select Unwrap), and then in the UV/Image Editor (with the texture file selected in the dropdown, see image below) drag the unwrapped faces onto the appropriate colors.

UV mapping/applying the texture. Each color block is one pixel

There is one thing you have to do to get this texture rendering properly. By default, Blender (like almost every other 3D program) will try to scale your textures.

In the texture properties window (select the red-and-white checker icon in the Properties window, see below), scroll down and expand the Image Sampling section. Uncheck MIP Map and Interpolation, then set Filter to Box (see below for the correct settings). This will stop Blender from trying to scale the texture and give you the nice solid color you want.

The texture properties window. Here's where you set what image to use for the texture
The texture image sampling settings

Here's a Blender render of the outcome:

Render of the theater chair

Exporting to Three.js

One of the best things about Three.js is that there is a Blender-to-Three.js exporter. Installation is pretty straightforward (described in the repo here).

To export a Blender object to JSON, select your object (in Object Mode) and select from the menu File > Export > Three (.json).

The export options can be a bit finicky, depending on what you want to export the model for. For a basic static model like this chair the following settings should work fine:

Blender-to-Three.js export settings

That's all that Blender's needed for.

Three.js

This next section involves a bunch of code. I won't reproduce everything here (you can check out the repo to see the full working example) but I'll highlight the important parts.

Loading the model

Three.js provides a JSONLoader class which is what loads the exported Blender model. You could just use that and be done with it, but there are a few modifications I make to the loaded model to get it looking right.

const meshLoader = new THREE.JSONLoader();
meshLoader.load('assets/theater_chair.json', function(geometry, materials) {
    // you could just directly use the materials loaded from JSON,
    // but I like to make sure I'm using the proper material.
    // we extract the texture from the loaded material and pass it to
    // the MeshLambertMaterial here
    var texture  = materials[0].map,
        material = new THREE.MeshLambertMaterial({map: texture}),
        mesh = new THREE.Mesh(geometry, material);

    // material adjustments to get the right look
    material.shininess = 0;
    material.shading = THREE.FlatShading;

    // basically what we did with blender to prevent texture scaling
    material.map.generateMipmaps = false;
    material.map.magFilter = THREE.NearestFilter;
    material.map.minFilter = THREE.NearestFilter;

    // increase saturation/brightness
    material.emissiveIntensity = 1;
    material.emissive = new THREE.Color(0, 0, 0);
    material.color = new THREE.Color(2.5, 2.5, 2.5);

    mesh.position.set(0,5,0);
    scene.add(mesh);
});

The above code won't work until we create the scene. I like to bundle the scene-related stuff together into a Scene class:

class Scene {
    constructor(lightColor, groundColor, skyColor, fogColor, fogDistance) {
        this.camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 1, 1000);
        this.scene = new THREE.Scene();
        this.scene.fog = new THREE.Fog(fogColor, 0, fogDistance);
        var light = new THREE.HemisphereLight(lightColor, 0x777788, 0.75);
        light.position.set(0.5, 1, 0.75);
        this.scene.add(light);

        // setup floor
        var geometry = new THREE.PlaneGeometry(2000, 2000, 100, 100);
        geometry.rotateX(-Math.PI/2);
        var material = new THREE.MeshLambertMaterial({color: groundColor});
        var mesh = new THREE.Mesh(geometry, material);
        this.scene.add(mesh);

        this.renderer = new THREE.WebGLRenderer();
        this.renderer.setClearColor(skyColor);
        this.renderer.setPixelRatio(window.devicePixelRatio);
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        document.body.appendChild(this.renderer.domElement);

        window.addEventListener('resize', this.onWindowResize.bind(this), false);
    }

    render() {
        requestAnimationFrame(this.render.bind(this));
        this.renderer.render(this.scene, this.camera);
    }

    add(mesh) {
        this.scene.add(mesh);
    }

    onWindowResize() {
        this.camera.aspect = window.innerWidth/window.innerHeight;
        this.camera.updateProjectionMatrix();
        this.renderer.setSize(window.innerWidth, window.innerHeight);
    }
}

It can be used like this (the arguments here are just example colors and a fog distance):

var scene = new Scene(0xffffff, 0xaaaaaa, 0xbbddff, 0xbbddff, 400);
scene.render();

And the previous code for loading the chair should place the chair into the scene.

First-person interaction

So far you won't be able to look or walk around the scene, so we need to add some first-person interaction. There are two components to this:

  1. movement (when you press WASD or the arrow keys, you move forward/backward/left/right)
  2. looking around (when you move the mouse)

Movement

In the Scene class the onKeyDown and onKeyUp methods determine, based on what keys you press and release, which direction you should move in. The render method includes some additional code to check which directions you're supposed to be moving in and adds the appropriate velocity.

The velocity x value moves you right (positive) and left (negative), the y value moves you up (positive) and down (negative), and the z value moves you forward (negative) and backward (positive). It's worth emphasizing that the z value is negative in the forward direction, because this confused me for a while.

We also keep track of how much time elapsed since the last frame (delta) so we scale the velocity appropriately (e.g. if the last frame update was 0.5s ago, you should move only half as far as you would if it had been 1s ago).

You'll notice that you can walk through objects, which is probably not what you want...we'll add simple collision detection later to fix this.

Looking around

The key to looking around is the browser's pointer lock API. The pointer lock API allows you to capture your mouse cursor and its movement.

I'd never done this before, but the Three.js repo includes an example that shows the basic method. So I gutted that and moved the important bits into the Scene class. The full code is available here, but I'll explain some of it below.

In the Scene class the important method is setupPointerLock, which sets up the pointer lock event listeners. It is pretty straightforward, but basically: there's an instructions element that, when clicked on, locks the pointer and puts the game into fullscreen.

The onPointerLockChange method toggles the pointer lock controls, so that the controls are only listening when the pointer lock is engaged.

The meat of the actual pointer movement is handled in PointerLock.js. This is directly lifted from the Three.js example implementation. It's also pretty sparse; it adjusts the yaw and pitch of the camera according to how you move your mouse.

We have to properly initialize these controls though. In the Scene's constructor the following are added:

    // ...
    this.controls = new THREE.PointerLockControls(this.camera);
    this.scene.add(this.controls.getObject());
    // ...

Collision detection

So the last bit here is to prevent the player from walking through stuff. I have a terrible intuition about 3D graphics so this took me way too long. Below are some of my scribbles from trying to understand the problem.

Ugghhh

The basic approach is to use raycasting, which involves "casting" a "ray" out from a point in some direction. Then you can check if this ray intersects with any objects and how far away those objects are.

Below is an example of some of these rays. The one going out in front of the object, for instance, points to (0,0,1). That sounds like it contradicts what I said earlier about the front of the object being in the negative-z direction...it doesn't, quite. This will become important and confusing later.

Some of the casted rays (in blue)

The rays that I'm using are:

const RAYS = [
  new THREE.Vector3( 0,  0, -1), // back
  new THREE.Vector3( 1,  0, -1), // back-left
  new THREE.Vector3(-1,  0, -1), // back-right
  new THREE.Vector3( 0,  0,  1), // forward
  new THREE.Vector3( 1,  0,  1), // forward-left
  new THREE.Vector3(-1,  0,  1), // forward-right
  new THREE.Vector3( 1,  0,  0), // left
  new THREE.Vector3(-1,  0,  0), // right
  new THREE.Vector3( 0, -1,  0), // bottom
];

Note that the right and left comments in the example on GitHub are switched (like I said, this was very confusing for me); they're corrected above.

Every update we cast these rays and see if they intersect with any objects. We check if those objects are within some collision distance, and if they are, we zero out the velocity in that direction. So, for instance, if you're trying to move in the negative-z direction (forward) and there's an object in front of you, we have to set velocity.z = 0 to stop you moving in that direction.

That sounds pretty straightforward but there's one catch - the velocity is relative to where you're facing (i.e. the player's axis), e.g. if velocity.z is -1 that means you're moving in the direction you're looking at, which might not be the "true" world negative-z direction.

These rays, unlike velocity, are cast in directions relative to the world axis.

This might be clearer with an example.

Say you're facing in the positive-x direction (i.e. to the right). When you move forward (i.e. press W), velocity.z will be some negative value and velocity.x will be zero (we'll say your velocity is (0,0,-1)). This is because according to your orientation, "forward" is always negative-z, even though in the context of the world your "forward" is technically positive-x. Your positive-x (your right) is in the world's negative-z direction (see how this is confusing??).

Now let's say an object is in front of you. Because our raycasters work based on the world context, it will say there's an obstacle in the positive-x direction. We want to zero-out the velocity in that direction (so you're blocked from moving in that direction). However, we can't just zero-out velocity.x because that does not actually correspond to the direction you're moving in. In this particular example we need to zero-out velocity.z because, from your perspective, the obstacle is in front of you (negative-z direction), not to your right (positive-x direction).

Orientation problem example

The general approach I took (and I'm not sure if it's particularly robust, but it seems ok for now) is to take your ("local") velocity, translate it to the world context (i.e. from our example, a velocity of (0,0,-1) gets translated to (1,0,0)). Then I check the raycasters, apply the velocity zeroing-out to this transformed ("world") velocity (so if there is an obstacle in the positive-x direction, I zero out the x value to get a world velocity of (0,0,0)), then re-translate it back into the local velocity.

Ok, so here's how this ends up getting implemented.

First I add the following initialization code to the Scene's constructor:

    // ...
    this.height = 10;
    this.collidable = [];
    this.collisionDistance = 15;
    this.raycaster = new THREE.Raycaster(new THREE.Vector3(), new THREE.Vector3(0, -1, 0), 0, this.height);
    // ...

Whenever you add a mesh and the player shouldn't be able to walk through it, you need to add that mesh to this.collidable.

Then I add a detectCollisions method onto the Scene class:

detectCollisions() {
    // the quaternion describes the rotation of the player
    var quaternion = this.controls.getObject().quaternion,
        adjVel = this.velocity.clone();

    // don't forget to flip that z-axis so that forward becomes positive again
    adjVel.z *= -1;

    // we detect collisions about halfway up the player's height
    // if an object is positioned below or above this height (and is too small to cross it)
    // it will NOT be collided with
    var pos = this.controls.getObject().position.clone();
    pos.y -= this.height/2;

    // to get the world velocity, apply the _inverse_ of the player's rotation
    // effectively "undoing" it
    var worldVelocity = adjVel.applyQuaternion(quaternion.clone().inverse());

    // then, for each ray
    _.each(RAYS, (ray, i) => {
      // set the raycaster to start from the player and go in the direction of the ray
      this.raycaster.set(pos, ray);

      // check if we collide with anything
      var collisions = this.raycaster.intersectObjects(this.collidable);
      if (collisions.length > 0 && collisions[0].distance <= this.collisionDistance) {
        switch (i) {
          case 0:
            // console.log('object in true back');
            if (worldVelocity.z > 0) worldVelocity.z = 0;
            break;
          case 1:
            // console.log('object in true back-left');
            if (worldVelocity.z > 0) worldVelocity.z = 0;
            if (worldVelocity.x > 0) worldVelocity.x = 0;
            break;
          case 2:
            // console.log('object in true back-right');
            if (worldVelocity.z > 0) worldVelocity.z = 0;
            if (worldVelocity.x < 0) worldVelocity.x = 0;
            break;
          case 3:
            // console.log('object in true front');
            if (worldVelocity.z < 0) worldVelocity.z = 0;
            break;
          case 4:
            // console.log('object in true front-left');
            if (worldVelocity.z < 0) worldVelocity.z = 0;
            if (worldVelocity.x > 0) worldVelocity.x = 0;
            break;
          case 5:
            // console.log('object in true front-right');
            if (worldVelocity.z < 0) worldVelocity.z = 0;
            if (worldVelocity.x < 0) worldVelocity.x = 0;
            break;
          case 6:
            // console.log('object in true left');
            if (worldVelocity.x > 0) worldVelocity.x = 0;
            break;
          case 7:
            // console.log('object in true right');
            if (worldVelocity.x < 0) worldVelocity.x = 0;
            break;
          case 8:
            // console.log('object in true bottom');
            if (worldVelocity.y < 0) worldVelocity.y = 0;
            break;
        }
      }
    });

    // re-apply the player's rotation and re-flip the z-axis
    // so the velocity is relative to the player again
    this.velocity = worldVelocity.applyQuaternion(quaternion);
    this.velocity.z *= -1;
}

This is working for me so far. The code can probably be more concise and I suspect there's a much more direct route (involving matrix multiplication or something) to accomplish this. But I can wrap my head around this approach and it makes sense :)


Social Simulation Components: Social Contagion

The idea of syd is that it will be geared towards social simulation - that is, modeling systems that are driven by humans (or other social beings) interacting. To support social simulations syd will include some off-the-shelf models that can be composed to define agents with human-ish behaviors.

One category of such models are around the propagation and mutation of ideas and behaviors ("social contagion"). Given a population of agents with varying cultural values, how do these values spread or change over time? How do groups of individuals coalesce into coherent cultures?

What follows are some notes on a small selection of these models.

Sorting & peer effects

Two primary mechanisms are sorting, or "homophily", the tendency to associate with similar people, and peer effects, the tendency to become more like people we are around often.

Schelling's model of segregation may be the most well-known example of a sorting model, where residents move if too many of their neighbors aren't like them.

Schelling's model source
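
To make the sorting mechanism concrete, here's a minimal sketch of a Schelling-style update (the grid representation and parameter names are my own, not from any particular implementation):

import random

def schelling_step(grid, tolerance=0.5):
    # one round: residents unhappy with their neighborhood move to a random vacancy
    size = len(grid)
    vacancies = [(x, y) for x in range(size) for y in range(size)
                 if grid[x][y] is None]
    for x in range(size):
        for y in range(size):
            resident = grid[x][y]
            if resident is None or not vacancies:
                continue
            neighbors = [grid[(x + dx) % size][(y + dy) % size]
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0)]
            occupied = [n for n in neighbors if n is not None]
            # move if too few of the occupied neighboring cells are the same type
            if occupied and sum(n == resident for n in occupied) / len(occupied) < tolerance:
                vx, vy = random.choice(vacancies)
                grid[vx][vy], grid[x][y] = resident, None
                vacancies.remove((vx, vy))
                vacancies.append((x, y))

# a 20x20 world: two types of residents, roughly 10% vacant
grid = [[random.choices(['A', 'B', None], weights=[45, 45, 10])[0]
         for _ in range(20)] for _ in range(20)]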

A very simple peer effect model is Granovetter's model. Say there is a population of $N$ individuals and $n$ of them are involved in a riot. Each individual in the population has some threshold $T_i$; if over $T_i$ people are in the riot, they'll join the riot too. Basically this is a bandwagon model in which people will do whatever others are doing when it becomes popular enough.

Granovetter's model does not incorporate the innate appeal of the behavior (or product, idea, etc), just how many people are participating in it. One could imagine though that the riot looks really exciting and people join not because of how many others are already there, but because they are drawn to something else about it.

A simple extension of Granovetter's model, the Standing Ovation model, captures this. The behavior in question has some appeal or quality $Q$. We add some noise to $Q$ to get the observed "signal" $S = Q + \epsilon$. This error allows us to capture a bit of the variance in perceived appeal (some people may find it appealing, some may not, the politics around the riot may be very complex, etc). If $S > T_i$, then person $i$ participates. After assessing the appeal, those who are still not participating then have Granovetter's model applied (they join if enough others are participating).

There are further extensions to the Standing Ovation model. You could say that an agent's relationships affect their likelihood to join, e.g. if I see 100 strangers rioting I may not care to join, but if 10 of my friends are rioting then I will.
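
Here's a minimal sketch of the two models chained together as described above (the function and parameter names are mine, and I'm assuming thresholds are expressed as participant counts):

import random

def standing_ovation(thresholds, quality, noise=1.0, rounds=10):
    # each person first reacts to their perceived signal S = Q + eps...
    joined = [quality + random.gauss(0, noise) > t for t in thresholds]
    # ...then Granovetter's bandwagon applies: join if enough others already have
    for _ in range(rounds):
        n = sum(joined)
        joined = [j or n > t for j, t in zip(joined, thresholds)]
    return sum(joined)

# population of 100 with uniformly distributed thresholds
thresholds = [random.uniform(0, 100) for _ in range(100)]
print(standing_ovation(thresholds, quality=25, noise=10))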

Axelrod's culture model

Axelrod's culture model is a simple model of how a "culture" (a population sharing many traits, beliefs, behaviors, etc) might develop.

  • each individual is described by a vector of traits
  • the population is placed in some space (e.g. a grid or a social network)
  • an individual interacts with a neighbor with some probability based on their trait similarity
  • if they interact, pick one trait and match with the neighbor's
A social network version of Axelrod's cultural dissemination model source
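
A minimal sketch of the basic model on a grid (the representation details here are my own):

import random

def axelrod_step(grid):
    # pick a random individual and one of its (toroidal) grid neighbors
    size = len(grid)
    x, y = random.randrange(size), random.randrange(size)
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    a, b = grid[x][y], grid[(x + dx) % size][(y + dy) % size]

    # they interact with probability equal to their trait similarity
    similarity = sum(i == j for i, j in zip(a, b)) / len(a)
    if 0 < similarity < 1 and random.random() < similarity:
        # pick one differing trait and match it to the neighbor's
        k = random.choice([i for i in range(len(a)) if a[i] != b[i]])
        a[k] = b[k]

# a 10x10 grid; each individual is a vector of 5 traits, each with 10 possible values
grid = [[[random.randrange(10) for _ in range(5)] for _ in range(10)] for _ in range(10)]
for _ in range(100000):
    axelrod_step(grid)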

This model can be extended with a consistency rule: sometimes the value of one trait is copied over to another trait (in the same individual); this models the two traits becoming consistent. For example, perhaps changing one belief causes dependent beliefs to change as well, or requires the beliefs it depends on to change.

Traits can also change randomly due to "error" (imperfect imitation or communication) or "innovation".

Modeling opinion flow with Boids

A really interesting idea is to apply the Boids flocking algorithm to the modeling of idea dissemination (if you're unfamiliar with Boids, Craig Reynolds explains here).

The original boids source

Here agents form a directed graph (e.g. a social network), where edges have two values: frequency of communication and respect one agent holds for the other. For each possible belief, agents have an alignment score which can have an arbitrary scale, e.g. -3 for strong disbelief to 3 for strong belief.

The agent feels a "force" that causes them to change their own alignment. This force is the sum of force from alignment, force from cohesion, and force from separation.

  • force from alignment is computed from the average alignment across all agents - this is the perceived majority opinion.
  • force from cohesion: each agent computes the average alignment felt by their neighbors they respect (i.e. respect is positive), weighted by their respect for those neighbors.
  • force from separation: like the force from cohesion, but computed across their neighbors they disrespect (i.e. respect is negative), weighted by their respect for those neighbors.

This force is normalized and is used to compute a probability that the individual changes their alignment one way or the other. We specify some proportionality constant $\alpha$ which determines how affected an agent is by the force. For force $F$ the probability of change of alignment is just $\alpha F$. It's up to the implementer how much an agent's alignment changes.
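
Here's a rough sketch of how these forces might be computed for a single agent (the data layout, weighting, and normalization here are my guesses, not the original formulation):

class Agent:
    def __init__(self, alignment):
        self.alignment = alignment  # e.g. -3 (strong disbelief) to 3 (strong belief)
        self.edges = {}             # neighbor id -> (frequency, respect)

def weighted_mean(pairs):
    # mean alignment, weighted by the magnitude of respect
    total = sum(abs(w) for _, w in pairs)
    return sum(a * abs(w) for a, w in pairs) / total

def opinion_force(agent, agents, alpha=0.1):
    # alignment: pull toward the perceived majority opinion
    majority = sum(a.alignment for a in agents.values()) / len(agents)
    f_align = majority - agent.alignment

    # cohesion: pull toward the (respect-weighted) opinions of respected neighbors
    liked = [(agents[j].alignment, r) for j, (_, r) in agent.edges.items() if r > 0]
    f_cohere = weighted_mean(liked) - agent.alignment if liked else 0.0

    # separation: push away from the opinions of disrespected neighbors
    disliked = [(agents[j].alignment, r) for j, (_, r) in agent.edges.items() if r < 0]
    f_separate = agent.alignment - weighted_mean(disliked) if disliked else 0.0

    # alpha * F gives the probability of shifting alignment in F's direction
    return alpha * (f_align + f_cohere + f_separate)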

Memetics

A meme is an idea/belief/etc that behaves according to the rules of memetics, which models the spread of ideas by applying evolutionary concepts:

  1. phenotypes & alleles
    • the "offspring" of a meme vary in their "appearance"
    • a meme contains characteristics ("alleles"), some of which are transmitted to their child
    • variability in allele combinations is responsible for variability at the phenotypic level
  2. mutation
    • idea mutation may be random
    • or it may happen for a reason; ideas can change -- to solve problems, for instance (these mutations are essentially innovations advocated by a change agent)
  3. selection
    • some ideas are more likely to survive than others
    • an idea's survival is based on how "fit" it is
    • an idea's measure of fitness is the likelihood of its offspring surviving long enough to produce their own offspring, compared to other memes
  4. Lamarckian properties
    • unlike biological evolution, memes can be modified, activated, or deactivated within a generation (people can adapt their ideas to deal with new information, for example)
  5. drift
    • if multiple finite-sized populations begin with the same set of initial conditions and operate according to the same mechanisms/constraints, completely different sets of ideas can emerge between the populations
    • this drift is due to sampling error when a parent meme produces offspring (random allele heritage)

Memetic transmission may be horizontal (intragenerational) and/or vertical (intergenerational).

The primary mechanism for memetic transmission is imitation, but other mechanisms include social learning and instruction.

Note that the transmission of an idea is heavily dependent on its own characteristics.

For example, there are some ideas that have "mutation-resistant" qualities, e.g. "don't question the Bible" (though that does not preclude them from mutation entirely).

Some ideas also have the attribute of "proselytism"; that is, part of the idea is the spreading of the idea.

The Cavalli-Sforza Feldman model is a memetics model describing how a population of beliefs evolve over time.

There is a transmission function:

$$ p = 1 - |1 - g|^{n \mu_t} $$

where:

  • $p$ is the probability that an individual's belief state will be transformed (i.e. imitate another's) after $n$ contacts
  • $g$ is the probability of transformation after each contact
  • $\mu_t$ is the proportion of people the individual can come into contact with who already have the target belief state

There's also a selection function:

$$ \mu_t' = \frac{\mu_t (1+s)}{1 + s \mu_t} $$

where:

  • $\mu_t'$ is the proportion of beliefs that survive selection for a single generation
  • $\mu_t$ is the proportion of beliefs before selection
  • $s$ is the degree of fitness
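
Both functions translate directly into code; a quick sketch:

def transmission(g, n, mu):
    # probability an individual's belief state is transformed after n contacts
    return 1 - abs(1 - g) ** (n * mu)

def selection(mu, s):
    # proportion of beliefs surviving one generation of selection with fitness s
    return mu * (1 + s) / (1 + s * mu)

print(transmission(g=0.05, n=10, mu=0.3))  # ~0.14
print(selection(mu=0.3, s=0.1))            # ~0.32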

"The popular enforcement of unpopular norms"

The models presented so far make no distinction between public and private beliefs, but it's very clear that people often publicly conform to ideas that they may not really feel strongly about. This phenomenon is called pluralistic ignorance: "situations where a majority of group members privately reject a norm but assume (incorrectly) that most others accept it".

It's one thing to publicly conform to ideas you don't agree with, but people often go a step further and try to make others comply with these ideas too. Why is that?

The "illusion of transparency" refers to the tendency to believe that others can read more of our internal state than they actually can. Maybe something embarrassing happened and you feel as if everyone knows, even if no one possibly could. In the context of pluralistic ignorance, this tendency causes people to feel as though others can see through their insincere alignment with the norm, so they take additional steps to "prove" their conviction.

The Emperor’s Dilemma: A Computational Model of Self‐Enforcing Norms proposes the following model for this phenomenon:

  • each agent $i$ has a binary private belief $B_i$ which can be 1 (true believer) or -1 (disbeliever).
  • true enforcement is when a true believer/disbeliever enforces others to comply/oppose
  • false enforcement is when a false believer enforces others to comply

An agent $i$'s choice to comply with the norm is $C_i$. If $C_i=1$ the agent chooses to comply, otherwise $C_i=-1$. This choice depends on the strength of the agent's convictions, $0 < S_i \leq 1$.

A neighbor $j$'s enforcement of the norm is represented as $E_j=1$; if they enforce deviance instead then $E_j=-1$. Thus we can compute $C_i$ as:

$$ C_i = \begin{cases} -B_i & \text{if } \frac{-B_i}{N_i} \sum_{j=1}^{N_i} E_j > S_i \\ B_i & \text{otherwise} \end{cases} $$

We assume that true believers ($B_i = 1$) have maximal conviction ($S_i = 1$) and so are resistant to neighbors enforcing deviance.

The enforcement of an agent $i$ is computed:

$$ E_i = \begin{cases} -B_i & \text{if } (\frac{-B_i}{N_i} \sum_{j=1}^{N_i} E_j > S_i + K) \land (B_i \neq C_i) \\ +B_i & \text{if } (S_i W_i > K) \land (B_i = C_i) \\ 0 & \text{otherwise} \end{cases} $$

where $0 < K < 1$ is an additional cost of enforcement for those who also comply (it is $K$ more difficult to get someone who does not privately/truly align with the belief to enforce it).

$W_i$ is the need for enforcement, which is the proportion of agent $i$'s neighbors whose behavior does not conform to $i$'s belief $B_i$:

$$ W_i = \frac{1- (B_i/N_i) \sum_{j=1}^{N_i} C_j}{2} $$

Agents can only enforce compliance or deviance if they have complied or deviated, respectively.

The model can be extended by making it so that true disbelievers can be "converted" to true believers (i.e. their private belief changes to conform to the public norm).
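
Here's a direct sketch of the compliance and enforcement rules above (the way neighbors are represented is my own):

def comply(B, S, neighbor_E):
    # C_i: comply against private belief if enforcement pressure exceeds conviction
    N = len(neighbor_E)
    pressure = (-B / N) * sum(neighbor_E)
    return -B if pressure > S else B

def enforce(B, C, S, K, neighbor_E, neighbor_C):
    # E_i: falsely enforce under strong pressure; truly enforce if conviction,
    # scaled by the need for enforcement W, outweighs the enforcement cost K
    N = len(neighbor_E)
    pressure = (-B / N) * sum(neighbor_E)
    W = (1 - (B / N) * sum(neighbor_C)) / 2
    if pressure > S + K and B != C:
        return -B
    if S * W > K and B == C:
        return B
    return 0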



Social Simulation Components: Cultural Values

The agents in syd will need to be parameterized in some way that meaningfully affects their behavior. Another way to put this is that the agents need some values that guide their actions. In Humans of Simulated New York, Fei and I defined individuals along the axes of greed vs altruism, lavishness vs frugality, long-sightedness vs short-sightedness, and introversion vs extroversion. The exact configuration of these values is what made an agent an individual: a lavish agent would spend more of their money, an extroverted agent would have a larger network of friends (which consequently made finding a job easier), greedy agents would pay their employees less, and so on.

HOSNY value dimensions

The dimensions we used aren't totally comprehensive; there are many aspects of human behavior that they don't encapsulate. Fortunately there is quite a bit of past work to build on - there have been many attempts to inventory value spectrums that define and distinguish cultures. The paper A Proposal for Clustering the Dimensions of National Culture (Maleki, A., de Jong, M., 2014) neatly catalogues these previous efforts and proposes its own measurements as well.

The authors propose the following cultural dimensions:

  • individualism vs collectivism
  • power distance: "the extent to which hierarchical relations and position-related roles are accepted"
  • uncertainty avoidance: "to what extent people feel uncomfortable with uncertain, unknown, or unstructured situations"
  • mastery vs harmony: "competitiveness, achievement, and self-assertion versus consensus, equity, and harmony"
  • traditionalism vs secularism: "religiosity, self-stability, feelings of pride, and consistency between emotions felt and their expression vs secular orientation and flexibility"
  • indulgence vs restraint: "the extent to which gratification of desires and feelings is free or restrained"
  • assertiveness vs tenderness: "being assertive and aggressive versus kind and tender in social relationships"
  • gender egalitarianism
  • collaborativeness: "the spirit of 'team-work'"

We can (imprecisely) map the dimensions we used in HOSNY to these:

  • greed vs altruism -> individualism vs collectivism and collaborativeness
  • lavishness vs frugality -> indulgence vs restraint
  • long-sightedness vs short-sightedness -> indulgence vs restraint
  • introversion vs extroversion -> assertiveness vs tenderness (?)

It doesn't feel very exact though.

All of the previously defined dimensions catalogued in that paper are worth a look too.

Simulation Designer Sketches

The other day I wrote a bit about the backend architecture of syd; here I'll talk a bit about the tentative design plans. While the backend part of syd is supposed to make writing agent-based simulations easier, you still need to know how to implement these simulations in code. A typical process for developing such simulations involves design phases, e.g. pen-and-paper sketches or diagramming with flowcharting software, where the high-level structure of the system is laid out. Causal loop diagrams are often used in this way.

A causal loop diagram

Ideally the process of designing and sketching this high-level system architecture is the same as implementing the system simulation. This isn't a new idea; it's the approach software like Vensim uses. The point of syd, however, is to appeal to people with little to no systems thinking background; existing systems modeling software is intended for professionals in academia and industry in addition to being closed source and expensive.

Node-based interface

The general idea is to use a node-based interface. With node-based interfaces you compose causal graphs which are a natural and intuitive representation of systems. As an added bonus, if you design it so that nodes can be composed of other nodes, the interface lends itself to modularity as well.

A node-based interface (quadrigram)

System and agent views

In the syd interface there are two view levels:

  • the system level (i.e. the "environment" or the "world"). This works like conventional system dynamics software such as Vensim; i.e. you can define stocks and flows and connect them together.
  • the agent level. In this view you design the internals of a type of agent (e.g. a person, a firm, etc).

Since the system level view is quite similar to conventional system dynamics software (and also because I haven't fully thought it through yet) I won't go into much detail there. Basically the system level supports the creation of stocks (quantities), flows (changes in quantities that can link between stocks), and outputs (e.g. graphs and other visualizations). This gif of TRUE gives a good sense of it:

TRUE

For example, a flow may be some arbitrary function that other outputs and inputs can link to.

Arbitrary function node

You can also take a bunch of flows and stocks and group them into a module node, which "black boxes" the internals so you avoid spaghetti:

Node-based spaghetti

You can also visualize aggregate statistics for agent types.

Agent aggregations

The system view is also where you spawn populations of agents.

Designing agents

Agents are defined as types (e.g. a person, or a firm, or a car driver, etc) and are composed of state variables and behaviors.

State variables have a name and may be of a particular type (e.g. discrete/categorical or continuous, perhaps collections or references as well). Depending on its type it may have additional parameters that can be set (e.g. if it is a continuous state variable, it may have minimum and maximum values).

State variables may be instantiated with a hardcoded value, which is identical across all agents of that type, or they may be instantiated with some function (e.g. a random distribution or a distribution learned from data), which may cause its value to vary for each individual agent. Note that a limitation here is that at instantiation state variables are treated as independent; i.e. we can't (yet) instantiate them using conditional distributions (conditioned on other state variables, for instance).

State variable instantiations

State variables are changed over the course of the simulation through behaviors. Behaviors are isolated components that are attached to an agent.

A behavior

So far behaviors are black boxes that read from some state variables and write to other state variables. A warning is raised if a required state variable is not present; perhaps there should be an option to map the expected state variable name to an existing state variable (e.g. if a behavior expects the state variable foo but my agent has bar, I can tell the behavior to use bar instead).
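
As a sketch, the pieces described above might look something like this in code (none of this API exists yet; everything here is hypothetical):

import random

# hypothetical - a sketch of the design described above, not syd's actual API
class StateVariable:
    def __init__(self, name, init, lo=None, hi=None):
        # `init` is either a hardcoded value (identical across agents)
        # or a function (e.g. a random distribution), called per agent
        self.name, self.init, self.lo, self.hi = name, init, lo, hi

    def instantiate(self):
        val = self.init() if callable(self.init) else self.init
        if self.lo is not None: val = max(self.lo, val)
        if self.hi is not None: val = min(self.hi, val)
        return val

class Behavior:
    reads, writes = (), ()  # the state variables this behavior touches
    def apply(self, state): raise NotImplementedError

class GetsHungry(Behavior):
    reads, writes = ('hunger',), ('hunger',)
    def apply(self, state):
        state['hunger'] = min(1, state['hunger'] + 0.1)

person = [
    StateVariable('hunger', init=random.random, lo=0, hi=1),  # varies per agent
    StateVariable('income', init=100),  # hardcoded
]
state = {v.name: v.instantiate() for v in person}

# this is where a warning would be raised for a missing state variable
missing = [n for n in GetsHungry.reads if n not in state]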

A warning will also be raised if there are multiple behaviors that need to write to the same state variable. So far I haven't thought of a way for the user to easily specify how such write conflicts should be resolved.

Behavior warning

A special kind of behavior is a metabehavior, which, instead of modulating state variable values, adds or removes other behaviors based on conditions. I haven't yet figured out the best way to represent this in the interface.

Final notes

There are a few additional features that would be great to include:

  • export models as JSON (or some other nice interchange format)
  • hierarchical simulations; i.e. take another model and include it as a node in another simulation. For instance, maybe you have a simulation about the internal workings of a company (modeling employees and so on) and you want to use that as a sub-simulation in a model of a national economy.
  • level-of-depth (LOD) and multiscale simulations

These extra features and some other aspects of the interface as described here (especially agent behaviors) require re-architecting of some of the backend, so I don't know when we'll be able to prototype the interface. I'm not totally confident that this approach will be general/flexible enough for more complex simulations, but we'll see when we start to prototype and use it.


Big Simulation Architecture

Game of Life with syd
Game of Life with syd
I've been interested in extending the work on Humans of Simulated New York into a more general agent-based simulation framework, one that is both accessible to non-technical audiences and powerful enough for more "professional" applications. We are all so ill-equipped to contend with the obscenely complex systems we're a part of, and we end up relying on inadequate (and often damaging) heuristics that cause us to point blame at parties that either have little to do with our problems or are similarly victimized by them. Maybe if we had tools which don't require significant technical training to help us explore these causal rat's nests, to make this insane interconnectedness more intuitive and presentable, we could think through and talk about these wicked problems more productively.

Recently Fei and I started working on a project, tentatively called "system designer" (syd for short), which we hope will provide a foundation for that kind of tooling. syd is currently functional but its features are provisional - I've included some examples of visualizations built on top of it, demoing simple models, although it is capable of more sophisticated agent-based models as well.

From an engineering perspective, the goal is to make it easy to write agent-based simulations which may have massive amounts of computationally demanding agents without having to deal with the messiness of parallel and distributed computing.

From a design perspective, the goal is to provide interfaces that make defining, running, visualizing, and analyzing such simulations an exploratory experience, almost like a simulation video game (e.g. SimCity).

In both the design and engineering cases there are many interesting challenges.

I'm going to discuss the engineering aspects of the project here and how syd is approaching some of these problems (but bear in mind syd is still in very early stages and may change significantly). At another point I'll follow up with more about the design and interface aspects (this is something we're still sketching out).

syd is built on top of aiomas which handles lower-level details like inter-node communication, so I won't discuss those here.

(note that at time of writing, not all of what's discussed here has been implemented yet)

3D Schelling model with syd

The demands of simulation

If you're conducting a relatively lightweight simulation, like cellular automata, in which agents are composed of a few simple rules, you can easily run it on a single machine, no problem. The update of a single agent takes almost no time.

Unfortunately, this approach starts to falter as you get into richer and more detailed simulations. In our first attempts at Humans of Simulated New York, Fei and I designed the agents to be fairly sophisticated - they would have their own preferences and plan out a set of actions for each day, re-planning throughout the day as necessary. Even with our relatively small action space (they could work, look for work, relax, or visit a doctor), this planning process can take quite a while, especially when it's executed by hundreds or thousands of agents.

Here you could turn to parallel or distributed methods: you'd take your population of agents, send them to a bunch of different threads or processes or computers (generally referred to as "multi-node" architecture), and then update them in parallel. For example, if you run your simulation across two machines instead of just one, you can almost double the speed of your simulation.

Normally to convert a single-node simulation to a multi-node one you'd have to change up your code to support communication across nodes and a laundry list of other things, but syd abstracts away the difference. You simply instantiate a ComputeSubstrate and you pass in either a single host or a list of hosts. If you pass a single host, the simulation runs in the local process; if a list of hosts, syd transparently runs it as a multi-node simulation:

from syd import ComputeSubstrate

single_node = ComputeSubstrate(('localhost', 8888))

multi_node = ComputeSubstrate([
        ('localhost', 8888),
        ('localhost', 8889),
        ('192.168.1.3', 8888),
        ('192.168.1.3', 8889)])

Determining where agents go

That's great, but it doesn't come for free. Consider a simulation in which each agent must consult a few other agents before deciding what to do. For a single-node (here I'm using a "node" to refer to a single serial process) simulation this communication would happen quickly - all agents are in the same process so there's basically no overhead to get data from one another.

As soon as we move to the multi-node case we have to worry about the overhead that network communication introduces. The computers we distribute our population across could be on separate continents, or maybe we just have a terrible internet connection, and there may be considerable lag if an agent on one node needs a piece of data from an agent on another node. This network overhead can totally erase all speed gains we'd get from distributing the simulation.

The typical way of managing this network overhead is to be strategic about how agents are distributed across the nodes. For example, if we're simulating some kind of social network, maybe agents really only communicate with their friends and no one else. In this case, we'd want to put groups of friends in the same process so they don't have to go over the network to communicate, and we'd effectively eliminate most of the network communication.

The problem here (maybe you're starting to see a pattern) is that there is no one distribution strategy that works well for all conceivable agent-based simulations. It really depends on the particular communication patterns that happen within the simulation. In the literature around engineering these agent-based systems you'll see mention of "spheres of influence" and "event horizons" which determine how to divide up a population across nodes. Agents that are outside of each other's spheres of influence or beyond each other's event horizons are fair game to send to different nodes. Unfortunately what exactly constitutes a "sphere of influence" or "event horizon" varies according to the specifics of your simulation.

In syd, if you create a multi-node substrate you can also specify a distribution strategy (a Distributor object) which determines which agents are sent to which nodes. So far there are only two strategies:

  • syd.distributors.RoundRobin: a naive round robin strategy that just distributes agents sequentially across nodes.
  • syd.distributors.Grid2DPartition: if agents live in a grid, this recursively bisects the grid into "neighborhoods" so that each neighborhood gets its own node. This is appropriate when agents only communicate with adjacent agents (e.g. cellular automata). Network communication still happens at the borders of neighborhoods, but overall network communication is minimized.

You can also define your own Distributor for your particular case.

from syd import ComputeSubstrate, distributors

multi_node = ComputeSubstrate([
        ('localhost', 8888),
        ('localhost', 8889),
        ('192.168.1.3', 8888),
        ('192.168.1.3', 8889)],
        distributor=distributors.RoundRobin)
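
For instance, for a social network simulation you might cluster friends onto the same node. Here's a rough sketch of the idea (the Distributor method name and signature shown are my guesses, not syd's actual interface, and I'm assuming agents carry a `friends` list):

from syd import distributors

class FriendGroups(distributors.Distributor):
    # hypothetical: keep connected groups of friends on the same node
    # so most of their communication stays in-process
    def distribute(self, agents, nodes):
        seen, groups = set(), []
        for agent in agents:
            if agent in seen:
                continue
            # flood-fill one friend group
            group, queue = [], [agent]
            while queue:
                a = queue.pop()
                if a in seen:
                    continue
                seen.add(a)
                group.append(a)
                queue.extend(a.friends)
            groups.append(group)
        # spread whole groups (rather than individuals) across the nodes
        return {a: nodes[i % len(nodes)] for i, g in enumerate(groups) for a in g}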

There is yet another problem to consider - in some simulations, agents may be mobile; for example, if we're simulating a grid world, we may distribute agents according to where in the grid they are (e.g. with the Grid2DPartition distributor), but what if they move to another part of the grid? We might want to move them to the node that's running that part of the grid. But if this happens a lot, now we've introduced a ton of overhead shuffling agents from node to node.

As another example - if the topology of the simulation is a social network instead of a grid, such that they are communicating most with their friends, what happens if those relationships change over time? If they become friends with agents on another node, should we re-locate them to that node? Again, this will introduce extra overhead.

I haven't yet come up with a good way of handling this.

Social network SIR model with syd

Race conditions and simultaneous updates

There's one more problem to consider. With a single-node simulation, agents all update their states serially. There is no fear of race conditions: we don't have to worry about multiple agents trying to simultaneously access a shared resource (e.g. another agent) and making conflicting updates or out-of-sync reads.

This is a huge concern in the multi-node case, and the way syd works around it is to separate agent updates into two distinct phases:

  • the decide phase, a.k.a. the "observation" or "read" phase, where the agent collects ("observes") the information it needs from other agents or the world and then decides on what updates to make (but it does not actually apply the updates).
  • the update phase, a.k.a. the "write" phase, where the agent applies all queued updates.

So in the decide phase, agents queue updates for themselves or for other agents as functions that mutate their state. This has the effect of agents simultaneously updating their states, as opposed to updating them in sequence.

Unfortunately, this structure does not entirely solve our problems - it is possible that there are conflicting or redundant updates queued for an agent, and those usually must be dealt with in particular ways. I'm still figuring out a good way to manage this.
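
To illustrate the two phases, here's a sketch of an agent in a simple SIR-style contagion model (this shows the idea, not syd's actual API):

class SIRAgent:
    # decide() only reads and queues mutations; update() applies everything
    # queued, so all agents appear to update simultaneously
    def __init__(self):
        self.state = 'susceptible'
        self.queued = []

    def decide(self, neighbors):
        # read phase: observe others, queue updates, mutate nothing
        if self.state == 'susceptible' and any(n.state == 'infected' for n in neighbors):
            self.queued.append(lambda agent: setattr(agent, 'state', 'infected'))

    def update(self):
        # write phase: apply all queued updates
        for fn in self.queued:
            fn(self)
        self.queued = []

# everyone decides against the same frozen state, then everyone updates
# (`agents` and `neighbors_of` are assumed to be defined elsewhere)
# for a in agents: a.decide(neighbors_of(a))
# for a in agents: a.update()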

Node failures

Another concern is the possibility of node failure. What if a machine dies in the middle of your simulation? All the agents that were on that machine will be lost, and the simulation will become invalid.

The approach that I'm going to try is to snapshot all agent states every $n$ steps (to some key-value store, ideally with some kind of redundancy in the event that one of those nodes fails!). If a node failure is detected, the simulation supervisor will automatically restart the simulation from the last snapshot (this could involve spinning up a new replacement machine or just continuing with one less machine).
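
The supervision loop might look roughly like this (entirely hypothetical; the `sim` and `store` interfaces here are made up):

SNAPSHOT_INTERVAL = 100

class NodeFailure(Exception):
    pass

def run(sim, store, steps):
    # snapshot agent states every n steps to a key-value store;
    # on node failure, restore from the last snapshot and keep going
    step = 0
    while step < steps:
        try:
            sim.step()
            step += 1
            if step % SNAPSHOT_INTERVAL == 0:
                store.put('latest', (step, sim.agent_states()))
        except NodeFailure:
            # possibly spin up a replacement machine here first
            step, states = store.get('latest')
            sim.restore(states)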

Abstract the pain away

Long story short: parallel and distributed computing is hard. Fortunately for the end-user, most of this nasty stuff is hidden away. The plan is to include many "sensible defaults" with syd so that the majority of use-cases do not require, for example, the implementation of a custom Distributor, or any other messing about with the internals. You'll simply indicate that you want your simulation to run on multiple computers, and it'll do just that.

References

  • Antelmi, A., Cordasco, G., Spagnuolo, C. & Vicidomini, L. (2015). On Evaluating Graph Partitioning Algorithms for Distributed Agent Based Models on Networks. European Conference on Parallel Processing.
  • Bharti, R. (2016). HIVE - An Agent Based Modeling Framework. Master's Projects.
  • Chenney, S. (2001). Simulation Level-of-Detail. Game Developers Conference.
  • Chenney, S. Simulation Culling and Level-of-Detail. IEEE Computer Graphics and Applications.
  • Chenney, S., Arikan, O. & Forsyth, D. (2001). Proxy Simulations for Efficient Dynamics. EUROGRAPHICS.
  • Chris Rouly, O. (2014). Midwife: CPU cluster load distribution of Virtual Agent AIs. Complex, Intelligent and Software Intensive Systems (CISIS).
  • He, M., Ruan, H. & Yu, C. (2003). A Predator-Prey Model Based on the Fully Parallel Cellular Automata. International Journal of Modern Physics C.
  • Holcombe, M., Coakley, S., Kiran, M., Chin, S., Greenough, C., Worth, D., Cincotti, S., Raberto, M., Teglio, A., Deissenberg, C., van der Hoog, S., Dawid, H., Gemkow, S., Harting, P. & Neugart, M. (2013). Large-Scale Modeling of Economic Systems. Complex Systems, 22.
  • K. Bansal, A. (2006). Incorporating Fault Tolerance in Distributed Agent Based Systems by Simulating Bio-computing Model of Stress Pathways. Proceedings of SPIE, 6201.
  • K. Som, T. & G. Sargen, R. (2000). Model Structure and Load Balancing in Optimistic Parallel Discrete Event Simulation. Proceedings of the Fourteenth Workshop on Parallel and Distributed Simulation.
  • Kim, I., Tsou, M. & Feng, C. (2015). Design and implementation strategy of a parallel agent-based Schelling model. Computers, Environment and Urban Systems, 49(2015), pp. 30-41.
  • Kubalík, J., Tichý, P., Šindelář, R. & J. Staron, R. (2010). Clustering Methods for Agent Distribution Optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
  • Lees, M., Logan, B., Oguara, T. & Theodoropoulos, G. (2004). Simulating Agent-Based Systems with HLA: The Case of SIM_AGENT - Part II. International Conference on Computational Science.
  • Logan, B. & Theodoropoulos, G. (2001). The Distributed Simulation of Multi-Agent Systems. Proceedings of the IEEE, 89(2).
  • Lysenko, M. & M. D'Souza, R. (2008). A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units. Journal of Artificial Societies and Social Simulation, 11(4/10). http://jasss.soc.surrey.ac.uk/11/4/10.html.
  • Márquez, C., César, E. & Sorribes, J. (2015). Graph-Based Automatic Dynamic Load Balancing for HPC Agent-Based Simulations. European Conference on Parallel Processing.
  • Navarro, L., Flacher, F. & Corruble, V. (2011). Dynamic Level of Detail for Large Scale Agent-Based Urban Simulations. Proceedings of 10th International Conference on Autonomous Agents and Multiagent Systems, pp. 701-708.
  • Oey, M., van Splunter, S., Ogston, E., Warnier, M. & M.T. Brazier, F. (2010). A Framework for Developing Agent-Based Distributed Applications. BNAIC 2010: 22nd Benelux Conference on Artificial Intelligence.
  • Pleisch, S. & Schiper, A. (2000). Modeling Fault-Tolerant Mobile Agent Execution as a Sequence of Agreement Problems. Reliable Distributed Systems.
  • Richiardi, M. & Fagiolo, G. (2014). Empirical Validation of Agent-Based Models.
  • Rousset, A., Herrmann, B., Lang, C. & Philippe, L. (2015). A communication schema for parallel and distributed Multi-Agent Systems based on MPI. European Conference on Parallel Processing.
  • Rubio-Campillo, X. (2014). Pandora: A Versatile Agent-Based Modelling Platform for Social Simulation. SIMUL 2014: The Sixth International Conference on Advances in System Simulation.
  • Scheutz, M. & Schermerhorn, P. (2006). Adaptive Algorithms for the Dynamic Distribution and Parallel Execution of Agent-Based Models. Journal of Parallel and Distributed Computing.
  • Sunshine-Hill, B. (2013). Managing Simulation Level-of-Detail with the LOD Trader. Motion in Games.
  • Tsugawa, S., Ohsaki, H. & Imase, M. (2012). Lightweight Distributed Method for Connectivity-Based Clustering Based on Schelling's Model. 26th International Conference on Advanced Information Networking and Applications Workshops.
  • Vecchiola, C., Grosso, A., Passadore, A. & Boccalatte, A. (2009). AgentService: A Framework for Distributed Multi-Agent System Development. International Journal of Computers and Applications, 31(3), pp. 204-210.
  • Dubitzky, W., Kurowski, K., & Schott, B, eds. (2012). Large-Scale Computing Techniques for Complex System Simulations.