# Fugue Devlog 3: Dialogue Editor and Manager

04.07.2021

Dialogue is a key part of Fugue, so I'm taking care to design those systems well from the start. The challenge is trying to imagine, beforehand, all the potential scenarios I might want to play out with the dialogue system. I'll probably get some of it wrong, but I've tried to design for maximum flexibility/minimum regret.

There are three main pieces that go into the system:

• The schema: how is the data that describes a dialogue/conversation encounter (a "script") structured?
• The editor: if I'm going to be writing and tweaking a lot of these scripts, I need a tool that's robust, quick, and intuitive.
• The manager: the system that handles running scripts in the game, e.g. figuring out where/how to render the text, handle choice selection, etc.

But first, what features am I looking for in a dialogue system?

• Branching and dynamic dialogue: choices in dialogue and factors outside that specific encounter influence the course of a conversation.
• Choices that depend on other variables (either hidden, or shown but not selectable, until the criteria are met)
• Variable substitution
• Pick up at different points depending on previous conversations
• Time-limited decisions, especially because managing time will be an important part of the game
• Rich formatting: colors, bold, italic, and whatever else I can get
• Entity agnostic: Conversations can be had with both objects and NPCs (without needing to classify objects as NPCs or anything hacky like that)
• Flexible in triggering: A conversation can be triggered by player choice (e.g. approaching an object/NPC and interacting), by entering the proximity of something, or after some other action is taken
• Integrate into broader scenes: triggering other actions/animations and events, capture the cadence and rhythm of a conversation with pauses (delays and timeouts) and by revealing the text over time (the "typing" effect)
• Feels well integrated into the surrounding ambient environment: less "we are locked into having a conversation now"
• Handle multiple simultaneous speakers: For example, to convey the feeling of everyone talking over each other in a large group

In terms of developing the game and editing dialogue, there are a couple other quality-of-life features, like making it easy to attach a script to an object, NPC, or trigger area and supporting validation/tests to minimize bugs.

## The script schema

This is the schema that's currently in place.

A dialogue script has two top-level keys:

• root: The root node that determines how the dialogue starts. It's just an array of "Outcomes" (see below)
• events: An array of "Events", which are the basic unit of a dialogue script. This is stored as a flat array, though it represents (and is parsed into) a graph.

An "Event" has the following structure:

• id: Used to keep track of event relationships. Only needs to be unique to its parent dialogue.
• type: There are two types of Events:
• thought: An internal dialogue statement; it's italicized and has no associated speaker
• verse: A spoken dialogue statement, spoken by a speaker
• text: The actual statement that's shown. Can use BBCode, which means colors and other styles can be applied.
• speaker: An optional speaker name to show with the rendered text.
• delay: Optional delay in seconds before the next event is rendered. For pacing a conversation.
• timeout: Optional timeout in seconds the player has to make a choice or to auto-progress the dialogue. If there are choices, letting the time run out is a "null" choice.
• signal: Optional signal name (signals are Godot's way of having nodes communicate with each other without direct references) to emit when this event starts. This can be used to trigger things like other actions/animations in the environment (I think, I haven't tested it yet).
• outcomes: An array of "Outcomes". An Outcome is a link to another Event, with zero or more conditions attached to it.
• The order of the array matters. Outcomes have their conditions evaluated in the array order; the first to evaluate to true (or to have no conditions) is selected as the next Event.
• An Outcome with no conditions is the "default" Outcome; there can be only one.
• An Outcome has:
• ids: The next events to load if this Outcome is selected. Something I'm thinking through now is whether this should only be a single id or multiple ids (the current implementation); the relevance is for the simultaneous speakers feature. Not sure how to do that yet without making the progression of the conversation hard to anticipate.
• conditions: An array of Conditions that must evaluate to true for the Outcome to be selected
• choices: An array of "Choices". When selected a Choice sets a local variable called choice; Outcomes can condition on this variable (i.e. a Choice can lead to a specific Outcome but more complex behaviors are also supported). A choice consists of:
• id: This is what the choice variable is set to if the Choice is selected
• required: An array of Conditions that have to be satisfied for this Choice to be selectable
• show_required: An array of Conditions that have to be satisfied for this Choice to be visible (e.g. for secret choices)
• text: The text displayed for the Choice. Supports BBCode, so colors and other styles can be applied.
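To make the schema concrete, here's a tiny script sketched as a Python dict. This is purely illustrative: the event ids, speaker, and text are all made up, and the actual scripts are serialized by the editor (in GDScript-land), not written like this.

```python
# A minimal example script, expressed as a Python dict for illustration.
# Event ids ("greet", "help", "dismiss") are invented for this sketch.
script = {
    "root": [
        {"ids": ["greet"], "conditions": []},  # default entrypoint
    ],
    "events": [
        {
            "id": "greet",
            "type": "verse",
            "speaker": "Clerk",
            "text": "Looking for something?",
            "choices": [
                {"id": "yes", "text": "Yes, actually.",
                 "required": [], "show_required": []},
                {"id": "no", "text": "Just browsing.",
                 "required": [], "show_required": []},
            ],
            "outcomes": [
                # Checked in order: this fires only if the player chose "yes"...
                {"ids": ["help"], "conditions": [
                    {"variable": "choice", "comparator": "==",
                     "type": "value", "value": "yes"},
                ]},
                # ...otherwise this conditionless default Outcome fires.
                {"ids": ["dismiss"], "conditions": []},
            ],
        },
        {"id": "help", "type": "verse", "speaker": "Clerk",
         "text": "Back shelf, third row.", "choices": [], "outcomes": []},
        {"id": "dismiss", "type": "thought",
         "text": "They seem busy anyway.", "choices": [], "outcomes": []},
    ],
}
```

The second Outcome on "greet" has no conditions, so it's the default: it also covers the "null" choice case where a timeout expires before the player picks anything.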

The other piece is Conditions, which have the following recursive schema, split into two types:

• Comparison:
• variable: The variable name for the left side of the comparison
• value: The value or variable name for the right side of the comparison
• type: Indicates if value is a "value" or a "variable"
• comparator: One of ==, !=, <, <=, >, >=, for comparing the left and right sides
• JointComparison:
• op: An and or or operation
• a: A Condition
• b: Another Condition

Thus JointComparisons can contain more JointComparisons and so on.
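Since Conditions nest recursively, evaluating them is a small recursive function, and Outcome selection is just first-match over the array. Here's a sketch in Python (the real implementation is GDScript; the dict shapes follow the schema above, and `state` is assumed to hold the global variables plus `choice`):

```python
import operator

# Maps the schema's comparators to Python's comparison functions.
COMPARATORS = {
    "==": operator.eq, "!=": operator.ne,
    "<": operator.lt, "<=": operator.le,
    ">": operator.gt, ">=": operator.ge,
}

def eval_condition(cond, state):
    """Recursively evaluate a Comparison or JointComparison dict."""
    if "op" in cond:  # JointComparison: combine two sub-Conditions
        a = eval_condition(cond["a"], state)
        b = eval_condition(cond["b"], state)
        return (a and b) if cond["op"] == "and" else (a or b)
    # Comparison: the left side is always a variable lookup; the right
    # side is either a literal value or another variable, per "type".
    left = state[cond["variable"]]
    right = state[cond["value"]] if cond["type"] == "variable" else cond["value"]
    return COMPARATORS[cond["comparator"]](left, right)

def select_outcome(outcomes, state):
    """First Outcome (in array order) whose conditions all pass wins;
    an Outcome with no conditions always passes (the default)."""
    for outcome in outcomes:
        if all(eval_condition(c, state) for c in outcome["conditions"]):
            return outcome
    return None
```

Because `select_outcome` walks the array in order, putting the conditionless default Outcome last gives the "fall through if nothing else matched" behavior described above.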

## The dialogue editor

I shouldn't be editing dialogue scripts by hand but through an editor that keeps things valid where possible. This is implemented as a "main screen" EditorPlugin for Godot using its built-in GraphEdit node and other UI elements. I was surprised at how much can be done with just the built-in components, though it was a struggle at times. I learned a lot in the process but some of Godot's UI behavior is unusual coming from frontend web development.

An additional feature is a validator. It runs through the script and identifies common errors, checking that:

• In the script root:
• There's one default entrypoint. That is, the script has to have some default starting event.
• Each entrypoint is connected to an event.
• Each entrypoint eventually leads to a terminal event (i.e. no conversations that loop forever).
• For all Conditions:
• All values and variables are defined.
• All variables reference existing global state variables or choice.
• For each Event:
• The text is not empty.
• It has a parent (which can be the root).
• Each of its Outcomes is connected to another event.
• It has exactly one default Outcome.
• It has exactly one default Choice, if it has any Choices.

There are some other small quality-of-life features, like highlighting all events a given event is connected to. The editor will also automatically lay out nodes, but it's not very good at that yet. The graph also gets very dense, very quickly, given how many properties there are for an event. I want to figure out how to make that representation more compact and support faster free-flow writing.

## The dialogue manager

The dialogue manager is what reads a dialogue script and plays it out in-game. It needs to render and position the text, render the choices and handle their interactions, and so on. So far it's relatively simple (if the schema does its job well, the dialogue manager shouldn't have to do much), but it will probably get more complicated with more advanced features like speaker position tracking, simultaneous dialogue, and ambient dialogue.
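Stripped of rendering, delays, timeouts, and signals, the manager's core job reduces to a walk over the event graph. A rough Python sketch of that loop (the real manager is a Godot node in GDScript; `choose` stands in for player input, and every name here is illustrative, not the actual API):

```python
import operator

OPS = {"==": operator.eq, "!=": operator.ne, "<": operator.lt,
       "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def passes(cond, state):
    """Minimal Condition evaluator matching the schema's two types."""
    if "op" in cond:  # JointComparison
        a, b = passes(cond["a"], state), passes(cond["b"], state)
        return (a and b) if cond["op"] == "and" else (a or b)
    right = state[cond["value"]] if cond["type"] == "variable" else cond["value"]
    return OPS[cond["comparator"]](state[cond["variable"]], right)

def run_script(script, state, choose):
    """Play a script to completion, returning (speaker, text) pairs."""
    events = {e["id"]: e for e in script["events"]}
    transcript = []
    queue = list(script["root"][0]["ids"])  # assume one default entrypoint
    while queue:
        event = events[queue.pop(0)]
        transcript.append((event.get("speaker"), event["text"]))
        if event["choices"]:
            state["choice"] = choose(event)  # set the local `choice` variable
        for outcome in event["outcomes"]:
            if all(passes(c, state) for c in outcome["conditions"]):
                # An Outcome can carry multiple ids (simultaneous speakers),
                # so extend rather than replace.
                queue.extend(outcome["ids"])
                break
    return transcript
```

Everything interesting (typing effects, dialogue box placement, timeouts resolving to a null choice) layers on top of this skeleton.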

I won't have any voice acting (bad voice acting is worse than no voice acting!) but I want the talking that does happen to still feel like ambient sound and conversation. This video on game design that accommodates deaf and hard-of-hearing players mentions sound cue indicators that can be enabled in Fortnite:

Sound cues won't be important in Fugue, but maybe something like this can give a sense of ambient snippets of conversation happening around you. In general I want conversations to feel less like you're fixed in place with a big block of text at the bottom of the screen and more woven into the environment, which I think this helps with.

I need to implement and experiment with this kind of approach. It might get way too cluttered or be otherwise overwhelming. One way to manage that is to only show snippets of speech as the visual cue when you're close enough to make out what's being said; at greater distances, conversations could collapse into a more abstract representation of speech happening off-screen.

## Next steps

Working out and implementing the rest of the dialogue system is enough to keep me occupied for a while. Figuring out the ambient dialogue system, a better way to do simultaneous dialogue, and dynamically positioning dialogue boxes based on speaker position are the next challenges. Then testing everything, fixing any issues, and getting confident in its robustness and expressiveness.

After that, I want to try building an exterior environment and work on player movement/scene transitions.

Bigger tasks off the top of my head: an inventory system and building out more object interaction, then thinking through some of the more specialized systems. Right now that includes a card game, a legal system, and character ability puzzles. But which of those remain, and what they ultimately need to do, depends on figuring out the rest of the world and story in more detail.

# Fugue Devlog 2: Interior Environments and Interaction

04.04.2021

A lot of progress this weekend. I put together some larger objects, such as this pharmacy cabinet:

And built out an interior test environment to figure out the built-in physics system (mainly collisions) and develop player movement/control and object interaction. I'm not sure I have the best practices down yet (e.g. the walls are separate planes, when I should maybe use an inverted cube), but I'm happy with how it turned out:

The player movement and control still need a lot of tweaking. I'm not sure of the best relationship between the player direction and camera direction... this is easier to figure out on consoles, where two joysticks control each separately, but it's less clear on PC.

The object interaction using raycasting (key for object interaction/talking to NPCs) was very straightforward, as was object proximity detection (for triggering events when you come near something, for example).

In general I'm getting more familiar with Godot and the workflow with Blender. So far I'm really enjoying Godot; a few small snags here and there, but overall it's been intuitive and powerful. I haven't yet felt like I needed to do anything clunky or hacky; there's always been a clean solution.

I originally anticipated using Rust as the foundation for the game (I was looking at Bevy as the framework), and even set up an integration between Godot and Rust (using godot-rust). But I've found GDScript to be really nice, and it probably performs well enough for my needs. The game logic is straightforward; any bottlenecks would probably come from graphics/rendering, which the scripting language wouldn't help much with anyway. And since I'm using relatively small textures and simple models, I'm not worried about that.

I'm currently sketching out the dialogue system. There are many different scenarios that it needs to handle; I'm hoping to design it so that it basically can run "cutscenes" (really just sequences of dialogue, animations, audio cues, etc) in addition to a more conventional choice-driven system. So I'm taking care to design a system and schema for representing dialogue scripts that will avoid painful re-writes in the future.

I'm using one of my favorite bitmap fonts here, UW ttyp0. Unfortunately it doesn't get much bigger than this, so I'll eventually need to find an alternative.

# Fugue Devlog 1: Framework, Style, and Workflow

03.31.2021

For the past few months I've been sketching out the world and rough game mechanics/experience for a new game I'm working on called Fugue. I have a big dump of notes and memos to turn into something more coherent, but because the world, game mechanics/experience, and story are all going to be constrained by development considerations, I want to start getting that whole infrastructure and process in place before going any further.

There are two main fundamental and related constraints:

• Time: I have a job and other obligations, so anything that can smooth out workflows, minimize clicking and pausing and looking for the right folder and so on is ideal.
• Expressiveness: Ideally whatever I set up now is enough to cover whatever ideas I might come up with down the line, and to allow me to express them quickly and intuitively.

Right now I'm thinking the core development pieces are relatively simple and robust once in place. Most of the time will be spent creating assets (modeling, processing textures, animating, etc) and writing the story and dialogue. I'm really just putting together a custom authoring system for this particular game.

## The game framework and other decisions

Probably the biggest starting decision is what game development framework to use. I initially was looking at Bevy, a promising Rust ECS (Entity-Component-System) framework. Unfortunately it's still in development, which would likely lead to a lot of headaches down the road. In the meantime they suggest Godot, which is well-supported, open source, and mature: to the game world something like what Blender is to the 3D modeling world. I was a little hesitant at first, based on my previous experience with this sort of game engine. When developing The Founder I initially used Unity, but had such a frustrating time with it that I ended up developing it as a web game. Godot fortunately looks better; after playing around with it a bit I feel fairly confident that it's the right choice.

The other big decision was what modeling software to use. I'm already familiar with Blender so it was a no-brainer to keep using that (I did take a brief look at picoCAD, which looks great). I did hit a snag because I was using Blender 2.7 and upgraded to 2.9, which has a huge set of changes (which I think were mostly introduced in 2.8). Though it took a bit of re-adjusting to the UI, overall some of the annoyances I had with Blender feel more or less resolved in the update. I did struggle with the program crashing and locking up my system frequently. It seems like that might have been due to an older version of Mesa; after upgrading it things might be working properly (fingers crossed).

There were some other important housekeeping decisions to make, like folder hierarchies and how to keep process notes so that if I have to hit pause on the project for a few weeks (extremely likely), I can easily pick up where I left off. These are likely to change as the project picks up and I get a better sense of what the needs are, so I'm not too worried about getting them exactly right at the start.

## Art style

The other major set of decisions to make are around the art style. A few years ago while prototyping some ideas I used a very quick-and-dirty modeling and texturing approach that gave really good results. It's basically a low-poly approach and uses internet-sourced images as textures:

It's close to the sweet spot of visually interesting but not too labor-intensive. But I want Fugue's graphics to be a bit "crunchier", achieved through lower-res graphics (more pixelated) and no anti-aliasing. That locates it somewhere in PS1 graphics nostalgia, which is appropriate because the game draws from FF7 and FF8 (or at least the mood those evoked). Most of the PS1-inspired games I've seen are horror, so they typically have a very dark, gloomy atmosphere and more of an industrial aesthetic. In terms of color I'm drawing more from Chinatown, Buddhist thangkas, and Ghibli/Miyazaki: in general brighter, more saturated colors and more evenly-lit environments. The end result should (hopefully) be more visually distinctive.

I have no idea how well this will work without trying it. So as a pilot I modeled and textured a maneki-neko (lucky cat) to work out the process in more detail.

## The texture processing pipeline

The general pipeline is:

1. Find a suitable image. This can be really time-consuming because of licensing requirements and the need for good lighting/angles in the photo.
2. Process the image. This includes cropping the image and then color-correcting and adjusting the saturation, contrast, etc. It also includes the pixelation ("crunching" the image).

Ideally this pipeline is as automated as possible, since it can be very time-consuming. Cropping isn't the worst: it can be done quickly by hand, and because of the pixelation it doesn't need to be especially tight; the texture sizes are small enough that leftover background in images doesn't take up too much space. But I did explore image segmentation models to see if it could be automated at all. Unfortunately the output quality just isn't reliable enough.

There are a lot of different ways to crunch the image. In all cases they require some post-processing to bring out features that might have been lost in the crunch.

Here are the approaches I tried:

imagemagick: The simplest is just to downscale the image to a very small size (e.g. to a max dimension of 32px) using nearest-neighbors. I'm applying this technique with the following bash script:

```sh
convert -auto-gamma -auto-level +contrast -modulate 100,150 \
    -interpolate Nearest -filter point -scale $1x$1 "$2" "../${2%.*}.png"
```

This also takes care of some of the color correction and contrast/saturation adjustments. This is the fastest; I just run the script on an image after it's downloaded.
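The heart of that command is the nearest-neighbor downscale. As a plain-Python illustration of what `-filter point -scale` is doing (operating on rows of RGB tuples, without imagemagick or any imaging library; the function name and shapes are just for this sketch):

```python
def crunch(pixels, max_dim):
    """Nearest-neighbor downscale of a pixel grid (list of rows of RGB
    tuples) so its largest dimension is at most max_dim."""
    h, w = len(pixels), len(pixels[0])
    scale = max(h, w) / max_dim
    if scale <= 1:
        return [row[:] for row in pixels]  # already small enough
    new_h = max(1, round(h / scale))
    new_w = max(1, round(w / scale))
    # Each output pixel just copies the nearest source pixel: no
    # averaging, which is what keeps the result blocky.
    return [
        [pixels[min(h - 1, int(y * scale))][min(w - 1, int(x * scale))]
         for x in range(new_w)]
        for y in range(new_h)
    ]
```

The blockiness comes from copying single source pixels rather than averaging neighborhoods, which is also why nearest-neighbor loses small features that the post-processing touchups then have to bring back.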

pixelator: This one looked promising, but sadly there's an incompatibility with newer versions of pango so I wasn't able to try it. I didn't want to go through all the trouble of getting a set of older libraries to run it.

pixatool: This one works pretty well. The page there says "Windows only", but there is in fact a Linux (and Mac) version in the zip that you download after purchasing it. The dev says it's too much effort to support Linux and Mac, which it probably is! But I appreciate that the binaries for Linux are still included. So far I haven't had any issues with it.

For touchups, I used pixelorama which is a nice, straightforward pixel editor. It looks like you can work very quickly in it once you get accustomed to the shortcuts.

Here's a comparison of the two crunching methods (with minimal post-processing):

It's hard for me to judge which of these textures is better from just this. They are fairly similar except that the pixatool one is more saturated.

Playing around with the models in Godot (after a very confusing import process) led me to stick with the imagemagick one:

It looks like Godot does not enable anti-aliasing by default, which is great.

It looks great with this PS1 post-processing shader, which fills in "detail" wherever the texture itself is not especially pixelated: