Fugue Devlog 16: Motion Capture Tests

08.12.2022

An example rendering from the CMU MoCap database.

After a few busy weeks I've found a bit of time to continue work on Fugue. One of the last major questions of content production, after writing, textures, and music, is character animation (that still leaves object and character modeling as the other two problem areas). While I believe I can get away with lower-poly models and crunchier photo-textures, I don't think that's the case with low-quality animation: it's too jarring. So I want to figure out a way to produce fairly good, realistic motion on the cheap.

There are a number of deep learning-based projects available for motion capture without a sophisticated tracking setup. Some are even monocular, requiring only one camera. There are some commercial offerings (such as deepmotion.com), but I want to see how far I can get with open source options: I can modify them as needed, and they'll be easier to integrate into a more automated process than commercial options.

The open source projects are usually research projects, so they're unpolished, somewhat janky, and probably don't generalize very well. Their licenses also often restrict usage to non-commercial purposes: MocapNET, EasyMocap, and FrankMocap, for example, are all non-commercial only. I did find MotioNet, which does allow commercial usage (under its BSD-2 license) and requires only one camera, so that was promising.

One alternative to the deep learning approach is to just use existing motion capture data and hope that covers all the animations I'd need. A great resource is the CMU Graphics Lab Motion Capture Database, which has generously been converted to .bvh by Bruce Hahne for easy usage in Blender. The collection encompasses 2,500 motions and is "free for all uses". The range of motions is expansive enough (including things like pantomiming a dragon) that it's possible it will have everything I need.

Still, I wanted to try out the deep learning approaches, in part because I was curious.

One note here is that these models typically output motions as .bvh files. These contain motion instructions addressed to a particular skeleton (where, for example, the left leg bone might be named LeftLeg). I used Mixamo's auto-rigger to rig my character and the resulting skeleton has a different naming system. Fortunately there is a Blender addon, "BVH Retargeter", that remaps a .bvh to a differently-named skeleton. It doesn't include a mapping for Mixamo by default, but I set one up myself (available here, it goes into the known_rigs directory).
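The remapping itself is conceptually simple. Here's a minimal Python sketch of the idea; the table below is illustrative and incomplete, so check the names against your actual rig before relying on it:

```python
# Illustrative mapping from CMU-style BVH bone names to a Mixamo rig.
# Mixamo bones use the "mixamorig:" prefix, but verify every name
# against your own skeleton.
BVH_TO_MIXAMO = {
    "Hips": "mixamorig:Hips",
    "Spine": "mixamorig:Spine",
    "LeftLeg": "mixamorig:LeftLeg",
    "RightLeg": "mixamorig:RightLeg",
    "LeftArm": "mixamorig:LeftArm",
    "RightArm": "mixamorig:RightArm",
}

def remap_bone(name: str) -> str:
    """Return the target rig's name for a bone, unchanged if unmapped."""
    return BVH_TO_MIXAMO.get(name, name)
```

The "BVH Retargeter" addon does this (plus retargeting the motion curves themselves) from a mapping file in its known_rigs directory.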

On this note, there is also this Deep-motion-editing project which has a much more sophisticated retargeter:

Deep-motion-editing retargeter

I don't know yet if I'll have a need for this, but good to know it's there!

On to the tests:

I'm using a Kiros Seagill model (from FF8) for these tests.

Even though MocapNET's license isn't what I need, I decided to try it anyway:

MocapNET test

It looks ok, though a little janky and all over the place. And the hands aren't animated.

MotioNet

MotioNet looked promising but unfortunately its output wasn't very good. The pose comes out upside-down for some reason (a known issue), which seems like an easy enough fix, but the joint movement is stiff and incorrect.

CMU MoCap

The CMU motion looks great of course, as it's actually captured properly. Again, the only concern here is that it doesn't have a wide enough range of motions.

The last software I tried is FreeMoCap, which is still in very early stages of development, but there's enough to try out. It was quite a bit more complicated to set up, as it works best with multiple cameras (they can still be fairly cheap webcams, e.g. $20-30 each), and requires a charuco board for calibration, which I printed at Kinko's. That cost me about $30 on poster board, but you could probably make something cheaper with an old cardboard box and large-format printer paper. In total I spent ~$100 on equipment.

The most important thing is to get webcams that work for the size of your recording room, so that your full body is in frame for all of them (which may require wide-angle cameras). Then you need to make sure your charuco board is large enough that its patterns are clear to the webcams: the further away you position the webcams, and the lower the resolution you record at, the larger the charuco board you'll need. Note that there's also a resolution/frame-rate trade-off: running 3 cameras at 1080p I get about 14-15fps, but I needed that resolution for my charuco board to register clearly. Also, the FPS can be bottlenecked if you run your cameras through a USB hub (some of the cameras may not even work in that case); I ended up plugging each camera into its own port for best performance.

Getting the program to work was tricky, which wasn't a surprise given the project is in an alpha state. I had to make a few changes to get it to run properly (mainly switching from multithreading to multiprocessing, since the threads were blocked on my system, and manually setting the FPS for my cameras, which for some reason would limit to 5FPS otherwise).

Below is an example of the program's output. My recording environment is less than ideal (my camera setup was super janky, attached to books and shelves in a haphazard way), but the output looks decent. It's jittery, and you'll notice the pose skeleton and camera footage are swapped in the first and last videos. I'm not sure if that's just a bug in this visualization or if it's happening deeper in the program, in which case it may explain the jitteriness and the off skeleton angle.

FreeMoCap output

The program can also output Blender files:

FreeMoCap Blender output

Here the issues are more apparent: the hands especially are all over the place, but even the limb motion is too twitchy. The demo video (above) has good limb motion, so maybe my setup is lacking (though its hands are also jittery).

FreeMoCap is a really promising project, but it's at too early a stage to be consistent and reliable. For now I'll probably develop the game using the CMU motion data, and later, when FreeMoCap is more mature, go through and replace or refine animations with custom motions. At the rate development is going, there's a good chance it'll be much further along by the time I'm ready to really work on character animations!


Fugue Devlog 15: Divine Demons, In-Game Game, and Seamless Textures

06.21.2022

Modeling demonic divinities

Growing up I was very fond of Buddhist art and I would make an effort to visit various Buddhist temples whenever my family would go to China to see my grandparents. I found the paintings and sculptures of terrifying figures to be strangely calming, and only later did I learn that these fearsome beings are in fact benevolent, wrathful "protectors".

Demonic Divine, Rob Linrothe, Jeff Watt

While visiting Kira's parents last week I had the opportunity to read through the catalogue for one of the Rubin Museum's opening exhibitions, "Demonic Divine", which was exactly on this topic. I've wanted to find some way to draw from this tradition of "divine demons" for Fugue, and reading through the catalogue helped me figure out how I could do that.

Without going into too much detail, one fundamental Fugue world-element is the phenomenon of "hauntings", which are mental blocks and fears that characters experience. The character progression system is based around confronting and resolving these hauntings (perhaps in addition to a simpler experience-based system, not really sure yet). The way I've pictured this is that progression happens along a path (the character's "道"/"way" in a sense) and these hauntings are literal roadblocks, represented by these demons, along this path. So a key question is how exactly does this process of resolving hauntings1 work?

One other feature I want to include is an in-game game (tentatively called "Gem"), sort of like Triple Triad in Final Fantasy VIII. Including something like that lets you do a lot of interesting things: explore its cultural significance, show how different places and people interpret the game (perhaps with different house rules and variations), explore all the infrastructure that comes up around a relatively simple game once it becomes a major phenomenon (rules committees, judges, player associations, its role in national conflict, etc), and in general be a means of showing different value systems, thought processes, philosophies, etc, like Azad in Iain M. Banks' The Player of Games. I'm still working out Gem's design, but so far it's like some combination of chess, checkers, and Chinese checkers, with the variability of a card game—i.e. many different pieces with different abilities, and players can build a "deck" of pieces to play with like they might build a deck in Magic: The Gathering.

My current thinking is that to get through a demonic roadblock you challenge the demon to a game of Gem, in a "chess with death" style. The demon itself is represented as a piece on the game board that you need to remove, sort of like the king in chess. However, the goal of the game is not necessarily to "defeat" the demon. Each demon represents a mental block/fear that has a complementary virtue; the point is not to excise the demon from your mind but to recognize its value as an ally. In the context of Gem, this means capturing the demon instead of destroying it. If I design Gem right, capturing the demon will be harder than destroying it. If you're able to capture it (maybe "convert" is a better term), you gain access to that piece for future games of Gem and perhaps some kind of bonus "blessing" for the character outside of Gem.

I've been working on modeling the first demon, based on Chitipati. Chitipati is a pair of skeletons depicted within a frame of flames, which acts as a memento mori and more generally represents endless change:

Chitipati, from Himalayan Art Resources

The first part I tried to model was the flame halo. Similar features are common in wrathful deity iconography, so this model will be useful for other demons too. I made several attempts at modeling these flames using only Blender's geometry nodes, with the hope of generating them procedurally, but they never came out well (maybe because of my inexperience with geometry nodes). In the end I hand-modeled five flames and used geometry nodes to distribute them in an arch, which I'm happy with:

Flame arch

When I moved on to modeling Chitipati I started with a fair amount of detail. It didn't look very good, so I gave it another shot from scratch with a rougher approach, and it looked a lot better (though the hands look TERRIBLE...I'll maybe give them more attention later, to shape them into particular mudras). Relying on textures for detail rather than capturing it in the mesh makes a lot more sense for this game, especially if I'm using photo-textures. And because I'm rendering at a lower resolution, finer mesh detail won't really show up anyway.

A side note: I'm modeling this demon as only one skeleton, rather than Chitipati's two.

Test render

This test render's resolution is high enough that the lack of mesh detail is apparent (pretty boxy-looking in some parts). When rendered in-game the low detail looks more appropriate:

In-game

Infilling and seamless tiling with GIMP

One idea for the flame halo was to use content-aware fill or a similar infilling algorithm to generate a full image of the flame background and then somehow turn that into a 3d model. That route didn't really work (I couldn't make it look good; it just looked flat and janky), but I did find an infilling plugin for GIMP called Resynthesizer that is really promising. It didn't work very well for the flame frame (perhaps because it's an illustration, or because there isn't much content to reference for the infilling), but it works much better with texture photography. Here's an example using an image of some overgrowth:

The original image

After applying the Resynthesis plugin (using Filters > Map > Resynthesize) and checking "Make horizontally tileable" and "Make vertically tileable":

The filled-in/resynthesized image

That result looks great, but it wasn't actually seamlessly tileable. Fortunately there is another filter, Filters > Map > Tile Seamless, that handles that:

The seamless tiling image

It looks really good tiled:

The seamless tile in use (2x2)

This texture has a lot going on so it may be an easier example. If you look closely at the seamless tile version you can see some ghost leaves overlapping other leaves, which might be more noticeable in a sparser texture.

It's more apparent in these textures:

You can see some mushiness/blurriness/choppiness in the patterns from the overlapping areas. It's not terrible, especially for textures you won't look closely at and in the context of the game's downsampled resolution. Again, part of the game's aesthetic is about giving a big margin of error for quick-and-dirty assets, whether through low poly modeling or iffy automated tools.

When I have more time I want to see about integrating this directly into the Texture Editor from the last post; it would be nice to not have to open up GIMP every time to process images this way.

As a side note, it's been a while since I've used GIMP and I'm impressed by this latest version! Feels like it had a big glow-up like Blender did.


  1. "Exorcisms", sort of line with how they work in Buddhism, though I don't know that I will call them that in-game. 


Fugue Devlog 14: Authoring Tools

06.05.2022

Wow, it's been almost a year since I last updated this blog.

I hadn't had time to work on Fugue until a month or so ago. Since then I've been chipping away at more tooling. Once the core game mechanics/systems are in place, I expect most of the time will be spent creating content for the game: writing, modeling, environment design, etc. So I'm building out tooling and figuring out strategies to streamline all of these processes.

Godot makes it easy to develop editor plugins that integrate relatively seamlessly. It's not without its challenges and frustrations, but those have more to do with Godot in general than with its plugin development process specifically (see below).

Writing

The game will play out mostly through characters saying and doing things, and these actions need to be specified in a way that doesn't require me to meticulously program each one. Previously the game's narrative elements used "Dialogue" as the main organizing element: I could write "scripts" of spoken dialogue lines, with a playback system to have the appropriate characters say their lines in order. That ended up being too limiting, because I want to write not only dialogue but also actions and stage directions: scripts that describe entire scenes, including character movement and animation, sound and environmental cues, and so on. So I restructured the whole system around "Sequence" as the main organizing element, with "Dialogue" as a sub-component.

A Sequence is composed of "Actions", which include dialogue lines, choice prompts, animation triggers, game variable setting, character movement and rotation, etc. At the time of writing the following actions are available:

  • Line (L): A line of dialogue, spoken by a single Actor.
  • Decision (%): A set of choices that the player must choose from.
  • VoiceOver (V): A line of voice-over dialogue. The difference between this and Line is that it does not require the speaking actor to be present and shows in a fixed position on screen.
  • Prompt (?): Basically a combination of Line and Decision. A line of dialogue is displayed with the decision's choices.
  • Pause (#): A blocking pause in the sequence.
  • SetVar (=): Set a state variable to the specified value (strings only). There are a number of targets:
    • Global: Set it on the global state
    • Sequence: Set it on the local state (local to the current sequence). These values persist through multiple executions of the same sequence (i.e. they aren't reset whenever the sequence is run again).
    • An Actor: Set it on the local state (local to a specific actor).
  • PlayAnimation (>): Play the specified animation for the specified Actor.
  • MoveTo (->): Move the Actor to the specified target
    • You can use this to "bounce" the player character if they enter somewhere they aren't supposed to.
  • LookAt (@): Have the Actor look at the specified target
  • ToggleNode (N): Toggle the visibility of the specified node. Can fade it in/out (but looks quite janky)
  • RepositionNode (>N): Move a node to the position and rotation of the specified target. This happens instantaneously...so you could use it for teleportation; but more likely you'd use it to rearrange a scene while it's faded out.
  • TogglePortal (P): Enable/disable the specified portal.
  • AddItem (+): Add an item to the player's inventory
  • PlaySound ())): Play a sound
  • Parable (~): Start or end a Parable (Quest)
  • Fade (F): Completely fade the scene in or out.
  • ChangeScene (>S): Change the scene. Because sequences are associated with one scene, this will end the sequence!

Sequences may be triggered in one of three ways: the player interacting with an object (such as talking to an NPC), the player entering a zone/area, or automatically when a scene loads ("ambient" sequences).

A "Sequence Script" is a graph of two types of nodes: "Verses", which are lists of actions, and "Forks", which include one or more "Branches" that each have a set of conditions. If a branch's conditions are true then its child verses are executed.

Sequence Editor

Sequences are associated with only one scene. Generally multiple sequences will be related in some way: they might be part of the same narrative arc, for example. So Sequences can be further organized into "Stories", which are basically just groupings of Sequences without any significant additional functionality.

The Sequence and Story Editors both make it very easy to quickly sketch out scripts and later refine them. They both have built-in validators to ensure that scripts are correctly specified, i.e. they don't refer to any objects that aren't in the scene, aren't missing any required data, etc.

Sequences and Stories are just stored as relatively simple JSON so they can be further processed/analyzed outside of Godot easily.
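For example, a simple external validator over that JSON might look like this; the field names are hypothetical stand-ins for the real schema:

```python
def validate_sequence(seq, scene_objects):
    """Collect basic errors for one sequence: e.g. a Line action needs
    text and an actor that actually exists in the scene."""
    errors = []
    for i, action in enumerate(seq.get("actions", [])):
        if action.get("type") == "Line":
            if not action.get("text"):
                errors.append(f"action {i}: Line is missing text")
            if action.get("actor") not in scene_objects:
                errors.append(f"action {i}: actor not in scene")
    return errors
```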

I expect that as the game's writing and development continues more actions will be needed. But for now this set has been comprehensive enough.

Example script showing different sequence actions

Textures

Finding source images that fit my licensing requirements and then editing them into textures is a very tedious process. I built a web tool that makes it much easier to find public domain and CC source images (vastly simplified by Openverse), cut out clippings from them and pack those clippings into textures or generate seamless textures by wrapping and blending their edges. It tracks where the clips were sourced from so that attribution is much easier to manage.
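As a sketch of the wrap-and-blend idea (not the editor's actual algorithm), here's a cross-fade of the left edge against wrapped content from the right edge of a grayscale image:

```python
def make_tileable(img, margin):
    """Cross-fade the left edge with wrapped pixels from the right edge
    so the image tiles horizontally. `img` is a row-major grid of
    grayscale values; the same idea applies vertically."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for i in range(margin):
            # weight on the original pixel grows toward the interior
            t = (i + 1) / (margin + 1)
            left, right = img[y][i], img[y][w - margin + i]
            out[y][i] = left * t + right * (1 - t)
    return out
```

This is also roughly what produces the "ghost" overlaps mentioned below: blended pixels carry faint copies of content from the opposite edge.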

Texture Editor: Search

Texture Editor: Clipping

Texture Editor workflow

Music

I'm not at the point where I've given a ton of thought to the game's music, but I do have one tool, dust, that I developed to help sketch out musical ideas. I didn't develop it specifically for this game but it'll be useful here too. It's a chord progression generator/jammer that outputs MIDI so it can be used as an input to most DAWs (tested with Bitwig Studio and Live 11). It helps to get around blank-canvas-syndrome by giving you a chord base to start working with.

dust

Miscellaneous

Items

I've started working on the item system, which is very simple at the moment (and hopefully will stay that way). To manage the items I created an Item Editor, which, though a lot simpler than the Sequence Editor, is just as useful.

Item Editor

Blender scripts and templates

Blender's also been nice to work with because of its support for Python scripts. It's a little clunky to get things integrated, but can be powerful once you're going. In my case I'm mostly using a "quick export" script that helps me avoid the tedious work of keeping exported files organized (navigating to the correct folder, setting the filename, etc) and double-checking my export settings are correct. In the case of items, which require a static icon to show in the UI, the export script automatically exports a properly-cropped render of the item to the item icons folder so I don't have to bother with that at all.
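The core of such a quick-export script is just deriving a consistent output path. A minimal sketch, with a hypothetical folder layout and leaving out the actual bpy export call:

```python
from pathlib import Path

EXPORT_ROOT = Path("exports")  # hypothetical project layout

def export_path(obj_name: str, category: str) -> Path:
    """Derive a consistent export location from the object's name, so no
    folder navigation or filename retyping is needed at export time."""
    safe = obj_name.lower().replace(" ", "_")
    return EXPORT_ROOT / category / f"{safe}.glb"
```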

Another small but immensely helpful thing is having a specific template for Fugue modeling work, with materials, cameras, and what not preconfigured. My material settings change very infrequently; I'm usually just swapping out textures, so this saves me a lot of time configuring materials over and over again.

Dialogue Layout System

Not really a tool, but something I've been refining for awhile now. This is the system that determines where dialogue boxes are placed on screen. Many games have a fixed dialogue box, e.g. at the center bottom of the screen, but I want it to feel more spatial, especially as there won't be any voiced lines in the game (too expensive/too much work/difficult to change and iterate on) so there won't be any 3d audio to offer that auditory depth.

Dialogue from Breath of the Wild

As far as I know there is no easy or reliable way to lay out rectangles in a 2d space with a guarantee of no overlaps. Not only should there be no overlaps, but each dialogue box should be reasonably close to its "host" (the actor that's speaking) so that it's clear who's speaking. I have a reasonable expectation/constraint for myself that at most something like five actors will be speaking at once, and the game has a minimum viewport size to ensure there's a reasonable amount of space. That is, I'm not expecting that overlaps will be impossible, only that they are unlikely given these constraints.

The approach I'm using now relies on a fixed set of anchors for each object and a quadtree to detect collisions. I just try placing a box at one of an object's anchors, and if it collides with an existing dialogue box or object, try the next anchor.
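Here's a minimal sketch of that placement loop, with a linear scan standing in for the quadtree:

```python
def overlaps(a, b):
    """Axis-aligned overlap test; rectangles are (x, y, w, h) tuples."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_box(anchors, size, placed):
    """Try each anchor in order and return the first box that collides
    with nothing already placed; None means every anchor was blocked.
    A real implementation would query a quadtree instead of scanning
    `placed` linearly."""
    w, h = size
    for x, y in anchors:
        candidate = (x, y, w, h)
        if not any(overlaps(candidate, other) for other in placed):
            return candidate
    return None
```

Ordering the anchors by preference (e.g. above the actor's head first) means the fallback positions only kick in when the preferred spot is taken.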

Dialogue layout prototype

As you can see from the prototype above (on-screen objects are beige, off-screen objects are grey, and dialogue boxes are black), it's not perfect. Box 8 at the top overlaps a bit with object 2; this is due to how dialogue boxes for off-screen objects are handled, which could be refined, but I'm treating it as an acceptable edge case for now.

Another shortcoming is how 2d bounding boxes are calculated from 3d objects. Basically I compute the bounding rectangular prism around the object and project that into 2d space. Depending on the shape of the object that may work well, or it may end up placing an anchor far-ish from the object's mesh. You can kind of see it in the screenshot below: the "I'm on the move" dialogue box is meant to accompany the smaller NPC, but it's kind of far away. Tighter bounding boxes might be possible, but I'm worried about the overhead they'd require. Something to look more into.
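The projection step is just mapping the prism's eight corners to screen space and taking their 2d extent. A sketch, where `project` stands in for a hypothetical camera-to-screen function:

```python
from itertools import product

def project_aabb(mins, maxs, project):
    """Project all 8 corners of a 3d axis-aligned bounding box and
    return the (x, y, w, h) rectangle covering them in 2d."""
    corners = product(*zip(mins, maxs))  # each of (x0|x1, y0|y1, z0|z1)
    pts = [project(c) for c in corners]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```

Since the prism already over-approximates the mesh, this rectangle can be much larger than the object's actual silhouette, which is exactly the far-away-anchor problem described above.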

Dialogue layout system in action

Unit Testing

Godot doesn't have its own unit testing framework but there are two popular third-party options: gdUnit3 and Gut. They both seem fairly powerful but Gut felt a bit clunky and I couldn't get gdUnit3 to work properly (compile errors, which I chalk up to Godot's weird stochastic-feeling nature, more on that below). I ended up writing my own very simple testing framework instead. It lacks basically all of the advanced features present in other testing frameworks (spies, mocks, etc), but for my needs it's working great.
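The core of such a framework can be tiny. Here's a Python sketch of the idea (the real thing is GDScript inside Godot):

```python
def run_tests(tests):
    """Call each zero-argument test function and count passes/failures.
    No spies or mocks, just plain asserts inside each test."""
    passed, failed = 0, 0
    for test in tests:
        try:
            test()
            passed += 1
        except AssertionError:
            failed += 1
    return passed, failed
```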

Tester

Things not covered here

There are still a few key content areas that I don't have a good approach for:

  • Character animation. This is something I'm not very good at and a huge factor in the visual quality of the game. Crunchy textures and low-poly models are a lot more forgiving than terrible animations. There are now deep learning motion capture tools that might work with a commodity web camera, but I haven't tried them yet so I don't know if their output is good and what the workflow is like.
  • Mapping textures. Taking a texture and then adjusting a model's UV map so that it doesn't look warped is also really, really tedious. No idea how to streamline that.
  • Object modeling. This is harder to streamline/automate because there's so much variation. Some categories, like buildings and plants, could be streamlined through procedural generation via Blender's geometry node system. Fortunately I enjoy modeling, so I don't really mind doing this; it'll just be very time-consuming. One more general possibility is to figure out a decent processing pipeline for taking free models and converting them to a polygon count that matches everything else. But finding an approach that's robust enough seems unlikely.
  • Character modeling. To make the world feel lively I'd like to have many, many background characters and a fair number of more important NPCs. This might be doable with some kind of procedural/parametric character variation system (i.e. creating a few key archetype models, then having a script vary some vertices, scales, etc) alongside a procedural texture generation system (for clothing, etc). Again, this might be doable with Blender's geometry node system.

Thoughts on working with Godot

I've spent a fair amount of time with Godot while working on all of this. My only reference point is Unity, which was very unpleasant to work with. Everything felt so fragile: small changes could break tons of other things, with no easy way to undo the damage. Kind of like how in Microsoft Word adding a space can mess up your whole document's layout.

Godot has overall felt better than this, but it still has a similar, if reduced, fragility. I've felt discouraged to experiment with new ideas out of the fear that I will just break a bunch of existing code/scenes and have to manually fix everything. I've found that even just opening a scene file can alter its contents—not yet in a way that has caused me trouble, but it's still very different than other programming work I've done, where things really only change if you change them. It's like trying to build a house on moving ground. Version control is a bit of a safety blanket but its effectiveness depends on what changes I can revert to.

GDScript has been a surprisingly pleasant language. It still lacks many features I'd like, such as sets, first-class functions, and iterators (all of which I believe are coming in Godot 4), but usually those haven't been an issue. What has been very frustrating is how Godot parses/compiles scripts. If you have a syntax error in one file it ends up breaking a bunch of other files that depend on it (which is to be expected), but it reports these issues as obscure errors that don't point to the originating problem. I'll be inundated with messages like "The class 'YourClass' couldn't be fully loaded (script error or cyclic dependency)." (with no pointer to what the error might be) or mysterious errors like "Cannot get class '_'." repeated several times. Then it takes a painful process of opening scripts one by one until I stumble upon the actual syntax error.

This is less likely to happen if you're using Godot's built-in script editor, because you're more likely to catch the syntax error before it causes too much trouble. However, Godot's built-in editor is really lacking, mainly because you can only have one script open at a time; editing several files means tediously jumping between them. So I use an external editor, which does integrate with Godot's language server and so catches syntax errors, but sometimes I don't catch them in time, and then these dizzying cascading errors happen.

I've also noticed that sometimes there are compile errors that are fixed by reloading the project. It feels like these happen because of an unusual (sometimes seemingly random) parse order for scripts, where classes are reported as undeclared when in fact they aren't. I haven't looked into it too much.

That all being said, Godot has been wonderful to work with overall. These frustrating experiences are infrequent, and it sounds like many of them are being addressed in Godot 4. I've enjoyed it way more than Unity and it's an amazing privilege to have access to such a powerful open-source project!


Fugue Devlog 13: More World Modeling

05.14.2021

Very busy with some other things so not much progress as of late. I've mostly been modeling more assets for the world map, which has forced me to think more thoroughly on what each city might look like. I don't feel totally ready to commit to anything I've made yet though.

Modeling and texturing are very time consuming processes, even with the low-res look I'm going for. I'm convinced that 99% of the time spent working on the game will be modeling and texturing.

I find modeling very meditative and enjoyable, though for the map the things I've been modeling are much bigger (e.g. buildings and cities), and for some reason that's a lot more daunting. It might be because larger objects need more details to look convincing. Modeling smaller objects is a lot nicer.

One thing that hasn't helped is that Blender (2.92) constantly crashes. I'm not sure what the cause is because it doesn't save a crash log.

Texturing is the slowest part of the process. Building the textures is time-consuming: collecting the images, processing them and assembling them into a single texture, editing parts to be seamless, etc. Most of this can't really be automated or streamlined much further than it already is. One thing I try to keep in mind is that the low-res style means I can usually get away with low-resolution textures, which makes searching for appropriate ones a lot quicker.

I'm also still learning workflow tips for faster UV editing. Blender's built-in UV editing tools are kind of lacking, but I learned of TexTools, which has made some aspects a lot quicker (in particular, stacking islands together: kind of baffling that this isn't part of Blender).

I'm also experimenting with how much I want to rely on free 3d models from elsewhere. For the megaflora on the map (see below) I'm using this model of a borage flower (organic shapes are harder for me to do quickly) but processing and editing it to better fit into the game also takes a decent amount of time.

Megaflora on the map

This city, inspired by the Ganden Sumtsenling Monastery that I visited many years ago, was so tedious to texture, mostly because I was selecting and positioning faces in a really clumsy way:

Tiantai on the map

This pagoda was pretty quick to model, mostly because the UV editing is relatively simple, but also because I'd started using TexTools:

Dagu Pagoda

I'll spend some time watching videos on more UV editing tips to see if I can make the process less tedious.

I also wrote a bit about some of the thinking behind the game for the Are.na blog.


Fugue Devlog 12: The World, the Story, and the Game Mechanics

05.10.2021

The last week has mostly been a lot of waffling about game mechanics. Should characters have skills/attributes? Should there be any combat, and if so, what should it be like? Are there any "skill games" (like the hacking mini-games that are so prevalent in games)? I originally threw around the idea of there being these kinds of mini-games for different character skills, like repairing machinery or cooking food.

I'm leaning towards just sticking to the dialogue system as the main "mechanic" and seeing how far I can stretch that. If there's any combat, it could be interesting to use the dialogue system for that: in games like Final Fantasy 7, combat happens basically through a set of menus, which isn't all that different from the dialogue system I've set up. Or combat could happen through dialogue choices as skill checks, like in Disco Elysium (h/t Matt). This is just an example; in practice, there will be very little combat, if any, in the game. Other skill games/mini-games could take place through the dialogue system too. I like this approach because it gives me a constraint (and so makes the task of coming up with mechanics a bit less daunting) and also lets me hone the dialogue system further.
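To make the "skill checks through dialogue choices" idea concrete, here's a minimal sketch of how it might work. This is purely illustrative, not Fugue's actual dialogue system: the skill names, the 2d6 roll, and the choice format are all my own assumptions.

```python
import random

def skill_check(skill_value: int, difficulty: int, rng: random.Random) -> bool:
    """Roll 2d6, add the character's skill, and compare to a difficulty threshold."""
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    return roll + skill_value >= difficulty

def available_choices(choices, skills, rng):
    """Filter a list of dialogue choices: gated ones require a passed skill check."""
    result = []
    for choice in choices:
        req = choice.get("check")  # e.g. {"skill": "empathy", "difficulty": 8}
        if req is None or skill_check(skills[req["skill"]], req["difficulty"], rng):
            result.append(choice["text"])
    return result

# Hypothetical character and dialogue node.
skills = {"empathy": 4, "logic": 2}
choices = [
    {"text": "Say nothing."},
    {"text": "[Empathy] Notice her hesitation.",
     "check": {"skill": "empathy", "difficulty": 8}},
    {"text": "[Logic] Point out the contradiction.",
     "check": {"skill": "logic", "difficulty": 12}},
]

print(available_choices(choices, skills, random.Random(0)))
```

The appeal of this shape is that it's still just a menu of dialogue options, so combat, repair, cooking, or any other "skill game" could reuse the same machinery by swapping in different choice lists and consequences.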

I'm not totally confident in that decision; I have a weird premature regret about not including other mechanics because I'm worried the play experience will feel lacking. At the same time, I know that plenty of games that are basically just dialogue are really, really good. There are a couple of systems, like law and organizational resource management, I want to include in the game, but these don't necessarily translate into new mechanics (i.e. they can probably be expressed through the dialogue system). I think I just have to stick with this decision for now and be open-minded about something changing later.

One reason I'm hesitant about introducing more mechanics is that the branching narrative will already introduce a lot of complexity, requiring a lot more dialogue and scenes and whatnot for each branching path. It might be too much as-is: I've also spent some time trying to think through the world and narrative to get a better feel for how much branching there'll be and how many different scenes, and it's shaping up to be a lot.

I also started laying out the geography of the world. The various regions are developed according to a few priorities: the aesthetic priority (what feeling the landscape evokes); the implications for the geopolitics and history of the world; and geological feasibility. For the last point, everything is inspired by real formations and environments, but the spatial arrangement needs to be plausible: where should the mountain ranges be? What biomes should be near them?

To help answer these questions I read a bit about how mountains, rivers, and so on work. There were several helpful guides on mountain formation, rivers and watersheds, and general advice on map design and vegetation. This channel has several videos on not only these topics but also on mineral deposits, wind, and more.

This procedural map generator also helped give some base material to shape.

I figure once I have a map it will make sorting out additional details a bit easier. I can ask it questions or think through how the existing factions and cities would maneuver through the world, instead of coming up with ideas out of thin air and then trying to make them all fit together. For example: placing one city (Baita City) on a major river that empties into a bay (Bao Bay) which is the location of another major city. If trade occurs mostly along the coast, then Bao Bay can unilaterally blockade Baita City, so Baita City might want to develop a land route to the other major city. But perhaps the only viable path is expensive or dangerous to develop, so Baita City can't do so until some new technology makes it feasible. Once that happens, it dramatically shifts the relationship between Baita City and Bao Bay. Similarly, the character and culture of a settlement are going to be influenced by its geography, so this also helps me have a stronger idea of what the cities look and feel like.

Here's what I have so far:

Working out the world geography

Closer view of the mainland

I need to work out the two smaller islands, fill in more details of the mainland, and add in settlements. I had a pretty good workflow going using Blender's vertex coloring to paint on different terrain textures, but for some reason Blender caps vertex color layers at 8 (really annoying), so I have to figure out a different approach now.

This part of the process is such an emotional roller coaster: at times overwhelming from all the possibilities and uncertainty, daunting from all the work different decisions imply, frightening because of all the ways things could go wrong, anxiety-inducing from the possibility of foreclosing certain mechanics or world/narrative features by committing to choices, and satisfying when pieces start to click together. In any case, it's not something I can rush. The world's regions, factions, and narrative arc are coming together...but very slowly.
