
Improving Democracy 4 events balance by using a LOT of data

I’ve been doing some number crunching to check that I am moving in the right direction regarding balance on Democracy 4.

The game has a series of ‘events’ which are triggered by various inputs. Some of these events are purely ‘informational’, in that they mark changes the player has made and their impacts; others affect the game’s difficulty by stopping the player from racing too far ahead or falling too far behind; and others are random curve-balls that the game throws at the player to shake things up. There are about 120 of them.

Because they are not triggered purely randomly, the frequency with which each event shows up is determined by the complexity of the neural network that forms the basis of the game’s simulation. This means that some events may end up triggering much more often than others if the inputs to those events are not balanced correctly.
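To make that concrete, here is a rough sketch of the idea, not the actual Democracy 4 code: each event watches a few simulation values, weights them, and becomes eligible to fire once the combined influence crosses a threshold. The input names, weights and threshold here are made up purely for illustration.

```python
# Hypothetical sketch of an event's trigger inputs. Each input is a
# simulation value in the 0..1 range, multiplied by a weight; the event
# can fire once the weighted sum crosses a threshold. Illustrative only.

def event_influence(sim_values, weighted_inputs):
    # Sum the weighted contribution of each simulation input.
    return sum(sim_values[name] * weight for name, weight in weighted_inputs)

share_ipo_success_inputs = [("GDP", 0.6), ("StockMarket", 0.4)]
sim = {"GDP": 0.8, "StockMarket": 0.9}

TRIGGER_THRESHOLD = 0.7
if event_influence(sim, share_ipo_success_inputs) >= TRIGGER_THRESHOLD:
    print("shareiposuccess is eligible to trigger this turn")
```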

In an ideal world, the whole game would be a truly self-balancing neural network, but I don’t trust the systems enough to unleash anything like that, so I collect statistical data on which events trigger in each game version, and then change some of the inputs each update so that things balance out for the better.
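The aggregation side of that is nothing fancy. Assuming each stats report boils down to a game version plus an event name (which is all the tables below need), it amounts to something like this sketch:

```python
# Sketch of turning raw trigger reports into per-version counts.
# Assumes each report is a (version, event_name) pair; the real
# collection pipeline isn't shown in this post.
from collections import Counter, defaultdict

def count_events(reports):
    """Return {version: Counter({event_name: trigger_count})}."""
    counts = defaultdict(Counter)
    for version, event_name in reports:
        counts[version][event_name] += 1
    return counts

reports = [("1.27", "shareiposuccess"),
           ("1.27", "hugehurricane"),
           ("1.27", "shareiposuccess")]
print(count_events(reports)["1.27"].most_common())
# [('shareiposuccess', 2), ('hugehurricane', 1)]
```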

In an ideal world, every player gets to see 100% of the content in the game, so they are effectively getting what they paid for, and not being constrained to a tiny subset of the content because of poor balance. In other words, if I have an event called ‘school shooting’ and it’s triggering one tenth as often as ‘scientific breakthrough’, then the balance may be off: the trigger conditions for the first event are too hard to meet, and those for the second too easy.

Note that I am talking in grand statistical terms over thousands and thousands of players. An *individual* player may never see school shooting, or scientific breakthrough, depending on the countries they choose and their play style and skill, but I need to check I’m not adding content to the game that hardly anybody sees!

By collecting all this data, I can build up tables for each game version showing the total events triggered for players of that version, AND the number of times each event triggered. Like this:

1.27
Event Name                           Count
shareiposuccess                       7790
shareipocancelled                     7112
hugehurricane                         4515
dubiousrolemodel                      4190
resourcesobsolete                     3913
multinationalcompanyheadquarters      3810
militarycontractscandal               3780

…and so on. Because I am looking at 120 events, I can work out that for version 1.27, if every event were equally likely, they would each trigger 1,406 times given the number of version 1.27 games played. I can then take the difference between the ACTUAL number of triggers and that target, and express it as a percentage deviation from the target:

1.27
Event Name                           Count    Percentage deviation
shareiposuccess                       7790    453.78%
shareipocancelled                     7112    405.58%
hugehurricane                         4515    220.96%
dubiousrolemodel                      4190    197.86%
resourcesobsolete                     3913    178.17%
multinationalcompanyheadquarters      3810    170.85%
militarycontractscandal               3780    168.71%

So you can see that back in version 1.27, the top event (Share IPO success) is triggering roughly 4.5 times more often than I would like.
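For the curious, the maths behind that deviation column is trivial. A rough sketch, with just a few of the 120 events shown:

```python
# Sketch of the deviation maths: target = total triggers / 120 events,
# and each event's deviation is how far above or below that target it
# sits, expressed as a percentage. The real table has all 120 events.

def percentage_deviations(counts, num_events=120):
    target = sum(counts.values()) / num_events
    return {name: (count - target) / target * 100.0
            for name, count in counts.items()}

counts = {"shareiposuccess": 7790,
          "shareipocancelled": 7112,
          "hugehurricane": 4515}  # ...plus the rest of the events
for name, deviation in percentage_deviations(counts).items():
    print(f"{name}: {deviation:+.2f}%")
```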

I can then add up the absolute values of all of those percentage deviations and get a single number representing the total deviation over all events. That gives me this:

Version    Absolute deviation
1.27       6426.89%
1.28       5717.38%
1.29       6128.34%

In an ideal world, this number trends down over time as each version comes out.
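In code terms that total is just the sum of the absolute per-event deviations, something like this sketch:

```python
# Sketch: the 'absolute deviation' for a version is the sum of the
# absolute values of every event's percentage deviation. Lower means
# better balanced; 0% would mean every event hit the target exactly.

def total_absolute_deviation(deviations):
    return sum(abs(dev) for dev in deviations.values())

deviations = {"shareiposuccess": 453.78,
              "shareipocancelled": 405.58,
              "hugehurricane": 220.96}  # ...and so on for every event
print(f"{total_absolute_deviation(deviations):.2f}%")
```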

Another way I looked at things was to examine just the absolute deviation of the top 10 and bottom 10 events, to see if I am effectively ‘squashing’ the trigger probabilities to prevent extremes. This is a major goal, because players will not notice an event triggering 5% more often than another, but if one triggers EVERY game, or NEVER, that does get noticed. When I look at this data I get these values:

Version    Absolute deviation    Extremes deviation
1.27       6426.89%              2941.48%
1.28       5717.38%              2239.64%
1.29       6128.34%              2433.29%
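The extremes figure is calculated the same way, but only over the ten most over-triggered and ten most under-triggered events. Roughly:

```python
# Sketch: the 'extremes deviation' only counts the ten events furthest
# above the target and the ten furthest below it, since those are the
# ones players actually notice.

def extremes_deviation(deviations, n=10):
    ordered = sorted(deviations.values())    # most under-triggered first
    extremes = ordered[:n] + ordered[-n:]    # bottom 10 plus top 10
    return sum(abs(dev) for dev in extremes)
```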

By doing this for a whole bunch of different game versions, I can end up with a chart showing my progress!

More helpfully, I can quantify that the unevenness of the event triggering has improved by 14.73% since version 1.27. The extremes have been reduced by even more: 30.29%.
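Those improvement figures are just the relative drop in each metric from version 1.27 to the latest version on the chart; the latest build’s raw numbers aren’t listed above, so the second argument here is a placeholder.

```python
# Sketch: improvement is the relative drop from version 1.27's value
# to the latest version's value, as a percentage.

def improvement(old_value, new_value):
    return (old_value - new_value) / old_value * 100.0

# e.g. improvement(6426.89, latest_total_deviation) -> roughly 14.73%
```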

This probably sounds stupidly maths-y, and the system is full of issues, such as events that only trigger in a few countries and therefore SHOULD be seen less… but there are limits to how much time a single coder/designer/bizdev/marketer like me can spend crunching all this.

TBH it’s just a big relief that the numbers DO seem to be trending in the right direction, especially as I keep adding new countries and complex simulation elements (meaning that the simulation is in flux, and thus even just staying level would be a balance win of sorts).

It might be bizarre but I really enjoy this kind of analysis :D Now go tell your friends how well balanced Democracy 4 is :D