Game Design, Programming and running a one-man games business…

Game pace vs content cost

A friend of mine is making a game that looks absolutely stunning. It's a 3D action game with a wonderful art style. The art 'design' is fantastic and the level of detail is really high. There are lots of tiny interesting objects all around the world that have been lovingly modelled, textured, animated and given sound effects. It makes for a lovely world. The game is a sort of survival/action thing with some sneaking, but a lot of running, and as a result you get through areas of the game relatively quickly.

Here is an icon from Production Line (my car factory sim).

If you play Production Line, you've seen that graphic a lot. It's in every game you play, all over the place. You see it maybe 10,000 times in the course of a decent play-through, and it may be on screen 70% of the time you are playing, because it's a key resource in the game. You can guess that, lovely though it is, it didn't take days of artist time to make this icon. In other words, getting that piece of 'content' produced was pretty cheap.

Content is cheap, but at a certain level of output, the cheapness starts to multiply and become not so cheap…then expensive, then ruinous. This is why the realistic FPS scene is dominated by a small number of big players. You simply cannot make a photorealistic style FPS that is only going to sell 500,000 copies, even if they are all at $59.99, because you will go bankrupt immediately. Take a look at this:

It's awesome, right? It's amazing that video games look this good these days. But textures that high-def take a LOT of time to make, versus the 128×128 PNGs in Production Line. And worse still, MUCH WORSE, is that you are running around those worlds at high pace, zapping, shooting, dodging, jumping. You are not sightseeing, you are running like crazy.

In a multiplayer FPS, things are not so bad. You can limit the fighting area and get a lot of mileage out of a relatively small space. In a single-player action game, you are going to whizz through that content at high speed. This means you need a LOT of content, especially when gamers constantly complain that a game that cost them $59.99 is *only* 40 hours long (a pretty good cost/entertainment ratio in any medium, frankly). The situation is so bad that even the huge-budget publishers make sure their game stories include some reason why you have to backtrack and 'revisit' older areas of the game for some nebulous story reason. This is not lazy writing, it's producers and accountants telling you that you need 40 hours but only have the budget for 10 hours of content experienced at the expected pace.

Few indies are crazy enough to make an FPS, but plenty make 'fast-paced' games. Be aware that you need to take the content-consumption rate into account when designing your game. It is, of course, a balancing act. On the one hand, you want the player to be constantly exposed to novelty, to 'keep the game interesting', but on the other hand, you simply cannot afford to do this at high speed. You need to design your game according to your budget, or you risk making a heavily compromised game, or running out of money. Do not fall into the trap of thinking games need a ton of varied content to be good. Chess has few pieces and a tiny game area, yet it has entertained people for an amazing amount of time. Some of the best, most classic TV episodes were written with only a single set and 3 or 4 characters, for purely budget reasons.

If making the player move 10% faster adds 10% to the required content and dev time/budget, is it still a good idea?

The big old HTTP vs HTTPS nightmare

In case you didn't already know, Google is building up to giving a bit of an SEO smackdown to sites that do not use HTTPS but plain HTTP (like most sites). If you notice that some sites have a big green padlock in the address bar, that's because they are HTTPS, and thus 'secure': you can be pretty sure the page you got was the page you thought you would get. With HTTP, pretty much any script kiddie/Russian haxxor may have spoofed the connection between you and the server and served up fake stuff.

It used to be the case that you only saw HTTPS when shopping, entering passwords, logging in, or handing over information in a form. It was assumed that other traffic was harmless, but the advent of man-in-the-middle attacks and more sophisticated malware malarkey means that Google basically wants everyone to use HTTPS everywhere, and if we don't, they will punish us in SEO, which for a small website and brand basically means death.

So I grabbed my SSL certificate ($90 for 2 years), installed it on my server, and sure enough you can visit https://www.positech.co.uk and everything is padlocky and impressive:

The problem is, 99.99% of links to my site obviously point to http://www.positech.co.uk, and are thus technically insecure, so you ALSO have to set up a server-wide redirect to turn all HTTP calls into HTTPS calls. So I got my managed server dudes to do that (I have a dedicated server). And that's when EVERYTHING fucked up. Gratuitous Space Battles campaign-mode log-ins stopped working, stats reporting for Production Line just ended, and various other things went BANG. I had not realized it, but a billion years ago when I coded the online integration into my engine, I had coded it to use HTTP and explicitly not accept any redirects to HTTPS (which would have failed). This has come back to haunt me.

Combine this with the fact that any page you manage likely contains a whole bunch of third-party content that is NOT HTTPS, and things get ugly. In my case, the most common culprits were embedded YouTube videos, which were defaulting to HTTP. They are simple to change (just a URL edit), but there are lots of them.

So this morning I gave up, removed the default server-wide HTTP redirect, and experimented with some internal changes. If you just go to www.positech.co.uk, it now has an explicit page-level redirect to force the HTTPS version, PLUS all the outbound links from that page now hard-code an HTTPS link. However, I have not done this everywhere. For example, http://www.positech.co.uk/cliffsblog/ will not automatically redirect you to https://www.positech.co.uk/cliffsblog/ yet, even though the HTTPS version is fine.
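For reference, the server-wide redirect I removed looks something like this in Apache mod_rewrite terms (assuming an Apache box; your hosts may do it differently, e.g. at the vhost or load-balancer level):

```apache
# .htaccess (or vhost config): send every plain-HTTP request
# to the same path on HTTPS with a permanent 301 redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The catch, as described above, is that a blanket 301 like this breaks any old client code that refuses to follow redirects.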

I HOPE that Google is sensible enough to understand that getting a cert is easy but converting every page so you can do server-wide redirects is tricky, and that it actually checks for the legit serving of an HTTPS page before HTTP and doesn't penalize the lack of server-wide redirects, but who knows. FWIW, I found this page really helpful for working out where my problems were, and if you are going to do the transfer yourself, you should bookmark it now.

I guess this opens up the wider topic of whether hosting your own HTML-style site on a dedicated server makes any sense in 2018 for an indie games developer. I am not sure how I feel about this. My site has existed since 1998, so I have a lot of legacy stuff on there, and I am pretty old-school about the internet, in the sense that I think broken links and content removed from the net are pretty bad. HTTP has had redirect support built in from the start; hitting a 404 page really should be a last resort… but I digress.

I know many indies will think the cost (hundreds of dollars a month) for a dedicated server is nuts, but I spread that over this blog, my main site, my own metrics collection stuff, the online component of GSB, the update checking code and patch delivery for a bunch of older games, my forums (which are surprisingly large and busy for a single-dev company), a site I host for an old friend, and also showmethegames.com and other bits and pieces. This has all grown up over the twenty years that I’ve had positech.co.uk, and transferring all of that to some turn-key solution without breaking a load of stuff would be pretty bad.

I know many indies think that if they are PC developers, then their homepage is basically store.steampowered.com/yourgame, but I find that approach dangerous. I am an INDEPENDENT game developer, and the longer you hang around as an indie, the more you see the tides change around you. When I started, Download.com was THE store, then it became Real Games, or Yahoo, then eventually Steam & Impulse; currently it's just Steam, but who will it be next year?

If your entire business model is based around a single company, whether it's Facebook, BigFishGames, Steam or Microsoft, then your independence is pretty marginal. You are, in effect, a subdivision of that company, only with no fixed salary or pension, but with considerable day-to-day freedom. Stores can change their royalty split, and their submission rules, whenever they feel like it. If Microsoft buys Valve and decides that violent games aren't what they want on their store, do you still have a business the next day? This should keep you awake at night.

Anyway, enough doom and gloom, just my thoughts on why I’m such a dinosaur with his own http problems :D

 

You should aim to be the Elon Musk of software

I'm an Elon Musk fanboy: I drive a Tesla and own Tesla stock, I'm a true believer. One of the things I like about the man is the way he does everything in reverse when it comes to efficiency and optimization. The attitude of most people is:

'This thing runs at 10m/s. How can we make it run at 12m/s?'

Whereas Elon takes the opposite view:

'Are there any laws of physics that prevent this running at 100,000m/s? If not, how do we get it to do that?'

This is why he makes crazy big predictions and sets insane targets, most of which don't get met on time, but when they do, it's pretty phenomenal. If the next Falcon Heavy launch recovers the center core too, that's even more game-changing, and right now the estimate is that a Falcon Heavy launch costs $90 million versus $400 million for its nearest competitor (which only launches half the weight anyway). That's not just beating the competition, that's bludgeoning them, tying them up, putting them in a boat, pushing the boat out into the middle of the lake, and laughing from the shore during a barbecue as the boat sinks.

When it comes to my favorite topic (car factory efficiency, due to me making the game Production Line), he comes out with even crazier targets.

“I think we are … in terms of the extra velocity of vehicles on the line, it's probably about, including both X and S, it's maybe five centimeters per second. This is very slow,” he said. Musk then added he was “confident” Tesla can get a twentyfold increase of that speed.

Now we can debate all day long whether the guy is nuts and over-promising, and whether we could ever, ever get a production line that fast, but you have to admire the ambition. You don't get to create privately made reusable rockets without ambition. I just wish we had the same sort of drive in software as he has for hardware. The efficiency of modern software is so bad it's frankly beyond embarrassing; it's shameful, totally and utterly shameful. Let me dredge up a few examples for you.

I'm running Windows 10, and just launched the calculator app. It's a calculator; this is not rocket science. A glance at Task Manager shows me that it's using 17.8MB of RAM. I am not kidding, try it for yourself. I'm pretty sure there was a calculator app for the Sinclair ZX81 with its 1K (yes, 1K) of RAM. Sure, the Windows 10 app has… err, nicer fonts? And the window is very slightly translucent… but 17MB? We need 17MB to do basic maths now? As I type this, Firefox has 1,924MB of RAM assigned to it, and is regularly hitting 2% of my CPU. I'm just typing a blog post, just typing… and that's 2% of a CPU that can do 49,670 MIPS, or roughly 50 billion instructions per second. Oh… we have slightly nicer fonts too. Wahey?

I'd wager the percentage of people coding games who have any real idea how the underlying engine works is tiny, maybe 5%, and of those maybe 1% understand what happens at a lower level. Unity doesn't talk to your graphics card directly; it does it through OpenGL or DirectX, and how many of us really understand the entire code path of those DLLs? (I don't.) And of those, how many understand how the video card driver translates those DirectX calls into actual processor instructions for the card hardware? By the time you filter your code through Unity, DirectX and drivers, the efficiency of what actually happens on the hardware is laughable, LAUGHABLE.

We should aspire to do better, MUCH better. Perhaps the biggest obstacle is that most of us do not even know what our code is DOING. Unless you have a really good profiler, you can easily lose track of what goes on when your game runs, and we likely have zero idea what happens after our instructions leave our code and disappear into the bowels of the OS or the graphics API. Decent profilers can open your eyes to this stuff; one that can display each thread and show situations where threads are stuck waiting is even better. Both AMD and Nvidia provide us with tools that let us step through the rendering of individual frames to see how each pixel is rendered, then re-rendered and re-rendered many times per frame.

If you want to consider yourself not just a hacker but an ENGINEER, then you owe it to yourself as a programmer to get familiar with profilers and code-analysis tools. Some are free, most are not, but they are a worthy investment. Personally I use AQTime, and occasionally Intel VTune Amplifier XE, plus the built-in Visual C++ tools (which are a bit 'meh', apart from the concurrency visualizer). I also use Nvidia's Nsight tools to keep an eye on graphics performance. None of these tools are perfect, and I am by no means an especially good programmer, but I am, at the very least, fully aware that the code I write, despite my efforts, is nowhere REMOTELY close to as efficient as it could be, and that there is plenty of room to do better. If Production Line currently runs at 60FPS for you (the average across all players is 58.14), then eventually I should be able to get it so you can play with a factory at least 10x that size at the same frame rate. I just need to keep at it.

I’ll never be the Elon Musk of software, but I’m trying.

 

Battering the RAM

I had a bug in Production Line recently that made me think. Large factories (and by large I mean HUGE) would, under certain circumstances, occasionally crash, seemingly randomly. I suspected (as you usually do if you have a large player-base) that this must be the players' machines. If code works fine on 99.5% of PCs and breaks on the remainder… that seems suspicious. Once I managed to get the same save games to crash on mine, again in seemingly weird places, but always near some memory allocation… the cause became obvious.

I had run out of memory.

This might seem crazy to most programmers, because memory, as in RAM, is effectively unlimited, right? 16GB is common, 8GB practically ubiquitous, and in any case Windows supports virtual memory, so really we are talking hundreds of gigs, potentially. Sure, paging to disk is a performance nightmare… but it shouldn't just… crash?

Actually it WILL, if you are running a 32-bit program (as opposed to 64-bit) and your process exceeds 2GB of RAM. This has always been the case; I've just never coded a game that used anything LIKE that much. Cue angry rant from an unhinged 'customer' who thinks my game being 32-bit makes me something akin to a Neanderthal. Take a look at your Program Files folders, people… the (x86) one contains all the 32-bit programs. I bet yours is not empty… 64-bit is all well and good, but the majority of code you run on a day-to-day basis is still 32-bit. For the vast majority of programs it really won't matter. For some BIG games, it really does. Clearly a lot of FPS games easily need that RAM.

So the ‘obvious’ solution is just to port my engine and game to 64 bit right?

No.

Not for any backwards-compatibility reasons, or any porting problems (although it WOULD be a pain, tbh…), but because asking how to raise that 2GB RAM limit is, to me, completely the wrong question. The correct question is "Why the fuck do we need over 2GB of RAM for an indie isometric biz sim anyway?" And it turns out that if you DO ask that question, you solve the problem very quickly, very easily, and with a great deal of pride and satisfaction.

So what was the cause, and how was it fixed? It's actually quite surprising. Every 'vehicle' in the game code has a GUI representation. Because the cars are built from a number of layers, they are not simple sprites but arrays of sprites. The current limit is 34 layers (some layers have 2 sub-layers, for colored and non-colored parts), from axles to drive shafts to front and rear doors, windows, wing mirrors, headlights, exhausts and so on. Every car in the game may be drawn at any time, so they all need a GUI representation. The game supports 4 directions (isometric), so it turns out we need 34 layers × 2 sub-layers × 4 directions = 272 sprites per car. Each sprite needs 4 verts and a texture pointer (and some other management fluff). Call it 184 bytes per sprite, and the memory requirement comes to roughly 50KB per car. If the player has been over-producing and has 6,000 cars in the showroom, that suddenly amounts to 300MB of car layer data alone.

So that explains where a lot of the memory comes from… but what could I do about it? I'm not stupid enough to DRAW all of these all the time, BTW; I cache them into a sprite sheet, and when zoomed out I use that for single-draw-call splats of a whole car. But for when zoomed in, or when the sprite sheet is full, I still need them, and I need them to be there to fill the sprite sheet mid-game anyway. So how did I reduce the memory requirements so much?

Basically, I realized that although SOME vehicle components (car doors etc.) had 2 layers, the vast majority did not. I was allocating memory for secondary layers that would never be rendered. I simply reorganized the code so that the second layer was just a NULL pointer, allocated only if needed, saving myself the majority of that RAM. With my optimizing hat on, I also found a fair few other places in the code where I had been using ints instead of shorts (like the hour member of a sales record), wasting a bunch more RAM. Eventually I ended up knocking something like 700MB off the RAM usage in the largest, worst cases.

Now my point is… if I had just taken the attitude of many *modern* coders and thought 'ha! what a dick! let's allow for >2GB of memory' rather than thinking about exactly how the amount had crept up so much, I would never have discovered my own silliness, or made the clearly sensible optimization. Lower memory use is always good, as long as there isn't a major CPU impact. More vehicle components can now fit into the cache and be processed quicker. I've sped up the game's performance as well as reducing its RAM hunger.

Constraints are good. If you have never, ever given any thought to how much RAM your game is using, you really should. It can be eye-opening.