Game Design, Programming and running a one-man games business…

Getting Gratuitous Tank Battles to run on my laptop

So… it runs! And it runs at 60FPS in some places, but in others… not.

My Dell laptop has an Intel GMA chipset with 64MB of VRAM, so it’s hardly a gaming laptop, but my aim was to get GTB to at least run on it, even if the framerate sucked. That way, I know people with 128MB or 256MB cards should be fine, and I’d like to keep the min spec as low as I can.

The game already had a ton of stuff you could turn off, such as shadows and shaders, but running it on the Dell and profiling it using the awesome free Intel GMA tools showed up a ton of things I could do to improve on the initial 20FPS. These were:

  • Realising that the refresh rate on the Dell was initially set at 40Hz, not 60Hz, which imposed an artificially low limit. DOH!
  • Removing a redundant Clear() at the start of each frame. I fill the screen anyway, so why bother? I don’t use a Z-buffer.
  • Removing some render-target sets and clears when the shader options were turned off. With these off, I can render direct to the back buffer, old-school style and save time on render target changes.
  • Adding code that detects a JPG when it’s loaded and generates mip-maps for it. Previously, JPGs had no mip-maps at all. This could possibly reduce some memory consumption too.
  • Adding a graphical detail slider to the options, which can turn off a bunch of frilly details like window shadows and drifting smoke on menu screens.
  • Providing a separate list of lower-res textures that get used in some cases when the graphical detail slider is below 25%, such as mech legs and the shadow maps for scenarios. Any texture of 2048×2048 or higher gets a lower-res replacement. I had tried auto-scaling them on load, but this gave unexplained errors, and to be honest I don’t trust D3DX to do this reliably on all video cards, so separate low-res textures it is.
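For illustration, the low-res swap rule from that last bullet could be sketched like this. This is not the actual GTB code: the function names, the “_low” filename suffix, and the .dds extension are all invented for the example; only the “slider below 25%, texture 2048 or larger” rule comes from the post.

```cpp
#include <string>

// Hypothetical helper: given the graphical detail slider (0-100) and a
// texture's pixel width, decide whether to load the pre-authored
// low-res replacement instead of the full-size original.
bool UseLowResTexture(int detailSlider, int textureSize)
{
    // Rule from the post: slider below 25% swaps any texture
    // of 2048 or higher for a lower-res variant.
    return detailSlider < 25 && textureSize >= 2048;
}

// Build the filename to load, assuming low-res variants live alongside
// the originals with a "_low" suffix (the naming is an assumption).
std::string PickTexturePath(const std::string& base, int detailSlider,
                            int textureSize)
{
    if (UseLowResTexture(detailSlider, textureSize))
        return base + "_low.dds";
    return base + ".dds";
}
```

Keeping this as a lookup into a hand-authored list of replacements, rather than rescaling at load time, sidesteps the D3DX reliability worries mentioned above at the cost of a little extra art-pipeline work.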

I think the biggest wins were the texture-size reductions and the removal of the render target clears. It was interesting to note that the Dell considered the game to be GPU-limited, despite it being a fairly old and crappy chip (and only single-core). I guess at 1920×1200 res with all the options on for the desktop, things may be very different though.

Things may start to race ahead from here. The game is definitely very playable in its basic form, with the majority of extra work now likely to be the online challenge and integration stuff. That will take months, but still, the end is definitely in sight.

 


6 thoughts on “Getting Gratuitous Tank Battles to run on my laptop”

  1. •Removing a redundant Clear() at the start of each frame. I fill the screen anyway, so why bother? I don’t use a Z-buffer.

    If I were you I would check that this is an optimization on all hardware… I believe (and I may be wrong) that more modern hardware can optimize things if you do a clear, for example by starting to draw anything that happens after the clear into a different frame buffer memory (as it knows the output can’t depend on anything before the clear). If you don’t do a clear then it can’t start rendering the new frame out of order, because it can’t know whether the output depends on anything else in the render queue.

    The advice seems to be to always do a clear as it’s fast and gives the driver significant optimization possibilities.

    As always though, you’d need to profile it on different hardware to be sure :)

  2. Interesting. I had always assumed that BeginScene() would have signalled that to the driver, but then there are some freaks who use multiple BeginScenes() per frame. As always, we are left to speculate and guess because there is no absolute ruling on stuff like this from Microsoft + the big three hardware vendors :(

  3. That’s just what I’ve read, I’m not an expert on this.

    Definitely worth profiling it on high-end hardware too though, with and without.
    And can I add that this is looking like it might be a fun game :) Reading about the development on here is making me want to buy it…

  4. What’s the max texture atlas size you’re using? I’m personally using texture atlases with a max size of 512×512 to ensure support for old video cards, but I’m beginning to think the performance boost from upping them to 1024×1024 is worth more than supporting the few old cards that would be dropped?
