Game Design, Programming and running a one-man games business…

Rendering to a texture

My new game does more rendering-to-a-texture stuff than any of my previous games have done. I’m very pleased with it so far.

What am I talking about?

Say you have a sprite, and that sprite is a ninja hamster wearing a bulletproof vest and a crash helmet. This is a character in your game, and an icon used in various places. To allow customisation, the player has chosen the helmet and the vest, and maybe tweaked their colors. This gives them a slightly customised hamster.

When you draw this onscreen, you need to draw 3 layers (hamster, vest, helmet). That means drawing 3 sprites on top of each other, and filling 3 times as many pixels as normal. That can mount up quickly, and is called overdraw. It also means changing texture 3 times, and changing textures is sloooowwww.

The solution is, before the game starts, to create a new blank texture in memory and, ‘offscreen’ (while the player isn’t looking), draw the hamster with all 3 layers and colors. You then save that texture to disk with a unique name. Later, in the game, you just blap that single sprite, already composed, in a single draw call. If you are really obsessed, you could, at run-time, stitch together all the units’ textures into a single sprite-sheet to make it even simpler, and faster.
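The pre-composing step can be sketched in a few lines. This is just an illustration (in Python/numpy, with made-up 2x2 layer data), not the actual game code: each RGBA layer is blended onto the result bottom-up using standard “over” alpha compositing, producing one texture you can cache and draw in a single call.

```python
import numpy as np

def composite_layers(layers):
    """Alpha-composite a list of RGBA float arrays (values 0..1),
    bottom layer first, into a single pre-composed 'texture'."""
    out = np.zeros_like(layers[0])
    for layer in layers:
        a = layer[..., 3:4]  # source alpha
        out[..., :3] = layer[..., :3] * a + out[..., :3] * (1 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1 - a)
    return out

# Hypothetical 2x2 layers: opaque hamster, half-transparent vest,
# and a helmet that doesn't cover these particular pixels (alpha 0).
hamster = np.full((2, 2, 4), [0.8, 0.6, 0.4, 1.0])
vest    = np.full((2, 2, 4), [0.2, 0.2, 0.9, 0.5])
helmet  = np.full((2, 2, 4), [0.1, 0.1, 0.1, 0.0])

cached = composite_layers([hamster, vest, helmet])
# In the game you'd now draw `cached` in one call instead of three.
```

The real thing would render the layers to a render-target texture on the GPU rather than blending on the CPU, but the compositing math is the same.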

I have no idea if people tend to do this any more. Most people make 3D games with little or no alpha-blending, where you don’t worry much about texture swaps because you sort by Z and use a Z-buffer. My games can’t be done like that because it’s all soft-edged alpha-blended 2D, so these things start to matter. The code to do all this is a bit fiddly, but I’m already massively glad I’ve got it working. It will let me do all kinds of cool stuff in the next game, and means it will run as fast as a racehorse wearing rocket-skates.
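The run-time sprite-sheet idea mentioned above can be sketched as a naive “shelf” packer: copy each sprite into one big atlas texture and remember its UV sub-rectangle, so every unit samples from a single texture. This is a hypothetical sketch (names and sizes invented, no rotation or sorting, sprites assumed to fit), not how any particular engine does it:

```python
import numpy as np

def build_atlas(sprites, atlas_size=256):
    """Naive shelf packer: copy each RGBA sprite into one atlas
    texture and record its (u0, v0, u1, v1) sub-rectangle."""
    atlas = np.zeros((atlas_size, atlas_size, 4), dtype=np.float32)
    uvs = {}
    x = y = shelf_h = 0
    for name, img in sprites.items():
        h, w = img.shape[:2]
        if x + w > atlas_size:          # row full: start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        atlas[y:y + h, x:x + w] = img   # blit sprite into the atlas
        uvs[name] = (x / atlas_size, y / atlas_size,
                     (x + w) / atlas_size, (y + h) / atlas_size)
        x += w
        shelf_h = max(shelf_h, h)
    return atlas, uvs

sprites = {
    "hamster": np.ones((32, 32, 4), dtype=np.float32),
    "vest":    np.ones((16, 24, 4), dtype=np.float32) * 0.5,
}
atlas, uvs = build_atlas(sprites)
# Every sprite now lives in one texture -> no texture switches mid-batch.
```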

13 thoughts on “Rendering to a texture”

  1. “I have no idea if people tend to do this any more.”

    Oh, definitely. Fwiw we do it a lot to save on draw calls, but more of an indication is that the Unity3D guys went out of their way to support “render textures”. One of their examples is an in-game sort of closed-circuit-tv showing output from a “camera” in the game. Of course, that isn’t for the sake of saving draw calls (and we use a simpler method that works better for 2D).

    And I don’t think any of the uses I’ve done or seen involved saving the composed texture to disk. It’s temporary enough to just leave in RAM until it’s not needed any more. But I suppose that would make sense if you had a lot of them, or only used them intermittently, or whatever.

  2. I prefer 2D games myself, though I haven’t gotten any to a salable point. But I’ve used render-to-texture fairly often, mainly because it is great for the reasons you said above (customizable sprites), but also because you can do some cutesy special effects with windows when they are rendered to a texture (like having them wave as you drag them around). Also, to optimize static backgrounds into a single texture instead of layering the entire GUI in parts.

  3. This is quite similar to the 3D method of using impostors – models rendered to sprites for use at far distances.

    Also, the sprite sheet, or atlas, is still very common and is used for lightmaps, low LOD model textures, particle animations, etc.

    Finally, some modern 3D engines even sort their solid objects near-to-far so that z-testing will reject more pixels. This can lead to better performance than batching them by texture if the shader is very complex. They can even go to the extreme of rendering the entire scene to just the z-buffer and drawing the colors in a second pass, which would be a lot of wasted work if shaders weren’t getting so expensive.

  4. Not only do I do that with textures, I also do it with meshes. A unit is made of several components, with several textures and meshes. I just merge the textures together, then the meshes together. Plus instancing, I end up with fewer than a couple of dozen draw calls to render the scene. :)
    Then I draw the UI and the number of draw calls skyrockets because my UI code is a mess right now :(
    But anyway, it’s basically the same principle. It all comes down to decreasing draw calls.

    As for saving overdraw, a z-prepass (render depth, then color) will save all the time lost on shading rejected pixels. But you might increase the number of draw calls. I know the Xbox 360 SDK has native support for a z-prepass, so I’m guessing DX10 or up (?) might have something similar.
    There are also all kinds of occlusion methods that are really fun to implement, but this is getting off track.

  5. As I recall, this is how the Total War games work. They skin and animate one soldier in maybe 30, and copy the geometry to the location of the other guys. At least they did for Rome.

  6. That is actually called cloning. You can animate a crowd by playing a couple of animations, cloning/retargeting them onto other skinned meshes, then cloning the mesh and using instancing to draw them. But you need a decent variety of animations/meshes to give the illusion of a good crowd. I remember Guitar Hero, where a third of the crowd was doing the same devil sign with their hands all at once, performed by a variety of black, blue and white t-shirt guys. But then again, that might just represent a real heavy metal audience :P

  7. On iPhone, where draw calls are a killer, this method is a must-have if you want a decent framerate.

  8. Is there a reason why a sprite-sheet has speed advantages?
    I mean, you can have 200 small sprites in (V)RAM and copy them to a (screen) surface, or you can have one big sprite-sheet with 200 sprites on it and copy them from there.
    Why the speed difference?

  9. TBH I don’t know why it happens, but switching textures causes video cards to flush all current draw operations, as I understand it, and that is slow because it defeats any asynchronous stuff. That might all be crap I dreamt up, but I know it’s slow. Maybe someone else has lower-level driver experience and knows why?

  10. A while ago I attended a Sony presentation about the PS3 RSX. As I remember, they said changing states (texture/blending/etc) was expensive because it had to flush the current pipeline and start from scratch. Or something along those lines. Probably because the hardware states must all be set at once. I must say I did not pay much attention, since I was not a graphics programmer at the time :/

  11. If you’re using OpenGL, you should be aware that a lot of integrated graphics chips don’t support render-to-texture or FBOs. So your code shouldn’t rely on it if you want to also target customers who don’t have top-of-the-line computers.
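The near-to-far sorting benefit mentioned in the comments above can be shown with a toy software z-buffer. This is purely an illustration (hypothetical two-object scene, counting how many pixels the “shader” actually runs for), not real renderer code:

```python
def shaded_pixel_count(draw_order):
    """Count pixels actually shaded, with a z-buffer rejecting any
    fragment at or behind the depth already stored for that pixel."""
    zbuf = {}
    shaded = 0
    for depth, pixels in draw_order:    # each object: (depth, pixel coords)
        for p in pixels:
            if depth < zbuf.get(p, float("inf")):
                zbuf[p] = depth
                shaded += 1
    return shaded

px = [(x, y) for x in range(4) for y in range(4)]  # two objects cover the same 4x4 area
near, far = (1.0, px), (5.0, px)

shaded_pixel_count([near, far])  # near-to-far: far object fully z-rejected -> 16
shaded_pixel_count([far, near])  # far-to-near: both objects shaded -> 32
```

A z-prepass pushes this to the extreme: after the depth-only pass, the color pass shades each visible pixel exactly once, regardless of draw order.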

Comments are currently closed.