Game Design, Programming and running a one-man games business…

Deferred Rendering / lighting. Balls, maybe not

For a while I’ve been thinking about putting deferred lighting with normal maps into GTB. This was something I talked about briefly during the development of GSB, but it never happened. It’s basically a way to ‘fake’ the 3D lighting effect with a 2D image, *if* you have the original 3D model the 2D image came from, and thus can make a ‘normal map’.

Here is what I mean:

http://experimentalized.blogspot.com/2010/07/2d-deferred-normal-lighting.html

This stuff is definitely not my area of expertise, and to confound the problem, all of the tutorials and explanations of the effect seem to concentrate on XNA, or on doing it with actual 3D scenes, whereas mine is a 2D engine.

Plus, it seems that it doesn’t do what I wanted it to, which is to take a lightmap full of various light sources (imagine the whole scene, with just the ‘light’ rendered onto it), and convert that into realistic-looking shadows on 2D sprites. It appears to be a single-light-source-only solution, involving pixel shaders. Bah.
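
For anyone curious, the single-light version those tutorials describe seems to boil down to a per-pixel ‘how much does this pixel face the light’ calculation. Here’s a very rough HLSL-style sketch (all the names are invented, and it’s nothing from my actual engine):

    // Single-light normal-mapped sprite shader (illustrative only).
    sampler2D DiffuseMap : register(s0);   // the sprite's colour texture
    sampler2D NormalMap  : register(s1);   // matching normal map baked from the 3D model

    float3 LightDir;      // normalised direction towards the light
    float3 LightColour;   // colour/intensity of the light
    float3 AmbientColour; // flat ambient term so unlit areas aren't pure black

    float4 SpritePS(float2 uv : TEXCOORD0) : COLOR0
    {
        float4 albedo = tex2D(DiffuseMap, uv);
        // Normal maps store xyz in the 0..1 range, so expand back to -1..1.
        float3 normal = normalize(tex2D(NormalMap, uv).xyz * 2.0f - 1.0f);
        // Classic Lambert term: how much the surface faces the light.
        float ndotl = saturate(dot(normal, LightDir));
        float3 lit = albedo.rgb * (AmbientColour + LightColour * ndotl);
        return float4(lit, albedo.a);
    }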

As I type this, I wonder out loud if bump maps are the answer to my problem? It’s not a disaster if there isn’t a solution, as GSB looked fairly pretty, but I’d like to take things up a level with GTB. There is only so far you can go with 2D top-down rendered stuff, but I’d like to be the prettiest, shiniest 2D game of its type, if at all possible. The other effect I tried once but wimped out of was those distortion-map effects, where you get a sphere that distorts the pixels across it, and thus get a ‘shockwave-air-blast’ style effect. I think Call of Duty 4 used it a lot. I ran into ‘tearing’ and other bugs and eventually abandoned it in a strop :D

Anyone got any tips for fairly awesome 2D top-down effects in games?

(The minute I have my logo finished I’ll talk about GTB)

 


12 thoughts on “Deferred Rendering / lighting. Balls, maybe not”

  1. Surely you need more than a normal map to simulate shadows – a height map and an ambient occlusion map would probably be really helpful. In a 3D scene you would get some of that information “for free”. Unfortunately, this type of work isn’t something I’ve done before so I don’t think I can give you any useful pointers.

    The distortion sphere is nice. FEAR used it for grenade explosions and I remember being impressed by how it looked.

  2. How are you doing your 2d blits? If at any point in your pipeline you’re hitting the 3d side of the graphics card (which is how a lot of 2d engines work these days, as far as I’m aware), going the shader route is what you want/need.

    Doing lighting calculations to contribute to the color of the sprite (i.e. the higher parts on the bump/normal map) is really easy once you get a list of your light sources into the shader. Actual shadows would require actual geometry to do effectively or accurately. And yeah, if you’re going to the extent of having a normal map, you could also do specular and baked AO really easily, which will help punch up the effects.

    The distortion effects you’re talking about are really, really easy in a 3d shader as well. One easy way is to take the normal from your normal map and then tweak your regular color/texture lookup coordinates based on that vector (there’s a rough sketch at the end of this comment). This article from GPU Gems covers a simple way of doing that: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html

    Deferred is likely a lot more work and more restrictive than you’d want. For all intents and purposes it requires that you have only a single set of materials, and translucency still requires forward rendering.

    Sorry that this has been a 3d-specific rant, but there’s a lot to gain from being able to use the card’s pipeline and acceleration, and if it’s happening anywhere within your system, hooking into it gets you where you seem to be wanting to go.

    -r
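
    P.S. In case it helps, here’s roughly what that normal-offset distortion looks like as a pixel shader. This is just an illustrative HLSL sketch with invented names, not code from any particular engine:

        // Distortion pass: offset the scene lookup by a normal/distortion map,
        // along the lines of the GPU Gems chapter linked above.
        sampler2D SceneTexture  : register(s0); // the already-rendered scene
        sampler2D DistortionMap : register(s1); // normal map of the shockwave sphere

        float DistortionStrength; // how far to push the pixels, in UV units

        float4 DistortPS(float2 uv : TEXCOORD0) : COLOR0
        {
            // Expand the stored normal back to -1..1 and use its xy as an offset.
            float2 offset = (tex2D(DistortionMap, uv).xy * 2.0f - 1.0f) * DistortionStrength;
            return tex2D(SceneTexture, uv + offset);
        }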

  3. I might be wrong here but what you described doesn’t seem like deferred rendering at all.

    What you want to do is render your entire scene once into an fbo that will hold everything from color to normal information, and then render that fbo to the screen several times (one time for each light).

    What you’ve described here (rendering the lights into a texture) might look like it only works for one light, but it does work for many.

    Basically, what you want in your shader is colour*normal*(light1+light2+light3)

    where normal*(lights) is the light map (with its own normal map) applied against the scene’s normal map and rendered onto a black fbo, each light adding to that fbo.

    In the end, as you render to the screen, you draw the resulting light*normal fbo first, and then render the colour fbo you obtained in the first step over it with a multiply (roughly as in the sketch below).

    If this sounds like something you want but you didn’t understand my explanation then please mail me, and if I completely misunderstood your post then sorry for being obnoxious.
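
    To make that concrete, the two shaders would look roughly like this (an HLSL-style sketch with invented names; the device blend states and the initial colour/normal capture are left out):

        // Pass 1: run once per light, additively blended onto a light fbo cleared to black.
        sampler2D NormalBuffer : register(s0); // scene normals captured in the first render

        float2 LightPos;      // light position (same space as worldPos below)
        float3 LightColour;
        float  LightRadius;

        float4 LightPS(float2 uv : TEXCOORD0, float2 worldPos : TEXCOORD1) : COLOR0
        {
            float3 normal   = normalize(tex2D(NormalBuffer, uv).xyz * 2.0f - 1.0f);
            float2 toLight  = LightPos - worldPos;
            float3 lightDir = normalize(float3(toLight, 1.0f)); // fake a little height for the light
            float  atten    = saturate(1.0f - length(toLight) / LightRadius);
            float  ndotl    = saturate(dot(normal, lightDir));
            return float4(LightColour * ndotl * atten, 1.0f);
        }

        // Pass 2: multiply the accumulated light fbo with the colour fbo and draw to the screen.
        sampler2D ColourBuffer : register(s0);
        sampler2D LightBuffer  : register(s1);

        float4 ComposePS(float2 uv : TEXCOORD0) : COLOR0
        {
            return tex2D(ColourBuffer, uv) * tex2D(LightBuffer, uv);
        }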

  4. Yup, my engine is basically centered around D3DTLVERTEX verts in a 3D-rendering style pipeline, just with 2D sprites rather than bothering with any Z. I am already using shaders to do bits and pieces.
    I should try and give the distortion thing another go. One thing that I’m aware of is that it means yet another fullscreen copy. If I render everything to a fullscreen buffer, then render it all from there to another offscreen backbuffer in order to apply lightmap effects, then maybe again to another one for my super-super top secret effect… I’d need to do this yet again in order to apply distortion effects, I assume?

    My knowledge of integrating shaders into actual C++ game code is pretty limited, but as I understand it, each shader requires another render of that screen? I’m not familiar with how you can apply several shaders at once to a single copy from render target to render target.
    In my day we had just 1 backbuffer and we was happy. :D

  5. Well it depends on whether your effects are all full screen or not. Things like explosions would ostensibly only cover a certain area.

    So really you just need one more full-screen pass where each effect is drawn in back-to-front order (because of the transparency). You would bind the current buffer as a texture sampler so that your other effects could read it.

    Lightmap effects would/could be sampled in the individual sprite drawing instead (i.e. where your normal mapping and lighting is also taking place); see the sketch at the end of this comment.

    From the way you’re describing it, it sounds like the one element you’re conceptually taking from deferred rendering is compiling all of your scene data into a set of buffers and doing the whole screen in passes of sorts. The more traditional forward rendering setup is likely a lot more straightforward and versatile.
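
    Something like this, very roughly (an HLSL-ish sketch with invented names; the vertex shader would need to pass the sprite’s screen-space UVs through):

        // Per-sprite shader that reads a pre-rendered lightmap in screen space,
        // so the lighting is applied as each sprite is drawn rather than in
        // another full-screen pass.
        sampler2D SpriteTexture : register(s0);
        sampler2D LightMap      : register(s1); // whole-screen lightmap rendered earlier

        float4 SpriteLitPS(float2 uv : TEXCOORD0, float2 screenUV : TEXCOORD1) : COLOR0
        {
            float4 albedo = tex2D(SpriteTexture, uv);
            float3 light  = tex2D(LightMap, screenUV).rgb;
            return float4(albedo.rgb * light, albedo.a);
        }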

  6. Yes, I think actually everyone *does* do what I’m planning, which is to do another fullscreen copy for each required pass. I did this for GSB for stuff like color tinting of the maps combined with motion blur.
    It just seems a tad expensive in fill-rate.

  7. It seems like you wouldn’t need a fullscreen pass for something like color tinting (unless it really was for the whole screen) and could instead just do that per-sprite in that sprite’s shader.

  8. I remember seeing this post a while back on the Spelunky blog: http://mossmouth.com/forums/index.php?topic=1016.msg30804#msg30804

    I don’t think they’re actually casting any shadows with it, but the effect from the normal mapping looks pretty great in the screenshots and trailer they’ve released. Here’s the trailer for reference: http://www.youtube.com/watch?feature=player_embedded&v=TCswu85rJYY

    Anyway, good luck, I think it would be worth the effort to figure it out if you can get that much visual effect out of it.

  9. Hey Cliff,

    I actually coded a deferred pipeline a couple of years ago. You can easily support more than a hundred lights active at the same time. Dynamic shadows are a bit tougher and more expensive, but doable.

    I have placed a demo of the tech online for you to check out. Everything in the film is 2D sprites, deferred, realtime. There are many different techniques integrated in the shot. The movie itself is a bit hard to watch, but it was the only thing I could find on short notice; I should have some more images somewhere around here though…

    http://www.tangrin.com/Deferred/show.avi

    I don’t want to go into details about how it works in public though, because I might still use it all later. If you want more information, just shoot me an e-mail.

  10. Dear Cliff,

    It is quite possible that I’m an idiot (albeit a Kudos-loving one). I’ve been looking for a way to email you, but the links on your game site don’t work…

    So frustrating being technologically challenged. Any possibility of you sending me one so I can respond with my brilliant ideas? Thanks in advance.

    Love the pics of your cats by the way.

    Keep up the great work & take care,
    Carolyna

  11. It doesn’t have to be crazy to look good. Even something as simple as dynamic shadows using box2d: http://forums.tigsource.com/index.php?topic=8803.0 (click on the first image for an interactive version in flash) looks cool (imagine lots of explosions etc).

    It’s all about style. Bump mapping could be nice, but how well are you going to see it if the default viewport is quite far out, à la GSB?

  12. Hi Cliff,

    For a game demo of mine I had 2D sprites with normal maps generated from the original mesh, like you described. I render them with environment mapping and up to four local dynamic lights. The video below doesn’t do it much justice, so I’d suggest checking out the demo itself and use the scroll wheel to zoom in and look at the effect up close. All of the ships and planets are lit this way.

    http://www.youtube.com/watch?v=4Faddv3Y1rE&hd=1

    http://naixela.com/alex/downloads/GammonTriggerDemo8_unlocked.zip

    The basic approach is to find up to the 4 nearest lights per object and pass them into the shader along with a pre-blurred environment map. Scale the light intensities by their distance to the object’s center (per sprite) and by the clamped dot product of the normal and the normalized direction to the light (per pixel). For the environment map reflections, have a scale factor that maps the object position into the environment map, and then offset from that position by the per-pixel normal and do a texture lookup. (There’s a rough sketch of all this at the end of this comment.)

    You can also get a distortion effect by passing in a 2-channel texture with perlin noise in it, scrolling it over time, and using that as an offset to your per-pixel texture coordinates.

    If done right, all of that stuff works without multiple passes (no offscreen rendering), and it was even fast enough to run on an Atom netbook without slowing down.
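
    In shader terms it comes out roughly like this (an HLSL-style sketch with invented names, not the demo’s actual code):

        sampler2D DiffuseMap : register(s0);
        sampler2D NormalMap  : register(s1);
        sampler2D EnvMap     : register(s2); // pre-blurred environment map
        sampler2D NoiseMap   : register(s3); // 2-channel perlin noise for distortion

        float3 LightColour[4];  // up to 4 nearest lights, chosen on the CPU per object
        float  LightFalloff[4]; // per-sprite intensity from distance to the object's center
        float3 LightDir[4];     // normalized direction towards each light
        float2 EnvMapCentre;    // object position mapped into env-map space
        float  EnvMapScale;
        float2 NoiseScroll;     // advanced by the game each frame
        float  DistortAmount;

        float4 ShipPS(float2 uv : TEXCOORD0) : COLOR0
        {
            // Optional distortion: scroll the noise and nudge the texture lookup.
            float2 wobble = (tex2D(NoiseMap, uv + NoiseScroll).rg * 2.0f - 1.0f) * DistortAmount;
            float2 suv = uv + wobble;

            float4 albedo = tex2D(DiffuseMap, suv);
            float3 normal = normalize(tex2D(NormalMap, suv).xyz * 2.0f - 1.0f);

            // Accumulate the four nearest lights: per-sprite distance falloff
            // times the clamped N.L term per pixel.
            float3 light = 0;
            for (int i = 0; i < 4; i++)
                light += LightColour[i] * LightFalloff[i] * saturate(dot(normal, LightDir[i]));

            // Environment reflection: offset into the pre-blurred env map by the normal.
            float3 reflection = tex2D(EnvMap, EnvMapCentre + normal.xy * EnvMapScale).rgb;

            return float4(albedo.rgb * light + reflection, albedo.a);
        }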
