Questions about deferred shading

I’ve been looking into various lighting techniques a lot lately, and I’m considering using deferred shading (I haven’t implemented anything yet, though). Now, one of the downsides of deferred shading is that it requires so many extra buffers to store data - the G-buffer, the normal buffer, the depth buffer, etc. On this point, I have to sit back and ask: why?

We already have a back buffer. If you use Z-buffering (which I am) you have a depth buffer attached to it. Why can’t I simply use the back buffer instead of additional render targets? Or, is it impossible to read directly from the back buffer in a fragment shader? If it’s impossible, then I would likely want the additional buffers anyway, since I would need them to do any sort of post-processing effects…


Also, since I’m here: frankly, I’m not sure whether deferred shading is right for me or not. Pre-baked shadow/light maps almost always look better than anything you can do even with deferred shading, but I’m not sure I can even take advantage of them. Most of the significant lights are dynamic (flickering flames, the sun moving across the sky, the player carrying a lantern, etc.), and I’m going to want to simulate radiosity from those lights somehow. How efficient are lightmaps from dynamic lights casting on dynamic objects? Somehow I’m thinking not so much… How difficult is it to calculate light positions for radiosity for a deferred shader?

Transparency’s an issue too, though it’s also an issue in a forward renderer (if less of one), and it’ll probably be another 5 years before there’s an efficient order-independent transparency technique that works in a deferred renderer (yes, I know about dual depth peeling - keyword “efficient”).

Anti-aliasing isn’t an issue for me; I’d just as soon use a post-process technique like FXAA or SMAA.

Also: Baked Ambient Occlusion maps, or SSAO?

On this point, I have to sit back and ask: why?

Because that’s what the algorithm requires. At the absolute minimum, you need the normal, a diffuse color, and enough information to reconstruct the camera-space position of that point (or its position in whatever space you do lighting in); the depth value is usually sufficient for this.
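
For instance, here’s a minimal sketch of that depth-to-position reconstruction (written with GLM on the CPU for illustration; in a real renderer the same math runs per fragment in the lighting shader):

```cpp
#include <glm/glm.hpp>

// uv is the fragment's position in [0,1]^2, depth is the value read from the
// depth buffer, and invProjection is the inverse of the camera's projection
// matrix. Assumes OpenGL's default [0,1] depth range.
glm::vec3 reconstructViewPos(glm::vec2 uv, float depth, const glm::mat4& invProjection)
{
    // Back to normalized device coordinates in [-1,1]^3.
    glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
    glm::vec4 view = invProjection * ndc;  // undo the projection
    return glm::vec3(view) / view.w;       // perspective divide
}
```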

As you may notice, that’s 3 buffers: one for normals, one for color, and one for depth. The back and depth buffers alone aren’t enough; you need a third one. Therefore, even if you could bind the default framebuffer as a texture, it’s simply not enough information to do anything besides gray-scale rendering.
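
To make that concrete, a minimal G-buffer allocation in OpenGL might look like the sketch below. The formats and names are illustrative assumptions, not requirements; note that the depth attachment is a texture rather than a renderbuffer precisely so the lighting pass can sample it:

```cpp
#include <GL/glew.h>  // or whichever GL loader you use

// Builds the minimal three-part G-buffer: diffuse color, normals, and depth.
GLuint createGBuffer(int width, int height, GLuint& diffuseTex,
                     GLuint& normalTex, GLuint& depthTex)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    auto makeTexture = [&](GLenum internalFormat, GLenum format, GLenum type) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
                     format, type, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    };

    diffuseTex = makeTexture(GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE);  // color
    normalTex  = makeTexture(GL_RGBA16F, GL_RGBA, GL_FLOAT);        // normals
    depthTex   = makeTexture(GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT, GL_FLOAT);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, diffuseTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);

    const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
    return fbo;
}
```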

Pre-baked shadow/light maps almost always look better than anything you can do even with deferred shading

This statement presupposes that “Pre-baked shadow/light maps” are somehow not possible with deferred rendering; this is inaccurate. If you want to use light maps with deferred rendering, nobody is going to stop you. All you need to do is pre-seed your deferred lighting buffer with values sampled from your light map. It would just be another buffer output during your pre-pass.
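
A hedged sketch of what that pre-seeding might look like in the G-buffer pass fragment shader (the attachment layout, uniform names, and second UV set are my assumptions): static geometry samples its baked light map and writes the result straight into the light-accumulation target, and the dynamic light passes later add onto it with additive blending (`glBlendFunc(GL_ONE, GL_ONE)`).

```cpp
// Stored as a C++ string for brevity; this is the fragment shader of the
// geometry pre-pass.
const char* kGeometryPassFS = R"(
#version 330 core
in vec2 vTexCoord;      // material UVs
in vec2 vLightmapUV;    // second UV set for the baked light map
in vec3 vNormal;

uniform sampler2D uDiffuseMap;
uniform sampler2D uLightMap;    // pre-baked radiosity/shadow data

layout(location = 0) out vec4 oDiffuse;
layout(location = 1) out vec4 oNormal;
layout(location = 2) out vec4 oLightAccum;  // deferred lighting buffer

void main()
{
    oDiffuse = texture(uDiffuseMap, vTexCoord);
    oNormal  = vec4(normalize(vNormal) * 0.5 + 0.5, 0.0);
    // Seed the accumulation buffer with the baked contribution; dynamic
    // lights are blended on top of this in later passes.
    oLightAccum = vec4(texture(uLightMap, vLightmapUV).rgb, 1.0);
}
)";
```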

How efficient are lightmaps from dynamic lights casting on dynamic objects?

They’re not. The whole point of light maps is that they’re static. Hence you can calculate them off-line as a pre-processing step; that’s why you can do fancy radiosity and what-not. Light maps bake a static lighting environment onto static objects in the world.

Transparency’s an issue too, though it is in a forward renderer too (though less so)

Transparency is no worse off in a deferred renderer than in a forward one. You still have to render everything after the opaque objects, and you still have to sort them. Really, transparency is entirely orthogonal to deferred/forward rendering.
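
In practice that can be as simple as the following sketch (the `Renderable` type and `drawForwardLit` helper are hypothetical): after the deferred lighting pass, sort the translucent objects back-to-front and draw them with ordinary blending, testing against (but not writing) the opaque depth buffer.

```cpp
#include <GL/glew.h>
#include <algorithm>
#include <vector>

struct Renderable { float viewDepth; /* mesh, material, ... */ };
void drawForwardLit(const Renderable&);  // hypothetical forward-shading path

void drawTransparents(std::vector<Renderable>& objects)
{
    // Farthest first, so nearer surfaces blend over farther ones.
    std::sort(objects.begin(), objects.end(),
              [](const Renderable& a, const Renderable& b) {
                  return a.viewDepth > b.viewDepth;
              });

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);  // test against opaque depth, but don't write it
    for (const Renderable& r : objects)
        drawForwardLit(r);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```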

Also: Baked Ambient Occlusion maps, or SSAO?

What about it? One is static, the other is dynamic.

  1. If you are wondering why deferred shading requires the G-buffer then it means you don’t understand how it works. Do some research.
  2. Lightmaps from dynamic lights casting on dynamic objects is not really something that you can do real-time.
  3. Transparency is more or less a solved issue. You can implement order-independent transparency using linked-list buffers pretty efficiently.
  4. Antialiasing is a solved issue too. Any recent GPU can do MSAA even in the case of a deferred renderer.

As you may notice, that’s 3 buffers: one for normals, one for color, and one for depth. The back and depth buffers alone aren’t enough; you need a third one. Therefore, even if you could bind the default framebuffer as a texture, it’s simply not enough information to do anything besides gray-scale rendering.
What I’m asking is, do I need to bind 3 EXTRA buffers, or can I use the default back and depth buffer plus 1 additional buffer for normals?

This statement presupposes that “Pre-baked shadow/light maps” are somehow not possible with deferred rendering; this is inaccurate. If you want to use light maps with deferred rendering, nobody is going to stop you. All you need to do is pre-seed your deferred lighting buffer with values sampled from your light map. It would just be another buffer output during your pre-pass.
That is true, but the advantages of deferred shading make light maps less vital for creating realistic lighting, since it’s possible to do something similar using dynamic lights. Would this be less efficient than processing the baked lightmaps? I’m guessing it would be…

Transparency is no worse off in a deferred renderer than in a forward one. You still have to render everything after the opaque objects, and you still have to sort them. Really, transparency is entirely orthogonal to deferred/forward rendering.
The problem is that translucent objects have normals too, and the only way to have them be lit properly is to do a lighting pass in between drawing every single translucent object. At that point, it might be faster just to forward-render them, though that requires light sorting, which I was really hoping I’d be able to avoid if I go deferred…

  1. If you are wondering why deferred shading requires the G-buffer then it means you don’t understand how it works. Do some research.
    I know why I need a G-buffer, I’m wondering why said G-buffer can’t be the back buffer that I’ve been using all this time.
  3. Transparency is more or less a solved issue. You can implement order-independent transparency using linked-list buffers pretty efficiently.
    This sounds like it would be expensive memory-wise, but I’m curious. Could you link me to a paper or a tutorial or something?

What I’m asking is, do I need to bind 3 EXTRA buffers, or can I use the default back and depth buffer plus 1 additional buffer for normals?

As I implied when I said, “even if you could bind the default framebuffer as a texture,” you can’t. The default framebuffer images cannot be used with anything except the default framebuffer.
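
The usual workaround is to not render to the default framebuffer at all until the very end: do everything in a texture-backed FBO (as in the G-buffer sketch earlier), then copy the finished frame to the window. A minimal sketch, with my own naming:

```cpp
#include <GL/glew.h>

// sceneFBO is a texture-backed framebuffer holding the finished frame.
void presentFrame(GLuint sceneFBO, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // 0 = the default framebuffer
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```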

That is true, but the advantages of deferred shading make light maps less vital to create realistic lighting, since it’s possible to do something similar using dynamic lights. Would this be less efficient than processing the baked lightmaps?

That’s not the point. You use dynamic lighting because it’s dynamic; it affects things that aren’t static. It affects everything equally.

You’re spending performance to provide a uniform lighting environment where all of the objects are lit the same way, rather than having the hack of using lightmaps for terrain while moving objects use a completely different lighting model.

The question you need to concern yourself with is this: is uniform lighting worth the performance for you? If so, then deferred rendering is generally the best way to get it on most of the desktop hardware available. If you feel that the lightmap hack is good enough for your needs, then feel free to use it.

The problem is that translucent objects have normals too, and the only way to have them be lit properly is to do a lighting pass in between drawing every single translucent object.

You don’t do deferred rendering on transparent objects. You can’t, because you can’t accumulate light passes. Well, you kinda can, but you’d have to use an off-screen buffer for it, so it’s not really worth it.

This sounds like it would be expensive memory-wise

Everything is ultimately a tradeoff. Or did you think you were going to get free per-sample sorting on the GPU?

The general technique is shown in [this PowerPoint file](http://developer.amd.com/gpu_assets/OIT%20and%20Indirect%20Illumination%20using%20DX11%20Linked%20Lists_forweb.ppsx), though it uses D3D11 to do it. It can be translated to OpenGL.
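
For a rough idea of what that GL translation looks like, here is a hedged sketch of the build pass (GL 4.3-era GLSL for shader storage buffer support; all names and the node layout are my assumptions). A second full-screen pass then walks each pixel’s list, sorts its handful of fragments by depth, and composites them back-to-front.

```cpp
// Fragment shader for the linked-list build pass, stored as a C++ string.
const char* kOITBuildPassFS = R"(
#version 430 core

layout(binding = 0, r32ui) uniform coherent uimage2D uHeadPointers; // per-pixel list heads
layout(binding = 0) uniform atomic_uint uNextNode;                  // global node allocator

struct Node {
    uint  color;  // RGBA8, packed with packUnorm4x8
    float depth;
    uint  next;   // index of the next node in this pixel's list
};
layout(std430, binding = 0) buffer NodeBuffer { Node nodes[]; };

uniform uint uMaxNodes;  // capacity of NodeBuffer

in vec4 vShadedColor;    // this fragment, already lit by a forward path

void main()
{
    uint idx = atomicCounterIncrement(uNextNode);
    if (idx >= uMaxNodes)
        discard;  // out of node memory: drop the fragment

    // Atomically push this fragment onto the front of its pixel's list.
    uint prev = imageAtomicExchange(uHeadPointers, ivec2(gl_FragCoord.xy), idx);
    nodes[idx].color = packUnorm4x8(vShadedColor);
    nodes[idx].depth = gl_FragCoord.z;
    nodes[idx].next  = prev;
}
)";
```

As for the memory cost: the node buffer has to be sized for some assumed average depth complexity (say, a few nodes per pixel), which is exactly the tradeoff mentioned above.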