Deferred Shading

Hi,
I am looking for an example of doing deferred shading with OpenGL using only pure ARB extensions. Possible?

http://www.area3d.net/story.php?id=810

Can’t be done without using some vendor extensions; mine here only uses ATI_draw_buffers and the ATI float texture extension. It may actually run on an NVIDIA 6800.
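
Roughly, the MRT side of it is driven like this in C (a minimal sketch only, assuming a float pbuffer with aux buffers is already current; drawSceneGeometry is a made-up placeholder for the G-buffer geometry pass, not code from the demo):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Hypothetical placeholder for the pass that renders the scene while the
       fragment program writes position/normal/albedo/material ID into
       result.color[0..2]. */
    void drawSceneGeometry(void);

    /* Sketch: route fragment program outputs to three buffers of the
       currently bound float pbuffer via GL_ATI_draw_buffers.  The entry
       point is assumed to have been fetched with wglGetProcAddress or
       glXGetProcAddress. */
    void buildGBuffer(PFNGLDRAWBUFFERSATIPROC glDrawBuffersATI)
    {
        const GLenum targets[] = { GL_FRONT_LEFT, GL_AUX0, GL_AUX1 };
        glDrawBuffersATI(3, targets);
        drawSceneGeometry();
    }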

Hi…

I’m actually working on a deferred rendering system and I have to say there are a lot of interesting problems around the implementation. I want it running on both common hardware platforms, so I have to create multiple code paths for ATI and NVIDIA cards. I’ve got a GFFX 5900XT, so until now I could only experiment with the 128-bit float FAT buffer method in the absence of MRTs.

The first, but minor, problem is tightly packing the necessary properties (position, normals, albedo, etc.); 128 bits of space is not that much. It would really help to have integer binary operations on the GPU for packing the elements (like on the Xbox), so we could use only 5 bits for ambient occlusion, for example. Of course the lack of space can easily be worked around by doing one more pass with a traditional 8-bit RGBA render target, but with that we lose the major benefit of a deferred system. I’ll give it a try anyway, and this is not a serious problem once MRTs are available.

The second and most important problem is supporting multiple materials/shaders instead of one monolithic lighting shader for the whole scene. Some time ago I read about a relatively easy way to work around this, maybe in one of the Advanced forum topics, I can’t remember. The essence of the method is to store material IDs in the FAT buffer while building up the G-buffer, and afterwards run a depth-replacing pass that simply draws a screen-aligned quad, reads the IDs from the G-buffer and writes them straight into the display context’s depth buffer. I’ve done it like this:
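
In fixed-function terms the depth-replacing pass is set up roughly like this (a sketch only; the fragment program that reads the ID channel and writes id * 0.01 into result.depth is assumed to be bound already):

    #include <GL/gl.h>

    /* Sketch of the depth-replacing pass: colour writes off, depth writes on,
       and a screen-aligned quad whose (already bound) fragment program reads
       the material ID from the G-buffer texture and outputs id * 0.01 as the
       fragment depth.  Identity modelview/projection matrices are assumed. */
    void depthReplacePass(void)
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_ALWAYS);

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
        glEnd();

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }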

In the G-buffer pass the IDs are integer numbers, for example in the range 0…100; these are packed and passed as 16-bit half floats to the next pass through the FAT buffer. In the following pass we draw the screen-aligned quad, multiply the material IDs (after taking their integer part with FLR) read from the G-buffer by 0.01, and write them into the display context’s depth buffer. In the deferred phase we use early Z rejection to drop the pixels that have a different material ID/Z depth, so we draw the quad with the proper Z coordinate (0.23 for the 23rd material/shader). The main problem here is precision. Despite the full-precision 32-bit multiplication, the GL_EQUAL depth test is only usable and correct for relatively few float values. On the FX 5900 we can use the EXT_depth_bounds_test extension, which solves this problem, but it only works on these cards and the 6800s of course. I can’t use early stencil rejection either because of the lack of support. Besides, as far as I can see the depth-replacing shader is quite slow despite its simplicity. I’m using the NV_fragment_program API. Any ideas would be appreciated…
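
Here is roughly what the per-material passes look like on my side (a sketch only; bindMaterialShader and drawFullScreenQuadAtDepth are made-up helpers, and the depth-bounds branch obviously only applies to cards exposing EXT_depth_bounds_test):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Hypothetical helpers assumed to exist elsewhere in the application. */
    void bindMaterialShader(int id);
    void drawFullScreenQuadAtDepth(float z);

    /* Sketch of the deferred lighting phase: one full-screen quad per material,
       drawn at z = id * 0.01 so that the depth written by the depth-replacing
       pass masks out pixels belonging to other materials.  Pass a NULL
       glDepthBoundsEXT pointer to fall back to the GL_EQUAL path. */
    void lightingPasses(int materialCount, PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT)
    {
        int id;
        glDepthMask(GL_FALSE);                     /* keep the replaced depth */

        if (glDepthBoundsEXT)
        {
            /* Depth bounds path: reject pixels whose stored depth falls
               outside a small window around id * 0.01, side-stepping the
               GL_EQUAL precision problem. */
            glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
            glDisable(GL_DEPTH_TEST);
        }
        else
        {
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_EQUAL);                 /* exact-match fallback */
        }

        for (id = 0; id < materialCount; ++id)
        {
            float z = (float)id * 0.01f;
            if (glDepthBoundsEXT)
                glDepthBoundsEXT(z > 0.005 ? z - 0.005 : 0.0, z + 0.005);
            bindMaterialShader(id);
            drawFullScreenQuadAtDepth(z);
        }

        if (glDepthBoundsEXT)
            glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
        glDepthMask(GL_TRUE);
    }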

LOL :smiley: Factor, the stupid one!!! I’ve just read some old topics about early Z rejection on NV cards. I must have been lagging or something :smiley: but as we say in Hungary: jobb később, mint soha (better late than never). :smiley: peace…