A variation of deferred rendering

I had this idea last night; it's a kind of variation of deferred rendering. If I'm right, it should be compatible with MSAA and doesn't require the large G-buffer associated with classic deferred rendering.

My idea is to first do a depth-only prepass of the scene. Then you render all the geometry again with access to the previous depth buffer. In this pass you compare the depth from the old depth buffer with the new depth you are about to write (these operations are done in the fragment shader). If the new depth is equal to the old depth, you perform the shading calculations.
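To make the idea concrete, here is a toy CPU sketch of the two passes (all names and numbers are made up for illustration; in real OpenGL the second pass's comparison would just be the depth test with `glDepthFunc(GL_EQUAL)`, not a manual check in the fragment shader):

```python
# Toy CPU simulation of the two-pass idea: a depth-only prepass,
# then a second pass that shades a fragment only when its depth
# equals the stored depth. Rasterization is abstracted away;
# each "fragment" is just (pixel_index, depth).

def depth_prepass(fragments, num_pixels):
    """Pass 1: keep only the nearest depth per pixel (depth-only)."""
    depth_buffer = [float("inf")] * num_pixels
    for pixel, depth in fragments:
        if depth < depth_buffer[pixel]:
            depth_buffer[pixel] = depth
    return depth_buffer

def shading_pass(fragments, depth_buffer):
    """Pass 2: shade only fragments whose depth matches the prepass."""
    shaded = 0
    for pixel, depth in fragments:
        if depth == depth_buffer[pixel]:   # the GL_EQUAL depth test
            shaded += 1                    # expensive shading would run here
    return shaded

# Three overlapping surfaces on a 4-pixel "screen":
frags = [(0, 0.5), (1, 0.5), (2, 0.5),    # near quad
         (0, 0.9), (1, 0.9),              # occluded quad
         (3, 0.2)]                        # another visible surface
zbuf = depth_prepass(frags, 4)
print(shading_pass(frags, zbuf))          # 4 visible pixels shaded, not 6
```

Only the 4 fragments that survive the depth comparison pay for shading, even though 6 fragments were rasterized.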

If I'm right, this will only perform shading calculations on the visible pixels, it would eliminate the need for a large G-buffer, and it should be compatible with MSAA. Either I have misunderstood everything I've learned in OpenGL, or it should work.

I would just like to know what you guys think about this idea before I try to implement it myself.

Edit: I might have put this in the wrong section of the forum. If this thread is misplaced, please move it.

I’m not seeing how this would accomplish anything that the GPU doesn’t already automatically do for you anyway (i.e. early Z rejection before the fragment shader runs). The required shader branching might also make things expensive.

I think you might have misunderstood me. The method described would, much like deferred rendering, only shade pixels that are visible on the final output, but without the cost of large g-buffers.

What you describe is a Pre-Z pass; see e.g. here: http://developer.amd.com/media/gpu_assets/Depth_in-depth.pdf

The main advantage of deferred shading is not in the actual final shading, but rather in the way you do light accumulation.
The main performance increase lies in running parts of the lighting code separately and only where it's needed, using data that's already been calculated, so there's no need for multiple passes even if you do have early Z rejection.

And if I remember correctly, the Doom 3 engine uses this very method, and every NVIDIA card (probably ATI/AMD as well) since the GeForce 5900 has it on by default.

In this pass you compare the depth from the old depth buffer with the new depth you are about to write (these operations are done in the fragment shader). If the new depth is equal to the old depth, you perform the shading calculations.

If I'm right, this will only perform shading calculations on the visible pixels, it would eliminate the need for a large G-buffer, and it should be compatible with MSAA

BUT, at which point are you writing out all the information necessary to separate out your lighting information from the geometry (i.e. the G-Buffer contents)?

The Z-Test in the second pass can be done by GL itself.
Then your algorithm is a Pre-Z pass, as mbentrup correctly stated, and the disadvantages are the need for two geometry passes and less flexibility in lighting compared to a full deferred renderer.

Possible, works well, but sadly, you're a few years too late. Take it as a sign that you have understood your OpenGL/rendering basics :)

Thanks! I’ll look into that!

How can the performance be increased by simply running the lighting code separately? If the lighting calculations and accumulations only are done on visible areas, how can there be any performance increase just by separating the lighting calculations from the rest of the computations?

I calculate the information in the second pass and use it there as well.

Needing to draw the geometry twice sucks, but seeing as deferred lighting gets away with it at acceptable speeds, I think it's not such a big disadvantage. Could you explain how there would be less flexibility with lighting? As I've understood it, classic deferred rendering has problems with that, so how could a pre-Z pass make it less flexible?

Your object on screen: 1 million pixels. 10 lights cover 1k pixels of it each. First case: you do 10-40 million lighting calculations, of which only 10k are useful. Second case (deferred shading): you do 10k calculations, all or most of which are useful.
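Spelling out that arithmetic (counting one light evaluation per covered pixel as one "calculation", numbers taken from the post):

```python
pixels = 1_000_000          # the object covers 1M pixels on screen
lights = 10
pixels_per_light = 1_000    # each light actually touches only 1k pixels

# Forward / pre-Z: every visible pixel evaluates every light.
forward_cost = pixels * lights            # 10,000,000 evaluations

# Deferred light accumulation: each light's shader runs only over
# the pixels it covers (e.g. by rasterizing its bounding volume).
deferred_cost = lights * pixels_per_light # 10,000 evaluations

print(forward_cost, deferred_cost)        # 10000000 10000
```

That is the three-orders-of-magnitude gap the post is pointing at; the "40 million" upper bound would correspond to a more expensive per-light evaluation, not a different pixel count.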

With pre-Z you will have to evaluate the lighting from each light source for each fragment. But if you have lots of small light sources with a very limited radius of influence (in screen space), you can limit the lighting calculations to just that radius if you do the lighting in a separate step (that's what Ilian Dinev explained).
You can also do the lighting at a lower resolution (Inferred Lighting) if you want, which would not be an option with pre-Z.
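As a rough sketch of the "radius of influence" point (screen size, light positions, and radii are made up for illustration): each light's work can be clipped to its screen-space bounding rectangle instead of being evaluated over the whole screen:

```python
# Toy model of limiting deferred lighting work to each light's
# screen-space footprint instead of the full screen.

width, height = 100, 100

def light_rect_area(cx, cy, radius):
    """Pixel area of a light's clamped screen-space bounding rectangle."""
    x0, x1 = max(0, cx - radius), min(width, cx + radius)
    y0, y1 = max(0, cy - radius), min(height, cy + radius)
    return (x1 - x0) * (y1 - y0)

# (center_x, center_y, screen-space radius) for three small lights:
lights = [(10, 10, 5), (50, 50, 8), (90, 90, 5)]

full_screen_cost = width * height * len(lights)            # evaluate everywhere
bounded_cost = sum(light_rect_area(*l) for l in lights)    # only inside rects

print(full_screen_cost, bounded_cost)  # 30000 456
```

In a real deferred renderer the "rectangle" is usually a light bounding volume (sphere or cone) rasterized in the accumulation pass, but the cost argument is the same.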
Deferred shading has conceptually no problems. In practice, antialiasing was a problem for a long time, but nowadays you could even do the deferred shading per fragment. The main drawbacks are the high memory consumption and bandwidth requirements.
Whether rendering the geometry twice is a problem depends on your geometric complexity as well as the complexity of your vertex/tessellation/geometry shaders…