Fragment shader - Occlusion Culling

Hey,

I have a short question about occlusion culling done in the fragment shader.
What do you think of the following algorithm:


Take the previously rendered depth buffer from the scene
Pass it to the forward rendering shader
All shader stages, except the fragment shader, are default pass-through shaders
The fragment shader calculates the screen-space coordinate of the current fragment
The fragment shader compares the value from the previously rendered depth buffer with the current fragment's depth
The fragment shader skips the fragment if the fragment's depth is higher than the previous depth value (skip it with the keyword "discard")
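The comparison step could be sketched roughly like this in a GLSL fragment shader (just a sketch of the idea; the uniform names `u_prevDepth` and `u_screenSize` are hypothetical):

```glsl
#version 330 core

uniform sampler2D u_prevDepth;  // depth buffer from the previous pass (hypothetical name)
uniform vec2 u_screenSize;      // viewport size in pixels (hypothetical name)

out vec4 fragColor;

void main()
{
    // Screen-space texture coordinate of the current fragment
    vec2 uv = gl_FragCoord.xy / u_screenSize;

    // Depth value stored by the previous pass at this pixel
    float prevDepth = texture(u_prevDepth, uv).r;

    // Discard the fragment if it lies behind the previously stored depth
    if (gl_FragCoord.z > prevDepth)
        discard;

    fragColor = vec4(1.0); // the normal shading would go here
}
```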

I know that the fragment shader runs near the end of the pipeline, after rasterization, and I don't know if this would gain enough performance to be worth implementing.
With this method, the remaining per-fragment processing for the discarded fragment would be skipped.

Gain performance in what way? Aren’t you just talking about a light pre-pass? And if so, that shouldn’t involve the fragment shader at all; depth testing isn’t something a fragment shader should be doing.

Gain performance in what way?

Occluded fragments do not go through the rest of the per-fragment processing.

Aren’t you just talking about a light pre-pass?

No, I just take the depth texture from the previous render pass and compare it with the currently rendered fragments, so I don't need to pre-render the scene.

Altogether, this fragment-shader occlusion culling is like the standard forward-rendering shadow-mapping algorithm, except that you don't take the light's depth texture but the camera's depth texture from the previous render pass, and instead of modifying the color of the fragment, you just discard it.

And I think this could maybe increase the performance of the whole scene by a few frames, depending on the scene's complexity and occlusion.

Hopefully I described it well enough :wink:

OK, it’s not clear to me what your algorithm here is, so let me try to restate it and see if that’s correct.

You have a depth buffer. It’s from the “previous render pass”, which I’ll assume means “previous render frame”. You use this depth buffer in the fragment shader, comparing its depth value against your own to see if it’s closer. And if it is, you cull the fragment.

The problem is that this is not good enough. If you cull based on the last frame, your depth data is old (and if your camera has moved since then, it won't even work that well). If some meshes have moved around, you could end up culling objects that are actually visible.

And if “previous render pass” means earlier in the same frame, then, as I said before, what you're doing is pointless. This is just regular depth buffering; the hardware's depth test already performs exactly that comparison, so you don't need the fragment shader to do anything. That's just multipass rendering; it's not occlusion culling.

You got it :smiley:

OK, that fully answers my question, thank you very much!

Which part did I get?

I meant my algorithm.

You have a depth buffer. It’s from the “previous render pass”, which I’ll assume means “previous render frame”. You use this depth buffer in the fragment shader, comparing its depth value against your own to see if it’s closer. And if it is, you cull the fragment.