Stencil-op depth fail with depth testing disabled

If I disable depth testing with

glDisable(GL_DEPTH_TEST);

but still request a stencil op on depth pass / fail (in my case fail) with something like

glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR, GL_KEEP);

Will it still perform the stencil operation for failed depth tests as specified by the depth function, or does depth testing need to be enabled?

I ask because I need to draw all geometry regardless of what's in the z-buffer, but I also need to decrement/increment the stencil buffer on z-fails.

Will it still perform the stencil operation for failed depth tests as specified by the depth function

The question makes no sense. You turned off depth testing, so there is no such thing as a failed depth test.

Thanks for the reply, Alfonso.

See, I thought that the stencil operation would still check glDepthFunc, even though I disabled the depth test that keeps fragments from getting into the fragment shader.

Which brings me to the heart of the problem. Is it possible to get all geometry into the fragment shader regardless of fragment depth but still perform stencil operations based on depth test results?

The way I understand it is if I have:

glEnable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glDisable(GL_CULL_FACE);                 // need both faces of the volumes
glDepthFunc(GL_GREATER);
glDepthMask(GL_FALSE);                   // depth buffer is read-only here
glStencilMask(0xFF);
glStencilFunc(GL_ALWAYS, 0, 0xFF);       // stencil test always passes
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR, GL_KEEP); // front faces: -1 on depth fail
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR, GL_KEEP);  // back faces: +1 on depth fail

// Draw shadow volumes.

This is the correct state setting for having the stencil buffer keep 1s for shadowed regions. However, the problem is that shadow volume geometry that fails the depth test (in this case, geometry in front of my g-buffer's depth values) won't be drawn at all, even though I need the fragment shader to run for that geometry as well.

Is the only way to accomplish what I want two passes? I would like to avoid that if possible, which is why I am looking for a way to update the stencil buffer with one depth test function (GL_GREATER) but draw fragments with another (GL_ALWAYS).

Is it possible to get all geometry into the fragment shader regardless of fragment depth but still perform stencil operations based on depth test results?

So you want to perform an operation based on a test you’ve turned off?

The stencil value is part of a fragment's output. The fragment's output, all of it (colors to the various buffers, depth, stencil), is either written or not. Certain fragment outputs can be masked on or off, but only on a per-primitive basis; that is, you can choose not to write depth for a particular rendering command. But once the rendering command begins, it's all or nothing: either all of the non-masked color, depth, and stencil values will be written for a fragment, or none of them will be.
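For example, the masks are latched as part of the rendering command's state (a sketch; drawShadowVolumes() is a hypothetical placeholder):

// Masks are per-rendering-command state; they cannot vary per fragment.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
glDepthMask(GL_FALSE);                               // no depth writes
glStencilMask(0xFF);                                 // stencil writes allowed
drawShadowVolumes(); // every fragment of this draw obeys exactly these masks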

However, the problem is that shadow volume geometry that fails the depth test (in this case, geometry in front of my g-buffer's depth values) won't be drawn at all, even though I need the fragment shader to run for that geometry as well.

Why? Unless your depth buffer has some meaning other than distance from the camera, why would you need to shadow something that is behind something else and therefore not visible? It seems like a waste of time.

I would like to avoid that if possible, which is why I am looking for a way to update the stencil buffer with one depth test function (GL_GREATER) but draw fragments with another (GL_ALWAYS).

What are you doing with stencil shadowing where you need to "draw fragments" (by which I assume you mean write colors to the framebuffer) while rendering the shadow volumes? Usually, you draw the shadow volumes with the colors all masked off, so that no colors are written. You're writing stencil data, not color data.
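For reference, a conventional depth-fail stencil pass (the GL_LESS formulation; your GL_GREATER version swaps which faces pass) is set up with all color writes off, something like this sketch, where drawShadowVolumes() is again a placeholder:

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // stencil-only pass
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glDepthFunc(GL_LESS);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);  // +1 when a back face is occluded
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); // -1 when a front face is occluded
drawShadowVolumes();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore for later passes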

Thank you once again for the time taken to write the reply. Your help is appreciated.

OK, to better explain what I want to do:

I currently have a deferred rendering setup with two FBOs: 1) a g-buffer consisting of three attached color buffers holding various scene information (normals, albedo, material values) as well as a combined depth-stencil buffer, and 2) a draw buffer holding the final color buffer and another buffer for special "volume depth" information (used for a novel single-scatter algorithm I am working on). The same depth-stencil buffer attached to the g-buffer is also attached to the draw buffer.
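For reference, the shared attachment is set up roughly like this (a sketch; gBufferFBO and drawFBO are my FBO handles):

GLuint depthStencilTex;
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, wWidth, wHeight, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

// The same texture is attached to both FBOs, so depth and stencil are shared.
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depthStencilTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, drawFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depthStencilTex, 0);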

The rendering pass in question is the one where I draw shadow volumes, for two reasons (one, shadowing, and two, scatter information for a later single-scatter pass). In this pass I want to (see the sketch after the list):

  1. Increment the stencil buffer for all shadow volume back faces behind my scene geometry

  2. Decrement the stencil buffer for all shadow volume front faces behind my scene geometry*

  3. For all shadow volume front faces in front of the scene geometry, decrease the super special "volume depth" buffer by fragCoord.z

  4. For all shadow volume back faces in front of the scene geometry increase the super special “volume depth” buffer by fragCoord.z
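Steps 3 and 4 by themselves would look something like this in the fragment shader, with additive blending doing the accumulation (a sketch; volumeDepth is my name for the output feeding the "volume depth" attachment):

layout(location = 1) out float volumeDepth; // into the "volume depth" attachment

void main()
{
    // Front faces subtract depth (step 3), back faces add it (step 4);
    // additive blending (GL_ONE, GL_ONE) accumulates the signed values.
    volumeDepth = gl_FrontFacing ? -gl_FragCoord.z : gl_FragCoord.z;
}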

Now the problem here is that steps 1 and 2 don't seem to be possible together with steps 3 and 4, at least not in the same rendering pass. Why? Because if I enable depth testing, then 3 and 4 can't execute (the GL_GREATER depth test fails and those fragments are discarded). Meanwhile, if I disable depth testing, the stencil operations never happen, because the depth test never fails or succeeds since it isn't there (thank you for informing me of this).

I am now wondering if there is a way to accomplish what I desire without resorting to two passes (one with the depth test set to GL_GREATER for the stencil operations, and the other set to GL_LESS for writing to my super special buffer), because this would mean calculating the shadow volumes twice, which is undesirable (they are calculated in the geometry shader).
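Spelled out, the two-pass version I want to avoid would look roughly like this (a sketch; drawShadowVolumes() runs the geometry shader each time):

// Pass 1: stencil operations only (steps 1 and 2).
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilMask(0xFF);
glDepthFunc(GL_GREATER);
drawShadowVolumes(); // geometry shader extrudes the volumes -- first time

// Pass 2: write the "volume depth" buffer (steps 3 and 4).
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilMask(0x00); // leave the stencil buffer untouched
glDepthFunc(GL_LESS);
drawShadowVolumes(); // geometry shader extrudes the volumes -- second time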

The only solution I can think of is to use a color buffer as a fake stencil buffer. Doing something like:

out int fakeStencil;

// "corresponding" is this fragment's texcoord into the g-buffer textures
if (gl_FragCoord.z < texture(gDepth, corresponding).r && gl_FrontFacing)
    fakeStencil = texture(fakeStencilBuff, corresponding).r + 1;

But I doubt this naive approach would do anything useful, because GL will convert the texture lookup to a float, and probably doesn't even allow ints as outs.

  * This method is called "depth fail", which basically means you count all the shadow volume faces that aren't drawn. So yes, even though this seems counter-intuitive and a "waste of time", it gets rid of an ambiguity problem caused by the near clip plane sometimes clipping a shadow volume.

But I doubt this naive approach would do anything useful, because GL will convert the texture lookup to a float, and probably doesn't even allow ints as outs.

Just use a non-normalized integer texture format (R8UI, for example). And similarly, you can write integer values to outputs, and funnel those outputs into buffers with integral image formats.
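In GLSL that would look something like this (a sketch with made-up names, and setting aside the feedback-loop caveat of sampling an attachment you are currently rendering to):

uniform isampler2D fakeStencilBuff;       // integer sampler: no float normalization
layout(location = 1) out int fakeStencil; // integer output into an R8I/R8UI attachment

void main()
{
    // texelFetch on an isampler2D returns raw integers, not floats in [0, 1].
    int prev = texelFetch(fakeStencilBuff, ivec2(gl_FragCoord.xy), 0).r;
    fakeStencil = prev + 1;
}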

To pursue this avenue of thought further I have two questions:

  1. Is there a special function in GLSL for reading integral textures that does not clamp the result to a float between 0.0 and 1.0? If not, how does OpenGL handle the conversion from signed int to float for negative numbers?

  2. Does GL_BLEND still apply for integer textures? For example, I would like to have in my application:

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive: dst = src + dst

out int fakeStencil;

void main()
{
    fakeStencil = -1;
}

And for this to decrement the fake stencil by exactly 1. Would that work?

EDIT:

I actually have a more fundamental problem:

If I attach a texture defined like so:

glBindTexture(GL_TEXTURE_2D, cStencil);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8I, wWidth, wHeight, 0, GL_RED, GL_RED_INTEGER, NULL);

to a framebuffer object as a color attachment, I get nothing. Even trying to draw to the other, regular floating-point color attachment fails.

If, however, I change the above glTexImage2D call to a safe one like

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wWidth, wHeight, 0, GL_RGBA, GL_FLOAT, NULL);

Then I can draw to both color attachments (as expected).

I have checked for framebuffer completeness in both cases and everything seems to be a-ok. So what am I doing wrong now?

EDIT EDIT:
Fixed using

glTexImage2D(GL_TEXTURE_2D, 0, GL_R8I, wWidth, wHeight, 0, GL_RED_INTEGER, GL_BYTE, NULL);

I'm unsure whether to use GL_BYTE or GL_INT as the type.

I cannot edit the above post anymore.

An update.

I can safely say that the blend function does not work with integer outputs, even to color attachments, and that doing your own stencil buffer with an integer texture is much, much slower than the fixed-pipeline one. Now if only I could get a 32-bit depth buffer going together with a stencil buffer, everything would be fine, but that doesn't seem to be possible, so :frowning: