Alpha Test + FBO (FP16)

Hi,

I have two questions concerning the combination of Alpha Test and Framebuffer Objects:

  1. When Alpha Test is enabled, the program crashes at rendering. Why could that be?

  2. When is the alpha test applied? Before or after the fragment shader stage?

Thanks,
Benjamin

After, I think. Though if you are using fragment shaders, it would be a little easier just to use the discard keyword in the shader.
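For example, a minimal sketch (the texture name and threshold are made up, not from this thread):

// Sketch only: the shader kills the fragment itself instead of relying
// on the fixed-function alpha test. Names and threshold are hypothetical.
const char *fragmentSource =
    "uniform sampler2D tex;\n"
    "void main()\n"
    "{\n"
    "    vec4 color = texture2D(tex, gl_TexCoord[0].st);\n"
    "    if (color.a < 0.5)\n"
    "        discard; // fragment is thrown away here\n"
    "    gl_FragColor = color;\n"
    "}\n";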

I might be totally off here, but I remember that the early-Z pass wasn’t working with FP16 render targets, and it wasn’t working with alpha test on either.

So even if the specs say the alpha test is applied before, which I doubt, since the fragment shader is free to modify the alpha value, it might be different with FP render targets.

But I guess there are people here who know this better… I’ve been away from OpenGL for about a year :(

Early-Z should work with FP16 targets, but that is a different issue.
I just did a test with alpha testing and an FP16 render target, and it works fine; maybe it depends on your graphics card.
I don’t think alpha testing is applied before the fragment stage, simply because you have to know the final alpha value to do the test, so it has to happen during or just after the fragment stage (depending a little on what is being done).

At first: Thanks for your answers.

My card is an ATI 9600.

About the discard:
Of course, I could discard the fragments. But I have read that a lot of cards only discard the fragment at the end of fragment processing - and that is too late, because:

  1. I have a multipass algorithm which is computationally intense - I want to save time at every fragment possible.

  2. I generate a map at the beginning, showing important pixels (which have to be rendered). The rest can be thrown away and should not go through the fragment stage.

How can I avoid that?

I’m using alpha test with FP16 and RGBA8 render targets. I never experienced any problems on any GPU.

You could use a shader that writes small and large depth values in a first pass, and then just use the depth test when drawing fullscreen quads (see the sketch below). You could also use the stencil test.
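Roughly like this, perhaps (untested sketch; the two draw helpers are hypothetical, and the early-out relies on the hardware’s early-Z rejection):

// Pass 1: lay down a depth mask. The shader writes gl_FragDepth = 1.0
// for important pixels and 0.0 for pixels that should be skipped.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glDepthMask(GL_TRUE);
drawMaskPass();              // hypothetical helper

// Passes 2..n: draw fullscreen quads at depth 0.5 with depth writes off.
// Fragments over masked-out pixels (depth 0.0) fail GL_LESS and get
// rejected by early-Z before the expensive shader runs.
glDepthFunc(GL_LESS);
glDepthMask(GL_FALSE);
drawFullscreenQuad();        // hypothetical helper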

As far as I know, cards that do not support FP16 blending also do not support alpha testing for that format. This is the case for anything older than GeForce 6 or Radeon X1x00.

As far as I know, cards that do not support FP16 blending also do not support alpha testing for that format
Could be. I only use FP16 formats on GPUs that support FP16 blending.
On others I use RGBA8, but alpha test works with FBO on those.

And what about stencil test?

Stencil is independent of texture formats, so it should work OK. Just remember to use packed_depth_stencil if available.
I’ve never tried it with FBOs, though, so my opinion is just an opinion, not a fact.
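In case it helps, the stencil version of the masking idea would look roughly like this (a sketch; the draw helpers are made-up names):

// Pass 1: tag the important pixels with stencil value 1.
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawMaskPass();              // hypothetical helper

// Later passes: only fragments where stencil == 1 get processed.
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawFullscreenQuad();        // hypothetical helper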

Hi, I tried now to attach two renderbuffers (one depth and one stencil) to my framebuffer object, with:

  
// initialize depth renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
                         GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthBuffer);

// initialize stencil renderbuffer (sized format; the unsized
// GL_STENCIL_INDEX is not accepted by all drivers)
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, stencilBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_STENCIL_INDEX8_EXT, width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, stencilBuffer);

If I check for completeness now, it reports GL_FRAMEBUFFER_UNSUPPORTED_EXT. If I don’t attach these two renderbuffers (having just one framebuffer with two floating-point textures), it works perfectly. Changing the internal formats to fixed point doesn’t change anything.
It seems that the GL_DEPTH_STENCIL extension is not supported by the ATI 9600.
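For reference, the completeness check I mean is roughly this (sketch):

// Completeness check with EXT_framebuffer_object
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // here status comes back as GL_FRAMEBUFFER_UNSUPPORTED_EXT,
    // i.e. the driver rejects this combination of attachments
}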

Any ideas how I can get a double-buffered FBO + stencil?

Try with GL_DEPTH_COMPONENT16.

It seems that the GL_DEPTH_STENCIL extension is not supported by the ATI 9600.
That’s right - it’s NVIDIA’s extension. On GeForce you should use it instead of separate renderbuffers.

Hi olmeca,

As k_szczech pointed out, you have to use packed_depth_stencil, since separate depth and stencil attachments are not supported right now. I use this code for FBO + stencil:

glGenRenderbuffersEXT(1, &m_depthBufferID);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, m_depthBufferID);
// packed depth+stencil storage (GL_DEPTH24_STENCIL8_EXT is the sized
// internal format EXT_packed_depth_stencil defines for renderbuffers)
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, m_width, m_height);
// attach the same renderbuffer to both attachment points
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, m_depthBufferID);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, m_depthBufferID);
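If you want to guard that path, a quick extension check might look like this (sketch; needs <string.h> for strstr):

// Only take the packed depth-stencil path if the extension is present.
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
if (ext == NULL || strstr(ext, "GL_EXT_packed_depth_stencil") == NULL) {
    // fall back - no stencil attachment in the FBO on this driver
}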

Hope this helps,

GuentherKrass

Thank you for your answers.

PACKED_DEPTH_STENCIL is an NVIDIA extension. It is not supported by my ATI 9600. Does anybody know how to use an FBO with stencil on an ATI?

It is not supported by my ATI 9600. Does anybody know how to use an FBO with stencil on an ATI?

I’m afraid you can’t use a stencil target under ATI. And even with NVIDIA, you can’t sample the stencil from fragment shaders anyway…

Originally posted by olmeca:
At first: Thanks for your answers.

My card is an ATI 9600.

About the discard:
Of course, I could discard the fragments. But I have read that a lot of cards only discard the fragment at the end of fragment processing - and that is too late, because:

  1. I have a multipass algorithm which is computationally intense - I want to save time at every fragment possible.
  2. I generate a map at the beginning, showing important pixels (which have to be rendered). The rest can be thrown away and should not go through the fragment stage.

How can I avoid that?

  1. Your card does not support dynamic branching in the fragment shader and is thus unable to “early-out”. It will always run the full fragment shader except when the stencil or depth tests fail. That leads directly to…
  2. Since you can’t use stencil with an FBO on your card, you have to use depth to mask out pixels (as in the depth-mask sketch earlier in the thread).

This is pedantic, but EXT_packed_depth_stencil is not an “NVIDIA” extension. It was developed in the (then) ARB with participation from ATI members (see the list in the extension spec). ATI has declined to implement it so far, perhaps because their hardware doesn’t support it, although they have also considerably lagged in supporting other newish extensions where hardware support isn’t an issue.

But the bottom line is that those of us who want stencil for offscreen rendering on ATI cards are stuck using pbuffers for now.

Originally posted by Komat:
As far as I know, cards that do not support FP16 blending also do not support alpha testing for that format. This is the case for anything older than GeForce 6 or Radeon X1x00.
That is the case in D3D, because they packed everything after the pixel shader (blending, alpha test, color mask, fog, etc.) under the same POSTPIXELSHADER_BLENDING caps bit, so you have to support all of it to expose the flag; but the stages are not tied together in any way in OpenGL. The 9600 doesn’t support alpha test on an FP render target, but the X800 does (though not blending).

Originally posted by Humus:
The 9600 doesn’t support alpha test on an FP render target, but the X800 does (though not blending).
Thanks for the clarification. I based my comment on both the DX caps flag and the “ATI OpenGL Programming and Optimization Guide” from the ATI SDK. In the part about limitations of R300 and R400 hardware, the Guide mentions that alpha test on a floating-point buffer will cause software rendering, with an additional comment that it is supported on the R500 series. Is it an omission in the documentation that it does not mention support on the R420?