Texture and depth precision: bits, bits, bits

Hi guys,

I’m implementing shadow mapping on three platforms: a Nokia N900, an iPad 2 and an ordinary desktop.

I want to compare the precision of the depth buffer on each of them, i.e., how many bits I get for the depth buffer on each platform, and how many bits per channel I can write to the RGBA texture during shadow map generation.

I’m a little confused here. How do I get these numbers?
Are they all integers?

For example, on the desktop, if I query the precision with glGetIntegerv:

GLint dbits = 0;
glGetIntegerv(GL_DEPTH_BITS, &dbits);
qDebug() << "z-buffer bits: " << dbits;

it tells me 24.

But when I create my shadow map FBO, I’m using


glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, shadowMapSize, shadowMapSize);

and it works! Doesn’t that mean my z-buffer is a 32-bit float rather than 24 bits?

Now, concerning the texture generation: if I ask for the precision of the color buffer, for example the blue channel:


GLint bbits = 0;
glGetIntegerv(GL_BLUE_BITS, &bbits);
qDebug() << "b bits: " << bbits;

it gives me 8.

But again, when I create my depth texture (which I’ll write the depth into), I ask for:


glGenTextures(1, &depthTex[0]);
glBindTexture(GL_TEXTURE_2D, depthTex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, shadowMapSize, shadowMapSize, 0, GL_RGBA, GL_FLOAT, 0);

So I’m asking you guys: is my texture 32-bit floating point per channel or not?

And most important of all: where do I find specific information about how many bits I have on each platform? Which query should I use? Am I querying it correctly?

Thank you!

glGetIntegerv queries the bit depth of the currently bound framebuffer. It has nothing to do with your textures as long as you don’t attach them to an FBO and bind that.
You currently have a 24-bit depth buffer, but a 32-bit buffer is also possible (and you created one, you just didn’t use it).
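
Just to spell out the “you just didn’t use it” part, here’s a minimal sketch, assuming a desktop GL 3.x context, of attaching that 32F renderbuffer plus your RGBA32F texture to an FBO and binding it, which is what actually makes rendering use them. shadowFbo and shadowDepthRb are names I made up; shadowMapSize and depthTex[0] are from your code.

GLuint shadowFbo = 0, shadowDepthRb = 0;

// Allocate the 32-bit float depth storage (this alone changes nothing yet)
glGenRenderbuffers(1, &shadowDepthRb);
glBindRenderbuffer(GL_RENDERBUFFER, shadowDepthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, shadowMapSize, shadowMapSize);

// Attach it, together with the RGBA32F texture, to an FBO
glGenFramebuffers(1, &shadowFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, shadowFbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, shadowDepthRb);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, depthTex[0], 0);

if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    qDebug() << "shadow FBO incomplete";

// While shadowFbo is bound, rendering uses the 32F depth buffer;
// glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0) goes back to the default 24-bit one.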

To query the parameters of an FBO’s attachment, you should use:

glGetFramebufferAttachmentParameteriv( GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &dbits);
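
With the shadow FBO bound you can ask each attachment how many bits it actually got, and glGetTexLevelParameteriv answers the same question for the texture object itself. Note this is desktop GL; as far as I remember, ES 2.0 on the N900 / iPad 2 doesn’t expose these size queries, so there you’re limited to glGetIntegerv(GL_DEPTH_BITS) and friends. shadowFbo is the made-up name from the sketch above.

GLint depthBits = 0, fboRedBits = 0, texRedBits = 0;

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, shadowFbo);

// Bits of the depth renderbuffer attached to the shadow FBO
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);

// Bits of the red channel of the color attachment (your RGBA32F texture)
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &fboRedBits);

// Or ask the texture directly, independent of any FBO
glBindTexture(GL_TEXTURE_2D, depthTex[0]);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &texRedBits);

qDebug() << "depth bits:" << depthBits << "FBO red bits:" << fboRedBits << "texture red bits:" << texRedBits;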

When I last checked, this function had problems on certain OpenGL implementations when querying properties of the default framebuffer. That left you with no way to query default-framebuffer properties at all, since glGetIntegerv with GL_XXXX_BITS is no longer allowed in the core profile, yet those implementations also wouldn’t let you use:


glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &dbits);

because the implementation was reporting GL_BACK_LEFT as an invalid enum.
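
If you want to check whether your own driver is affected, a plain glGetError check around the query will tell you:

GLint redBits = 0;

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // default framebuffer
while (glGetError() != GL_NO_ERROR) {}       // flush any stale errors first

glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &redBits);

GLenum err = glGetError();
if (err == GL_INVALID_ENUM)
    qDebug() << "driver rejects GL_BACK_LEFT on the default framebuffer";
else
    qDebug() << "default framebuffer red bits:" << redBits;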

That’s a straight-up driver bug. Do you remember at least whether it was NVIDIA or AMD that was doing it?

Hmm, I found my post from back then (December 2009, revived much later): http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=257357

AMD Catalyst 12.2 still fails with GL_INVALID_ENUM when querying the default framebuffer with glGetFramebufferAttachmentParameteriv, in both the core and compatibility profiles, and glGetIntegerv(GL_DEPTH_BITS, &dbits) is disabled in the core profile, leaving no way to query it there. Not sure whether NVIDIA has fixed glGetFramebufferAttachmentParameteriv, or disabled querying GL_XXX_BITS in a core profile, yet.
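
When I re-test this I also log which implementation I’m running on next to the query result, so there’s no guessing later; glGetString is all it takes:

// Record vendor / renderer / version alongside the test result
qDebug() << "GL_VENDOR:  " << reinterpret_cast<const char*>(glGetString(GL_VENDOR));
qDebug() << "GL_RENDERER:" << reinterpret_cast<const char*>(glGetString(GL_RENDERER));
qDebug() << "GL_VERSION: " << reinterpret_cast<const char*>(glGetString(GL_VERSION));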
