# Thread: Texture and depth precision: bits, bits, bits...

1. ## Texture and depth precision: bits, bits, bits...

Hi guys,

I'm coding shadow mapping for three platforms: a Nokia N900, an iPad 2, and a common desktop.

I want to compare the depth-buffer precision on all of them, i.e., how many bits I have for the depth buffer on each one, and how many bits per channel I can write to the RGBA texture during shadow-map generation.

I'm a little bit confused here. How do I get these numbers?
Are all of them integers?

For example, on the desktop, if I get the precision with glGetIntegerv:
Code :
GLint dbits = 0;
glGetIntegerv(GL_DEPTH_BITS, &dbits);
qDebug() << "z-buffer bits: " << dbits;
and this tells me: 24.

But when I generate my shadow map FBO, I'm using
Code :
and it works! Doesn't that mean my z-buffer is 32-bit floating point instead of 24-bit?

Now, concerning the texture generation,
If I ask for the precision of the color buffer, for example of the blue channel:
Code :
GLint bbits = 0;
glGetIntegerv(GL_BLUE_BITS, &bbits);
qDebug() << "b bits: " << bbits;
it gives me: 8.

But again, when I create my depth texture (which I'll use to write the depth into), I ask:
Code :
glGenTextures(1, &depthTex[0]);
glBindTexture(GL_TEXTURE_2D, depthTex[0]);
I'm asking you guys: is my texture 32-bit-per-channel floating point or not?

And most important of all: where do I find specific information on how many bits I have on each platform? Which query should I use? Am I querying it correctly?

Thank you!

2. ## Re: Texture and depth precision: bits, bits, bits...

With glGetIntegerv you query the bit depths of the currently bound framebuffer. That has nothing to do with your textures as long as you don't attach them to an FBO and bind that.
You currently have a 24-bit depth buffer, but a 32-bit buffer is also possible (and you created one, you just didn't use it).

3. ## Re: Texture and depth precision: bits, bits, bits...

To query the parameters of an FBO's attachment, you should use:
Code :
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &dbits);
When I last checked, this function had problems on certain OpenGL implementations when querying properties of the default framebuffer. That left you with no way to query default-framebuffer properties at all: glGetIntegerv with GL_XXXX_BITS is no longer allowed in the core profile, but those implementations also wouldn't let you use:
Code :
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &dbits);
As it was reporting GL_BACK_LEFT as an invalid enum.

4. ## Re: Texture and depth precision: bits, bits, bits...

> As it was reporting GL_BACK_LEFT as an invalid enum.
That's a straight-up driver bug. Do you remember at least whether it was NVIDIA or AMD that was doing it?

5. ## Re: Texture and depth precision: bits, bits, bits...

Hmm, found my post from back then (December 2009 + revived much later) - http://www.opengl.org/discussion_boa...;Number=257357

AMD Catalyst 12.2 still fails with GL_INVALID_ENUM when querying the default framebuffer with glGetFramebufferAttachmentParameteriv, in both the core and compatibility profiles, and glGetIntegerv(GL_DEPTH_BITS, &dbits) is disabled in the core profile, leaving no way to query it at all. Not sure whether NVidia have fixed glGetFramebufferAttachmentParameteriv, or disabled querying GL_XXX_BITS in a core profile yet.
