Hi guys,
I’m coding shadow mapping for three platforms: a Nokia N900, an iPad 2, and a regular desktop PC.
I want to compare the depth buffer precision across all of them, i.e., how many bits the depth buffer has on each one, and how many bits per channel I can write to the RGBA texture when generating the shadow map.
I’m a little bit confused here. How do I get these numbers?
Are all of them integers?
For example, on the desktop, if I query the precision with glGetIntegerv:
GLint dbits = 0;
glGetIntegerv(GL_DEPTH_BITS, &dbits);
qDebug() << "z-buffer bits: " << dbits;
and this tells me: 24.
But when I generate my shadow map FBO, I’m using
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, shadowMapSize, shadowMapSize);
and it works! Doesn’t that mean my z-buffer is a 32-bit floating-point buffer rather than a 24-bit one?
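I suspect GL_DEPTH_BITS only reports on whatever framebuffer is bound when I call it (the window’s default framebuffer in my case), and that I should be asking the FBO about its attachment instead. Is something like this the right query? (With the shadow FBO bound; the variable name is just mine, and I’m not sure this query even exists on the ES 2.0 devices.)
GLint fboDepthBits = 0;
// ask the currently bound FBO how many bits its depth attachment actually has
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &fboDepthBits);
qDebug() << "FBO depth bits: " << fboDepthBits;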
Now, concerning the texture generation:
if I query the precision of the color buffer, for example the blue channel:
GLint bbits = 0;
glGetIntegerv(GL_BLUE_BITS, &bbits);
qDebug() << "b bits: " << bbits;
it gives me: 8.
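(I assume GL_BLUE_BITS is likewise about whatever framebuffer is bound at that moment, not about my shadow map FBO. If I want the same number for the FBO’s color attachment, would this be the way, again with the FBO bound? Not sure about ES 2.0 here either.)
GLint fboBlueBits = 0;
// blue channel size of color attachment 0 of the currently bound FBO
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      GL_FRAMEBUFFER_ATTACHMENT_BLUE_SIZE, &fboBlueBits);
qDebug() << "FBO blue bits: " << fboBlueBits;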
But again, when I create my depth texture (the one I’ll write the depth into), I call:
glGenTextures(1, &depthTex[0]);
glBindTexture(GL_TEXTURE_2D, depthTex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, shadowMapSize, shadowMapSize, 0, GL_RGBA, GL_FLOAT, 0);
So I’m asking you guys: is my texture actually 32-bit floating point per channel or not?
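Is there a way to ask the texture itself what actually got allocated? On the desktop I was thinking of something like this (glGetTexLevelParameteriv doesn’t seem to exist on ES 2.0 as far as I can tell, and GL_TEXTURE_RED_TYPE should come back as GL_FLOAT if the texture really is floating point):
GLint redSize = 0, redType = 0;
// with the depth texture still bound: query the actual bits and component type of level 0
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redSize);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_TYPE, &redType);
qDebug() << "red bits: " << redSize << " float? " << (redType == GL_FLOAT);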
And most important of all: where do I find specific information on how many bits I get on each platform? Which query should I use? Am I querying this correctly?
Thank you!