As the title says, I have a problem with depth testing :mad: If I use the following fragment shader:
#version 400 core
void main() {
    gl_FragDepth = 0.666; // gl_FragDepth is a built-in output, no declaration needed
}
then I would expect the depth values written to the framebuffer to all be 0.666, but instead they are all 0.666000008583069. At first I thought my code was corrupting the framebuffer data when reading it back, but I can't see how that could happen in the code below:
// here we copy the depth data from the framebuffer to a pixel buffer object
glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPBO);
glReadPixels(0, 0, length, height, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// next, copy the data from the pixel buffer object to an ordinary float array
glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPBO);
GLfloat *src = (GLfloat*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (src)
    memcpy(cpuBuffer, src, length*height*sizeof(GLfloat));
GLboolean testi = glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// finally, let's read some samples
float test0 = cpuBuffer[0];
float test1 = cpuBuffer[1];
float test2 = cpuBuffer[2];
float test3 = cpuBuffer[3];
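Just so the number itself isn't a red herring, here is a quick standalone check (not part of my program) of how 0.666 is stored as a 32-bit float, to compare against the value I read back:

#include <cstdio>

int main() {
    float f = 0.666f;              // nearest 32-bit float to 0.666
    printf("%.15f\n", (double)f);  // prints 0.666000008583069
}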
My second thought, which I still believe, is that the problem comes from converting the 32-bit floats into the 24-bit depth buffer, which is the render format I requested:
int iAttributes[] = {
    WGL_ALPHA_BITS_ARB, 8,
    WGL_COLOR_BITS_ARB, 24,
    WGL_STENCIL_BITS_ARB, 8,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DEPTH_BITS_ARB, 24,
    WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
    WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
    WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
    WGL_SAMPLES_ARB, 8,
    0, 0
};
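To test this hypothesis I tried to reproduce the conversion on the CPU. This is only a rough sketch, assuming the driver converts to 24-bit fixed point as round(d * (2^24 - 1)), which is how I read the spec's float-to-normalized-fixed-point conversion:

#include <cmath>
#include <cstdio>

int main() {
    const double max24 = 16777215.0;   // 2^24 - 1
    float d = 0.666f;                  // the value written by the shader
    // quantize to 24-bit fixed point, then convert back to float,
    // to compare against what glReadPixels with GL_FLOAT returns
    double n = std::round((double)d * max24);
    printf("%.15f\n", n / max24);
}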
If this is the reason, am I forced to use a 32-bit WGL_DEPTH_BITS_ARB, or to make a separate 32-bit renderbuffer (see the sketch below)? Or could I use some trick inside the shader? This should be possible, because normally the fragment shader gets its depth value automatically from the vertex shader's gl_Position.z, and I suppose that value doesn't suffer from this problem.
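For reference, this is roughly what I mean by a separate 32-bit renderbuffer; a minimal sketch with placeholder names (fbo, depthRbo), not yet wired into my code:

// FBO with a 32-bit float depth renderbuffer attached
GLuint fbo, depthRbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, length, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

// depth-only rendering: no color attachment
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}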