gl_FragDepth value and its counterpart in the depth buffer are different

As the title says, I have a problem with depth testing :mad: If I use the following fragment shader


#version 400 core

out float gl_FragDepth;

void main() {
  "gl_FragDepth=0.666;
"
}

I assumed that the depth values written to the framebuffer would all be 0.666, but no, they are all 0.666000008583069. At first I thought that my code corrupted the framebuffer data when reading it back, but I cannot see any way that could happen in the code below:


//here we copy the depth data from the framebuffer to a pixel buffer object
  glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPBO);
  glReadPixels(0, 0, length, height, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
  glBindFramebuffer(GL_FRAMEBUFFER, 0);

//next copy the data from the pixel buffer object to an ordinary float array
  glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPBO);
  GLfloat *src = (GLfloat*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
  memcpy(cpuBuffer, src, length*height*sizeof(GLfloat));
  GLboolean testi = glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

//finally let's read some samples
  float test0=cpuBuffer[0];
  float test1=cpuBuffer[1];
  float test2=cpuBuffer[2];
  float test3=cpuBuffer[3]; 
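
(Not from the original post, just a suggested check.) One way to rule out the PBO/memcpy path would be to read the depth values straight into client memory with no pack buffer bound, before the framebuffer is unbound, and compare against cpuBuffer. A sketch, using std::vector (so <vector> must be included) and the same length and height as above:

//hypothetical sanity check: with no pixel pack buffer bound,
//glReadPixels writes directly into client memory
  std::vector<GLfloat> direct(length*height);
  glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
  glReadPixels(0, 0, length, height, GL_DEPTH_COMPONENT, GL_FLOAT, direct.data());
  //direct[0] should equal cpuBuffer[0]; if it does, the PBO path is not the culprit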

Secondly, I thought, and still think, that the problem is due to converting the 32-bit float into the 24-bit depth buffer, which is the render format I defined:


int iAttributes[]={
  WGL_ALPHA_BITS_ARB, 8,
  WGL_COLOR_BITS_ARB, 24,
  WGL_STENCIL_BITS_ARB, 8,
  WGL_SUPPORT_OPENGL_ARB,GL_TRUE,
  WGL_DEPTH_BITS_ARB, 24,
  WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
  WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
  WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
  WGL_SAMPLES_ARB, 8,
  0, 0
};

If this is the reason, am I forced to use a 32-bit WGL_DEPTH_BITS_ARB or to create a separate 32-bit renderbuffer? Or could I use some trick inside the shader? That should be possible, because normally the fragment shader gets its depth value automatically from the vertex shader’s gl_Position.z, and I suppose that value doesn’t suffer from this problem.
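
(For illustration only, not from the original post.) If a separate higher-precision depth attachment turns out to be necessary, one option is an FBO with a GL_DEPTH_COMPONENT32F renderbuffer. A minimal sketch, where fbo, depthRbo, width and height are placeholder names:

  GLuint fbo, depthRbo;
  glGenFramebuffers(1, &fbo);
  glGenRenderbuffers(1, &depthRbo);

  //32-bit floating-point depth storage
  glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);

  //attach it to the FBO (a colour attachment is still needed separately)
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);
  //then check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE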

You cannot expect to get perfect accuracy from floating-point values. Even if your GLSL shader says “0.666”, that doesn’t mean the value is exactly representable, even in a 32-bit float.

You got back a number which is accurate to within the expected accuracy of a float. You’re not going to get better than that.
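
(Side note, not part of the original reply.) A small host-side check shows where those trailing digits come from: the nearest 32-bit float to 0.666 is 11173626 / 2^24, which prints back exactly the value seen in the depth read-back:

#include <cstdio>

int main() {
  //0.666 = 333/500 has no finite binary expansion, so it is rounded
  //to the nearest representable float, which is 11173626 / 2^24
  printf("%.15f\n", (double)0.666f);           //prints 0.666000008583069
  printf("%.15f\n", 11173626.0 / 16777216.0);  //prints 0.666000008583069
  return 0;
}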

Okay, I understand. So there’s no hope of getting better resolution by choosing a 32-bit depth buffer. But if I used 64-bit floating-point numbers inside the GLSL shader together with the 24-bit depth buffer, then instead of

gl_FragDepth=0.666f; -> got back 0.666000008583069

I could get something like

gl_FragDepth=0.666d; -> got back 0.666000000000069

right? I could test it, but unfortunately I would need to rewrite my test code…

Anyway, using doubles is expensive, so that’s not an option. And I don’t think there’s any need to, because lots of decimals are needed only for addition and subtraction, not for multiplication and division. Unfortunately depth testing involves precisely addition and subtraction, but as we can see, there are about eight accurate decimals in a float, and that should be enough for me (it can cover dimensions from 0.01 metres up to 100000 metres). So my problem lies somewhere else…
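
(A rough back-of-the-envelope check of that reasoning, assuming the depth attachment is 24-bit unsigned-normalized fixed point.) The step size of such a buffer and the spacing of floats near 0.666 are in fact about the same:

#include <cstdio>
#include <cmath>

int main() {
  //smallest increment a 24-bit unsigned normalized depth buffer can hold
  double fixedStep = 1.0 / ((1 << 24) - 1);                   //~5.96e-8
  //spacing between adjacent floats around 0.666 (exponent -1, so 2^-24)
  double floatStep = std::nextafter(0.666f, 1.0f) - 0.666f;   //~5.96e-8
  printf("24-bit fixed-point step: %g\n", fixedStep);
  printf("float spacing near 0.666: %g\n", floatStep);
  return 0;
}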

[QUOTE=mamannon;1282179]I could get something like

gl_FragDepth=0.666d; -> got back 0.666000000000069

right?[/quote]

No. The precision of the result is the lowest precision of any component of the expression. The precision of gl_FragDepth as a variable is float. The precision of the depth buffer is whatever it is, but it is certainly not double. Therefore, the precision you get will be the lowest of these, which will most assuredly not be double.

There aren’t any accurate “decimals” with floats; they’re binary, not decimal.

You may find matters get simpler if you just forget that decimal representation even exists.

And GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, and GL_DEPTH_COMPONENT32 aren’t floating-point, they’re fixed-point. GL_DEPTH_COMPONENT32F is floating-point, although it has less precision than GL_DEPTH_COMPONENT32, as it only has a 24-bit significand (23 bits stored, the leading 1 is implied).
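
(Illustration only; the exact conversion and rounding rules are spelled out in the spec.) A quick sketch of the usual unsigned-normalized round trip for a 24-bit depth buffer shows why 0.666f can come back unchanged even through fixed-point storage:

#include <cstdio>
#include <cmath>

int main() {
  const double maxVal = (1 << 24) - 1;     //2^24 - 1 for a 24-bit buffer
  float written = 0.666f;                  //what the shader wrote
  //store: quantize to the nearest 24-bit unsigned normalized value
  unsigned stored = (unsigned)std::lround(written * maxVal);
  //read back as float, roughly what glReadPixels(..., GL_FLOAT) returns
  float readBack = (float)(stored / maxVal);
  //the quantization error (~2e-8 here) is below half the float spacing,
  //so the value rounds back to the very same float, 0.666000008583069
  printf("written:   %.15f\n", written);
  printf("read back: %.15f\n", readBack);
  return 0;
}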

If you care about precision to the level where e.g. rounding direction matters, you should refer to the specification for the details of what gets converted, when, and how.

I stand corrected. And finally, looking at decimal digits when designing a depth buffer is also stupid because of the logarithmic scale of depth values. But when you get desperate, you make desperate choices…