
Thread: Shadow Mapping Depth Buffer not storing 32bit floats

  1. #1
    Junior Member Newbie | Join Date: Feb 2010 | Posts: 20

    Shadow Mapping Depth Buffer not storing 32bit floats

    Not sure if this should go here or in the shader forum...

    I am having trouble figuring out why my render depth buffer is not coming out as expected. I am attempting to write a shader-based shadow mapper, having been partially successful with an immediate-mode version.

    I have attached three screenshots to illustrate this, along with the GL setup code and the two shaders.

    It appears that my off-screen depth buffer is storing only 8-bit values and not the 32-bit float packed into RGBA that I am attempting to store. This shows up when I capture the depth buffer and write it to a file: the areas with infinite depth have 80 80 80 FF as their hex values, i.e. R, G and B all have the same value and alpha is FF (1.0); I'm assuming 80 is 0.5. This is illustrated by depth.jpg.

    depth_noshader.jpg shows the same logic but without enabling the render buffer, thus rendering to the current screen buffer (basically not calling glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBufferID)). By the looks of the output, I'd say the shader is calculating a floating-point representation in RGBA; the banding seems to indicate reasonably accurate scaling.

    screencapture.jpg is a shot of the final render stage with the depth buffer in the lower right.

    All I can seem to get out of the shader is an alpha effect. All my depth calculations fail, as I suspect the depth buffer doesn't actually contain what I really want to see: the linearly adjusted distance from the light source, written to the depth buffer on the first pass.

    Can anyone see from the code below if there is something wrong with the way the depth buffer is set up? I suspect the problem is there, as when rendering to the actual frame buffer the bluish banding seems consistent with how I would expect a depth bit pattern to progress.

    Setup Render Depth Buffer
    Code :
    private bool SetupDepthMapTexture(int width, int height, int depth)
    {
      // Generate the frame buffer and bind
      Gl.glGenFramebuffersEXT(1, out frameBufferID);
      Gl.glBindFramebufferEXT(Gl.GL_FRAMEBUFFER_EXT, frameBufferID);
     
      // Generate the render buffer and bind
      Gl.glGenRenderbuffersEXT(1, out renderBufferID);
      Gl.glBindRenderbufferEXT(Gl.GL_RENDERBUFFER_EXT, renderBufferID);
     
      // Render buffer depth, width and height setup
      Gl.glRenderbufferStorageEXT(Gl.GL_RENDERBUFFER_EXT, depth, width, height);
     
      // Attach the render buffer to the frame buffer
      Gl.glFramebufferRenderbufferEXT(Gl.GL_FRAMEBUFFER_EXT,
        Gl.GL_DEPTH_ATTACHMENT_EXT, Gl.GL_RENDERBUFFER_EXT, renderBufferID);
     
      // Disable color draws and reads; no color buffer is attached here
      Gl.glDrawBuffer(Gl.GL_NONE);
      Gl.glReadBuffer(Gl.GL_NONE);
     
      // Generate texture and bind
      Gl.glGenTextures(1, out textureBufferID);
      Gl.glBindTexture(Gl.GL_TEXTURE_2D, textureBufferID);
     
      // Allocate texture space of desired format but no data is specified
      Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_DEPTH_COMPONENT32,
        1024, 1024, 0, Gl.GL_DEPTH_COMPONENT, Gl.GL_FLOAT, null);
      Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_NEAREST);
      Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_NEAREST);
      Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP_TO_EDGE);
      Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP_TO_EDGE);
      Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_DEPTH_TEXTURE_MODE, Gl.GL_INTENSITY);
     
      // Attach texture to FBO so we can render to it
      Gl.glFramebufferTexture2DEXT(Gl.GL_FRAMEBUFFER_EXT,
        Gl.GL_DEPTH_ATTACHMENT_EXT, Gl.GL_TEXTURE_2D, textureBufferID, 0);
     
      // Check if FBO creation was successful
      int status = Gl.glCheckFramebufferStatusEXT(Gl.GL_FRAMEBUFFER_EXT);
      if (status != Gl.GL_FRAMEBUFFER_COMPLETE_EXT)
      {
        frameBufferID = renderBufferID = textureBufferID  = -1;
        return false;
      }
     
      // Revert back to fixed pipeline rendering
      Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
      Gl.glBindFramebufferEXT(Gl.GL_FRAMEBUFFER_EXT, 0);
     
      return true;
    }

    Vertex Shader
    Code :
    #version 120
    uniform mat4 mProjection;
    uniform mat4 mView;
    uniform mat4 mModel;
     
    attribute vec3 a_vVertex;
     
    varying vec4 v_vPosition;
     
    void main(void)
    {
      v_vPosition = mView * mModel * vec4(a_vVertex, 1.0);
      gl_Position = mProjection * v_vPosition;
    }

    Fragment Shader
    Code :
    #version 120
    varying vec4 v_vPosition;
     
    vec4 pack (float depth)
    {
      const vec4 c_bias = vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
     
      float r = depth;
      float g = fract(r * 255.0);
      float b = fract(g * 255.0);
      float a = fract(b * 255.0);
      vec4 color = vec4(r, g, b, a);
     
      return color - (color.yzww * c_bias);
    }
     
    void main()
    {
      const float c_LinearDepthConstant = 1.0 / (1.0 - 30.0);
      float linearDepth = length(v_vPosition) * c_LinearDepthConstant;
     
      gl_FragColor = pack(linearDepth);
    }
    Attached thumbnails: depth.jpg, depth_noshader.jpg, screencapture.jpg

  2. #2
    Junior Member Newbie | Join Date: Feb 2010 | Posts: 20

    I've read a few posts stating that the...


    Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_DEPTH_TEXTURE_MODE, Gl.GL_INTENSITY);

    ...may be deprecated. Maybe DEPTH_TEXTURE_MODE is not the way to go. So I am only getting the red component. I have seen it referenced as rrr1, which seems to be about what I am getting: one value repeated three times and an FF, which I suspect translates to 1.0 in GL speak.

    Does this sound familiar to anyone?

    Any suggestions on getting a 32-bit floating-point depth buffer?

  3. #3
    Junior Member Newbie | Join Date: Feb 2010 | Posts: 20

    Anyone? I'm stumped. I've been reading FAQs and other examples on the net, and everything seems to indicate this is correct. Playing with the shadow depths from the unpack and setting colors for ranges, it appears my unpack is returning values from 0 to 1, which seems to indicate the depth buffer texture holds values from 0 to 1. That roughly matches what I am seeing.

    I have ported the pack/unpack methods to a C# program using random floats from 0 to 1, and the logic makes sense and works.
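
    Roughly what that round-trip test looks like (a minimal sketch only; the class and method names are just for illustration, not my exact program):
    Code :
    using System;
     
    class PackRoundTrip
    {
      // fract(x) as in GLSL
      static float Fract(float x) { return x - (float)Math.Floor(x); }
     
      // Mirror of the GLSL pack(): split a 0..1 depth across four 0..1 channels
      static float[] Pack(float depth)
      {
        float r = depth;
        float g = Fract(r * 255.0f);
        float b = Fract(g * 255.0f);
        float a = Fract(b * 255.0f);
        // Subtract each next channel's contribution, as the shader does
        return new float[] { r - g / 255.0f, g - b / 255.0f, b - a / 255.0f, a };
      }
     
      // Mirror of the GLSL unpack(): dot with the bit-shift weights
      static float Unpack(float[] c)
      {
        return c[0]
             + c[1] / 255.0f
             + c[2] / (255.0f * 255.0f)
             + c[3] / (255.0f * 255.0f * 255.0f);
      }
     
      static void Main()
      {
        Random rng = new Random();
        for (int i = 0; i < 10; ++i)
        {
          float d = (float)rng.NextDouble();
          float back = Unpack(Pack(d));
          Console.WriteLine("{0} -> {1} (error {2})", d, back, Math.Abs(d - back));
        }
      }
    }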

    I'm missing something really trivial but I can't see it.

    Can anyone throw me a bone?

    I am starting to think it's in my second-pass shaders.

    Note: vAdj and the multiply by vShadowDepth are just so I can see something. The intent of vAdj is to multiply by all 1s (white) when in the light and by (0, 1, 0, 1) (green) when in shadow. The vShadowDepth factor in the final color calculation will eventually come out.

    I still think something is wrong in the depth texture, since all I see is rrr1, so I think there are two issues at play here.

    2nd pass vertex
    Code :
    #version 150
     
    uniform mat4 mProjection;                                                         // to clip space  (projection)
    uniform mat4 mView;                                                               // to view space  (camera)
    uniform mat4 mModel;                                                              // to world space (world)
     
    uniform mat4 mLightProjection;                                                    // to clip space  (projection)
    uniform mat4 mLightView;                                                          // to view space  (camera)
     
    uniform vec3 vLightSourcePosition;                                                // world space
    uniform vec3 vCameraSourcePosition;                                               // world space
     
    attribute vec3 a_vVertex;                                                         // incoming vertex (object space)
    attribute vec3 a_vNormal;                                                         // incoming normal (object space)
    attribute vec2 a_vTexel;                                                          // incoming texel  (object space)
    /* ------------------------------------------------------------------------------------------ */
    // The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
    // Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
    const mat4 ScaleMatrix = mat4(
      0.5, 0.0, 0.0, 0.0, 
      0.0, 0.5, 0.0, 0.0, 
      0.0, 0.0, 0.5, 0.0, 
      0.5, 0.5, 0.5, 1.0
    );
     
    varying vec4 vPosition;
     
    varying vec3 vVertex;                                                             // object space
    varying vec3 vNormal;                                                             // object space
    varying vec2 vTexCoord;
     
    varying vec3 vLightPosition;                                                      // object space
    varying vec3 vCameraPosition;                                                     // object space
     
    void main(void)
    {
      vVertex = a_vVertex;                                                            // object space
      vNormal = a_vNormal;                                                            // object space
      vTexCoord = a_vTexel;
     
      vPosition = mLightProjection * mLightView * mModel * vec4(a_vVertex, 1.0);      // view space (light)
      vPosition = ScaleMatrix * vPosition;                                            // offset to UV space
     
      // Calculate the positions from the model using the inverse of the model matrix
      //  Move from model space to local (object space)
      vLightPosition = vec3(inverse(mModel) * vec4(vLightSourcePosition, 1));          // object space
      vCameraPosition = vec3(inverse(mModel) * vec4(vCameraSourcePosition, 1));        // object space
     
      // Calculate the vertex position by combining the model/view(camera)/projection matrices
      // This results in a clip space coordinate (vec4)
      gl_Position = mProjection * mView * mModel * vec4(a_vVertex, 1);                 // clip space
    }

    2nd pass fragment
    Code :
    #version 120
     
    #ifdef GL_ES
    precision highp float;
    #endif
     
    // Linear depth calculation.
    // You could optionally upload this as a shader parameter.
    const float c_Near = 1.0;
    const float c_Far = 30.0;
    const float c_LinearDepthConstant = 1.0 / (c_Far - c_Near);
     
    uniform sampler2D iColorTextureUnit;
    uniform sampler2D iShaderMapTextureUnit;
     
    uniform vec4 vLightAmbientColor;
    uniform vec4 vLightDiffuseColor;
    uniform vec4 vLightSpecularColor;
    uniform vec3 vLightAttenuation;
     
    uniform float fMaterialShininess;
    /* ------------------------------------------------------------------------------------------ */
    varying vec3 vVertex;                                                        // object space
    varying vec3 vNormal;                                                        // object space
     
    varying vec2 vTexCoord;
     
    varying vec3 vLightPosition;                                                 // object space
    varying vec3 vCameraPosition;                                                // object space
     
    varying vec4 vPosition;
     
    float unpack (vec4 color);
     
    void main(void)
    {
      // Interpolated normal across the face
      vec3 vNewNormal = normalize(vNormal);
      vec4 vDecalColor = texture2D(iColorTextureUnit, vTexCoord);
     
      vec3 vLightDirection = vLightPosition - vVertex;
      float fDistance = length(vLightDirection);
      vLightDirection = normalize(vLightDirection);
     
      // Possible optimization: Move to vertex shader and pass interpolated - May decrease quality
      float fAttenuation = 1.0 / (vLightAttenuation.x +                          // constant
                                  vLightAttenuation.y * fDistance +              // linear
                                  vLightAttenuation.z * fDistance * fDistance);  // quadratic
     
      // Dot product on light vector with the interpolated normal
      float dotLightDirection = max(0.0, dot(vNewNormal, vLightDirection));
     
      float fSpecularPower = 0.0;
      if(dotLightDirection >= 0.0 && fMaterialShininess > 0.0)
      {
        // Does camera position need to be transformed?
        vec3 vCameraDirection = normalize(vCameraPosition - vVertex);
        vec3 vHalfVector = normalize(vLightDirection + vCameraDirection);
     
        float dotHalfVector = max(0.0, dot(vNewNormal, vHalfVector));
        fSpecularPower = pow(dotHalfVector, fMaterialShininess);
      }
     
      // Calculate shadow
      vec3 vShadowCoord = vPosition.xyz / vPosition.w;
      vShadowCoord.z = length(vVertex - vLightPosition) * c_LinearDepthConstant;
      float vShadowDepth = unpack(texture2D(iShaderMapTextureUnit, vShadowCoord.xy));
      vec4 vAdj = vec4(1, 1, 1, 1);
      if(vShadowCoord.z > vShadowDepth)
      {
        vAdj = vec4(0, 1, 0, 1);
      }
      // Calculate shadow
     
      // Diffuse color multiplied by the dot product of the light/normal calculation
      vec4 vAmbient  = vLightAmbientColor;
      vec4 vDiffuse  = vLightDiffuseColor * dotLightDirection * fAttenuation;
      vec4 vSpecular = vLightSpecularColor * fSpecularPower * fAttenuation;
      vec4 vLightVal = (vAmbient + vDiffuse + vSpecular) * vAdj * vShadowDepth;
     
      gl_FragColor = vDecalColor * vLightVal;
    }
     
    // Unpack an RGBA pixel to floating point value.
    float unpack (vec4 color)
    {
      const vec4 bitShifts = 
        vec4(1.0,
             1.0 / 255.0,
             1.0 / (255.0 * 255.0),
             1.0 / (255.0 * 255.0 * 255.0));
      return dot(color, bitShifts);
    }

  4. #4
    Junior Member Newbie | Join Date: Feb 2010 | Posts: 20

    OK... got part of it. My unpack was still scaling by an inverse; I removed the 1.0 / (far - near) on pass 2 for unpacking to revert to the original distances.

    The issue still remains of why there is only rrr1 in the depth buffer.

  5. #5
    Dark Photon | Senior Member, OpenGL Guru | Join Date: Oct 2004 | Location: Druidia | Posts: 3,193

    Quote Originally Posted by bmcclint_gl:
    The issue still remains of why there is only rrr1 in the depth buffer.
    Your depth buffer, if it is a standard depth buffer, only has a single value per sample.

    The rrr1 is just how GLSL populates the vec4 returned from the texture sampling function. Pre-GLSL 1.3, the vec4 return value is populated as follows for these DEPTH_TEXTURE_MODE assignments:

    * INTENSITY = rrrr
    * LUMINANCE = rrr1
    * ALPHA = 000r
    * RED = r001

    where "r" is the depth value (or depth comparison value). In GLSL 1.3+, DEPTH_TEXTURE_MODE is ignored and GLSL behaves as if it is always set to LUMINANCE.

  6. #6
    Junior Member Newbie | Join Date: Feb 2010 | Posts: 20

    Quote Originally Posted by Dark Photon:
    Your depth buffer, if it is a standard depth buffer
    Not quite sure I understand this statement. What else could it be? In the definition above of the FBO, RBO and texture, I define it as a depth buffer and attach it to the depth attachment point. When I read it back to create a screenshot, I read from the depth buffer attachment. Am I missing something about a depth buffer?

    I was thinking about this... in the shaders I am manually calculating the distance from the fragment to the light source, so I shouldn't really need a depth buffer at all. I could encode my linear floating-point depths using the pack/unpack methods, store them in a color buffer (or any buffer for that matter), and reconstitute them in the fragment shader on the other side. The 'depth' buffer seems to be irrelevant. Depth seems to be the buzzword for shadow mapping, but ultimately I don't think the type of off-screen render buffer matters; it's simply a transport mechanism for data from shader to shader. Something like the rough sketch below is what I have in mind.
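
    (A rough sketch only, reusing the same Tao-style calls as my setup code above; colorBufferID is just a placeholder name, and this assumes the shadow map I sample in pass 2 would be this color texture rather than the depth attachment.)
    Code :
    // Color texture that will receive the pack()ed linear depth
    Gl.glGenTextures(1, out colorBufferID);
    Gl.glBindTexture(Gl.GL_TEXTURE_2D, colorBufferID);
    Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA8,
      width, height, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, null);
    Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_NEAREST);
    Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_NEAREST);
     
    // Attach it as the color target of the FBO and draw to it
    // (instead of glDrawBuffer(GL_NONE) as in the depth-only setup)
    Gl.glFramebufferTexture2DEXT(Gl.GL_FRAMEBUFFER_EXT,
      Gl.GL_COLOR_ATTACHMENT0_EXT, Gl.GL_TEXTURE_2D, colorBufferID, 0);
    Gl.glDrawBuffer(Gl.GL_COLOR_ATTACHMENT0_EXT);
     
    // A depth renderbuffer stays attached (as in the setup above) so depth
    // testing still works; the "shadow map" becomes this color texture.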

    Thoughts?

  7. #7
    Dark Photon | Senior Member, OpenGL Guru | Join Date: Oct 2004 | Location: Druidia | Posts: 3,193

    Quote Originally Posted by bmcclint_gl:
    Quote Originally Posted by Dark Photon
    Your depth buffer, if it is a standard depth buffer
    Not quite sure I understand this statement. What else could it be?
    Quote Originally Posted by bmcclint_gl
    I could encode my linear depth floating points using the pack/unpack methods, store in a color buffer, or any buffer for that matter...
    Your 2nd quote answers your 1st. By "standard" I'm referring to a texture or renderbuffer that has format GL_DEPTH_STENCIL or GL_DEPTH_COMPONENT ... one which stores 0..1 window-space depth values written by the standard depth pipeline.
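
    (For reference, and assuming a standard symmetric perspective projection with near plane n, far plane f, and the default glDepthRange of [0, 1], the value stored for a fragment at eye-space depth d = -z_eye is z_win = f * (d - n) / (d * (f - n)). That is non-linear in d, which is why it is not interchangeable with the linear distance that the pack() shader writes.)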
