Clamping of depth buffer

I’m trying to implement cubemap shadows by rendering the scene from six directions, but the depth texture I read back clamps to 1 even though I’m using a float texture.

Here’s the vertex/fragment shader I use to render my scene; I’m trying to write out the world-space distance from the light. If I use the default depth value instead, the images I read back and write out look correct.

Here’s where I attach the depth renderbuffer to the framebuffer I’m rendering to.


glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, 1024, 1024);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBufferId);

And here’s where I write out the distance to my light in world space. If I hard-code the outgoing depth to 2.0, I still get 1.0 back for all the values; if I set it to something like 0.5, the values read back as 0.5, so it seems to be clamping.


// vertex shader
#version 130
uniform mat4 objToWorld;
uniform mat4 cameraPV;
in vec3 vertex;
out vec3 world;
void main() {
  world = (objToWorld * vec4(vertex,1.0)).xyz;
  gl_Position = cameraPV * objToWorld * vec4(vertex,1.0);
}

// fragment shader
#version 130
uniform vec3 origin;
in vec3 world;
void main() {
  gl_FragDepth = 2.0; // was: distance(origin,world)
}

gl_FragDepth always seems to be clamping to 1. Should I not be using gl_FragDepth? Do I need to do something to disable the clamping? Or is GL_DEPTH_COMPONENT32 not an actual floating-point format?

GL_DEPTH_COMPONENT32 is an unsigned normalized format and will always be clamped to [0, 1].

GL_DEPTH_COMPONENT32F is a floating point format. But, per ARB_depth_buffer_float and the GL3 core spec, it will also always be clamped to [0, 1].

Only the NV_depth_buffer_float spec allows unclamped floating point depth values.
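
For reference, a minimal sketch of allocating the floating-point variant, reusing the names from the snippet above; even with this internal format, the depth values that land in the buffer are clamped to [0, 1] under core GL:


glBindRenderbuffer(GL_RENDERBUFFER, depthBufferId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, 1024, 1024); // float format, still clamped on write
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBufferId);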

You don’t need depth values outside of the [0,1] range in order to use cubemap shadows. What you store in the depth buffer are “normalized” depth values: 0 corresponds to your near clip plane’s distance and 1 corresponds to your far clip plane’s distance.

I don’t know why you want to store values outside of [0,1] in your depth texture. Can you explain it?

Well, first, I’ve had a hard time finding many examples using cube maps in general. The most popular result on Google is NVIDIA’s old example from 1991. I have yet to find ANY example that actually uses the GLSL shadowCube(…) function at all, so I’m kind of trying to figure this out on my own.

I read that Stalker (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html) experimented with a depth cube map, but they stored their distance from the light in an R32F color texture and not the depth texture itself. I guess I could also do this, but then I wouldn’t use the shadowCube() function and I’d do the lookup myself, right?

I was trying to store the distance to the light in world space in my depth buffer and then use shadowCube() in my fragment shader passing it the vector of the fragment to the light and the distance (in world space) between the light and the fragment.


vec3 lightVec = lightPosW - fragPosW;
float lightIntensity = shadowCube(depthCubeMap, vec4(normalize(lightVec), length(lightVec))).r; // not sure why a vec4 is returned

I guess if I were to use shadowCube() I’d have to normalize the distance between the light and the fragment myself to fit between 0 and 1, matching what ended up in the depth buffer. Is this how shadowCube() is supposed to function? Only on normalized depth values? Are there any examples of this?

Also, reading EXT_gpu_shader4 I can’t make out what these shadow functions return. Why a vec4 and not a float?

The most popular result on Google is NVIDIA’s old example from 1991.

I’ll assume you meant 2001, since NVIDIA didn’t even exist in 1991.

Is this how shadowCube() is supposed to function?

Despite the frankly ridiculous naming of the shadow functions and samplers, they do not make shadows. All they do is perform comparison-based texel fetches, rather than getting the actual pixel values. What that comparison means is entirely up to you.

So there is no way it “is supposed to function,” outside of performing comparison testing. It’s a tool; how you use it is entirely up to you.
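
To make that concrete, here is a rough sketch of what a single GL_LEQUAL shadow comparison boils down to (real lookups may also filter, i.e. average, several such comparisons):


// Rough model of one GL_LEQUAL shadow comparison: you supply the reference
// value, the texture supplies the stored depth, and you get 0.0 or 1.0 back.
float shadowCompare(float referenceDepth, float storedDepth)
{
    return (referenceDepth <= storedDepth) ? 1.0 : 0.0;
}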

Personally, I would say that “clamp-and-normalize” is a reasonable approach.

Right, 2001. Woops.

So can I just do this normalization by hand on the distance in both fragment shaders?


uniform float zNear; // near/far distances used for normalization; same values in both shaders
uniform float zFar;

float zNormalize(float d)
{
    return (d - zNear) / (zFar - zNear);
}

void main() // in shadow-map fragment shader
{
  ...
  gl_FragDepth = zNormalize(distance(origin, world));
}

void main() { // in "beauty" fragment shader
  vec3 lightVec = lightPosW - fragPosW;
  float lightIntensity = shadowCube(depthCubeMap, vec4(normalize(lightVec), zNormalize(length(lightVec)))).r;
}

Is that the same normalization that occurs with the perspective projection? Also, can I use any channel returned from these shadow functions? Is there any difference between, say, .r and .a, or are they treated like any single-channel texture such as luminance, where they all return the same value? Thanks.

Is that the same normalization that occurs with the perspective projection?

Does it matter? You’re writing a value and then comparing that against a value that you compute in the same way.

The only reason you’re normalizing it at all is because the ARB hasn’t gotten around to realizing that clamped floating-point depth buffers are stupid and counterproductive. It’s not a solution; it’s a workaround.

What matters isn’t whether it matches what the perspective projection does. What matters is that you compute the value the same way when you write it and when you read it. The normalization is there as a workaround for a bad bit of the spec. The near and far values you pick should be whatever best makes use of the available bit depth. How much they resemble the perspective projection is irrelevant.

Also, can I use any channel returned from these shadow functions?

Well, I was going to say that the texture functions for shadow lookups return floats, but you’re still using the old-style gpu_shader4 stuff, which needlessly returns a vec4. I’d stick with .x/.r/.s, just to be safe.
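
If you do move to core GLSL 1.30 style, a minimal sketch of what that lookup could look like (the in variables here are illustrative, not from your code; the texture() overload for samplerCubeShadow returns a plain float):


#version 130
uniform samplerCubeShadow depthCubeMap;
in vec3 lightVec;   // direction used to index the cube map, as in the earlier snippets
in float refDepth;  // normalized [0,1] reference value, computed the same way it was written
out vec4 fragColor;

void main() {
  // .xyz picks the face/texel, .w is compared against the stored depth
  float lit = texture(depthCubeMap, vec4(normalize(lightVec), refDepth));
  fragColor = vec4(vec3(lit), 1.0);
}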

Without a really clear idea of how this cube map needs to be configured, I’m not sure it’s even set up correctly, since I get nothing but black right now.

First, I create and set up my texture…


glGenTextures(1, &depthCubeMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);

Then I render each face. glReadPixels seems to give correct results.


// render...
// copy into correct face
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT32, shadowXres, shadowYres, 0, GL_LUMINANCE, GL_FLOAT, 0);
glCopyTexSubImage2D(faceTarget[i], 0, 0, 0, 0, 0, shadowXres, shadowYres);
glReadPixels(...);

Then I set the texture before sending it to the shader.


glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
shader->setUniformValue("depthCubeMap", GL_TEXTURE0);

Something seems to be wrong, as even if I set my distance to a really low value, shadowCube(…) still returns 0.


lightIntensity = shadowCube(depthCubeMap, vec4(vec3(0,1,0), 0.01)).x;

The values I’m getting back from glReadPixels imply that most of the image values are 1 and all of them are greater than 0.1. Am I doing something wrong setting up the texture or connecting it to the shader?

First, I create and set up my texture…

Where’s the rest of it? You know, the part where you set up the comparison test (I didn’t link you to that for my health, you know ;))? You know, the part where you set up your GL_TEXTURE_COMPARE_MODE and your GL_TEXTURE_COMPARE_FUNC?

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT32, shadowXres, shadowYres, 0, GL_LUMINANCE, GL_FLOAT, 0);

So, you too have fallen prey to a common OpenGL blunder. It’s understandable, and yet another reason why glTexStorage is the best idea the ARB has had since explicit_attrib_location.

You have to use GL_DEPTH_COMPONENT as the format, not GL_LUMINANCE. I know you’re not actually passing any data, but implementations are still required to check the format against the internal format and raise an error in this case. And you should use GL_UNSIGNED_INT as the type, rather than GL_FLOAT.

But in any case, you shouldn’t be calling this every frame. You call it once, when you create the texture (technically 6 times). You then use the SubImage functions to put data into it.
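
As a rough sketch of that split, reusing the names from your snippets (the render call is a placeholder):


// One-time setup: allocate storage for all six faces, passing no data.
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT32,
                 shadowXres, shadowYres, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

// Every frame: render each face's depth, then copy it into the cube map
// (for a depth-format texture the copy reads from the depth buffer).
for (int i = 0; i < 6; ++i) {
    // ... render the scene from the light for face i ...
    glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, 0, 0, 0, 0, shadowXres, shadowYres);
}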

Yes, they used a simple one-component color texture because wide hardware support for depth cube maps was not available at that time. But that’s slower, as you cannot take advantage of the hardware depth comparison (i.e. shadow sampling), and you also need a color buffer in addition to your depth buffer in order to render the shadow map itself. In the case of depth cube maps you don’t store the distance, but the “normalized” depth value of the fragment. This does not need values outside of the [0,1] interval. This is how non-cubemap shadow maps work as well. I would suggest first trying to implement regular 2D shadow maps, and then you’ll understand how to do cubemap shadow maps.


shader->setUniformValue("depthCubeMap", GL_TEXTURE0);

Not sure if your shader wrapper does some magic here, but GL_TEXTURE0 is 0x84C0, whereas you probably want to pass 0, because the texture is bound to texture unit 0?
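
In plain GL terms (leaving aside whatever your shader wrapper does), the sampler uniform wants the texture unit index, not the GL_TEXTUREn enum and not the texture object name. A minimal sketch, with shaderProgram standing in for your program handle:


glActiveTexture(GL_TEXTURE0);                      // select texture unit 0
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);  // bind the cube map to that unit
glUseProgram(shaderProgram);
glUniform1i(glGetUniformLocation(shaderProgram, "depthCubeMap"), 0); // 0 = unit index, not 0x84C0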

Sorry, I didn’t mean to gloss over your link. Updating glTexImage2D to the correct format fixed the binding issue.


        glGenTextures(1, &depthCubeMap);
        glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
        glBindTexture(GL_TEXTURE_CUBE_MAP, 0);


glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT32, shadowXres, shadowYres, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

Yes, they used a simple one-component color texture because wide hardware support for depth cube maps was not available at that time. But that’s slower, as you cannot take advantage of the hardware depth comparison (i.e. shadow sampling), and you also need a color buffer in addition to your depth buffer in order to render the shadow map itself. In the case of depth cube maps you don’t store the distance, but the “normalized” depth value of the fragment. This does not need values outside of the [0,1] interval. This is how non-cubemap shadow maps work as well. I would suggest first trying to implement regular 2D shadow maps, and then you’ll understand how to do cubemap shadow maps.

I’ve implemented regular 2D shadow maps before. I feel most of my issues revolve around the differences between them and cube maps. I’m doing the normalization myself now to fit into the [0,1] interval. With 2D shadow maps I’d be using a fixed matrix transform, but I’d like to avoid six of those for my cube map and just do the lookup with the vector between the light and the fragment.

I’ve placed glGetError calls throughout my code to make sure everything executes correctly, and I believe I’ve resolved all the logged errors, at least.


glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
shader->setUniformValue("depthCubeMap", depthCubeMap); // was GL_TEXTURE0

What I don’t understand now, though, is that no matter what I send to my depthCubeMap sampler, I seem to get zero back regardless of the value. Wouldn’t values beyond [0,1] be valid lookups that should return true or false?


lightIntensity = lightIntensity * shadowCube(depthCubeMap, vec4(vec3(0,1,0), 100)).x;
OR
lightIntensity = lightIntensity * shadowCube(depthCubeMap, vec4(vec3(0,1,0), -100)).x;

Thanks.

I just found that it doesn’t matter if I set my texture comparison function to GL_ALWAYS; I still seem to get zero back. It makes me wonder if something else is wrong with my configuration, but I don’t know what. I’m assigning the texture to a uniform of type samplerCubeShadow. The location I get back for this uniform is greater than zero, so I assume the shader accepted it and is using it. I have the texture bound and GL_TEXTURE_CUBE_MAP enabled, even though I thought that wasn’t required for shaders. Is there anything else I could do to see if everything is set up correctly?


        // depth cube map
        glGenTextures(1, &depthCubeMap);
        glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
        glBindTexture(GL_TEXTURE_CUBE_MAP, 0);


glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_NEVER);
OR
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_ALWAYS);