Depth buffer to color...

Hello,

I am currently trying to read the GL Z-buffer into a color texture and then save it as a picture in the BMP format.

But right now what I obtain is a perfectly black picture!
So I would like to know where the mistake(s) in my code are.

So, first, I create a texture id for the depth buffer texture, called “shadowTex”

GLuint shadowTex;
glGenTextures(1, &shadowTex);

Then I create a texture with the depth buffer data:

glBindTexture (GL_TEXTURE_2D, shadowTex);
      glTexImage2D (GL_TEXTURE_2D,
                    0,
                    3,
                    shadowMapSize,
                    shadowMapSize,
                    0,
                    GL_DEPTH_COMPONENT24,
                    GL_UNSIGNED_BYTE, NULL);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

Now, I have some questions
I set the depth buffer's depth to 24 bits with GL_DEPTH_COMPONENT24, so the size of each depth datum is 3 bytes. I would like to know whether I am mistaken when I say: the texture's data type is GL_UNSIGNED_BYTE, so the depth buffer data is clamped to the [0, 255] interval?

Then I create the bitmap picture. I don't show all the code (the first part concerns the header of the picture file).
First I read the texture data with the glReadPixels function and put it in an array (I allocate the necessary memory beforehand, of course), and then I write it to the bitmap file:

for(int i=0;i<height;i++)
    {
        for(int j=0;j<width;j++)
        {
            glReadPixels( j, i, 1, 1, GL_DEPTH_COMPONENT24, GL_UNSIGNED_BYTE, pictData );
        }
    }
    for(int i=0;i<height;i++)
    {
        for(int j=0;j<width;j++)
        {
            fwrite(pictData + 1*(i*width + j), 1*sizeof(char),1,picture); // R component
            fwrite(pictData + 1*(i*width + j), 1*sizeof(char),1,picture); // G component
            fwrite(pictData + 1*(i*width + j), 1*sizeof(char),1,picture); // B component
        }
        for(int j=0;j<nb0;j++) fwrite(&title[2],sizeof(char),1,picture); // padding zeros at the end of each row
    }
    
    fclose(picture);

That's all; there are no segmentation faults, but the picture is entirely black!

Thank you for your help.

There are several errors in your code:

  1. The texture you created will not contain the content of the depth buffer. Such a texture must be initialized using glCopyTexImage2D with one of the GL_DEPTH_COMPONENT* internal formats. Additionally, for what you wish to do there is no need to copy the content to a texture (see below).

  2. glReadPixels reads values from the framebuffer, not from the texture.

  3. For glReadPixels to read from the depth buffer, it should be given the GL_DEPTH_COMPONENT format; it will probably fail with the GL_DEPTH_COMPONENT24 enum, which is an internal format, not a pixel format.

  4. The fragment of code you showed also reads every pixel into the start of the pictData array, so each read overwrites the previous one.

The entire block that tries to read the depth buffer can be replaced by a single call, without any need to create a texture:
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pictData);
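
For example, a minimal sketch of how the read and the BMP row loop could look (just a sketch: it assumes pictData is an unsigned char array of width*height bytes and that the BMP header has already been written to picture):

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // avoid row padding in the returned data
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pictData);

    for(int i=0;i<height;i++)
    {
        for(int j=0;j<width;j++)
        {
            unsigned char d = pictData[i*width + j]; // one depth value, scaled to [0, 255]
            fwrite(&d, 1, 1, picture); // B
            fwrite(&d, 1, 1, picture); // G
            fwrite(&d, 1, 1, picture); // R (grey pixel, so the channel order does not matter)
        }
        unsigned char zero = 0;
        for(int j=0;j<nb0;j++) fwrite(&zero, 1, 1, picture); // BMP rows are padded to a multiple of 4 bytes
    }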

Thank you very much!!^^

I was really confused! I had forgotten that glReadPixels reads data from the framebuffer!
Now it works!

Nevertheless, I still need to put the depth buffer into a texture.

is this correct now:

      glBindTexture (GL_TEXTURE_2D, shadowTex);
      glTexImage2D (GL_TEXTURE_2D,
                    0,
                    GL_DEPTH_COMPONENT,
                    shadowMapSize,
                    shadowMapSize,
                    0,
                    GL_DEPTH_COMPONENT,
                    GL_UNSIGNED_BYTE, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

Moreover, if it works, is the depth buffer resized to shadowMapSize*shadowMapSize without cutting data out of the texture? In fact the screen size is 800*600, which is also the size of the depth buffer; but the size of the shadowTex texture is 512*512 (it must be a power of 2).
I need this in order to do shadow mapping, with GLSL shaders.

Thank you.


is this correct now:

This is not correct. It will create a depth texture, however it will contain random values. You need to use glCopyTexImage2D instead of glTexImage2D if you wish to fill it with data from the current depth buffer.


Moreover, if it works, is the depth buffer resized to shadowMapSize*shadowMapSize without cutting data out of the texture?

The texture creation functions do not resize anything. In the case of glCopyTexImage2D this means that you will get a texture containing a 512x512 subrectangle of the depth buffer. For shadowmap purposes you need to set the viewport to cover that 512x512 subrectangle, or use the FBO extension to render directly into the texture.
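
A minimal sketch of the viewport approach (only a sketch; it assumes the scene is rendered from the light's point of view right before the copy, and uses the 800x600 window size mentioned above):

    glViewport(0, 0, shadowMapSize, shadowMapSize); // draw the depth pass into a 512x512 corner
    // ... render the scene from the light's point of view here ...
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, shadowMapSize, shadowMapSize, 0);
    glViewport(0, 0, 800, 600); // restore the window viewport for the normal pass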

Ok thank you very much for your reply.
But there is still something that I don’t understand:
How can I know whether the depth buffer is the current GL_READ_BUFFER, according to glCopyTexImage2D's specification:

“glCopyTexImage2D defines a two-dimensional texture image with pixels from the current GL_READ_BUFFER.”

thank you.

Originally posted by dletozeun:

How can I know whether the depth buffer is the current GL_READ_BUFFER, according to glCopyTexImage2D's specification:

That setting is not relevant if you create the texture using one of the DEPTH_COMPONENT internal formats. Such textures are a special case and always have their data taken from the depth buffer. Actually, this internal format is the only way in which you can use glCopyTexImage2D to create a texture that contains data from the depth buffer.

Ok, thanks a lot, but glCopyTexImage2D does not seem to support the GL_DEPTH_COMPONENT format according to its specification:
glCopyTexImage2D specs

But when I search for example code on Google I see that people use the GL_DEPTH_COMPONENT format with this function and it works! So I tried this:

   glBindTexture (GL_TEXTURE_2D, shadowTex);
      glCopyTexImage2D (GL_TEXTURE_2D,
                    0,
                    GL_DEPTH_COMPONENT,
                    0,
                    0,
                    shadowMapSize,
                    shadowMapSize,
                    0);
                    
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

But the shadows still don't appear; maybe the issue is further along in my code or in the shader code. So, are these lines correct?

thank you.

Originally posted by dletozeun:
Ok, thanks a lot, but glCopyTexImage2D does not seem to support the GL_DEPTH_COMPONENT format according to its specification:

The documentation you are referring to is from an old OpenGL version in which depth textures were not supported.


But the shadows still don't appear; maybe the issue is further along in my code or in the shader code. So, are these lines correct?

They should be correct, although on some implementations it is better to use a format with an explicit depth (e.g. GL_DEPTH_COMPONENT16). The problem is probably somewhere else. To use the shadowmap compare functionality, which is often used to generate the shadows, you also need to set

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);

to enable it, and you also need to use a shadow2D sampler inside your shaders to sample from a texture with the compare functionality enabled.
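
For example, a minimal setup sketch combining the copy and the compare parameters (only a sketch; GL_DEPTH_COMPONENT24 and GL_LEQUAL are common choices here, not the only possible ones, and GL_DEPTH_COMPONENT16 works the same way):

    glBindTexture(GL_TEXTURE_2D, shadowTex);
    // copy the current depth buffer into the texture, using an explicitly sized depth format
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 0, 0, shadowMapSize, shadowMapSize, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    // with the compare mode enabled, a shadow sampler returns the 0/1 comparison result
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);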

Aah, fortunately you are here! ^^
The OpenGL version is not stated in that spec, so I could have searched for a long while!

I suppose that:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);

enables the comparison between the current fragment depth and the depth texture values given to the fragment shader?

Yes, I use the sampler2DShadow type as a uniform variable, and I give it the depth texture ID with:

glUniform1iARB(ShadowMapLoc, shadowTex);

But now I am not sure that the shader code is correct. I took this code from the Orange Book, but it already contains some crass syntax errors! So it is possible that the code does not generate shadows correctly…
It is the only GLSL code that I have found for setting up shadow mapping…

this is the vertex shader:

varying vec3 normal,lightDir,halfVector;
varying vec4 diffuse; // written in main() below; must match the declaration in the fragment shader

varying vec4 ShadowCoord;

uniform mat4 LightModelViewProjectionMatrix;

void main()
{
	vec4 texCoord;	

	normal = normalize(gl_NormalMatrix * gl_Normal);
	lightDir = normalize(vec3(gl_LightSource[0].position));
	halfVector = normalize(gl_LightSource[0].halfVector.xyz);

	diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
	
	texCoord = LightModelViewProjectionMatrix * gl_Vertex;
	ShadowCoord = vec4(texCoord / texCoord.w);

	gl_Position = ftransform();
} 

And the fragment shader:

varying vec4 diffuse,ambient;
varying vec3 normal,lightDir,halfVector;

uniform vec4 skyColor;
uniform vec4 groundColor;

uniform sampler2DShadow ShadowMap;
varying vec4 ShadowCoord;


float lookup(in float x, in float y)
{
	float depth = shadow2DProj(ShadowMap, vec4(ShadowCoord)).r;
	return(depth != 1.0 ? 0.0 : 1.0);
}

void main()
{
	vec3 n,halfV;
	float NdotL,NdotHV;
	float ambientBlendFactor;
	vec4 color;
	float shadeFactor;
		
	n = normalize(normal);

	ambientBlendFactor=0.5 + 0.5*n.y;
	color = ambientBlendFactor*skyColor + (1.0-ambientBlendFactor)*groundColor;
		
	NdotL = max(dot(n,lightDir),0.0);
	
	shadeFactor = lookup(0.0, 0.0);

	color += shadeFactor * diffuse * NdotL;
	halfV = normalize(halfVector);
	NdotHV = max(dot(n,halfV),0.0);
	color += gl_FrontMaterial.specular * gl_LightSource[0].specular * pow(NdotHV, gl_FrontMaterial.shininess);
	
        gl_FragColor = color;
}

This shader sets up per-pixel lighting with a skylight as the ambient color.

Sorry for getting away from the original topic…
Thanks a lot!

Originally posted by dletozeun:
[b]
I suppose that:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);

enables the comparison between the current fragment depth and the depth texture values given to the fragment shader?
[/b]
It enables a comparison between the depth texture value and the third texture coordinate used for sampling that texture. If you use, for example, shadow2D( foo.xyz ), then the depth from the texture will be compared with foo.z. It is your task to ensure that foo.z contains the distance of the currently rendered fragment from the light source.

Originally posted by dletozeun:
[b]
Yes, I use the sampler2DShadow type as a uniform variable, and I give it the depth texture ID with:

glUniform1iARB(ShadowMapLoc, shadowTex);

[/b]
That is not correct. GLSL sampler uniforms do not contain texture identifiers; they contain the index of the texture unit to which the texture is bound.
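
For example (a sketch only; texture unit 0 is an arbitrary choice here):

    glActiveTexture(GL_TEXTURE0);            // select texture unit 0 (or glActiveTextureARB)
    glBindTexture(GL_TEXTURE_2D, shadowTex); // bind the shadowmap to that unit
    glUniform1iARB(ShadowMapLoc, 0);         // pass the unit index, not the texture id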

Remove the texCoord / texCoord.w from the vertex shader. shadow2DProj already does this division per pixel, so you will get more correct interpolation if you do not divide in the vertex shader.

There are many things that can go wrong with shadowmaps. I would suggest that you start with a shader using a single ordinary 2D texture that is projected into the scene in the same way the shadowmap is. This will allow you to see whether the texture is projected correctly (e.g. to check that the projection matrices are correct). If you are sure that the shadowmap image is correct, you can use the shadowmap texture with the depth compare mode disabled and bound to a sampler2D instead of the ordinary texture I mentioned before. This way you can visualize the stored depth values to see whether they end up where they should be.
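
For that visualization step, a minimal sketch of what has to change (assuming the same shadowTex object is reused):

    // temporarily disable the depth comparison so the raw depth values can be
    // sampled through an ordinary sampler2D and displayed as greyscale
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);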

shadow2D( foo.xyz )
Are you sure that is correct? This function takes two arguments:

vec4 shadow2D( sampler2DShadow, vec3 [,float bias] )

when I do:

  glUniform1iARB(ShadowMapLoc, shadowTex);
      
      glEnable(GL_TEXTURE_2D);
      glBindTexture (GL_TEXTURE_2D, shadowTex);
      glCopyTexImage2D (GL_TEXTURE_2D,
                    0,
                    GL_DEPTH_COMPONENT,
                    0,
                    0,
                    shadowMapSize,
                    shadowMapSize,
                    0);
                    
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB ) ;
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
      glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY);

I don't know here whether the texture format is correct to ensure the comparison between the depth from the texture and foo.z…

In my fragment shader I think that I can use the shadow2D function in this way:

  shadeFactor = shadow2D(ShadowMap, ShadowCoord.xyz).r;

if I want to keep the texCoord / texCoord.w

I don't fully understand the shadow2D and shadow2DProj functions. It seems very strange to me that these functions return a vec4 value, whereas we only need a shade factor taken from the r component of the returned value…

I followed your advice to check whether the projection is correct, but in another way:
In fact, in order to see whether the light's projection and modelview matrices are correct, I do

  gl_Position = LightProjectionMatrix * LightModelViewMatrix * gl_Vertex;

in the vertex shader, and surprise! They do not seem to be correct, because what I see in the viewport is completely different from what I get when I render from the light's point of view using, in the vertex shader:

  gl_Position = ftransform();

This means that either these two matrices are not correctly passed to the vertex shader, or they are not correct in the OpenGL application when I do:

  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(45, 1.0, 1, 10); // ratio is 1.0 because the shadowmap size is 512*512
  glGetFloatv(GL_PROJECTION_MATRIX, LightProjectionMatrix);

and:

  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
  gluLookAt(LightPos[0], LightPos[1], LightPos[2],
            0.0, 0.0, 0.0,
            0.0, 1.0, 0.0);
  glGetFloatv(GL_MODELVIEW_MATRIX, LightModelViewMatrix);

these matrices are declared like this:

  GLfloat LightModelViewMatrix[16];
  GLfloat LightProjectionMatrix[16];

And I pass them to the vertex shader like this:

  glUniformMatrix4fv(LightModelViewMatrixLoc, 1, 0, LightModelViewMatrix);
  glUniformMatrix4fv(LightProjectionMatrixLoc, 1, 0, LightProjectionMatrix);

Thank you very much for your invaluable help! ^^

Originally posted by dletozeun:
Are you sure that is correct? This function takes two arguments:

I left out function parameters that were not relevant to what I was saying.

[b]
when I do:

  glUniform1iARB(ShadowMapLoc, shadowTex);
..

I don't know here whether the texture format is correct to ensure the comparison between the depth from the texture and foo.z…
[/b]
The texture format should be correct, although the CLAMP wrapping mode is more appropriate. You need to use glUniform1iARB(ShadowMapLoc, index_of_texture_unit_with_shadowmap) instead of glUniform1iARB(ShadowMapLoc, shadowTex).

[b]
In my fragment shader I think that I can use the shadow2D function in this way:

  shadeFactor = shadow2D(ShadowMap, ShadowCoord.xyz).r;

if I want to keep the texCoord / texCoord.w
[/b]
Actually you should do the opposite: remove the texCoord / texCoord.w from the vertex shader and use shadow2DProj, because that corresponds to what happens when the content of the shadowmap is generated. You need to replicate the conditions that were present during shadowmap generation as closely as possible.


I don't fully understand the shadow2D and shadow2DProj functions. It seems very strange to me that these functions return a vec4 value, whereas we only need a shade factor taken from the r component of the returned value…

When shadowmap support first appeared in OpenGL, the hardware was significantly less capable and having the value replicated to all channels was necessary. GLSL simply kept the existing behaviour.


I followed your advice to check whether the projection is correct, but in another way:
In fact, in order to see whether the light's projection and modelview matrices are correct, I do

I currently do not see a problem with your matrix loading code when it is used to view the geometry from the light's point of view, although I might be missing some transposition. Are the locations of the uniform matrices valid? Was any error generated?

There is a difference that you need to be aware of when you project the shadowmap texture. The ordinary projection matrix maps the visible area into the <-1,1> range (after perspective division). The texture, however, is sampled using the <0,1> range. During the shadowmap projection you need to convert the range generated by the projection matrix into the range used for texture access. The preferred way to do that is to multiply the matrix used to project the shadowmap by an additional matrix that does this conversion.
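
A minimal sketch of one way to build that combined matrix on the fixed-function matrix stack (LightTextureMatrix is a hypothetical array name; the GL_TEXTURE stack is used only as scratch space):

    GLfloat LightTextureMatrix[16];      // hypothetical: bias * lightProjection * lightModelView

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.5f);      // bias: map the <-1,1> range to <0,1>
    glScalef(0.5f, 0.5f, 0.5f);
    glMultMatrixf(LightProjectionMatrix);
    glMultMatrixf(LightModelViewMatrix);
    glGetFloatv(GL_TEXTURE_MATRIX, LightTextureMatrix);
    glLoadIdentity();                    // leave the texture matrix clean for rendering
    glMatrixMode(GL_MODELVIEW);

The result could then be uploaded with glUniformMatrix4fv in place of LightModelViewProjectionMatrix, so that ShadowCoord already ends up in the <0,1> texture range after the projective divide.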

Ok thanks, so that is what I will do.

When shadowmap support first appeared in OpenGL, the hardware was significantly less capable and having the value replicated to all channels was necessary. GLSL simply kept the existing behaviour.
Ok, thank you for this clarification.

I currently do not see a problem with your matrix loading code when it is used to view the geometry from the light's point of view, although I might be missing some transposition. Are the locations of the uniform matrices valid? Was any error generated?
There are no errors when I read back the matrices with glGetFloatv. The projection and modelview matrices seem to be correct when I look at them in a log.
But I don't know how I can be sure that the mat4 uniforms that should contain the projection and modelview matrices really contain them! All that I know is that there is no OpenGL error during the transfer with glUniformMatrix4fv.

There is a difference that you need to be aware of when you project the shadowmap texture. The ordinary projection matrix maps the visible area into the <-1,1> range (after perspective division). The texture, however, is sampled using the <0,1> range.
Yes, I knew that. In fact I had omitted this matrix:

  // column-major bias matrix: maps the <-1,1> range to <0,1>
  GLfloat ClampTo0_1Matrix[16] = {0.5f, 0.0f, 0.0f, 0.0f,
                                  0.0f, 0.5f, 0.0f, 0.0f,
                                  0.0f, 0.0f, 0.5f, 0.0f,
                                  0.5f, 0.5f, 0.5f, 1.0f};

I omitted it in order to see the real scene from the light's point of view… But it is of course necessary afterwards for the shadow mapping.

Thank you very much for your help; maybe I should create another topic for the matrix problem…