
View Full Version : Shadow Mapping / z-Buffer texture on ATI card



smani
12-01-2010, 11:55 AM
Hi,
I am having some problems getting shadow mapping to work on an ATI card. The problem is best illustrated by these two images:

Scene rendered with nvidia card (GF9600GT, Linux Driver 260.19.12):
http://nas.dyndns.org/temp/nvidia.png
Scene rendered with ati card (HD3400, Linux Catalyst 10.11):
http://nas.dyndns.org/temp/ati.png

For some reason the depth buffer obtained on the ATI card is missing much of the scene's geometry...

Code: http://nas.dyndns.org/temp/demo.zip , relevant portions:
Initialization:


///*** CREATE DEPTH TEXTURE ***///
glGenTextures(1, &shadowmap.texture);
glBindTexture(GL_TEXTURE_2D, shadowmap.texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // Remove artifacts when outside the domain of the shadow map
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowmap.width, shadowmap.height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

///*** CREATE SHADOW FRAMEBUFFER OBJECT ***///
glGenFramebuffers(1, &shadowmap.fbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowmap.fbo);
glDrawBuffer(GL_NONE); // Tell OpenGL that no color buffer is attached to the currently bound FBO
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowmap.texture, 0); // Attach depth texture to FBO
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE){
    std::cerr << "GL_FRAMEBUFFER_COMPLETE failed, CANNOT use FBO\n";
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);


Drawing:


///*** RENDER FROM LIGHT POV ***///
// Bind framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, shadowmap.fbo);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// Draw
glCullFace(GL_FRONT);
glUseProgram(shadowprog.program);
for(unsigned i=0; i<models.size(); ++i){
    glBindVertexArray(models[i].vao);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, models[i].indices);
    glDrawElements(GL_TRIANGLES, models[i].n_indices, GL_UNSIGNED_INT, 0);
}
glBindVertexArray(0);
glCullFace(GL_BACK);
// Store matrix
Matrix4f textureMatrix = shadowmap.biasMatrix * shadowmap.perspectiveMatrix * shadowmap.modelViewMatrix;



This would not be the first time the ATI driver has shown problems; on the other hand, many people say that ATI is stricter about OpenGL while NVIDIA is more permissive, so the code may well be wrong and only render correctly on the NVIDIA card by chance.

Does anyone have any hints?
Thanks!
smani

ZbuffeR
12-01-2010, 12:52 PM
This line :
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowmap.width, shadowmap.height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);

means you do not care about precision, so you can get anything.
Instead, use GL_DEPTH_COMPONENT24 for a typical 24-bit depth buffer.
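A minimal sketch of the suggested allocation, keeping the demo's parameters and changing only the internal format (the client type/data arguments are ignored here since no pixel data is uploaded):

```cpp
// Allocate the depth texture with an explicit 24-bit sized internal format,
// instead of letting the driver pick an arbitrary precision.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
             shadowmap.width, shadowmap.height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
```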

smani
12-01-2010, 12:55 PM
Thanks for your reply - unfortunately the situation did not improve with GL_DEPTH_COMPONENT24.
I also tried a power-of-two texture size, again without any improvement.

_arts_
12-01-2010, 04:02 PM
In my opinion this is not a precision problem; it looks like the cube is being discarded when you render the depth buffer.

I don't know the reason, but at first glance I would say your perspective is too narrow, so only the sphere falls inside it while the cube stays outside.
Another possibility is the shadow bias, since in the nvidia image the cube's shadow is far from the bottom of the cube.
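If the bias does turn out to be the issue, one common way to handle it (not something the demo does, just a sketch of the usual technique) is to apply a polygon offset while filling the shadow map:

```cpp
// Push depth values slightly away from the light while rendering into the
// shadow FBO, reducing self-shadowing ("shadow acne") artifacts.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);   // factor and units are scene-dependent
// ... render occluders into the shadow map ...
glDisable(GL_POLYGON_OFFSET_FILL);
```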

dukey
12-01-2010, 05:00 PM
My guess is that the cube is outside of the view frustum, i.e. it's past the far plane and simply getting clipped. Either that, or you simply aren't drawing it, which seems unlikely.

smani
12-04-2010, 06:40 AM
Hi, thanks to you both. After a long session of commenting out code and testing, I found that the problem was that the shader attribute locations conflicted between the two shader programs I was using on the ATI GPU, so part of the geometry was indeed missing/corrupt. That it worked on the NVIDIA card was pure luck: its driver happened to return the attribute location identifiers in such a way that e.g. the vertex position had the same identifier in both shader programs (the one generating the depth map and the one drawing the scene). Anyway, solved :D
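For reference, one way to avoid this class of bug is to force identical attribute locations in every program that shares vertex data, before linking. A sketch (the attribute names and sceneprog are illustrative, not necessarily what the demo uses):

```cpp
// Bind the same generic attribute indices in both programs *before*
// glLinkProgram, so a single VAO layout is valid for the depth-map pass
// and the main pass alike, regardless of what the driver would pick.
for (GLuint prog : { shadowprog.program, sceneprog.program }) {
    glBindAttribLocation(prog, 0, "in_position");
    glBindAttribLocation(prog, 1, "in_normal");
    glLinkProgram(prog);
}
```

Alternatively, one can query each program's locations with glGetAttribLocation after linking and set up a separate VAO per program, rather than assuming the locations match.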