I’m trying to implement dual-paraboloid shadow mapping. The shadow maps I’m rendering look correct, but I believe something is wrong with my configuration, because even when I set GL_TEXTURE_COMPARE_FUNC to GL_ALWAYS, the texture(…) lookup in my GLSL code still seems to return 0.
First, I create two depth textures:
// depth map setup
glGenTextures(2, depthMaps);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, depthMaps[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
    // I would expect that with this set to GL_ALWAYS, the texture(...) lookup in my
    // GLSL code would always return 1 regardless of what I pass it
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    glBindTexture(GL_TEXTURE_2D, 0);
}
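For reference, my understanding of the shader side when compare mode is enabled (just a sketch, using one of my sampler names; the vec3 layout is my assumption from reading the GLSL spec):

```glsl
// With GL_TEXTURE_COMPARE_MODE enabled, the sampler must be declared as a
// shadow sampler, and texture() takes a vec3 whose .z is the reference depth.
uniform sampler2DShadow depthMapP;  // name from my code above

float shadowFactor(vec3 lookup)     // lookup.xy = UV, lookup.z = reference depth
{
    // Returns 1.0 where the comparison passes (ref <= stored depth for GL_LEQUAL)
    // and 0.0 where it fails; GL_LINEAR filtering may give intermediate PCF values.
    return texture(depthMapP, lookup);
}
```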
Next, I render the scene from my light, set up the size/format of my texture, and copy the read buffer into it. I also do a glReadPixels(…) to save my depth buffer off to a file, to make sure it’s rendering correctly (light projection set up correctly, reading from the correct buffer).
renderSceneFromLight(...);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, shadowXres, shadowYres, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0); checkGL("glteximage2d to depth texture");
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowXres, shadowYres); checkGL("copytexsubimage2d to depth texture");
glReadPixels(0, 0, shadowXres, shadowYres, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); checkGL("read pixels from depth buffer");
Finally, I render my scene from the camera’s perspective.
setupShader(...);
if (depthMaps != 0) {
    QString depthMapNames[] = { "depthMapP", "depthMapN" };
    for (int i = 0; i < 2; i++) {
        glActiveTexture(GL_TEXTURE0 + i);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, depthMaps[i]); checkGL("bind depth texture");
        shader->setUniformValue(depthMapNames[i].toStdString().c_str(), depthMaps[i]); checkGL("set dm texture uniform");
        std::cout << shader->uniformLocation(depthMapNames[i]) << std::endl; // valid locations returned
    }
}
renderScene();
In my GLSL code, the light intensity is multiplied by the value returned from the texture(…) lookup. As far as I can tell, though, that value is always zero.
float res = 0.5;
if (alpha >= 0.5) { // use 1st hemisphere
    res = texture(depthMapP, hemiShadowLookup0);
} else { // use 2nd hemisphere
    res = texture(depthMapN, hemiShadowLookup1);
}
float lightIntensity = uniformLightIntensity * res;
I’m setting the sampler uniforms in the shader to the texture handles. I’m never sure whether I should be passing the handles or the texture units. Passing the texture handles (generated by glGenTextures) doesn’t raise an error from my GL error check, even though this documentation/example (binding samplers) seems to pass texture units?
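For comparison, the unit-based convention from that example would look roughly like this, reusing my names (a sketch only, not what my code currently does):

```cpp
// Unit-based convention: bind texture i to unit i, then set the
// sampler uniform to the unit index, not the GL texture name.
for (int i = 0; i < 2; i++) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, depthMaps[i]);
    shader->setUniformValue(depthMapNames[i].toStdString().c_str(), i);
}
```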
Anyway, maybe I’m misunderstanding something, but I thought that regardless of the contents of the shadow map or the texture coordinates passed in, as long as the texture is set up and bound correctly (in C++ and in the shader) and the comparison function is GL_ALWAYS, the lookup should always return 1, right?