Part of the Khronos Group
OpenGL.org

Thread: Trouble with depth look-up in omnidirectional shadow mapping w/ cube map

  1. #1
    Newbie · Join Date: Dec 2013 · Posts: 1

    Trouble with depth look-up in omnidirectional shadow mapping w/ cube map

    Hey all, I'm trying to do omnidirectional shadow mapping for a point light with a cube map, but I'm running into some issues when it comes to sampling the depth values in my second pass. I've put some sample renders up on imgur at /a/74JLz (I can't seem to post links or images? I guess because I'm new?) where you can see the errors. It seems to depend somewhat on model complexity: I see the most errors in the Suzanne render, quite a few on the polyhedrons, but none on the cubes. Earlier on I color-coded things based on the face they rendered to, to help make sure I was looking up the right faces in my second pass, which is why the colors are a bit funky.

    The actual shadow lookup is done in world space by taking the vector from the light to the fragment and comparing against the fragment's z-coord after transforming by the cube face's view and projection matrices. The code for the project can be viewed on github at Twinklebear/Deferred-Rendering/tree/layered_rendering_test (I can't post URLs at all?), but I'll post the important snippets below.

    The shadow pass uses the shaders (located under res/): vlayered_instanced (vertex), glayered_test (geometry) and fshadow (fragment). The cube face is selected via layered rendering in the geometry shader, which for now just amplifies each primitive and outputs it to every face of the cube map. The shadow pass draws to an FBO with color and depth cube maps attached, images of which are also in the album. The shadow pass is drawn on lines 345-355 of main, and it seems to go well as far as I can tell.
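    The geometry shader itself isn't quoted below, so for context, here is a minimal sketch of what such a layered-rendering geometry shader looks like. This is illustrative, not the exact shader from the repo; it assumes the vertex shader passes a world-space position through as world_pos, and that the same light_view[6]/light_proj uniforms from the fragment shader below are available.

```glsl
#version 330 core

layout(triangles) in;
//6 faces * 3 vertices: each triangle is amplified once per cube face
layout(triangle_strip, max_vertices = 18) out;

//Per-face view matrices and the shared 90-degree projection
uniform mat4 light_view[6];
uniform mat4 light_proj;

in vec4 world_pos[];

void main(void){
	for (int face = 0; face < 6; ++face){
		gl_Layer = face; //Route this primitive to the matching cube map layer
		for (int i = 0; i < 3; ++i){
			gl_Position = light_proj * light_view[face] * world_pos[i];
			EmitVertex();
		}
		EndPrimitive();
	}
}
```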

    The shadow pass render call in main. Model::bindShadow binds the model's shadow pass program and VAO; both models use the same program.
    Code :
    glViewport(0, 0, 512, 512);             //Match the 512x512 cube map faces
    glBindFramebuffer(GL_FRAMEBUFFER, fbo); //FBO with color + depth cube maps attached
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_POLYGON_OFFSET_FILL);       //Depth bias to reduce shadow acne
    glPolygonOffset(2.f, 4.f);
    model.bindShadow();
    glDrawElementsInstanced(GL_TRIANGLES, model.elems(), GL_UNSIGNED_SHORT, NULL, numInstances);
     
    glBindVertexArray(quad[VAO]);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 1);
    glDisable(GL_POLYGON_OFFSET_FILL);

    The second pass uses the shaders: vinstanced (vertex) and fshader (fragment) and is drawn on lines 361-365 of main.

    The second pass draw call. Model::bind binds the model's VAO and program; both models here use the same program.
    Code :
    model.bind();
    glDrawElementsInstanced(GL_TRIANGLES, model.elems(), GL_UNSIGNED_SHORT, NULL, numInstances);
     
    glBindVertexArray(quad[VAO]);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 1);

    The second pass vertex shader (vinstanced) passes the world space position and normal on to the fragment shader for the lighting and shadow calculation.
    Code :
    #version 330 core
     
    //A simple shader for rendering instanced geometry
    uniform mat4 proj;
    uniform mat4 view;
     
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 normal;
    layout(location = 3) in mat4 model;
     
    out vec4 world_pos;
    out vec4 f_normal;
     
    void main(void){
    	world_pos = model * vec4(position, 1.f);
    	gl_Position = proj * view * world_pos;
    	f_normal = normalize(model * vec4(normal, 0.f));
    }

    The second pass fragment shader (fshader) is where I think the issue is, but I'm not really sure. The shader finds the view, light and half vectors for Blinn-Phong shading, then uses the negative light vector (ie. light->fragment) to look up which cube face we're closest to, using a pretty naive method of just finding the largest dot product w/ the cube normals. From there we compute the depth of the fragment for that cube face by applying the face's view and projection matrices, then applying the perspective division and scaling. The lookup in the cubemap texture is done with the negative light vector (light->fragment) and takes the z coord of the shadow pos to compare (our depth for that face).

    The color is then chosen based on the face index and the shadow lookup value is factored into the lighting calculations.

    Code :
    #version 330 core
     
    //Selects the color for the fragment based on its layer.
    const vec4 colors[6] = vec4[](
    	vec4(1.f, 0.f, 0.f, 1.f),
    	vec4(0.f, 1.f, 0.f, 1.f),
    	vec4(0.f, 0.f, 1.f, 1.f),
    	vec4(1.f, 1.f, 0.f, 1.f),
    	vec4(1.f, 0.f, 1.f, 1.f),
    	vec4(0.f, 1.f, 1.f, 1.f)
    );
    //Cube face normals, indexed in gl order: pos_x, neg_x, pos_y, neg_y, pos_z, neg_z
    const vec4 cube_normals[6] = vec4[](
    	vec4(1.f, 0.f, 0.f, 0.f),
    	vec4(-1.f, 0.f, 0.f, 0.f),
    	vec4(0.f, 1.f, 0.f, 0.f),
    	vec4(0.f, -1.f, 0.f, 0.f),
    	vec4(0.f, 0.f, 1.f, 0.f),
    	vec4(0.f, 0.f, -1.f, 0.f)
    );
    //Hardcoded view position and light location for now
    const vec4 view_pos = vec4(5.f, 5.f, 5.f, 1.f);
    const vec4 light_pos = vec4(0.f, 0.f, 0.f, 1.f);
     
    uniform samplerCubeShadow shadow_map;
    uniform mat4 light_view[6];
    uniform mat4 light_proj;
     
    flat in int fcolor_idx;
    in vec4 world_pos;
    in vec4 f_normal;
     
    out vec4 color;
     
    void main(void){
    	vec4 v = normalize(view_pos - world_pos);
    	vec4 l = normalize(light_pos - world_pos);
    	vec4 h = normalize(l + v);
     
    	//Is there a better way to figure out which face we're projected onto?
    	//For now we just run through the face normals and see which one
    	//-l is closest to.
    	//Maybe this should just be done in the vertex shader? What about tris overlapping multiple faces?
    	float max_dot = -1.f;
    	int face = 0;
    	for (int i = 0; i < 6; ++i){
    		float a = dot(-l, cube_normals[i]);
    		if (a > max_dot){
    			max_dot = a;
    			face = i;
    		}
    	}
    	//Project our world pos into the shadow space for the face and scale it into
    	//projection space. There are faster ways to do this bit
    	vec4 shadow_pos = light_proj * light_view[face] * world_pos;
    	shadow_pos /= shadow_pos.w;
    	shadow_pos = (shadow_pos + 1.f) / 2.f;
    	float f = texture(shadow_map, vec4(-l.xyz, shadow_pos.z));
     
    	float diff = max(0.f, dot(f_normal, l));
    	float spec = max(0.f, dot(f_normal, h));
    	if (diff == 0.f){
    		spec = 0.f;
    	}
    	else {
    		spec = pow(spec, 50.f);
    	}
    	vec3 scattered = vec3(0.1f, 0.1f, 0.1f) + f * diff;
    	//White specular highlight color
    	vec3 reflected = f * vec3(spec * 0.4f);
    	color = colors[face];
    	color.xyz = min(color.xyz * scattered + reflected, vec3(1.f));
    }

    Let me know if there's any more information that would be helpful; I've been stumped by this for a bit. Sorry that I can't seem to post the images or URLs to the code (it makes them a pain to go view), but please do at least look at the images (on imgur at /a/74JLz), as they show the issue much more clearly than I can explain. If you want to run the program yourself you'll need SDL2, GLEW and GLM; on Windows you should also have the environment variables SDL2, GLEW and GLM set to the root folders of those libraries so CMake can find them. You can view the color or depth of the cube map faces by pressing D (depth) or C (color) and picking a face with 1-6; the scene view is chosen with S.

    Thanks folks.

    Edit: I put up some clearer, higher-resolution renders, and also noticed that the missing fragments are pure black, while any fragment that's part of a model should have at least some very low ambient color. Very strange.

    From some further fiddling (ie. rotating the cubes) it seems that the issue is somehow provoked by the face's angle, although on the cubes, where the normal is the same across an entire face, the gaps only appear in portions of the face instead of the whole thing. So perhaps it's really more related to the light direction? The gaps also remain in the same location when viewed from a different angle, so I don't think the viewing angle has an effect. These new images have also been added to the album on imgur.

    Edit: I realized I shouldn't be normalizing the PCF value, and I'm starting to think the issue is with how I'm scaling shadow_pos.z to match the range of values in the cube map depth texture. The texture itself is DEPTH_COMPONENT_32F, but I'm having some trouble finding how to scale the values properly.

    Final edit: I solved it! Turns out I had the wrong up vectors for some of my faces.
    Last edited by Twinklebear; 12-20-2013 at 04:48 PM. Reason: Solved!
