Shadow Mapping works on Intel but not Nvidia?

I recently had some free time at work, and since the Intel graphics card in my machine supports OpenGL 3, I decided to see if my engine would run on it… it does. So I figured the only thing it’s really missing right now is converting my shadow mapping over. I did that, and it passed with flying colors; it only took me about 3 hours to add it to the renderer.

So I thought: that was too easy, especially for an Intel card. So I brought the code home, committed it to my SVN, and compiled on my home computer, which has an NVidia GeForce 550 supporting OpenGL 4.2. Suddenly the shadows don’t work at all; in fact, the whole screen is black…

Has anyone else run into this? I’ve been faithful to NVidia, and the engine has to work on both NVidia and ATI, so this is very strange. Normally it’s the Intel cards I’m fighting with.

Anyway, I was just hoping for some quick advice, maybe a couple of things to check out… here are some screenshots.

Here is a screenshot from work of the shadows working on the Intel GMA card:

Here is the shadow map as generated on my NVidia card (it is exactly the same as the one from the Intel card!):

But alas, like I said, the screen is black…

I looked at it in gDEBugger and there seem to be no errors, in particular no OpenGL errors.

Once again, any thoughts would be greatly appreciated!

Have you tried different render paths? For instance, can you render the scene with a constant color and a background?

Without seeing any code it’s hard to say what’s going on. I like to revert to the most basic rendering mode and incrementally work my way to the non-functional parts.

I have; every other effect works just like it does on Intel (Normal Mapping, SSAO, Bloom, HDR Toning), and the SSAO effect reads from a depth buffer through the exact same FBO system. And as you can see, the depth buffer is accurately being captured! Very frustrating. Thanks for the tip though!

You could write a very simple shader that only does the depth comparison and writes 1 or 0 to the color buffer. If that works, you know the depth comparison itself is functioning properly.
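
Something along these lines, as a rough sketch (the uniform names are just placeholders for however you feed in the shadow map and the world-to-light-clip matrix):


#version 130

smooth in vec2 texcoord;

uniform sampler2DShadow shadow_map;  // depth texture with compare mode enabled
uniform sampler2D position_map;      // world-space position G-Buffer (placeholder name)
uniform mat4 shadow_mapping_bias;    // world space -> biased light clip space (placeholder name)

out vec4 fragment_output;

void main(){
	vec3 position = texture(position_map, texcoord).xyz;
	vec4 projected_coordinate = shadow_mapping_bias * vec4(position, 1.0);
	// textureProj on a sampler2DShadow returns the comparison result: 1.0 lit, 0.0 in shadow
	float lit = textureProj(shadow_map, projected_coordinate);
	fragment_output = vec4(vec3(lit), 1.0);
}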

Good point; that will be the first thing I do when I get home. Sometimes I get so befuddled I forget the most basic debugging tool available… simple output.

Just out of curiosity, could you post your lighting shader?

It’s a deferred renderer, so don’t be shocked. This is just the fragment shader; the vertex shader only sets gl_Position and the texcoord… that’s all.


#version 130

smooth in vec2 texcoord;

uniform mat4 shadow_mapping_bias;
uniform int shadow_mapping_enabled;
uniform sampler2DShadow shadow_map;
uniform vec2 shadow_map_size;

uniform sampler2D position_map;
uniform sampler2D normal_map;
uniform vec3 camera_position;
uniform vec3 light_position;
uniform vec3 light_color;
uniform vec3 light_attenuation;
uniform float light_radius;

out vec4 fragment_output;

void main(){
	vec4 position_specular = texture(position_map, texcoord);
	vec3 position = position_specular.xyz;
	vec3 light_vector = position - light_position;
	float shadow_influence = 1.0;
	float light_distance = abs(length(light_vector));
	if (light_distance > light_radius){
		discard;
	}

	float specular_factor = position_specular.w;
	vec4 color = vec4(0.0, 0.0, 0.0, 0.0);

	vec4 normal_height = texture(normal_map, texcoord);
	vec3 normal_vector = normalize(normal_height.xyz);
	float height_value = normal_height.w;

	vec3 view_vector = normalize(camera_position - position);
	light_vector = normalize(light_vector);
	vec3 reflect_vector = normalize(reflect(light_vector, normal_vector));

	float diffuse = max(dot(normal_vector, light_vector), 0.0);
	if (diffuse > 0.0){
		float attenuation_factor = 1.0 / (light_attenuation.x + (light_attenuation.y * light_distance) + (light_attenuation.z * light_distance * light_distance));
		attenuation_factor *= (1.0 - pow((light_distance / light_radius), 2.0));

		float specular = pow(max(dot(reflect_vector, view_vector), 0.0), 16.0) * specular_factor;

		vec3 diffuse_color = (diffuse * attenuation_factor) * light_color;
		vec3 specular_color = (specular * attenuation_factor) * light_color;
		color = vec4(diffuse_color + specular_color, 1.0);
	}

	if (shadow_mapping_enabled == 1){
		// Take the world-space position into biased light clip space;
		// textureProj does the perspective divide and the depth comparison.
		vec4 projected_coordinate = shadow_mapping_bias * vec4(position, 1.0);
		shadow_influence = textureProj(shadow_map, projected_coordinate);
	}

	fragment_output = vec4(color.rgb * shadow_influence, 1.0);
}

Since I’ve written a light pre-pass renderer, I’m quite sympathetic to that. :slight_smile:

If you let the shader write the following

fragment_output = vec4(vec3(shadow_influence), 1.0);

you have your debug output.

If you get something there, your lighting code may be faulty.

Could you still please show us your vertex shader? I’m not sure if your tex coord for the G-Buffer lookup is correct.

It’s still doing full-screen quad lighting; I haven’t switched to a scissor op or sphere rendering yet, so here it is…


#version 130

uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
uniform mat4 normal_matrix;
uniform vec2 resolution;

in vec4 vertex_position;
in vec4 vertex_normal;
in vec4 vertex_texcoord;
in vec4 vertex_binormal;
in vec4 vertex_bitangent;

smooth out vec2 texcoord;

void main(){
	// Sphere Method
	/*vec4 position = projection_matrix * view_matrix * model_matrix * vertex_position;
	texcoord = position.st / resolution;
	gl_Position = position; */

	// Quad Method
	texcoord = vertex_texcoord.st;
	gl_Position = projection_matrix * view_matrix * model_matrix * vertex_position;
}

I’m thinking that when doing that debug output, I should also try inverting the shadow map influence, to see if it’s just negative on that card for some reason.

Keep in mind, this same shader and entire process works great on the Intel card. In fact, it all works great on the NVidia card too, except shadow mapping. The Intel card is OpenGL 3 and the NVidia card is 4.2.

Can’t really see any errors right now. It’s interesting that you don’t generate the texcoord from the position, but as long as its range is [0…1] I guess it’ll work.

I’ll have another look at it tomorrow if you didn’t get any further.

Ok, I did some of the debugging you suggested: I tried just outputting the shadow map influence, as well as inverting it. I’m pretty sure it’s just zero… :frowning: I’m going to try outputting the shadow map from that shader to make sure it’s actually getting passed in properly.

Now I’ve also checked that the depths are stored properly in the shadow map, by changing it to be passed in as a sampler2D and outputting it scaled by the camera range. It looked just fine. The only other thing I can think of is the shadow mapping bias matrix… which would be weird, because it’s calculated the same way on the Intel machine at work as on my home computer with the NVidia card.
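
That check looked roughly like this (just a sketch; camera_near and camera_far stand in for whatever the camera range is):


#version 130

smooth in vec2 texcoord;

uniform sampler2D shadow_map;  // the same depth texture, bound as a plain sampler2D
uniform float camera_near;     // placeholder names for the camera range
uniform float camera_far;

out vec4 fragment_output;

void main(){
	float depth = texture(shadow_map, texcoord).r;
	// Undo the non-linear depth distribution of a perspective projection so the
	// values read as a sensible grayscale image (1.0 at the far plane).
	float ndc_z = depth * 2.0 - 1.0;
	float eye_depth = (2.0 * camera_near * camera_far) / (camera_far + camera_near - ndc_z * (camera_far - camera_near));
	fragment_output = vec4(vec3(eye_depth / camera_far), 1.0);
}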

Also, I should note that my bias matrix is pretty simple. Since the positions in my G-Buffer are in world space, I just pass this in as my bias matrix…

shadow_mapping_bias * light->GetProjectionMatrix() * light->GetViewMatrix()

That way it transforms the position from world space to light space for a proper comparison.
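
For context, the shadow_mapping_bias factor at the front is the usual clip-space [-1, 1] to texture-space [0, 1] remap; written out column-major, GLSL-style, it would look roughly like this:


const mat4 shadow_mapping_bias = mat4(
	0.5, 0.0, 0.0, 0.0,
	0.0, 0.5, 0.0, 0.0,
	0.0, 0.0, 0.5, 0.0,
	0.5, 0.5, 0.5, 1.0);  // scale by 0.5, then offset by 0.5 in x, y and z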

Note, I tried setting the bias as part of textureProj’s third parameter and it just made the shadow map all white, no matter what value I put in.

Turns out it was me not following the OpenGL spec properly, not a problem with NVidia…

I needed to add the correct TexParameters…

My code went from:


graphics->SetActiveTexture(2);
shaders->SetInt("shadow_mapping_enabled", 1);
shaders->SetMatrix4("shadow_mapping_bias", shadow_mapping_bias * light->GetProjectionMatrix() * light->GetViewMatrix());
graphics->BindTexture(light->GetShadowBuffer()->GetDepthTextureId(), 2);

To:


graphics->SetActiveTexture(2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
shaders->SetInt("shadow_mapping_enabled", 1);
shaders->SetMatrix4("shadow_mapping_bias", shadow_mapping_bias * light->GetProjectionMatrix() * light->GetViewMatrix());
graphics->BindTexture(light->GetShadowBuffer()->GetDepthTextureId(), 2);

The problem is, the first light is not generating a shadow, but I’m sure that’s just a lingering effect of not setting the parameters back.
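
If that’s the case, it should just be a matter of switching the compare mode back off before the texture gets sampled as a regular sampler2D again, i.e. presumably something like:


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);  // back to plain sampling, no depth comparison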

Frankly I didn’t expect that. :slight_smile:

In my experience, neither NVidia’s nor AMD’s implementation behaves as expected when doing depth comparisons without the correct texture compare mode set, and I believe this is in accordance with the spec. It would be good to get some confirmation on this and then file a bug report with Intel.

The GLSL specification explicitly states that you have to match the comparison mode with the sampler type (*Shadow). Otherwise the behavior is undefined.

If a non-shadow texture call is made to a sampler that represents a depth texture with depth comparisons turned on, then results are undefined. If a shadow texture call is made to a sampler that represents a depth texture with depth comparisons turned off, then results are undefined.
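
In practice this means the compare mode can simply be set once when the shadow map’s depth texture is created, rather than every frame. A rough sketch of such a setup, using the core (non-ARB) constants; the resolution and filtering choices are just illustrative:


GLuint shadow_depth_texture;
glGenTextures(1, &shadow_depth_texture);
glBindTexture(GL_TEXTURE_2D, shadow_depth_texture);
// Allocate a depth texture to attach to the shadow FBO
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Required for sampler2DShadow: compare the reference coordinate against the stored depth
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);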

Any ideas why the last light would still be all black? Just curious. I even made my shadowing algorithm add a dummy light to the end with a radius of 0.0001 and a really high attenuation, just to make all of the lights render.

I think you’re forgetting to bind the texture before editing its parameters, i.e. you should have:


graphics->SetActiveTexture(2);
shaders->SetInt("shadow_mapping_enabled", 1);
shaders->SetMatrix4("shadow_mapping_bias", shadow_mapping_bias * light->GetProjectionMatrix() * light->GetViewMatrix());
graphics->BindTexture(light->GetShadowBuffer()->GetDepthTextureId(), 2);
// glTexParameteri acts on the texture currently bound to the active unit,
// so the compare parameters have to be set after the bind:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);

I’ll give it a try when I get home… by the way, I started a GitHub repository for anyone who wants to see the code… if interested, of course.

Element-Games-Engine is the project name. :slight_smile: