Fragment lighting problem

I’m seeing some very strange artifacts in my per-fragment lighting.

This YouTube video shows the issue; watch it in 480p to get the best view of the problem. You can see the edges of the triangles that make up the sphere, but only certain ones, and only when the light is at certain angles relative to the camera.

Specifically, notice how the lines appear as the light rotates around the object. Rotating the camera also changes when and how the dark lines appear.

The problem definitely seems to be coming from the vertex normals. Here are some images of me rendering the normals directly, from three different angles:

[Three images of the object-space normals, rendered from different angles]

Now you might think that these are the camera-space normals; they are not. I'm rendering the object-space normals, taken directly from the vertex attributes and interpolated across the surface. Changing the viewing angle should make no difference one way or the other.

Here are my shaders. They still contain some extraneous declarations, as they were adapted and stripped down from shaders that were doing other computations.

Vertex shader for both:


#version 330

layout(location = 0) in vec3 position;
layout(location = 1) in vec4 inDiffuseColor;
layout(location = 2) in vec3 normal;

out vec4 diffuseColor;
out vec3 vertexNormal;
out vec3 modelSpaceNormal;
out vec3 cameraSpacePosition;

uniform mat4 cameraToClipMatrix;
uniform mat4 modelToCameraMatrix;

uniform mat3 normalModelToCameraMatrix;

void main()
{
	vec4 tempCamPosition = (modelToCameraMatrix * vec4(position, 1.0));
	gl_Position = cameraToClipMatrix * tempCamPosition;

	vertexNormal = normalize(normalModelToCameraMatrix * normal);
	modelSpaceNormal = normalize(normal);
	diffuseColor = inDiffuseColor;
	cameraSpacePosition = vec3(tempCamPosition);
}

Fragment shader showing dot(N, L):


#version 330

in vec4 diffuseColor;
in vec3 vertexNormal;
in vec3 modelSpaceNormal;
in vec3 cameraSpacePosition;

out vec4 outputColor;

uniform vec3 modelSpaceLightPos;

uniform vec4 lightIntensity;
uniform vec4 ambientIntensity;

uniform vec3 cameraSpaceLightPos;

uniform float lightAttenuation;

const vec4 specularColor = vec4(0.25, 0.25, 0.25, 1.0);
uniform float shininessFactor;


void main()
{
	vec3 lightDir = normalize(cameraSpaceLightPos - cameraSpacePosition);
	vec3 surfaceNormal = normalize(vertexNormal);
	outputColor = vec4(dot(surfaceNormal, lightDir));
}

Fragment shader showing the normals:


#version 330

in vec4 diffuseColor;
in vec3 vertexNormal;
in vec3 modelSpaceNormal;
in vec3 cameraSpacePosition;

out vec4 outputColor;

uniform vec3 modelSpaceLightPos;

uniform vec4 lightIntensity;
uniform vec4 ambientIntensity;

uniform vec3 cameraSpaceLightPos;

uniform float lightAttenuation;

const vec4 specularColor = vec4(0.25, 0.25, 0.25, 1.0);
uniform float shininessFactor;


void main()
{
	outputColor = vec4(normalize(modelSpaceNormal), 1.0);
}


If you want to run the code yourself, you can grab this zip file from my tutorials (it’s larger than it looks). Run Tutorial 12, and you’ll see the problem. You can use the mouse to control the camera, ‘b’ will stop/start the light rotation, and ‘h’ will switch between viewing the model-space normals and the NdotL.

To run them, you’ll have to build FreeGLUT, glloader, and TinyXML, which come with the distribution. Instructions for doing so can be found here.

If you’ve got any ideas about what’s going on, I’d love to hear them.

[edit]
BTW, it’s not hardware-specific. I see this problem on my Radeon HD 3300, as well as my GeForce GT-250.

Could it be a manifestation of the Mach Band Effect?

what do you expect, the sphere has such a low poly count it might as well be a cube

That seems unlikely, since if I increase the tessellation of the mesh, the pattern of the lines changes according to the mesh's triangles.

what do you expect, the sphere has such a low poly count it might as well be a cube

First, it’s still there on high-poly meshes. Second, if you interpolated the normals across the edges of a cube, I wouldn’t expect it to have sharp edges either. It certainly wouldn’t look like a sphere, but it shouldn’t have these large bands. Third, the low polycount doesn’t change the fact that the per-vertex normals, in model space, appear to be changing color (and therefore direction) when you view the object from different angles.

I'm using a low-poly sphere because it best and most clearly demonstrates the problem. You can still see it on a high-poly sphere, or on the high-poly terrain mesh that I actually want to use, which is where I first noticed the problem.

Perhaps it's just the inaccuracy of doing mere linear interpolation on the normal vectors? What about a slerp?
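
For reference, spherical interpolation between two unit normals looks roughly like the sketch below (a hypothetical helper, not part of the posted shaders). The hardware interpolator only gives you the perspective-correct linear blend, so to actually use this you would have to pass the endpoint normals and a blend weight to the fragment shader yourself; it's only meant to illustrate the idea.


// Sketch: spherical linear interpolation between two unit vectors.
// Falls back to a normalized lerp when the vectors are nearly parallel.
vec3 slerpNormal(vec3 a, vec3 b, float t)
{
	float cosTheta = clamp(dot(a, b), -1.0, 1.0);
	float theta = acos(cosTheta);
	if (theta < 1e-4)
		return normalize(mix(a, b, t));
	float s = sin(theta);
	return (sin((1.0 - t) * theta) * a + sin(t * theta) * b) / s;
}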

Precision Normals (Beyond Phong)

It could also just be mesh density. The pixel positions you're computing lighting for aren't really on the sphere; they're on the flat triangle planes under the sphere. That should serve to highlight the faceted appearance at glancing angles with a spotlight.
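
One way to gauge how much of this comes from shading points that lie off the true sphere: for a mesh that really is a sphere, the exact normal can be reconstructed analytically from the interpolated model-space position instead of interpolating vertex normals. A sketch of such a fragment shader, assuming a modelSpacePosition varying (passed through from the vertex shader) and a sphereCenter uniform that the posted shaders don't have:


#version 330

in vec3 modelSpacePosition;	// assumed extra varying: model-space position passed from the vertex shader

out vec4 outputColor;

uniform vec3 sphereCenter;	// assumed uniform: sphere center in model space
uniform vec3 modelSpaceLightPos;

void main()
{
	// For a true sphere, the exact normal at any surface point is simply the
	// direction from the center to that point, regardless of tessellation.
	vec3 exactNormal = normalize(modelSpacePosition - sphereCenter);
	vec3 lightDir = normalize(modelSpaceLightPos - modelSpacePosition);
	outputColor = vec4(dot(exactNormal, lightDir));
}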

One other time I've seen "bright shimmer" lighting artifacts along the edges of a volumetric object (more so than I see here, though) is when two-sided lighting is mistakenly applied to a closed object like this, i.e. the surface normal is auto-flipped toward the eyepoint, which is incorrect. However, you're not doing that here.
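
For clarity, that kind of flip would look something like the lines below in the dot(N, L) fragment shader (written in terms of its surfaceNormal and cameraSpacePosition variables). It's reasonable for open, single-sided geometry viewed from both sides, but wrong for a closed sphere:


	// Two-sided style flip: force the normal to face the viewer.
	// The eye sits at the origin in camera space.
	vec3 viewDir = normalize(-cameraSpacePosition);
	if (dot(surfaceNormal, viewDir) < 0.0)
		surfaceNormal = -surfaceNormal;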

Normal interpolation problems were the first thing I thought of as well. But I don't think the pattern is consistent with that.

It really goes back to those three pictures of the model-space normals. Those normals should not have different values based on the rotation of the camera, but they do. When they’re on the right side, they are brighter than the face of the triangle, but when they’re on the left side, they are darker.

I’m not sure how interpolating normals can give rise to camera dependency, when the original values are not camera-dependent.

Just added another thought to my post above.

Looks like you’re just mapping the object-space normals to R,G,B, and chopping everything less than 0 to 0. So yeah, with object-space normals you’d expect to see a difference as the eye moves around the object. Now if you were using eye-space normals, it’d be a different story. They shouldn’t change much.

But maybe I’m misunderstanding what you’re showing here.
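
For what it's worth, the camera dependence in those normal images may partly be the negative components getting clamped to zero when written to the framebuffer. Remapping the normal from [-1, 1] into [0, 1] before output keeps the sign information visible; a small sketch of that change to the normal-display fragment shader:


void main()
{
	// Map each component from [-1, 1] to [0, 1] so negative normal
	// components show up as dark colors instead of clamping to black.
	vec3 n = normalize(modelSpaceNormal);
	outputColor = vec4(n * 0.5 + 0.5, 1.0);
}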

Other things to check that you've probably already taken care of: input normals computed from the real sphere rather than its faceted representation, input normals pre-normalized, normal matrix a full inverse transpose (or no scales/shears in the modelview).
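
If the modelview ever does pick up a non-uniform scale, the usual fix is to build the normal matrix as the inverse transpose of its upper 3x3. A sketch of doing that directly in the vertex shader (inverse() on a mat3 is available in #version 330), though it's normally cheaper to compute it once on the CPU and upload it as normalModelToCameraMatrix:


	// Inverse transpose of the upper 3x3 of the model-to-camera matrix;
	// correct for normals even under non-uniform scaling.
	mat3 normalMatrix = transpose(inverse(mat3(modelToCameraMatrix)));
	vertexNormal = normalize(normalMatrix * normal);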

These lines are caused by the fact that, while the interpolated normals are C0 continuous, they are not C1 continuous (their first derivatives jump at triangle edges). Your eye can pick up derivative discontinuities, especially in the case you've provided (a black/white gradient). Luckily, texturing the model (or adding a slight random noise texture to the sphere if it is to stay untextured) will break up the brightness derivatives enough to make them disappear.
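
A cheap way to try the noise idea without authoring a texture is to perturb the shaded result with a little screen-space noise in the fragment shader. This sketch uses a common gl_FragCoord hash rather than an actual noise texture, and the 0.01 amplitude is just a guess at "slight":


	// Cheap per-fragment pseudo-random value in [0, 1).
	float noise = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);

	// Nudge the brightness by a barely visible amount to break up the
	// derivative discontinuities the eye latches onto.
	outputColor += vec4((noise - 0.5) * 0.01);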

I've attached an edge-detected version of your first image, which demonstrates what I'm referring to. The ringing is due to the 8-bit quantization (from FP32) of the original PNG image, and is likely not due to the interpolation.