Seemingly simple lighting question

I have a high-quality mesh I want to render. I just want it to appear on screen, lit properly, without looking terrible. However, these are the current results. You can see the individual triangles are really highlighted, and it looks ugly. It looks even worse with coarser meshes. I’ve checked, and my normals are being computed correctly and all point out of the surface. I know that the proper way to get true Phong shading is to write a shader for the GPU, but surely I should be able to get better results than this using OpenGL’s built-in lighting.

Additionally, I want to be able to rotate the model while keeping the light fixed. That is, I want to be able to rotate to the back half of the model, or the underside, and not see darkness. The light should stay in the same location regardless of how the model is rotated, so I can rotate the model and see every area illuminated. I’ve coded what I think is the way it works, but clearly things aren’t going the way I think they should. :) I’ve read the lighting chapter in the OpenGL book, too, and I’ve tried all sorts of variations on the order of commands, but nothing seems to work. I feel like I’m not initializing something properly.

Can someone tell me where I’m going wrong with this C# code? I’m clearly missing some rudimentary knowledge here.


private void initGL()
{
    // Set GL states, create lights, etc.
    Gl.glClearColor(1.0f, 1.0f, 1.0f, 0.0f); // Let OpenGL clear to white
    Gl.glShadeModel(Gl.GL_SMOOTH);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
    Gl.glEnable(Gl.GL_DEPTH_TEST);
    Gl.glEnable(Gl.GL_LIGHTING);
    Gl.glEnable(Gl.GL_LIGHT0);
    Gl.glEnable(Gl.GL_BLEND);

    // Let glColor drive the ambient and diffuse material properties
    Gl.glColorMaterial(Gl.GL_FRONT_AND_BACK, Gl.GL_AMBIENT_AND_DIFFUSE);
    Gl.glEnable(Gl.GL_COLOR_MATERIAL);

    resizeGL(this, null);
}

private void resizeGL(object sender, EventArgs e)
{
    // Set up a projection
    int w = glControl.Size.Width, h = glControl.Size.Height;

    Gl.glViewport(0, 0, w, h);
    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();

    Glu.gluPerspective(45, (double)w / (double)h, 1, 100);

    // Return to modelview mode for rendering
    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    //Gl.glLoadIdentity();
}

private void paintGL()
{
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Glu.gluLookAt(0, 0, 7, 0, 0, 0, 0, 1, 0);

    Gl.glMultMatrixf(trackball.Matrix); // do rotation as per trackball

    // lightPosition1 is { 10, 0, 10, 0 }; with w = 0 this is a directional light.
    // Note: GL_POSITION is transformed by the current modelview matrix, which
    // at this point already includes the trackball rotation.
    Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_POSITION, lightPosition1);

    Gl.glEnable(Gl.GL_LIGHTING);

    surface.render();
}

// the surface.render function
private void render()
{
    // some stuff

    float[] spec = { 1, 1, 1, 1 };
    Gl.glEnable(Gl.GL_LIGHTING);
    Gl.glColor4fv(m_shadeColor); // ambient/diffuse come from this via GL_COLOR_MATERIAL
    Gl.glMaterialf(Gl.GL_FRONT_AND_BACK, Gl.GL_SHININESS, 30.0f);
    Gl.glMaterialfv(Gl.GL_FRONT_AND_BACK, Gl.GL_SPECULAR, spec);
    Gl.glMaterialfv(Gl.GL_FRONT_AND_BACK, Gl.GL_EMISSION, new float[] { 0, 0, 0, 1 });
    Gl.glPolygonMode(Gl.GL_FRONT_AND_BACK, Gl.GL_FILL);

    Gl.glShadeModel(Gl.GL_SMOOTH);
    Gl.glLineWidth(2);

    // surface is then rendered using VBOs
}

Changing the order of the rotation and the light-position call doesn’t have the effect I think it should; the light seems to rotate around properly, but rotating vertically to see the underside of the surface still shows it completely unlit (and I do specify GL_FRONT_AND_BACK in all my material calls).

I feel silly for having this problem, and even sillier asking for help with it, but… that’s what the beginner’s forum is for, right? ;)

In fact, no, not unless you subdivide your mesh much more finely, with something like Catmull-Clark on quads.

The right way is to use per-pixel lighting, with a simple shader.
Currently the lighting is computed only at each vertex, using the vertex normal, and the resulting colors are linearly interpolated across each triangle. You need to interpolate the normal instead, then do the lighting computation per pixel.
Pretty good tutorials here:
http://www.lighthouse3d.com/opengl/glsl/index.php?dirlightpix
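
The core idea from those tutorials is only a few lines of GLSL. Here is a minimal sketch for a single positional light using the fixed-function matrices (for a directional light, w = 0, you would use the light position directly instead of subtracting the eye-space position):

// vertex shader: pass the eye-space normal and position to the fragment stage
varying vec3 normal, eyePos;

void main()
{
    normal = gl_NormalMatrix * gl_Normal;
    eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = ftransform();
}

// fragment shader: re-normalize the interpolated normal, then light per pixel
varying vec3 normal, eyePos;

void main()
{
    vec3 n = normalize(normal);
    vec3 l = normalize(vec3(gl_LightSource[0].position) - eyePos);
    float diff = max(dot(n, l), 0.0);
    gl_FragColor = gl_LightSource[0].ambient * gl_FrontMaterial.ambient
                 + gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse * diff;
}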

For proper rotation you need to change the order as you described: set the light position while only the camera transform is on the modelview stack, before multiplying in the trackball rotation. The key for the dark underside is the GL_LIGHT_MODEL_TWO_SIDE light-model parameter, which enables two-sided lighting; otherwise the back-facing material parameters are ignored.
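
In your Tao/C# code that would look roughly like this (a sketch reusing the names from your paintGL):

Gl.glLoadIdentity();
Glu.gluLookAt(0, 0, 7, 0, 0, 0, 0, 1, 0);

// Set the light while only the camera transform is current,
// so it stays fixed in the scene...
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_POSITION, lightPosition1);

// ...then rotate the model underneath it
Gl.glMultMatrixf(trackball.Matrix);

// Light back faces with flipped normals instead of leaving them dark
Gl.glLightModeli(Gl.GL_LIGHT_MODEL_TWO_SIDE, Gl.GL_TRUE);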


I am using Loop subdivision. Even if I subdivide extremely finely, you can still make out the triangles.

I suppose I can implement a quick per-pixel shader; I’ve done it for other purposes before. It just strikes me as odd that the triangle artifacts are THAT obvious using OpenGL’s built-in lighting. I’ve noticed some linear blending across faces before, but never with this kind of dark/light contrast. It almost looks as if the normals are alternating (one facing correctly, the adjacent one backwards), but I’ve checked and that is not the case.

Komat: Applying that light model seems to let me do what I expect with the rotation; however, something weird happens when I draw a different surface (see this image). It’s just a simple triangle mesh with the normals shown, and I get this alternating-color effect. Do you know what causes this? (EDIT: I realized I was defining one triangle with clockwise winding and the other with counterclockwise winding… this issue seems to be fixed now.)

The intensity of the specular lighting that generates those white areas is very sensitive to changes in direction, so it becomes very visible that it is evaluated only at the vertices.

Imagine the following geometry.


   B
  /|\
 / | \ 
A  |  D
 \ | /
  \|/
   C

Assume that points A and D lie on a line with very high specular intensity, while B and C have much lower intensity. Ideally that high-intensity line would be visible running from A to D. However, because of the interpolation, the only values that can appear along the B-C edge are linear combinations of the values at B and C. They will therefore all have low intensity, and the high-intensity line shows up only as bright areas near A and D instead of the continuous line it should be.
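
To put rough numbers on it, using the shininess of 30 from your code (the 0.9 dot-product value at B and C is just an illustrative assumption):

// The pow() in the specular term is extremely peaky:
double atAD = Math.Pow(1.0, 30); // 1.000  : A and D sit right on the highlight
double atBC = Math.Pow(0.9, 30); // ~0.042 : B and C are only slightly off it
// Interpolating the resulting colors along the B-C edge can never exceed
// ~0.042, so the bright A-D line collapses into bright spots at A and D.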

Part of the problem here is your perception of gradients. Your human visual system is undermining the attempt at piecewise interpolation of colors. In reality there’s only a single color sample per vertex plus triangle-based interpolation, but your brain detects the change in gradient as an edge and reconstructs the underlying geometry. Isn’t it wonderful! This is true Mach banding, as opposed to the simple quantization normally (and incorrectly) referred to as Mach banding (although another Mach effect exaggerates the appearance).

You can only truly solve this with fragment lighting (per-pixel lighting), specifically by doing the lighting calculation after you interpolate the normal. Still no guarantees, of course, but since most of your issues are caused by the specular term I think you’ll be OK.

You could mitigate this SPECIFIC problem by triangulating along the other diagonal (as per Komat’s suggestion), but you’d then see issues elsewhere instead; without fragment lighting you’d only be moving the problem around.

Okay, now I know something is messed up. :) I copied one of my old Phong GLSL shaders that I wrote for another class. It’s not totally robust (it doesn’t color back-facing polygons properly, for instance), but it does work. The artifacts still remain, and I know rendering a simple triangle mesh like this should never have all these problems.

This image is a simple parametric patch drawn using just OpenGL lighting, and this image is the same patch with the GLSL shader. Note that the shader conceals SOME of the bad regions (at the top of the bump, for instance), but the rest still show all that bad lighting. Notice how you can see the underlying triangles even where the region faces the light source directly AND is almost flat. Something is clearly wrong with how I’m drawing it.

I had thought I was drawing all the polygons with the same winding (either CW or CCW, I forget which), and since I have an OBJ loader that loads all the triangles the same way (and those meshes have problems rendering too), I think something else must be wrong with my code.

Does anyone know what’s causing the problem, or which parts of my code I should paste to help track it down?

Are you calculating normals for vertices based on only one face, rather than averaging for all faces connected to a vertex?
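
If not, the usual approach is something like this (a sketch using System.Numerics; the Triangle struct with vertex indices A, B, C and the vertices/triangles arrays are hypothetical):

using System.Numerics;

// Accumulate each face normal into its three vertices, then normalize.
Vector3[] normals = new Vector3[vertices.Length];
foreach (Triangle t in triangles)
{
    Vector3 faceNormal = Vector3.Cross(vertices[t.B] - vertices[t.A],
                                       vertices[t.C] - vertices[t.A]);
    normals[t.A] += faceNormal;
    normals[t.B] += faceNormal;
    normals[t.C] += faceNormal;
}
for (int i = 0; i < normals.Length; i++)
    normals[i] = Vector3.Normalize(normals[i]);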

In the second image, only the specular highlights look somewhat better; the diffuse color still looks like Gouraud shading… Can you post your vertex and fragment shaders?

And what kind of light do you use?

True, the diffuse shading seems to lack a normalize() somewhere in the fragment shader.

The image in my last post was a parametric surface, so the normals are calculated per vertex directly from the parametric definition. When I deal with triangle meshes, yes, the normals are computed by averaging all the associated face normals. When I draw the normals, they all look correct.

The light in my source is positioned at {10, 0, 10, 0}. The surface is about 4 by 4 units (from -2 to 2 on both the x and y axes), raised about 1.5 units in Z at the bump.

Here are the vertex and fragment shaders. Remember, I kind of hacked these together for a class, so I’m not super confident they’re 100% robust. They produced acceptable results in the class, though, so I assumed they were “good enough”. They’re supposed to approximate Phong lighting, but they don’t color the back-facing triangles at all (if someone knows how to do this, please let me know! :) I want the end result to mimic the GL_LIGHT_MODEL_TWO_SIDE light model).

Fragment shader


varying vec3 lightDir, normal, viewDir;

void main()
{
	// re-normalize the interpolated normal to be on the safe side
	vec3 n = normalize(normal);
	vec3 l = -lightDir;
	vec3 r = normalize(reflect(l, n));
	float shiny;

	if (dot(r, viewDir) < 0.0)
		shiny = 0.0;
	else
		shiny = pow(dot(r, viewDir), gl_FrontMaterial.shininess);

	gl_FragColor = gl_LightSource[0].ambient * gl_FrontMaterial.ambient
		+ dot(lightDir, n) * gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse
		+ shiny * gl_LightSource[0].specular * gl_FrontMaterial.specular;
}


Vertex shader


varying vec3 lightDir, normal, viewDir;

void main()
{
	lightDir = normalize(vec3(gl_LightSource[0].position - gl_ModelViewMatrix * gl_Vertex));
	//lightDir = normalize(vec3(gl_LightSource[0].position));
	normal = gl_NormalMatrix * gl_Normal;

	viewDir = normalize(vec3(-(gl_ModelViewMatrix * gl_Vertex)));

	gl_Position = ftransform();
}

For two-sided lighting this is simple: transform the normal into eye space as you already do in the vertex shader. Then, in the fragment shader, compute the dot product between the normal and the view vector (which points toward the eye, roughly (0,0,1) in eye space). If the dot product is negative, flip the normal in the fragment before doing the lighting calculations.
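
With the varyings you already have, that is just a couple of lines at the top of your fragment shader (a sketch):

vec3 n = normalize(normal);
// viewDir points from the fragment toward the eye; if the normal
// faces away from the eye, flip it before lighting
if (dot(n, normalize(viewDir)) < 0.0)
	n = -n;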

As for the shader itself, I don’t see any problem with it right now, even if it isn’t particularly well optimized.