Black surfaces

Hello,

I’ve written a very simple program that displays a small plane. Whenever I try to scale the plane, its color changes from white to grey, and then to black if I scale it enough. I do not have ambient light enabled. Light 0 is located at (0,0,1) and its color is (1,1,1,1). The viewing volume is defined by glOrtho with the parameters (-w/2, w/2, -h/2, h/2, -1, 3), where w and h are the viewport width and height. The model is translated to (0,0,-1) and is not rotated. The model material is set with glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, (1,1,1,1)). What gives? Why does the color change? If I turn ambient light up to (1,1,1,1) I get a white surface again, but not when just light 0 is enabled.

Confused.

If you scale your polygons, and lighting is being used at all, you must make sure your normals are rescaled back to unit length as well. Generally, all polygons are defined by their vertices and their normal(s).

When your plane gets scaled, its vertices move further from the light. As they move away, the angle between their normals and the direction to the light approaches 90 degrees, so the amount of light affecting the plane approaches zero. Also, as the vertices move further away, the strength of the light drops off, depending on your attenuation settings.
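A quick sketch of one common fix, assuming the fixed-function pipeline the question describes (drawPlane() and the scale factor s are made-up placeholders):

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE);           // OpenGL renormalizes normals after the modelview transform

glPushMatrix();
glTranslatef(0.0f, 0.0f, -1.0f);  // the translation from the question
glScalef(s, s, s);                // the scale that was darkening the plane
drawPlane();                      // hypothetical helper that submits vertices and normals
glPopMatrix();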

Speaking of normals, I’m having some problems. I’m new to OpenGL, and I just don’t understand normals or the cross product. I understand what normals do, but not what values to pass based on what I want them to do. How in the world do you figure out what to pass to glNormal()? Please help!

Normals are used mainly for two things: lighting (in OpenGL) and collision detection (in your own math).

A normal is a line (not a visible line, although it can be drawn) that starts at some point on a polygon and extends perpendicular to the polygon’s plane. In short, it’s the direction the polygon is facing.

For instance, if I had a triangle lying flat in the plane Y=0 and facing up, its normal would be (0,1,0).

When it comes to lighting, normals define how a polygon is “lit”. They can be specified either per-face (as in the example above) or per-vertex. Per-face normals give lit geometry a faceted look, where the edges between faces are clearly visible. Per-vertex normals give lit geometry a smooth look.

So with per-face normals, you specify the normal (with glNormal…()) once before you draw each face of the geometry. With per-vertex normals, you specify the normal before each vertex that’s drawn.
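As a rough sketch of the difference in immediate mode (the vertex and normal values here are made up):

// Per-face: one normal for the whole triangle -> faceted shading.
glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, 1.0f, 0.0f);             // face normal, set once
    glVertex3f(ax, ay, az);
    glVertex3f(bx, by, bz);
    glVertex3f(cx, cy, cz);
glEnd();

// Per-vertex: one (usually averaged) normal per vertex -> smooth shading.
glBegin(GL_TRIANGLES);
    glNormal3f(nax, nay, naz); glVertex3f(ax, ay, az);
    glNormal3f(nbx, nby, nbz); glVertex3f(bx, by, bz);
    glNormal3f(ncx, ncy, ncz); glVertex3f(cx, cy, cz);
glEnd();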

Calculating the face normal for a triangle with vertices A,B,C is pretty easy:

vector3D u, v;           // the two edge vectors of the triangle
u = B - A;
v = C - A;
vector3D normal;
normal = u.cross(v);     // perpendicular to both edges
normal.normalize();      // scale it to unit length
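One detail worth knowing: u.cross(v) follows the right-hand rule, so with counter-clockwise vertex winding (as seen from the front of the triangle) the normal points out of the front face; swap B and C and it flips to the other side.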

vector3D is any type that can represent a value with 3 components (X, Y, and Z). The “.cross” function is the cross product, and is calculated like this:

vector3D u, v;           // the two input vectors
vector3D crossProduct;
crossProduct.x = u.y*v.z - v.y*u.z;
crossProduct.y = u.z*v.x - v.z*u.x;
crossProduct.z = u.x*v.y - v.x*u.y;

The “.normalize” function makes sure that the “length” of the vector is 1. This is how you find the length of a vector:

double length = sqrt(vector.x*vector.x + vector.y*vector.y + vector.z*vector.z);

Now for the normalize function: if the length of the vector is greater than zero, you simply divide each component of the vector by that length.
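For example, the vector (3, 0, 4) has length sqrt(3*3 + 0*0 + 4*4) = 5, so its normalized form is (0.6, 0, 0.8).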

Okay, I understood about half of that. I now understand exactly what the normal does, but I don’t quite understand how it works. If all it does is tell you the direction the plane is facing, what is the point of the cross product, which I still don’t understand? By the way, I’m only talking about using it for lighting; collision detection is far beyond me.

This makes sense, and it seems to indicate that to create uniform lighting there is a close connection between lighting, transforms, and viewing volume definitions. For instance, what if I decide to create a program that displays some arbitrary model? My code right now sizes the viewing volume based on the viewport width and height. When scaling, I also change the far clip and translation values so the model fits squarely in the viewing volume. This causes the associated nonuniform lighting problems.

So then I thought I’d change it to a constant viewing volume, where the volume’s height is a constant, the volume’s width is the height times the viewport aspect ratio, and the near/far clip planes are constant. In this situation the lighting is uniform across all models, since the distances and angles do not change.

However, if I were to expand the program to allow editing of existing geometry and insertion of new geometry, then when the model is larger than the viewing volume and the user uses the mouse to place new geometry, the mouse x/y position has to be scaled to world coordinates and accuracy is lost. In the end that will probably be necessary regardless of the viewing volume definition, but it made me think of another possibility: what if I sized the viewing volume based on the model extents? In that case, after the viewing volume is defined, the scale is set to 1.0. I’m not sure it buys me anything, though, since the distances and angles change as the model is changed, making the lighting nonuniform again.
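For what it’s worth, here is a minimal sketch of that constant-height volume (the names onResize and viewHeight are made up; the near/far values are the ones from my original post):

void onResize(int width, int height)
{
    const double viewHeight = 2.0;                 // constant volume height
    double aspect = (double)width / (double)(height ? height : 1);
    double viewWidth = viewHeight * aspect;

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-viewWidth / 2.0,  viewWidth / 2.0,
            -viewHeight / 2.0, viewHeight / 2.0,
            -1.0, 3.0);                            // constant near/far clips
    glMatrixMode(GL_MODELVIEW);
}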

Maybe what I should ask is how other model viewing or modeling programs have approached this problem. Is there a web tutorial out there that discusses this problem?

The cross product (closely related to the “wedge” product) is what you use to figure out the normal. You take the two vectors that make up the triangle, B-A and C-A; call them U and V. The normal of the triangle is the cross product of U and V:

vector crossProduct(vector u, vector v)
{
    vector result;
    result.x = (u.y*v.z) - (v.y*u.z);
    result.y = (u.z*v.x) - (v.z*u.x);
    result.z = (u.x*v.y) - (v.x*u.y);
    return result;
}
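As a quick sanity check: for A = (0,0,0), B = (1,0,0), C = (0,1,0), we get u = (1,0,0) and v = (0,1,0), and crossProduct returns (0*0 - 1*0, 0*0 - 0*1, 1*1 - 0*0) = (0, 0, 1). A triangle lying in the Z=0 plane faces straight up the +Z axis, as you’d expect.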

Make sure you normalize your normals before using them.

OpenGL offers an automatic mechanism for that. You can glEnable(GL_NORMALIZE) for the full renormalization, or you can glEnable(GL_RESCALE_NORMAL) for just rescaling (applicable if you have only applied uniform scales, translations, and/or rotations).

You still need to supply normals; this will only ‘correct’ them.
The rationale is that normals are transformed by the inverse transpose of the modelview matrix and may change length during that process.
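To make that concrete: a uniform scale of 2 puts 2s on the modelview diagonal, so the inverse transpose has 1/2s there, and a unit normal comes out with length 1/2. The diffuse term is proportional to the dot product of the normal and the light direction, so the plane is lit at half strength; keep scaling up and it fades from white through grey toward black, which is exactly the symptom in the original post.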

The automatic normalization is only useful if you are simply rendering graphics. If you plan on getting into things such as collision detection and response, then the following function will probably come in handy:

// Requires <math.h> for sqrt(); "vector" is any struct with x, y, z members.
vector normalizeVector(vector vec)
{
    vector v = vec;
    float length = sqrt(v.x*v.x + v.y*v.y + v.z*v.z);

    // Guard against dividing by zero for a degenerate (zero-length) vector.
    if (length > 0.0)
    {
        v.x /= length;
        v.y /= length;
        v.z /= length;
    }

    return v;
}

“Normalizing” a vector is basically just making the vector’s length equal to 1. This can be, and commonly is, the difference between something working (and/or looking) correctly and not.
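Putting the two helpers together, a sketch of how they might feed glNormal3f when drawing a single triangle (A, B, C and the subtract helper are assumed to exist):

// Compute a unit per-face normal, then draw the triangle with it.
vector u = subtract(B, A);   // hypothetical component-wise subtraction
vector v = subtract(C, A);
vector n = normalizeVector(crossProduct(u, v));

glBegin(GL_TRIANGLES);
    glNormal3f(n.x, n.y, n.z);
    glVertex3f(A.x, A.y, A.z);
    glVertex3f(B.x, B.y, B.z);
    glVertex3f(C.x, C.y, C.z);
glEnd();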