Face Normals vs. Vertex Normals

I’ve been reading up a lot on both face and vertex normals, and I’ve read from a couple of sources that vertex normals are not really needed. Any input on this? When is it best to use each of them?

You can compute reasonable-looking vertex normals from face normals, with some small caveats around very sharp edges. So, in that regard, vertex normals are “not needed”.

However, when rendering in OpenGL, the local lighting model uses vertex normals, so for smooth areas you need a normal per vertex. Using face normals will not give you smooth Gouraud (or specular) shading; it’ll give a sharp edge between each triangle.
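
To make the difference concrete, here’s a minimal immediate-mode sketch; the triangle data (GLfloat[3] arrays v0/v1/v2, faceNormal, and the per-vertex normals n0/n1/n2) is hypothetical. With one normal per face, every pixel of the triangle is lit the same way; with one normal per vertex, OpenGL computes the lighting per vertex and interpolates the result across the triangle.

// Faceted look: one normal for the whole triangle
glBegin(GL_TRIANGLES);
    glNormal3fv(faceNormal);   // same normal used by all three vertices
    glVertex3fv(v0);
    glVertex3fv(v1);
    glVertex3fv(v2);
glEnd();

// Smooth look: one (averaged) normal per vertex
glBegin(GL_TRIANGLES);
    glNormal3fv(n0); glVertex3fv(v0);
    glNormal3fv(n1); glVertex3fv(v1);
    glNormal3fv(n2); glVertex3fv(v2);
glEnd();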

It gets even worse: if you’re using face normals in OpenGL, you may have to treat one point in space as three or four different vertices (in a vertex array). That will cost you a lot of unnecessary extra transform work. This is because, in OpenGL, one “vertex” is the total aggregate of position, normal, color, texture coordinates, etc. Three triangles meeting at one point of course have three different face normals, and thus actually generate three different vertices, unless you’re using (“smoothed”) vertex normals.
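
As a rough illustration of that duplication (the struct layout below is just an assumption for the sketch, not something OpenGL mandates): in an interleaved vertex array the normal is stored inside each vertex record, so a corner shared by three faces with three different face normals has to appear three times.

typedef struct {
    float position[3];
    float normal[3];
    float texcoord[2];
} Vertex;   // one OpenGL "vertex" = position + normal + texcoords + ...

// The same point in space, duplicated once per face normal:
Vertex corner[3] = {
    { {1, 2, 3}, {1, 0, 0}, {0, 0} },   // with face A's normal
    { {1, 2, 3}, {0, 1, 0}, {0, 0} },   // with face B's normal
    { {1, 2, 3}, {0, 0, 1}, {0, 0} },   // with face C's normal
};
// With a single averaged ("smoothed") vertex normal, one entry would do.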

I think you should use vertex normals when you have rounded surfaces represented by a small number of flat triangles. In these situations, if you use face normals, a sphere will show too many hard edges, so it won’t look like a sphere at all.

Whoever told you that vertex normals are not needed was either talking about a special case, or they are on strong medication (or should be). If you want a faceted look for your model/scene/whatever, then face normals are the way to go (actually, you can achieve the same look with vertex normals, but that’s another story). Otherwise, if you want smooth features on what you’re drawing, you MUST use vertex normals. For example, you can make a mesh with relatively few polys in it look like a sphere using vertex normals. If you use face normals, it will look more like a disco mirror ball (facets and all).

However, to achieve the smooth effect with vertex normals, you have to be sure to average each vertex normal from all the adjacent face normals. So basically, you generate the face normals first, then you step through all the faces, find every face that uses a given vertex, and average the normals of those faces to create the vertex normal.

If you want the faceted look with vertex normals, simply don’t average the face normals. Instead, assign the value of the face normal to the vertex normal. The problem here is that you’ll need three vertices PER TRIANGLE (or four in the case of quads). I’ve never tried this, so I don’t know how/if it works. But unless I’m totally out of touch, I can’t see how anyone could make a blanket statement that vertex normals are not needed.
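
For the “generate the face normals first” step, the usual approach (sketched here in C with a hypothetical Vec3 type) is to cross two edges of the triangle and normalize the result:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

// Face normal of triangle (a, b, c), assuming counter-clockwise winding.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };   // edge a -> b
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };   // edge a -> c

    // Cross product e1 x e2 points out of the front face.
    Vec3 n = { e1.y * e2.z - e1.z * e2.y,
               e1.z * e2.x - e1.x * e2.z,
               e1.x * e2.y - e1.y * e2.x };

    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) {                    // guard against degenerate triangles
        n.x /= len;  n.y /= len;  n.z /= len;
    }
    return n;
}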

I agree with you, Punchey!

But I guess you don’t need vertex normals if you are using objects like cubes, pyramids…

Seriously, I believe vertex normals provide smooth shading, which is essential in certain applications.

Is it harder to texture an object if its vertices are normalized?

You only normalize the “normal” of a vertex, not the vertex itself, so this has no bearing on texture coordinates. Here’s the deal:

Let’s say you have a single quad that is part of a larger model, but let’s just take that quad into consideration for now; you’d do the same for every other quad in the model. Okay, obviously, a quad is made up of four coplanar vertices (they all lie in the same plane). So, using immediate-mode routines in OpenGL, you’d do the following to draw that single quad (pseudocode):
glBindTexture(…);
glBegin(GL_QUADS);
    glTexCoord2f(s, t);
    glNormal3f(normal.x, normal.y, normal.z);
    glVertex3f(vert.x, vert.y, vert.z);

    … // repeat for the remaining three vertices.

glEnd();

As you can see, the vertex’s normal is separate from the vertex itself. The vertex itself describes the position of one of the “corners” of the object being drawn. The normal tells OpenGL which direction the surface being drawn is facing at that point. In the case of face normals, it is the direction the quad is facing. In the case of smooth vertex normals, it is the average of the directions of the adjacent faces, and this averaging is what leads to the smooth look. But as far as texturing goes, it’s exactly the same, because texture coordinates have absolutely nothing to do with an object’s normals; the texture coordinates just describe where to “pin” the texture to the object. As for the end result, OpenGL takes care of everything for you. The only thing you have to do is tell the normals where to “point”, feed them to OpenGL, and that’s it.

If you already have a model or something you’re wanting to generate smooth vertex normals for, do something like this (pseudocode):

for (i = 0; i < numVertices; i++)
{
    tempNorm = (0, 0, 0);    // reset the accumulator for each vertex
    numFaces = 0;

    for each face in model
    {
        if face uses vertices[i]
        {
            tempNorm += face.normal;
            numFaces++;
        }
    }

    vertexNorm[i] = tempNorm / numFaces;   // average of the adjacent face normals
    normalize(vertexNorm[i]);              // re-normalize before handing it to OpenGL
}
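
If you’re drawing with (legacy) vertex arrays instead of immediate mode, you can then hand those averaged normals to OpenGL in one go. This is only a minimal sketch, assuming vertexPos and vertexNorm are tightly packed arrays of x,y,z floats and that indices/numIndices describe the model’s triangle list:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, vertexPos);   // x,y,z position per vertex
glNormalPointer(GL_FLOAT, 0, vertexNorm);     // one smoothed normal per vertex

glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, indices);

glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);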