The obvious way to compute vertex normals is to average the normals of the adjacent polygon faces. But this doesn't always cut it. When my program loads a cube model (or anything else with perpendicular faces), the shading is wrong at the edges: sharp edges get an incorrect beveled look.
One idea is to check whether the normals of two adjacent faces differ by more than a certain angle, and if so, skip averaging for the shared vertices and simply assign them the face normal directly. But I have no idea how to implement this.
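The angle test itself is cheap: for unit-length face normals the dot product is the cosine of the angle between them, so no per-pair trig is needed. A minimal sketch (the `Vec3` type and `isSharpEdge` name are my own placeholders, not from the original code):

```cpp
#include <cmath>

// Placeholder vector type; your own would substitute here.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// For unit-length normals, dot(n1, n2) == cos(angle between them),
// so comparing against a precomputed cosine avoids calling acos per pair.
bool isSharpEdge(const Vec3& n1, const Vec3& n2, float creaseAngleDeg) {
    float cosThreshold = std::cos(creaseAngleDeg * 3.14159265f / 180.0f);
    return dot(n1, n2) < cosThreshold; // angle exceeds the crease threshold
}
```

A crease angle around 30–45 degrees is a common starting point; anything sharper than that is treated as a hard edge.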
My triangle class holds three pointers, one to each of its vertices, and each vertex keeps a list of array indices (not pointers) to its adjacent triangles. Vertices are shared.
After the triangle normals are computed I have this code:
for each vertex {
    reset normal
    for each adjacent triangle referenced {
        normal += adjacent triangle normal
    }
    normalize normal
}
Any ideas as to how I could adapt this code to calculate the normals accurately? Speed isn't too important, as this takes place in a pre-processing stage before raytracing.
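One standard adaptation of the loop above: instead of one normal per shared vertex, compute a normal per (vertex, owning face) pair, and average only the adjacent face normals that lie within the crease angle of the owning face. This is a sketch under my own assumptions (the `Vec3` type, the helper names, and the calling convention are placeholders, not the original classes); note it implies storing a normal per face corner, or duplicating vertices along sharp edges, rather than one normal per shared vertex:

```cpp
#include <cmath>
#include <vector>

// Placeholder types; your Triangle/Vertex classes would substitute.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Smoothed normal for one corner of one triangle.
// faceNormals:   unit normal of every triangle in the mesh
// adjacentFaces: the vertex's index list of triangles sharing it
// owningFace:    the triangle this corner belongs to
// Only faces within creaseAngleDeg of the owning face contribute, so a
// cube corner gets three distinct normals, one per face, and stays sharp.
Vec3 smoothedNormal(const std::vector<Vec3>& faceNormals,
                    const std::vector<int>& adjacentFaces,
                    int owningFace, float creaseAngleDeg) {
    float cosThreshold = std::cos(creaseAngleDeg * 3.14159265f / 180.0f);
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (int f : adjacentFaces) {
        // Skip neighbours that sit across a sharp edge from the owning face.
        if (dot(faceNormals[owningFace], faceNormals[f]) >= cosThreshold)
            sum = sum + faceNormals[f];
    }
    // owningFace is in its vertex's adjacency list, so sum is never zero.
    return normalize(sum);
}
```

On a flat patch every neighbour passes the test and this degenerates to the original averaging loop; at a cube corner only the owning face passes, so each face keeps its own flat normal.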
OpenGL doesn't seem to suffer from this problem even in immediate mode; so how on earth does it calculate vertex normals, when the only data it has is the single polygon passed between a glBegin() and glEnd() call!?
Any help appreciated.