Why a normal to a vertex?

This is a question that’s killing me - I’m a mathematician and therefore understand well the concept of a normal to a surface, but why in OpenGL is there suddenly the concept of normals to a VERTEX? Why in the world would you want to declare a normal to a vertex? What do you gain from being able to do that?

I would really appreciate any detail at all, this one’s killing me.

Thanks

Say you had an icosahedron (a polyhedron with 20 triangular faces) and you wanted to calculate the normals for it. You would, as you say, have one normal per face. However, if that same icosahedron were actually an attempt to model a sphere, then you want one normal per vertex. Continuous surfaces can only be approximated by flat faces, but lighting and other phenomena (perhaps - I don't know what else uses normals) can be made more realistic with per-vertex normals, thereby making the entire image look more realistic.
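To make the face-normal / vertex-normal distinction concrete, here is a minimal C sketch of both (the helper names are mine, not from any library): a face normal from the cross product of two edges, and smooth vertex normals built by averaging the normals of every face that shares a vertex.

#include <math.h>

static void normalize3(float v[3])
{
    float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    if (len > 0.0f) { v[0] /= len; v[1] /= len; v[2] /= len; }
}

/* normal of the triangle (p0, p1, p2): cross product of two edges */
static void face_normal(const float p0[3], const float p1[3],
                        const float p2[3], float n[3])
{
    float u[3] = { p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2] };
    float w[3] = { p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2] };
    n[0] = u[1]*w[2] - u[2]*w[1];
    n[1] = u[2]*w[0] - u[0]*w[2];
    n[2] = u[0]*w[1] - u[1]*w[0];
    normalize3(n);
}

/* smooth vertex normals: accumulate each face's normal into its three
   vertices, then renormalize - this is what fakes the curved surface */
static void vertex_normals(const float (*verts)[3], int nverts,
                           const int (*tris)[3], int ntris,
                           float (*normals)[3])
{
    int i, k;
    for (i = 0; i < nverts; ++i)
        normals[i][0] = normals[i][1] = normals[i][2] = 0.0f;
    for (i = 0; i < ntris; ++i) {
        float n[3];
        face_normal(verts[tris[i][0]], verts[tris[i][1]], verts[tris[i][2]], n);
        for (k = 0; k < 3; ++k) {
            normals[tris[i][k]][0] += n[0];
            normals[tris[i][k]][1] += n[1];
            normals[tris[i][k]][2] += n[2];
        }
    }
    for (i = 0; i < nverts; ++i)
        normalize3(normals[i]);
}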

If you are confused about setting normals for each vertex, you could use something like this:

glNormal3f(a,b,c);
render_my_polygon_now();
glNormal3f(d,e,f);
render_another_polygon_now();

Hmm… it could be a problem to use this approach with vertex arrays, though…
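(For the record, vertex arrays actually make per-vertex normals easy: you hand GL a second array with one normal per vertex. A rough sketch, assuming a current GL context and that verts, normals and vertexCount are defined and filled in elsewhere - those names are just made up here:)

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);      /* x,y,z per vertex */
glNormalPointer(GL_FLOAT, 0, normals);       /* one x,y,z normal per vertex */
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);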

But assigning a normal to each vertex is great, because normals don't have to be perpendicular to the polygon. For example, when you are drawing a sphere, the normals lie on the line connecting the center of the sphere and the vertex you are currently drawing. This produces smooth lighting that isn't broken up at the face edges.
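For the sphere case that vertex normal costs almost nothing to compute: for a sphere centered at the origin it is just the vertex position scaled to unit length. A tiny immediate-mode sketch (x, y, z are the coordinates of the vertex being drawn; needs <math.h> for sqrtf):

float len = sqrtf(x*x + y*y + z*z);   /* equals the radius for a perfect sphere */
glNormal3f(x/len, y/len, z/len);      /* points from the center through the vertex */
glVertex3f(x, y, z);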

Thanks. That kinda makes sense. I guess I just need to start playing around with it.

So if I’m rendering, for example, a rectangular prism would I have anything to gain from declaring vertex normals over surface normals? And to further muddy the waters: if I rendered the rectangular prism without doing any vertex sharing, it would actually be 24 vertices, so I could have 24 vertex normals - any reason to use 24 vertex normals as opposed to 8?

Thanks again

@DalTXColtsFan
“So if I’m rendering, for example, a rectangular prism would I have anything to gain from declaring vertex normals over surface normals?”

Well, theoretically speaking you would be better off in the above situation if you could specify a normal per surface, but OpenGL doesn’t allow you to do this. So to render a rectangular prism (as opposed to a really low-res sphere :stuck_out_tongue: ) you should use your 24 vertices, 24 normals approach.
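In practice that just means each face carries its own copies of the corner positions, all sharing that face's normal. A sketch of the +Z face of a unit cube in immediate mode; the other five faces repeat the pattern with their own normals, which is where the 24 vertices / 24 normals come from:

glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);   /* one flat normal for the whole face */
glVertex3f(0.0f, 0.0f, 1.0f);
glVertex3f(1.0f, 0.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(0.0f, 1.0f, 1.0f);
glEnd();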

@shinpaughp
Nice Explanation

Thanks everyone.

I didn’t realize you really can’t define a normal to a surface. So when you write a block of code like this:

glBegin(GL_QUADS);
glNormal3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(1.0, 1.0, 0.0);
glVertex3f(1.0, 0.0, 0.0);
glEnd();

the normal is actually propagated to each of the 4 vertices and not to the surface?

And lastly, what if I call

glFront(GL_CCW);

but declare a normal that’s in the opposite direction? What would that affect (theoretically)?

Thanks
Joe

Originally posted by DalTXColtsFan:
This is a question that’s killing me - I’m a mathematician and therefore understand well the concept of a normal to a surface, but why in OpenGL is there suddenly the concept of normals to a VERTEX? Why in the world would you want to declare a normal to a vertex? What do you gain from being able to do that?

I would really appreciate any detail at all, this one’s killing me.

Thanks

One word: Gouraud shading (OK, two words).

While Gouraud shading is mathematically incorrect (the interpolation doesn’t take perspective/depth into account), it lets you do a fast linear interpolation of color values to create an image that “looks” 3D enough.

Let’s say one normal per vertex gives you a fast approximation of a lit, “rounded-looking” 3D object composed of “big” triangles, using linear color interpolation.
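In code terms the idea is roughly this (a diffuse-only sketch of per-vertex lighting, not GL's exact lighting equation; the function and parameter names are made up):

/* evaluate lighting at a vertex using that vertex's own normal */
float diffuse_at_vertex(const float n[3], const float l[3],
                        float mat_diffuse, float light_diffuse)
{
    float NdotL = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];   /* normal . light dir */
    if (NdotL < 0.0f) NdotL = 0.0f;
    return mat_diffuse * light_diffuse * NdotL;
}
/* the rasterizer then linearly interpolates the three vertex colors
   across the triangle - cheap, but only an approximation */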

Good enough?

glFrontFace(GL_CW or GL_CCW) (that’s the function you mean, right?) is only used for backface culling and has nothing to do with the normal. Inverting the normal has exactly the same effect on the lighting whether or not you also change the winding order of the vertices.
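A quick sketch of that, assuming the camera looks down -Z at this quad and a light sits somewhere in front of it:

glFrontFace(GL_CCW);        /* the default */
glEnable(GL_CULL_FACE);     /* culling uses the winding, not the normal */

glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, -1.0f);   /* normal points away from the viewer... */
glVertex3f(0.0f, 0.0f, 0.0f);    /* ...but the winding is still CCW, so   */
glVertex3f(1.0f, 0.0f, 0.0f);    /* the quad is not culled - it is simply */
glVertex3f(1.0f, 1.0f, 0.0f);    /* lit as if it faced the other way      */
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();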

And you can effectively specify surface normals in OpenGL: glShadeModel(GL_FLAT) will calculate the lighting equation once and apply the result to the whole primitive being drawn. Using smooth shading and specifying the same normal for each vertex is not the same thing.
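Something like this, as a sketch (lighting assumed already set up; draw_mesh is just a stand-in for whatever emits your primitives):

glShadeModel(GL_FLAT);    /* lighting evaluated at one vertex per primitive and
                             applied to the whole primitive - per-face shading */
draw_mesh();

glShadeModel(GL_SMOOTH);  /* lighting evaluated at every vertex and
                             interpolated across the primitive (Gouraud) */
draw_mesh();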

Thanks for the input everyone - I think I’m going to play around with it for a while, trying different combinations of things and seeing what happens.

Or as David Lee Roth once said, “Raise it up the flagpole and see who salutes!”

Hi!

Normals in GL only affect the lighting (and thus the colouring) of polygons. If you enable the GL_SMOOTH shading model, unextended GL computes the lit colour for each vertex as if the surface there had the normal specified for that vertex. Then it interpolates the colour values between vertices. This is called Gouraud shading. It’s a kind of curved-surface simulation. With GL_SMOOTH enabled, GL does this even if the vertices were all given the same normal (i.e. they belong to a flat surface). Just try to light them with a point light relatively close to them: you will see variation in the colour of the surface you rendered, since light from a point light will hit the vertices at different angles.
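A quick way to see that, roughly (a sketch; assumes the rest of the GL state is set up, and the positions are made-up values):

GLfloat light_pos[] = { 0.5f, 0.5f, 0.2f, 1.0f };   /* w = 1 -> positional light */
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glShadeModel(GL_SMOOTH);

glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);    /* identical normal at every vertex */
glVertex3f(0.0f, 0.0f, 0.0f);    /* yet the quad still shades unevenly, */
glVertex3f(1.0f, 0.0f, 0.0f);    /* because the direction to the nearby */
glVertex3f(1.0f, 1.0f, 0.0f);    /* point light differs at each vertex  */
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();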

As I said, normals only affect lighting, so they’re independent of backface culling.

Regards
Tom

… unless you have two-sided lighting on, in which case the face winding can automatically flip the normal (and the material, of course).