From an online tutorial I learned how to add some simple lights to my program. In the tutorial, the author uses normals to generate the lighting on his model. Being lazy and a little experimental, I tried lighting without normals.
Now the lighting doesn’t seem right, yet it still seems to work overall. My model has a gradient of lighting from one side to the other, but I can’t make out individual faces.
I’m wondering why this is, and what adding normals will do to my model. Will it make the faces easier to see?
FYI: I have a generated terrain of faces, so adding normals is not a minor job (although I know the math and will implement it soon). I just wanted to understand lighting a bit more.
In 3D rendering, a normal describes the orientation of a point on a 3D surface. This information is necessary to determine how incident light hits the surface and to compute its radiance.
At least under the glBegin/glEnd paradigm, there is a notion of the “current normal”, which exists and has a value even if you never call glNormal*.
From the OpenGL 2.1 spec: “The initial current normal has coordinates (0, 0, 1).”
As I read the OpenGL 2.1 spec, glDrawArrays and the like are defined in terms of glBegin/glEnd in such a way that the “current normal” would be used for all vertices if you enabled lighting but didn’t enable the normal array.
I don’t know about the more recent GL specs.
Yes, adding normals makes the faces look much more three-dimensional: each face is then shaded according to its own orientation relative to the light, so adjacent faces with different orientations receive visibly different amounts of light and their edges become easy to see.