weird smooth shading, vertex normals?

Well, this is my first post on this forum and I don't speak English very well, so I'd like to apologize for any mistakes I might make.

I’m doing my first program using OpenGL with C (it may end up as a videogame, but at the moment it’s only a model loader and viewer).

The program loads a model from an .OBJ file, which I found the easiest format to load. The problem is that the OBJ file contains normal data, but there isn't always one normal per vertex, nor one normal per polygon/face. However, each face says which normal it uses, so I add that normal to each of the face's vertices, and after loading all the faces I divide each vertex normal by the number of normals that were added to it. So I end up with the vertex normals (or at least that is what I think).
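In rough C, what I do is something like this (the struct and names are simplified for the post, not the exact code from my loader):

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 pos;
    Vec3 normal;   /* face normals get accumulated here */
    int  count;    /* how many face normals were added */
} Vertex;

/* While loading a face: add the face's normal to each of its vertices. */
void accumulate_face_normal(Vertex *verts, const int *face_idx,
                            int face_size, Vec3 face_normal)
{
    for (int i = 0; i < face_size; i++) {
        Vertex *v = &verts[face_idx[i]];
        v->normal.x += face_normal.x;
        v->normal.y += face_normal.y;
        v->normal.z += face_normal.z;
        v->count++;
    }
}

/* After loading all faces: divide by the count (a plain average). */
void average_vertex_normals(Vertex *verts, int num_verts)
{
    for (int i = 0; i < num_verts; i++) {
        if (verts[i].count > 0) {
            verts[i].normal.x /= verts[i].count;
            verts[i].normal.y /= verts[i].count;
            verts[i].normal.z /= verts[i].count;
        }
    }
}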

The problem comes when I test my program. The model loads fine, but the smooth shading is not working properly (the best example is the cube): when I rotate the scene, the shading on the cube changes a bit, in a weird way.

Here is my source code with the compiled .exe (I'm using AllegroGL too, because in the future I will use some Allegro functions together with OpenGL).
I'm sorry that the source code isn't documented yet, but I didn't have time for that.

I hope someone can tell me, from the way the program behaves, whether the problem is with the vertex normals or with the light.

Thank you very much, and sorry for the inconvenience.

http://www.megaupload.com/?d=AEDH4LR9

EDIT: to move around use the arrow keys and to rotate WASD keys.

(did not look at the source)

  1. There can be multiple normals for a given vertex; for example, in the cube each vertex has 3 different normals. This is not the case if you work with very smooth models; in that case your method is almost correct. It is not "average", but rather "normalize": indeed, it is very important that each normal is unit length, which is not guaranteed by simple averaging.
    sqrt(dx*dx + dy*dy + dz*dz) = 1.0

  2. Vertex shading is always a bit weird when the tessellation is not high enough.
    Read pitfall 2 here (others can help too) :
    http://www.opengl.org/resources/features/KilgardTechniques/oglpitfall/

Well, that is what I'd call a really fast answer. I'm on my Vista session right now so I can't fix it, but I will test it in a few minutes and edit this reply. Thanks, that is probably the problem. Can you help me a bit with the "normalize" thing?
For example, if I have a vertex normal (2.3, 0.3, 0.6), how can I normalize it? (Sorry for my ignorance, I'm only 16 years old and haven't learned about normals at school yet.)

Thanks again =)

A normal should be a 3D vector whose length is equal to one.
So you first calculate the length of your current vector:

l = sqrt(x*x + y*y + z*z)
l = sqrt(2.3*2.3 + 0.3*0.3 + 0.6*0.6) = 2.396

Then divide each of its coordinates by it to build the correctly normalized normal:
nx = x/l;
ny = y/l;
nz = z/l;

There you go. You can check that sqrt(nx*nx + ny*ny + nz*nz) is indeed equal to 1.0.
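In C that could look like this (the Vec3 type and function name are just for illustration):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Scale a vector so that its length becomes 1.0 (zero vectors are left alone). */
Vec3 vec3_normalize(Vec3 v)
{
    float l = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    if (l > 0.0f) {
        v.x /= l;
        v.y /= l;
        v.z /= l;
    }
    return v;
}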

Thanks, I understand it now. I normalize instead of averaging. I think one of the problems is solved; however, I think I still have a problem with the lighting.

As you can see, the lighting seems OK for the areas that are illuminated, but the dark areas look horrible; I mean, the shadows aren't smooth enough. But I also noticed that the floor is rendered all in the same colour, as if the light were at the same distance from every vertex of the floor (which is not the case). Could this have something to do with the position of the light? I read something about the light having four position variables instead of three (x, y, z, w?), so this is how I initialize that light:

...
luz0.pos[0]=15.0;
luz0.pos[1]=20.0;
luz0.pos[2]=30.0;
luz0.pos[3]=1.0;
...
glLightfv(GL_LIGHT0,GL_POSITION,luz0.pos);
...

Thanks very much, and sorry again xD

It looks like your normals are still not correct. I advise you to display the normals (draw them as lines at each vertex); that is the best way to debug.
About the 4th coordinate in the light position vector: it tells OpenGL whether the light is directional (w=0) or positional/omnidirectional (w=1).
The reason for this is that with homogeneous coordinates, when the 4th coordinate is 0 the 4D vector represents a direction, and when it is 1, a position in Euclidean space.
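For example (a small sketch; the helper function name is made up):

#include <GL/gl.h>

/* Set up GL_LIGHT0 either as a positional light at (15, 20, 30)
   or as a directional light shining along that same vector. */
static void setup_light(int directional)
{
    GLfloat pos[4] = { 15.0f, 20.0f, 30.0f, directional ? 0.0f : 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, pos);
}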

Thanks for the advice; I wrote the code to draw the normals.
This is how it looks with the normals drawn:

So now I have two questions:

  1. Is there any problem with the normals? I'm a beginner at this, but I see they are pointing in the right direction, and their length is 1.0, I think.
    EDIT: The top and bottom normals (of the sphere) are clearly wrong; the others "seem" to be fine, but now I'm doubting that too… I will check with other models.
    EDIT2: I checked with a cube with 192 faces and almost all the normals are wrong, so there's obviously something wrong with my normal-generation code. I will try to get that fixed; thanks anyway.

  2. If my 4th coordinate for the light source is 1.0, why doesn't the shading change across the vertices of the floor?

Thank you all guys, I’m learning a lot from you.


I found the error in my normal calculation; the normals are pointing better than before now, but still not perfectly. Just look at this cube with 12 triangles (I used this example because maybe someone knows, or can calculate, the normals):

Thanks again

The result you get is logical, looking at the normals. It happens because you compute the average normal at a vertex over all the triangles that use it.
When you have this kind of shape, you need to create smoothing groups in which the orientations of the triangles do not differ too much from their neighbours'. I hope you see what I mean; if you are used to a 3D modeler like 3ds Max, I think you will. This kind of work (smoothing groups) is usually done by artists in the modeler (and stored in the model itself). Then you just have to export and import your model and all the normals should be correct.

For the sphere: just a guess, but this looks like your material has a really low specular exponent. Try raising it a bit to see what happens. For the cube: you can't use a single vertex for every face, because in OpenGL there is a 1 vertex - 1 normal relation. With smooth shading this gives the expected behaviour, but with models that require flat shading, like the cube, it doesn't work. Instead you must
1) either detect at load time whether a vertex id corresponds to more than one normal id and, if it does, create a different OpenGL vertex for each combination (remember, in OpenGL there is a 1-1 vertex-normal relation) to use in your faces (see the sketch after this list),
OR
2) or use flat shading; but in OpenGL this is a bit tricky, as it requires you either to supply your face normal as a vertex normal (see the OpenGL SDK, glShadeModel) or to update your normal state between draw calls, which in turn increases your draw calls enormously and drops performance.
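A minimal sketch of option 1, building one OpenGL vertex per unique (position index, normal index) pair found in the OBJ faces; the types and names here are hypothetical, not taken from your loader:

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 pos;
    Vec3 normal;
} GLVertex;

/* Return the output index for the pair (vi, ni), creating a new OpenGL
   vertex only if this combination has not been seen before.
   Linear search keeps the sketch short; a hash map scales better. */
int get_or_create_vertex(GLVertex *out, int *out_count,
                         int *seen_vi, int *seen_ni,
                         const Vec3 *obj_positions, const Vec3 *obj_normals,
                         int vi, int ni)
{
    for (int i = 0; i < *out_count; i++) {
        if (seen_vi[i] == vi && seen_ni[i] == ni)
            return i;                      /* already created, reuse it */
    }
    int i = (*out_count)++;
    seen_vi[i] = vi;
    seen_ni[i] = ni;
    out[i].pos    = obj_positions[vi];
    out[i].normal = obj_normals[ni];
    return i;                              /* brand new OpenGL vertex */
}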

PS: your sphere and ground normals look fine to me, that’s why I think it’s a specular problem…

Confirmed it by looking at the source files. Increase the shininess a lot, to somewhere in the 10s-20s; the range is 0-128, but values close to zero produce these artifacts.
I also see you're a blenderhead. Bear in mind that the material values inside Blender don't correspond exactly to OpenGL values, since OpenGL uses a single specular and diffuse model (shader, in Blender terms).
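For example, the exponent is a single material call (the value here is just a reasonable starting point):

/* Shininess (specular exponent) must be in the range 0-128. */
glMaterialf(GL_FRONT, GL_SHININESS, 20.0f);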

Just a guess but, this looks like your material has really low specular exponent. Try raising it a bit to see what happens

I'm so ashamed, that was the real problem (solved when I wrote the code to load the materials from the files instead of always using the same one, which was wrong after all). Anyway, this thread has taught me a lot of things.

Can you tell me a little more about that materials thing in Blender? I'm a beginner at modelling.

Thank you all guys, now my program seems to work fine, so I can go on.

It is just that, for easier conversion of Blender materials to OpenGL, you should use the same colour for the diffuse and the specular inside your Blender material.

There are more advanced techniques to do separate specular color in GL, but you will see that later.

Well, there are some pitfalls if you use texturing (the specular must be added AFTER texturing, and there is state to control this behaviour), but personally I don't believe it's that hard to specify both a diffuse and a specular colour in OpenGL materials.
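For reference, the state in question is most likely the separate specular colour light model (available since OpenGL 1.2):

/* Apply the specular term after texturing instead of folding it
   into the pre-texture colour (OpenGL 1.2 and later). */
glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR);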

What I meant about materials is that OpenGL uses something like the Lambert diffuse and CookTorr specular shaders (in Blender terms), so if you use a different shader in Blender you won't get the same result in OpenGL unless you use shaders. Also, as noted, Blender values are usually in the range 0-1 while the OpenGL exponent is in the range 0-128.

Blender's specular exponent is called "Hard(ness)", with a default value of 50, and it is very similar to the GL exponent.

Well, maybe this comparison helps; it's a sphere made in Blender and exported to OpenGL (with .obj and .mtl files). I know the Blender one looks better, but the materials look the same (though I'm not used to all this 3D stuff, so maybe you can spot the differences):

The only difference between the two renderings above is that the sphere on the right is shaded using per-pixel lighting and the one on the left using Gouraud shading, typical of the OpenGL fixed pipeline (lighting is computed at each vertex and interpolated across each fragment).

Well, so the materials are loaded OK. Is there any way of using per-pixel lighting in OpenGL?

Yes you can, using shaders. See GLSL or Cg:

A nice tutorial on GLSL here:
http://www.lighthouse3d.com/opengl/glsl/

For Cg, see the nvidia developers site.
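As a rough idea of what that looks like in practice, here is a minimal sketch of a per-pixel diffuse light using old-style GLSL 1.10 with the fixed-function built-ins; error checking is omitted, and depending on your setup the gl* shader entry points may need an extension loader:

#include <GL/gl.h>

/* Vertex shader: pass the eye-space normal and light direction along. */
static const char *vs_src =
    "varying vec3 normal, lightDir;\n"
    "void main() {\n"
    "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;\n"
    "    normal   = gl_NormalMatrix * gl_Normal;\n"
    "    lightDir = gl_LightSource[0].position.xyz - eyePos.xyz;\n"
    "    gl_Position = ftransform();\n"
    "}\n";

/* Fragment shader: re-normalize and compute the diffuse term per pixel. */
static const char *fs_src =
    "varying vec3 normal, lightDir;\n"
    "void main() {\n"
    "    float d = max(dot(normalize(normal), normalize(lightDir)), 0.0);\n"
    "    gl_FragColor = gl_FrontLightModelProduct.sceneColor\n"
    "                 + gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * d;\n"
    "}\n";

static GLuint build_program(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(vs);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;
}

/* Call glUseProgram(build_program()) once before drawing to enable it. */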

OK, I'll try to implement that in my loader tomorrow. How much slower will my loader run if I use per-pixel lighting?