Finally getting my normals to work

Hello everyone!

What I’m trying to do is use the normals I get from an OBJ file. I recently had to re-code large parts of my program to accommodate new standards. Before I did this, the normals worked. I was, however, using a weird mix of fixed-pipeline and newer functions. I think my code is still pretty weird, but at least it worked (until now).
Here is how it looks:

Here is the model in Blender:

Here’s how I’m drawing:
First I get the data from my obj file as seen in this tutorial: Tutorial 7 : Model loading
Then I use it to render the arm.
Once, at setup:

glGenBuffers(1, &vbo_arm_vertices);
glBindBuffer(GL_ARRAY_BUFFER, vbo_arm_vertices);
glBufferData(GL_ARRAY_BUFFER, temp_arm_coords.size(), temp_arm_coords.data(), GL_STATIC_DRAW);

and in the drawing loop:

glBindBuffer(GL_ARRAY_BUFFER, vbo_arm_vertices);
glBufferData(GL_ARRAY_BUFFER, armVertexFinalData.size()*sizeof(GLfloat), &armVertexFinalData[0], GL_STATIC_DRAW);
glVertexAttribPointer(
    attribute_coord3d, // attribute
    3,                 // number of elements per vertex, here (x,y,z)
    GL_FLOAT,          // the type of each element
    GL_FALSE,          // take our values as-is
    0,                 // no extra data between each position
    0                  // offset of first element
);

glDrawArrays(GL_TRIANGLES, 0, armVertexFinalData.size()/3);

basically.
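
(One detail not shown above: the attribute has to be enabled before drawing, and can be disabled afterwards. A minimal sketch of the whole sequence, assuming attribute_coord3d was fetched with glGetAttribLocation:)

// Sketch only: enable the attribute, point it at the bound VBO, draw, then disable.
glEnableVertexAttribArray(attribute_coord3d);
glBindBuffer(GL_ARRAY_BUFFER, vbo_arm_vertices);
glVertexAttribPointer(attribute_coord3d, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, armVertexFinalData.size() / 3);
glDisableVertexAttribArray(attribute_coord3d);
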
Now I try to get the normals in there.

glEnableVertexAttribArray(attribute_v_normal);
glBindBuffer(GL_ARRAY_BUFFER, armNormalVBO);
glVertexAttribPointer(
    attribute_v_normal, // attribute
    3,                  // number of elements per vertex, here (nx,ny,nz)
    GL_FLOAT,           // the type of each element
    GL_FALSE,           // take our values as-is
    0,                  // no extra data between each normal
    0                   // offset of first element
);

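(armNormalVBO is created and filled the same way as vbo_arm_vertices, roughly like this, assuming armNormalFinalData is a flat std::vector<GLfloat> with one (nx,ny,nz) triple per vertex in the same order as the positions:)

// Sketch: create and fill the normal buffer once at setup.
glGenBuffers(1, &armNormalVBO);
glBindBuffer(GL_ARRAY_BUFFER, armNormalVBO);
glBufferData(GL_ARRAY_BUFFER, armNormalFinalData.size() * sizeof(GLfloat),
             armNormalFinalData.data(), GL_STATIC_DRAW);
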
How would I have to modify my shader to use the normals?

Vertex shader:

attribute vec3 coord3d;
attribute vec3 v_color;
attribute vec3 v_normal;
uniform mat4 mvp;
varying vec3 f_color;
void main(void) {
    gl_Position = mvp * vec4(coord3d, 1.0);
    f_color = v_color;
}

Fragment shader:

varying vec3 f_color;
void main(void) {
    gl_FragColor = vec4(f_color.x, f_color.y, f_color.z, 1.0);
}

I first tried to get my normals in with glNormalPointer, like this:

glEnableClientState(GL_NORMAL_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, armNormalVBO);
glBufferData(GL_ARRAY_BUFFER, armNormalFinalData.size()*sizeof(GLfloat), &armNormalFinalData[0], GL_STATIC_DRAW);
glNormalPointer(GL_FLOAT, 0, NULL);

But that didn’t work.
Now I don’t even know where to start looking, so could anyone please help me?
I want to say that I really only need the simplest kind of ambient light; nothing fancy or good-looking at all. I need to finish this project and I keep running into new problems. I had no trouble understanding fixed-pipeline OpenGL, but the whole shader stuff just doesn’t work with my brain. :(

In fixed-function OpenGL, you provide a normal pointer and set up your matrices on the GL_MODELVIEW and GL_PROJECTION stacks via glMatrixMode, and then OpenGL does a bunch of smart stuff for you to do the lighting. With shaders, we get to do all the fun stuff ourselves. You are right to try to use glVertexAttribPointer to provide the normals.

I had trouble understanding this when I switched to shaders as well.

What you are missing is providing a “Normal Matrix” to change incoming normals from model space to eye space, since the eye position modifies how light plays on an object. See Lighthouse 3d’s tutorial.
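
If you build your matrices with GLM (as in the opengl-tutorial.org series you followed), the normal matrix can be computed on the CPU and handed to the shader as an extra uniform. A rough sketch, with the uniform name "normal_matrix" chosen just as an example:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch: the normal matrix is the inverse-transpose of the upper-left 3x3
// of the modelview matrix (a plain mat3(modelview) also works if you only
// rotate, translate, and scale uniformly).
glm::mat4 modelview = view * model;   // however you build these in your code
glm::mat3 normal_matrix = glm::transpose(glm::inverse(glm::mat3(modelview)));

GLint uniform_normal_matrix = glGetUniformLocation(program, "normal_matrix");
glUseProgram(program);
glUniformMatrix3fv(uniform_normal_matrix, 1, GL_FALSE, glm::value_ptr(normal_matrix));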

Thanks a lot for the help. I read the tutorial you linked to but have trouble understanding where to provide said matrix.
Also, when I try the

attribute_v_normal = glGetAttribLocation(program, "v_normal");

line as I’m doing with v_color and coord3d, glGetAttribLocation() always returns -1.
Could you or one of the other guys please help me with this again?

If you don’t use a vertex input in your program, most GLSL compilers will optimize it out. The vertex shader in the original post does not use v_normal in any calculation that is exported to the next stage. You’d need to create a ‘varying vec3 N’ variable which contains the result of normal_matrix * v_normal, and then use that in the fragment shader for lighting calculations.
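
To make that concrete, here is a minimal sketch of how the two shaders could look, assuming a mat3 uniform called normal_matrix is uploaded as described above (the names are just examples):

Vertex shader:

attribute vec3 coord3d;
attribute vec3 v_color;
attribute vec3 v_normal;
uniform mat4 mvp;
uniform mat3 normal_matrix;
varying vec3 f_color;
varying vec3 N;
void main(void) {
    gl_Position = mvp * vec4(coord3d, 1.0);
    N = normal_matrix * v_normal; // now v_normal is actually used, so it won't be optimized out
    f_color = v_color;
}

Fragment shader:

varying vec3 f_color;
varying vec3 N;
void main(void) {
    // Very simple directional + ambient term, just to see the normals doing something.
    vec3 L = normalize(vec3(0.0, 0.0, 1.0)); // example light direction in eye space
    float diffuse = max(dot(normalize(N), L), 0.0);
    float ambient = 0.3;                     // example ambient amount
    gl_FragColor = vec4(f_color * (ambient + diffuse), 1.0);
}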

It’s important to realize that this can also happen with uniform values. Long story short: verify that your uniform and attribute locations are valid before using them for the first time. This has bitten me personally multiple times.
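
For example, something as simple as this right after linking the program catches both cases (the uniform name is just the example from above):

// Sketch: fail loudly if a location comes back as -1 (typo, or optimized out).
attribute_v_normal = glGetAttribLocation(program, "v_normal");
if (attribute_v_normal == -1)
    fprintf(stderr, "Could not bind attribute v_normal\n");

GLint uniform_normal_matrix = glGetUniformLocation(program, "normal_matrix");
if (uniform_normal_matrix == -1)
    fprintf(stderr, "Could not bind uniform normal_matrix\n");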

Thank you, guys. I now have an OK lighting situation. I’m not using the normals from the OBJ file at all now, but rather have the vertex shader like this:

attribute vec3 coord3d;
attribute vec3 v_color;
attribute vec3 v_normal;

uniform mat4 mvp;

varying vec3 normals;
varying vec3 f_color;
varying vec3 Id;

void main(void) {
    gl_Position = mvp * vec4(coord3d, 1.0);
    Id = vec3(gl_LightSource[0].position - gl_Position);
    normals = gl_NormalMatrix * gl_Normal;

    f_color = v_color;
}

and the fragment shader like this:

varying vec3 normals;
varying vec3 f_color;
varying vec3 Id;

void main(void)
{
    vec3 lightDir = normalize(Id);
    float intensity = dot(lightDir, normals);

    float factor = 1.0;

    gl_FragColor = vec4(factor*intensity, factor*intensity, factor*intensity, 1.0) + vec4(f_color.x, f_color.y, f_color.z, 1.0);
}

which looks alright (like this Pic-Upload.de - withlights.png) and is good enough for now, I guess.

If anyone has any ideas for how to make it look prettier, please be my guest. :)
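
One small suggestion along those lines: clamp the diffuse term, re-normalize the interpolated normal (interpolation de-normalizes it), and modulate the base colour instead of adding a full white term on top. A sketch of the fragment shader with those tweaks (the ambient value is just an example):

varying vec3 normals;
varying vec3 f_color;
varying vec3 Id;

void main(void)
{
    vec3 lightDir = normalize(Id);
    vec3 N = normalize(normals);                // re-normalize after interpolation
    float diffuse = max(dot(lightDir, N), 0.0); // no negative light
    float ambient = 0.25;                       // example ambient amount

    gl_FragColor = vec4(f_color * (ambient + diffuse), 1.0);
}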