Modern Pipeline without Matrix Stack Makes Objects Disappear

So I have recently been reading through the Wikibook on Modern OpenGL, and am looking to update my OpenGL code to conform to more modern standards. Everything goes well until I hit the part where I need to send three matrices (projection, view, model) to the shader; at that point my test object (a square rather than a triangle) disappears.

My vertex shader looks like this:

attribute vec3 coord3d;

attribute vec3 v_color;
varying vec3 f_color;

uniform mat4 projection;
uniform mat4 model;
uniform mat4 view;

void main(void)
{
    gl_Position = projection * view * model * vec4(coord3d, 1.0);
    f_color = v_color;
}

And my fragment shader like this:

varying vec3 f_color;
uniform float fade = 0.1;

void main(void) 
{
    gl_FragColor = vec4(f_color.x, f_color.y, f_color.z, fade);
}

Note that I have also tried PMV, since sources conflict on which order is correct. Here is how I render my test square. m_Orientation is a quaternion that stores the object’s orientation. For translation, I simply set the matrix’s elements 12, 13, and 14 to the X, Y, and Z coordinates of the object’s position, respectively (see the sketch after the render code below).

void TestRect::Cycle()
{
    m_Orientation.RotationMatrix(m_RotationMatrix);

    glUniformMatrix4fv(ModelUniform,
                       1,
                       GL_FALSE,
                       m_RotationMatrix);

    // Enable alpha
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glUniform1f(FadeUniform, 0.5);

    glEnableVertexAttribArray(CoordinateAttribute);
    glEnableVertexAttribArray(ColorAttribute);

    glBindBuffer(GL_ARRAY_BUFFER, m_VertexBuffer);

    glVertexAttribPointer(
        CoordinateAttribute,   
        3,                     
        GL_FLOAT,              
        GL_FALSE,              
        6 * sizeof(GLfloat),   
        0                      
        );

    glVertexAttribPointer(
        ColorAttribute,                 
        3,                              
        GL_FLOAT,                      
        GL_FALSE,                       
        6 * sizeof(GLfloat),            
        (GLvoid*) (3 * sizeof(GLfloat)) 
        );

    glDrawArrays(GL_TRIANGLES, 0, 6);

    glDisableVertexAttribArray(ColorAttribute);
    glDisableVertexAttribArray(CoordinateAttribute);

    glDisable(GL_BLEND);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
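
For completeness, the translation write mentioned above amounts to this (a minimal sketch; SetTranslation is just an illustrative helper, assuming the column-major layout that glUniformMatrix4fv expects when transpose is GL_FALSE):

    // Illustrative helper, not from my actual code: write a position into the
    // translation slots of a column-major 4x4 matrix.
    void SetTranslation(GLfloat* Matrix, float X, float Y, float Z)
    {
        Matrix[12] = X; // elements 12, 13, 14 form the fourth column,
        Matrix[13] = Y; // which holds the translation in column-major storage
        Matrix[14] = Z;
    }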

Now my previous system, which uses the apparently deprecated ftransform() as well as the definitely deprecated glEnableClientState(), works fine. I know I am loading and linking the shaders properly, since I have done so previously in the same program.

Using only the model matrix, without multiplying by projection or view, things work fine: I can rotate the square around any axis and translate it along the X and Y axes (Z-axis translations make it disappear, presumably because no projection is being applied). So I can only assume the model matrix is correct. I update it every time the model is drawn, like this:

    glUniformMatrix4fv(ModelUniform,
                       1,
                       GL_FALSE,
                       TransformMatrix);

The view matrix is derived from my camera class, which works fine in the old OpenGL path when I call glMultMatrixf(Camera.GetInvertedMatrix()). I assume that if the matrix works in glMultMatrixf(), the same matrix should work as the view matrix passed to the shader, right? In the new approach I update the uniform holding the view matrix once per frame, like this:

    glUniformMatrix4fv(m_ViewUniform,
                       1,
                       GL_FALSE,
                       Camera.GetInvertedMatrix());
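
To convince myself the two paths see the same matrix, I could dump the camera matrix next to what the fixed-function stack ends up with, roughly like this (a sketch; the printf formatting is only illustrative):

    // Sketch: load the camera matrix the old way, read it back, and compare
    // it element by element against what I pass to the shader.
    GLfloat OldStyle[16];
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMultMatrixf(Camera.GetInvertedMatrix());
    glGetFloatv(GL_MODELVIEW_MATRIX, OldStyle);

    const GLfloat* NewStyle = Camera.GetInvertedMatrix();
    for (int i = 0; i < 16; ++i)
        printf("[%2d] old = % f   new = % f\n", i, OldStyle[i], NewStyle[i]);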

Finally, I used to use gluPerspective() for the projection, but since I am now supposed to supply my own matrices, I needed my own equivalent. I implemented the projection matrix like this:

void Game::SetPerspective(float FieldOfView, float Aspect, float zNear, float zFar, GLfloat* Matrix)
{
    // Half-extent of the near plane, from the vertical field of view
    float xyMax = zNear * tan(FieldOfView * 0.5f * PI / 180.0f);
    float yMin = -xyMax;
    float xMin = -xyMax;

    float Width  = xyMax - xMin;
    float Height = xyMax - yMin; // was xyMax - xMin; same value here, but yMin is the intended term

    float Depth = zFar - zNear;
    float q  = -(zFar + zNear) / Depth;
    float qn = -2 * (zFar * zNear) / Depth;

    float w = 2 * zNear / Width;
    w = w / Aspect;
    float h = 2 * zNear / Height;

    // Column-major order, matching glUniformMatrix4fv with transpose = GL_FALSE
    Matrix[0]  = w;
    Matrix[1]  = 0;
    Matrix[2]  = 0;
    Matrix[3]  = 0;

    Matrix[4]  = 0;
    Matrix[5]  = h;
    Matrix[6]  = 0;
    Matrix[7]  = 0;

    Matrix[8]  = 0;
    Matrix[9]  = 0;
    Matrix[10] = q;
    Matrix[11] = -1;

    Matrix[12] = 0;
    Matrix[13] = 0;
    Matrix[14] = qn;
    Matrix[15] = 0;
}

And I call that code exactly once, during program setup, doing this:

    SetPerspective(60.f, 1.33f, 0.1f, 512.f, m_Projection);
    // Used to be gluPerspective(60.f, 1.33f, 0.1f, 512.f);

    glUniformMatrix4fv(m_ProjectionUniform,
                       1,
                       GL_FALSE,
                       m_Projection);

Now somewhere I must be doing something wrong, because multiplying the projection and view matrices into the coordinates makes my square disappear entirely. But I can’t figure out what’s wrong.

I also have two old-style shaders and the matrix stacks working in the background, so I am fairly sure I am not misreading the attribute or uniform locations.

So what might I be doing wrong?

[QUOTE=Boreal;1248733]So I have recently been reading through the Wikibook on Modern OpenGL, and am looking to update my OpenGL code to conform to more modern standards. Everything goes well until I hit the part where I need to send three matrices (projection, view, model) to the shader; at that point my test object (a square rather than a triangle) disappears.

[…]

So what might I be doing wrong?[/QUOTE]

At a cursory glance, it looks like you are missing a glBindBuffer() for your color attribute.
I generally like to set up my buffers like so:

    
    glEnableVertexAttribArray(CoordinateAttribute);        // ENABLE
    glBindBuffer(GL_ARRAY_BUFFER, m_VertexBuffer);     // BIND
    glVertexAttribPointer(                                          // SET ATTRIB  
        CoordinateAttribute,   
        3,                     
        GL_FLOAT,              
        GL_FALSE,              
        6 * sizeof(GLfloat),   
        0                      
        );

    glEnableVertexAttribArray(ColorAttribute);     // ENABLE
    glBindBuffer(GL_ARRAY_BUFFER, m_VertexBuffer);     // BIND (this is the call your code is missing)
    glVertexAttribPointer(                             // ATTRIB
        ColorAttribute,                 
        3,                              
        GL_FLOAT,                      
        GL_FALSE,                       
        6 * sizeof(GLfloat),            
        (GLvoid*) (3 * sizeof(GLfloat)) 
        );

Makes it a bit easier to track them.

Thanks,
Steve A.

Thanks for the reply!

Unfortunately, adding a glBindBuffer(GL_ARRAY_BUFFER, m_VertexBuffer) and reorganizing things as you propose doesn’t help, though I’ll keep that in mind for the future.

The object is visible under certain circumstances: when not multiplying any matrices into it, and when multiplying only the model matrix into it. It is not visible when the projection or view matrices are multiplied in.

I’ve just double-checked the validity of the matrices I generate in my old system, and completely removed the calls to glTranslatef() so that only matrices are used; they all seem to be working. I’ve also double-checked that I am actually linking to the uniforms in the shader, and that works as well.

Ideally I would like a way to test the projection and view matrices separately, to see which one is causing the problem. Using only one or the other of them always makes the object disappear, though. Is there any easy way I could use just the view matrix, for instance?
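
One approach I can think of is uploading identity for the matrices I want to neutralize, instead of editing the shader each time; something like this for a view-only test (a sketch, with the identity matrix written inline):

    // Neutralize projection and model by uploading identity, leaving only view.
    static const GLfloat Identity[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1 };

    glUniformMatrix4fv(m_ProjectionUniform, 1, GL_FALSE, Identity);
    glUniformMatrix4fv(ModelUniform,        1, GL_FALSE, Identity);
    glUniformMatrix4fv(m_ViewUniform,       1, GL_FALSE, Camera.GetInvertedMatrix());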

I’ve tried placing the camera 12 units away from the object along the Z axis, with no special rotation, expecting the object to appear farther from the camera but otherwise in the same place; that didn’t work. The square ought to be facing the camera, and I’ve disabled backface culling in any case, yet it still disappears.

I’ve also compared my projection matrix against what I get from glGetFloatv(GL_PROJECTION, Projection) (using GL_PROJECTION_MATRIX crashes the program), feeding the same values into my SetPerspective() function and into gluPerspective(), and the results differ significantly, but I’m not really sure why. Are there mistakes in my SetPerspective() function?
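
For reference, the comparison looked roughly like this (a sketch reconstructed from memory; the printf formatting is only illustrative):

    GLfloat Mine[16], Reference[16];
    SetPerspective(60.f, 1.33f, 0.1f, 512.f, Mine);

    // Let the fixed-function pipeline build the same projection, then read it back.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.f, 1.33f, 0.1f, 512.f);
    glGetFloatv(GL_PROJECTION, Reference);  // GL_PROJECTION_MATRIX is the documented
                                            // enum, but it crashes here as noted above

    for (int i = 0; i < 16; ++i)
        printf("[%2d] mine = % f   glu = % f\n", i, Mine[i], Reference[i]);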

You can probably simplify it a bit. This is in a different language but it shows a reduced projection matrix setup:


public function make_perspective_matrix(atom fov, atom aspect, atom near, atom far)

	sequence res
	atom f

	f = (1.0/tan(fov*3.1415926/360.0))

	res = IDENTITY_MATRIX         -- copy the Identity matrix
	
	res[1] = f/aspect
	res[6] = f
	res[11] = (far+near) / (near-far)
	res[12] = -1
	res[15] = (2.0*far*near)/(near-far)
	res[16] = 0

	return res

end function

The above system works well for my projects.
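
In C terms that would be roughly the following (an untested sketch; note the indices above are 1-based, so everything shifts down by one):

    void make_perspective_matrix(float fov, float aspect, float zNear, float zFar, GLfloat* res)
    {
        float f = 1.0f / tanf(fov * 3.1415926f / 360.0f);

        // Start from identity, then overwrite the perspective terms.
        for (int i = 0; i < 16; ++i)
            res[i] = (i % 5 == 0) ? 1.0f : 0.0f;

        res[0]  = f / aspect;
        res[5]  = f;
        res[10] = (zFar + zNear) / (zNear - zFar);
        res[11] = -1.0f;
        res[14] = (2.0f * zFar * zNear) / (zNear - zFar);
        res[15] = 0.0f;
    }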

Steve A.