gl_ModelViewMatrix / gl_ModelViewProjectionMatrix

What is the equivalent of gl_ModelViewMatrix and gl_ModelViewProjectionMatrix in modern OpenGL?

I have this code in a “#version 330 core” shader:

    gl_Position =

        //    Create a perspective projection:
        PerspectiveViewMatrix(90.0, AspectRatio, 0.01, 1000.0)

        //    Move back 4 + whatever the wheel says:
        *    TranslationMatrix(0, 0, -4 - 0.35*MouseWheel)

        //    Rotate around X with mouse Y, around Y with mouse X, around Z when strafing:
        *    RotationMatrix(MouseMovement.y, X_AXIS)
        *    RotationMatrix(-MouseMovement.x, Y_AXIS)
        *    RotationMatrix(float(Strafing*3), Z_AXIS)

        //    Apply the transformation matrix:
        *    TransformationMatrix

        *    in_Vertex;

Which part of it is the old gl_ModelViewMatrix and which part is gl_ModelViewProjectionMatrix? (And what is the gl_ProjectionMatrix that was used to create the ModelViewProjection?)

In modern OpenGL you handle the matrices yourself rather than having OpenGL do it, since glTranslate and the other matrix functions are deprecated (assuming you’re using a core profile).

Since gl_ModelViewMatrix and gl_ModelViewProjectionMatrix were removed from the core profile, you have to pass the matrices in as uniforms yourself.

    // note: newer GLM versions expect the field of view in radians (use glm::radians(60.0f))
    glm::mat4 m_projectionMatrix = glm::perspective(60.0, 1.333, 0.1, 1000.0);

    glm::mat4 m_modelViewMatrix = glm::lookAt(…);
    m_modelViewMatrix *= glm::translate(…) * glm::rotate(…) * (etc.);

    // pass these matrices in as uniforms to the shaders
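
For completeness, uploading them might look something like this (a sketch: the “shaderProgram” handle is a placeholder, the uniform names match the shader below, and a function loader such as GLEW or GLAD is assumed for the GL calls):

    #include <glm/gtc/type_ptr.hpp>   // glm::value_ptr

    // look up the uniform locations once after linking
    GLint projLoc = glGetUniformLocation(shaderProgram, "projectionMatrix");
    GLint mvLoc   = glGetUniformLocation(shaderProgram, "modelViewMatrix");

    // bind the program and upload the two matrices
    glUseProgram(shaderProgram);
    glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(m_projectionMatrix));
    glUniformMatrix4fv(mvLoc,   1, GL_FALSE, glm::value_ptr(m_modelViewMatrix));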

[shader: vertex]

    uniform mat4 projectionMatrix;
    uniform mat4 modelViewMatrix;

    in vec4 in_vertices; // attribute for the vertices

    void main()
    {
        gl_Position = projectionMatrix * modelViewMatrix * in_vertices;
    }

You can think of the matrices like this too, with the modelview split into a view (camera) part and a world (model) part:

    gl_Position = projectionMatrix * viewMatrix * worldMatrix * in_vertices;
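
For example, a rough sketch with GLM (the camera position, object placement, and rotation angle are made up purely for illustration):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::lookAt, glm::translate, glm::rotate

    // view matrix: where the camera is and what it looks at
    glm::mat4 viewMatrix  = glm::lookAt(glm::vec3(0.0f, 0.0f, 4.0f),    // eye
                                        glm::vec3(0.0f, 0.0f, 0.0f),    // target
                                        glm::vec3(0.0f, 1.0f, 0.0f));   // up

    // world (model) matrix: where the object sits and how it is oriented
    glm::mat4 worldMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f))
                          * glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f));

    // either upload viewMatrix and worldMatrix as separate uniforms, or collapse them first:
    glm::mat4 modelViewMatrix = viewMatrix * worldMatrix;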

Or create them in the shaders, GPU-side, which I think is closer to the spirit of current APIs/hardware.

I think hardcoding it the GLM way for everyone is a bit against the spirit of the API, which appears to be “do it your own way”.

Anyway, thanks for the example, it might offer insight into an alternative way.

EDIT:

What I mainly asked is what the equivalent of those old matrices would be in the new code posted above (in the OP). Simply reproducing the old ones with an external library defeats the purpose of the question a bit.

The equivalent matrices can be found in the OpenGL Red Book.
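
To make the mapping onto the OP’s code concrete, here is a hedged CPU-side sketch with GLM (aspectRatio, mouseWheel, mouseMovement, strafing and transformationMatrix are stand-ins for the shader’s inputs, and the angles are assumed to be in degrees):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // gl_ProjectionMatrix roughly corresponds to PerspectiveViewMatrix(...):
    glm::mat4 projection = glm::perspective(glm::radians(90.0f), aspectRatio, 0.01f, 1000.0f);

    // gl_ModelViewMatrix roughly corresponds to everything between it and in_Vertex
    // (the translation, the three rotations, and TransformationMatrix):
    glm::mat4 modelView =
          glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -4.0f - 0.35f * mouseWheel))
        * glm::rotate(glm::mat4(1.0f), glm::radians(mouseMovement.y), glm::vec3(1, 0, 0))
        * glm::rotate(glm::mat4(1.0f), glm::radians(-mouseMovement.x), glm::vec3(0, 1, 0))
        * glm::rotate(glm::mat4(1.0f), glm::radians(strafing * 3.0f), glm::vec3(0, 0, 1))
        * transformationMatrix;

    // gl_ModelViewProjectionMatrix was simply gl_ProjectionMatrix * gl_ModelViewMatrix:
    glm::mat4 modelViewProjection = projection * modelView;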

If you look through the posts and read the new 5th edition OpenGL SuperBible (the “Blue Book” focusing on OpenGL 3+, by Richard Wright Jr. et al.), the odds of finding the matrix math done completely on the GPU are less than a percent. Most texts and samples for OpenGL 3 use some sort of GLM-like library.

As for it being against the spirit of the new API, I don’t really see that in practice, texts, or examples. The API seems to be striving to be minimal, keeping only the code that relates to the GPU itself; hence it leaves out things like matrix stacks, lighting and materials, convolution filtering, display lists, and evaluators.

Maybe there are cases where you want to do the matrix operations on the GPU, but why would you want to compute something thousands of times per frame on the GPU, once for each vertex, rather than once on the CPU, and then simply apply it on the GPU as a single matrix multiplication in the last step? For instance, with RotationMatrix you could avoid recomputing costly sin/cos operations per vertex by doing them once on the CPU and sending the result as a uniform; the GPU would then not have to compute sin/cos at all.
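
A minimal sketch of that idea (program, angleDegrees and the uniform name rotationMatrix are placeholders):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // sin/cos run exactly once here, on the CPU ...
    glm::mat4 rotation = glm::rotate(glm::mat4(1.0f),
                                     glm::radians(angleDegrees),
                                     glm::vec3(0.0f, 1.0f, 0.0f));

    // ... and the finished matrix is uploaded once; per vertex, the shader only multiplies by it.
    glUniformMatrix4fv(glGetUniformLocation(program, "rotationMatrix"),
                       1, GL_FALSE, glm::value_ptr(rotation));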

Or create them in the shaders, GPU-side, which I think is closer to the spirit of current APIs/hardware.

Create them from what? The CPU is the one that knows all of the relevant information. Unless all your transforms are simple, you’re looking at a complicated set of rotations, scales, and translations (possibly among other things). How would you even pass this information up to the GPU, let alone have the GPU process it?

It’s just a waste of time when a simple 16 float matrix can do the job faster and more efficiently.

I think hardcoding it the GLM way for everyone is a bit against the spirit of the API, which appears to be “do it your own way”.

No, the spirit of the API is “do what works.” And matrices, however old they may be, work. That’s why they were used for so long, and it’s why they continue to be used. They’re an efficient, compact representation of a change in coordinate system.

There will be times when you need other solutions. Dual quaternions, for instance, are a good fit for skinning, where they have some nice properties. But the default case is matrices.

I now send the perspective, rotation, translation, and transformation matrices to GLSL, and they change on the OpenGL client side only if the corresponding input changes.

Now I’m considering whether there would be significant advantages if they were sent pre-multiplied.

In every possible case, there are advantages to pre-multiplying. Whether it’s significant is simply a function of how many vertices your vertex shader processes. If your vertex shader processes so few vertices that you are never vertex shader bound, then it doesn’t make a significant difference.

Think about it: do you multiply your matrices once, on the CPU, and send just 16 floats to the GPU (or maybe just 12: if the fourth row is always (0 0 0 1), as it will be for most applications, you don’t even need to send it)? Or do you send all the floats for every one of those matrices to the GPU and multiply them all for every vertex, even if none of the matrices have changed since the previous frame? Also, you have a very limited number of uniforms available on the GPU; do you want to consume lots of them with data that can be distilled down to just one or two matrices?
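
As a sketch, reusing the matrices built earlier in the thread (the program handle and the uniform name are assumed), the pre-multiplied version boils down to:

    // multiply once per object per frame, on the CPU ...
    glm::mat4 mvp = projectionMatrix * viewMatrix * worldMatrix;

    // ... and send a single mat4 instead of three
    glUniformMatrix4fv(glGetUniformLocation(program, "modelViewProjectionMatrix"),
                       1, GL_FALSE, glm::value_ptr(mvp));

    // the vertex shader then does one mat4 * vec4 per vertex:
    //     gl_Position = modelViewProjectionMatrix * in_vertices;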

Yeah, I know, I just can’t seem to do it without screwing it up, hehe. My math is rusty (well, it was never there until now).

I’m trying to update it only if a single ‘sub-matrix’ changes, but I suspect that’s not easily possible, unless I’m missing something.
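
One possible CPU-side approach (a hypothetical sketch, all names made up here): cache the sub-matrices, mark the product dirty when any of them changes, and re-multiply only then.

    #include <glm/glm.hpp>

    struct MatrixCache {
        glm::mat4 projection{1.0f}, view{1.0f}, world{1.0f};
        glm::mat4 mvp{1.0f};
        bool      dirty = true;                       // set whenever a sub-matrix is assigned

        void setProjection(const glm::mat4& m) { projection = m; dirty = true; }
        void setView      (const glm::mat4& m) { view       = m; dirty = true; }
        void setWorld     (const glm::mat4& m) { world      = m; dirty = true; }

        // re-multiplies only when something actually changed since the last call
        const glm::mat4& getMVP() {
            if (dirty) {
                mvp   = projection * view * world;
                dirty = false;
            }
            return mvp;
        }
    };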