
Calculating the Final Vertex Position with MVP Matrix and another Object's Trans, Rot



tmason
05-08-2014, 06:09 AM
Hello,

So I figured out the MVP matrix thing and I get things on the screen all well and good.

I searched for this additional piece of information but haven't found it yet.

The question now is that each object has its own rotation, translation, scaling, etc., separate from the MVP matrix for the screen.

So I know I will need another uniform in my vertex shader code below, but how do I alter the next-to-last line to incorporate the "ObjectOffsetMatrix", which contains the translation, rotation, and scale?

Thank you for your time.



#version 140

#extension GL_ARB_explicit_attrib_location : require

in vec4 vPosition;
in vec3 vNormal;
in vec2 vUV;

out vec3 SurfaceNormal;

uniform mat4 ModelViewProjectionMatrix;
uniform mat4 ObjectOffsetMatrix;
uniform mat3 NormalMatrix;

void main () {
    SurfaceNormal = normalize(NormalMatrix * vNormal);
    gl_Position = ModelViewProjectionMatrix * vPosition;
}

Agent D
05-08-2014, 01:08 PM
Why don't you just use a modelview matrix and a normal matrix per object, and a separate projection matrix?

tmason
05-08-2014, 03:29 PM
Why don't you just use a modelview matrix and a normal matrix per object, and a separate projection matrix?

OK, so something like this?



#version 140

#extension GL_ARB_explicit_attrib_location : require

in vec4 vPosition;
in vec3 vNormal;
in vec2 vUV;

out vec3 SurfaceNormal;

//
//uniform mat4 ModelViewProjectionMatrix;
//

//
//uniform mat4 ObjectOffsetMatrix;
//

uniform vec4 ObjectRotationVector;    // Calculated CPU side
uniform vec4 ObjectTranslationVector; // Calculated CPU side
uniform vec4 ObjectScalingVector;     // Calculated CPU side
uniform mat4 ProjectionMatrix;        // Calculated CPU side

uniform mat4 ViewMatrix;              // Calculated CPU side?

uniform mat3 NormalMatrix;            // Where would this be calculated? CPU side?

void main () {
    vec4 vectorPosition;
    SurfaceNormal = normalize(NormalMatrix * vNormal);
    vectorPosition = ObjectScalingVector * ObjectRotationVector * ObjectTranslationVector * vPosition;
    gl_Position = ProjectionMatrix * ViewMatrix * vectorPosition;
}
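
[Editor's note: the rotation/translation/scaling uniforms above are vec4s, but transforms compose as 4x4 matrices, not as componentwise vector products. A minimal CPU-side sketch in Python (not from the thread; all names are illustrative) of building one object matrix as translation * rotation * scale:]

```python
import math

def mat4_mul(a, b):
    # Row-major 4x4 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat4_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Conventional composition: scale first, then rotate, then translate.
model = mat4_mul(translation(3, 0, 0),
                 mat4_mul(rotation_z(math.pi / 2), scaling(2, 2, 2)))

# (1,0,0): scaled to (2,0,0), rotated to (0,2,0), translated to (3,2,0).
p = mat4_vec(model, [1.0, 0.0, 0.0, 1.0])
```

Uploading the resulting single mat4 (the "ObjectOffsetMatrix" from the first post) is what the componentwise vec4 products in the shader above cannot replace.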

Agent D
05-09-2014, 05:40 AM
Something like that.

The way I do transformations is pretty much like this:


....
uniform mat4 m_modelview;
uniform mat3 m_normal;
uniform mat4 m_projection;
....
in vec4 v_position;
in vec3 v_normal;
....
void main( )
{
    ....
    vec4 P = m_modelview * v_position;
    vec3 N = normalize( m_normal * v_normal );
    ....
    /* use P and N for whatever, tangents and bitangents can be transformed similar to v_normal */
    ....
    gl_Position = m_projection * P;
}


Used in the shader:

m_modelview transforms from model space into view space
m_normal transforms model space normals to view space normals
m_projection contains the camera projection matrix


m_modelview is calculated per object on the CPU using double precision arithmetic as the result of:
m_modelview = m_worldview * m_modelworld
m_modelview is sent to the shader as floats.

m_normal is computed on the CPU as this: m_normal = transpose( inverse( mat3( m_modelview ) ) )
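
[Editor's note: that inverse-transpose can be sketched in pure Python (a sketch, not from the thread; a real application would typically use a math library such as GLM):]

```python
def mat3_inverse(m):
    # Inverse of a 3x3 matrix (row-major list of rows) via the adjugate.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    inv = 1.0 / det
    return [
        [(e * i - f * h) * inv, (c * h - b * i) * inv, (b * f - c * e) * inv],
        [(f * g - d * i) * inv, (a * i - c * g) * inv, (c * d - a * f) * inv],
        [(d * h - e * g) * inv, (b * g - a * h) * inv, (a * e - b * d) * inv],
    ]

def mat3_transpose(m):
    return [list(row) for row in zip(*m)]

def normal_matrix(modelview3x3):
    # m_normal = transpose(inverse(mat3(m_modelview)))
    return mat3_transpose(mat3_inverse(modelview3x3))

# Non-uniform scale by (2, 1, 1): normals get the *inverse* scale,
# which is why the plain modelview matrix cannot be reused for normals.
mv = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
n = normal_matrix(mv)
# n[0][0] is 0.5: x components of normals are halved.
```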

The reason to do this with double values on the CPU:

Think of a scenario where the camera is looking at an object close to it.
Both the camera and the object can be very far from the world space origin.
The object has a world space position stored, and so does the camera.
Transforming from model space to world space first yields a matrix with large values in the last column.
The same goes for transforming from world space to view space.
This can result in floating point inaccuracies, although the object is actually close to the camera.
Transforming from model space to view space in one step yields a matrix with small values, as the object is close to the camera.
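
[Editor's note: this precision argument can be demonstrated numerically (a sketch, not from the thread); Python floats are doubles, and GPU-style float32 is simulated by round-tripping through struct:]

```python
import struct

def f32(x):
    # Round a Python double to the nearest IEEE 754 single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

cam_x = 100_000_000.0    # camera far from the world origin
obj_x = 100_000_000.25   # object 0.25 units away from the camera

# Naive route: round the large world-space translations to float first,
# then combine -- the 0.25 offset falls below float32 resolution and is lost.
naive = f32(obj_x) - f32(cam_x)

# Double route: combine in double on the CPU, round only the small result.
combined = f32(obj_x - cam_x)

# naive is 0.0, combined is 0.25
```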


Of course, you could combine the m_modelview with the m_projection like this:

MVP = m_projection * m_modelview

and only send the MVP matrix to the shader.
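
[Editor's note: combining the two on the CPU gives the same clip-space result while saving one matrix multiply per vertex; a Python sketch (not from the thread, with made-up matrix values):]

```python
def mat4_mul(a, b):
    # Row-major 4x4 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat4_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Illustrative matrices only (dyadic values, so the comparison is exact).
projection = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, -0.25], [0, 0, -1, 0]]
modelview  = [[1, 0, 0, 3], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]

mvp = mat4_mul(projection, modelview)   # MVP = m_projection * m_modelview
v = [1.0, 2.0, 3.0, 1.0]

one_step  = mat4_vec(mvp, v)                               # MVP * v
two_steps = mat4_vec(projection, mat4_vec(modelview, v))   # P * (MV * v)
# one_step == two_steps
```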