OpenGL.org (part of the Khronos Group)

Thread: Calculating the Final Vertex Position with MVP Matrix and another Object's Trans, Rot

  1. #1
    Intern Contributor | Join Date: Apr 2014 | Posts: 55

    Calculating the Final Vertex Position with MVP Matrix and another Object's Trans, Rot

    Hello,

    So I figured out the MVP matrix thing and I get things on the screen all well and good.

    I searched for this additional piece of information but haven't found it yet.

    The issue now is that each object has its own rotation, translation, scaling, etc., separate from the MVP matrix used for the screen.

    So I know I will need another uniform in my vertex shader code below, but how do I alter the next-to-last line to incorporate the "ObjectOffsetMatrix", which contains the translation, rotation, and scale?

    Thank you for your time.

    Code :
    #version 140
     
    #extension GL_ARB_explicit_attrib_location : require
     
    in vec4 vPosition;
    in vec3 vNormal;
    in vec2 vUV;
     
    out vec3 SurfaceNormal;
     
    uniform mat4 ModelViewProjectionMatrix;
    uniform mat4 ObjectOffsetMatrix;
    uniform mat3 NormalMatrix;
     
    void main () {
    	SurfaceNormal = normalize(NormalMatrix * vNormal);
    	gl_Position = ModelViewProjectionMatrix * vPosition;
    }

  2. #2
    Agent D (Junior Member Regular Contributor) | Innsbruck, Austria | Join Date: Sep 2011 | Posts: 146
    Why don't you just use a modelview matrix and a normal matrix per object and a separate projection matrix?

  3. #3
    Intern Contributor | Join Date: Apr 2014 | Posts: 55
    Quote Originally Posted by Agent D
    Why don't you just use a modelview matrix and a normal matrix per object and a separate projection matrix?
    OK, so something like this?

    Code :
    #version 140
     
    #extension GL_ARB_explicit_attrib_location : require
     
    in vec4 vPosition;
    in vec3 vNormal;
    in vec2 vUV;
     
    out vec3 SurfaceNormal;
     
    //
    //uniform mat4 ModelViewProjectionMatrix;
    //
     
    //
    //uniform mat4 ObjectOffsetMatrix; 
    //
     
    uniform vec4 ObjectRotationVector;    // calculated CPU-side
    uniform vec4 ObjectTranslationVector; // calculated CPU-side
    uniform vec4 ObjectScalingVector;     // calculated CPU-side
    uniform mat4 ProjectionMatrix;        // calculated CPU-side
     
    uniform mat4 ViewMatrix;              // calculated CPU-side?
     
    uniform mat3 NormalMatrix;            // where would this be calculated? CPU-side?
     
    void main () {
    	vec4 vectorPosition;
    	SurfaceNormal = normalize(NormalMatrix * vNormal);
    	vectorPosition = ObjectScalingVector * ObjectRotationVector * ObjectTranslationVector * vPosition;
    	gl_Position = ProjectionMatrix * ViewMatrix * vectorPosition;
    }
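    [Editor's note: multiplying vec4s component-wise, as on the vectorPosition line above, scales components but cannot apply a rotation; a rotation needs a full matrix. A minimal CPU-side sketch in Python (plain nested lists and made-up helper names standing in for a real math library) of composing a translate * rotate * scale matrix and applying it to a vertex:]

```python
import math

def mat_mul(a, b):
    # 4x4 matrix product: result[i][j] = sum_k a[i][k] * b[k][j]
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    # 4x4 matrix times a column vector (x, y, z, w)
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def rotate_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Compose once on the CPU: scale is applied first, then rotation, then
# translation (rightmost matrix acts first on the vertex).
model = mat_mul(translate(1, 0, 0), mat_mul(rotate_z(math.pi / 2), scale(2, 2, 2)))
```

The composed matrix would then be uploaded as a single mat4 uniform (e.g. the "ObjectOffsetMatrix" from the first post) and applied in the shader as one matrix-vector product.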

  4. #4
    Agent D (Junior Member Regular Contributor) | Innsbruck, Austria | Join Date: Sep 2011 | Posts: 146
    Something like that.

    The way I do transformations is pretty much like this:
    Code :
    ....
    uniform mat4 m_modelview;
    uniform mat3 m_normal;
    uniform mat4 m_projection;
    ....
    in vec4 v_position;
    in vec3 v_normal;
    ....
    void main( )
    {
        ....
        vec4 P = m_modelview * v_position;
        vec3 N = normalize( m_normal * v_normal );
        ....
        /* use P and N for whatever, tangents and bitangents can be transformed similar to v_normal */
        ....
        gl_Position = m_projection * P;
    }

    Used in the shader:
    • m_modelview transforms from model space into view space
    • m_normal transforms model-space normals into view-space normals
    • m_projection contains the camera projection matrix


    m_modelview is calculated per object on the CPU using double precision arithmetic as the result of:
    m_modelview = m_worldview * m_modelworld
    m_modelview is sent to the shader as floats.

    m_normal is computed on the CPU as this: m_normal = transpose( inverse( mat3( m_modelview ) ) )
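    [Editor's note: the inverse-transpose step can be sketched CPU-side as well. A stdlib-only Python illustration (hand-rolled 3x3 helpers standing in for a real math library) of m_normal = transpose(inverse(mat3(m_modelview))), using a shear matrix, since a shear is a case where naively reusing the modelview matrix on normals goes wrong:]

```python
def transpose3(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def inverse3(m):
    # 3x3 inverse via the adjugate divided by the determinant
    (a, b, c), (d, e, f), (g, h, i) = m
    A, B, C = e * i - f * h, -(d * i - f * g), d * h - e * g
    D, E, F = -(b * i - c * h), a * i - c * g, -(a * h - b * g)
    G, H, I = b * f - c * e, -(a * f - c * d), a * e - b * d
    det = a * A + b * B + c * C
    return [[A / det, D / det, G / det],
            [B / det, E / det, H / det],
            [C / det, F / det, I / det]]

def normal_matrix(modelview3):
    # m_normal = transpose(inverse(mat3(m_modelview)))
    return transpose3(inverse3(modelview3))

# A shear in x: it maps the tangent (0,1,0) to (1,1,0), so the normal
# (1,0,0) must become (1,-1,0) to stay perpendicular to the surface.
shear = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
```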

    The reason to do this with double values on the CPU:
    • Think of a scenario where the camera is looking at an object close to it.
    • Both the camera and the object can be very far from the world space origin.
    • The object has a world space position stored.
    • The camera has a world space position stored.
    • Transforming from model space to world space first yields a matrix with large values in the last column.
    • The same goes for transforming from world space to view space.
    • This can result in floating-point inaccuracies, although the object is actually close to the camera.
    • Transforming from world space to view space in one step yields a matrix with small values, as the object is close to the camera.
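    [Editor's note: the precision argument above can be demonstrated numerically. A small stdlib-only Python sketch, simulating single precision with the struct module; the 1e7 camera distance and the 0.123456 offset are made-up numbers for illustration:]

```python
import struct

def to_f32(x):
    # round a Python double to the nearest single-precision float
    return struct.unpack('f', struct.pack('f', x))[0]

camera_world = 1.0e7             # camera far from the world-space origin
object_world = 1.0e7 + 0.123456  # object only ~0.12 units from the camera

# Naive route: store world positions as float32 and subtract later.
# Near 1e7 the float32 spacing is 1.0, so the 0.123456 offset vanishes.
naive_view = to_f32(object_world) - to_f32(camera_world)

# Double route: subtract in double precision on the CPU, then send the
# small result as float32. The offset survives intact.
precise_view = to_f32(object_world - camera_world)
```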


    Of course, you could combine the m_modelview with the m_projection like this:

    MVP = m_projection * m_modelview

    and only send the MVP matrix to the shader.
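    [Editor's note: combining the two costs one extra matrix product per object per frame on the CPU and saves one mat4 multiply per vertex in the shader. A Python sketch with nested lists in place of a real math library; the two matrices are arbitrary stand-ins, not a real perspective projection:]

```python
def mat_mul(a, b):
    # 4x4 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    # 4x4 matrix times a column vector
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Stand-ins for m_projection and m_modelview.
m_projection = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, -1], [0, 0, -1, 0]]
m_modelview = [[1, 0, 0, 3], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]

# One CPU-side product per object per frame...
mvp = mat_mul(m_projection, m_modelview)

# ...gives the same result as two per-vertex products in the shader:
v = [1, 2, 3, 1]
assert mat_vec(mvp, v) == mat_vec(m_projection, mat_vec(m_modelview, v))
```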
