GL 3.1 and GLSL 1.40

Hello,

In our current project we use OpenGL via JOGL (the Java bindings for OpenGL). I was asked to test GL 3.1 functionality. My problem is: when I provide the required matrices for 3D object transformation, I get some clipping artifacts. I know that ftransform() was removed in GLSL 1.40.

What I do is:

On the application side:

  • Calculate the View and Projection matrices for the current frame
  • Calculate the World matrix of each 3D object for the current frame
  • Calculate ViewProj = Projection * View once per frame, and for each
    object calculate WorldViewProj = ViewProj * World
  • Pass World and WorldViewProj as uniforms to the vertex shader
    (a rough sketch of these last two steps follows this list)
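
In code, steps 3 and 4 boil down to something like this. This is a simplified sketch, not our exact code: prog is assumed to be the linked program id, all matrices are column-major float[16] arrays, and JOGL 2 package names are assumed.

import com.jogamp.opengl.GL3; // JOGL 2 package; older JOGL used javax.media.opengl.GL3

final class MatrixUpload {
    // c = a * b for 4x4 matrices stored column-major in float[16]
    // (OpenGL convention: element (row r, col c) lives at index c*4 + r).
    static float[] mul(float[] a, float[] b) {
        float[] c = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                float s = 0f;
                for (int k = 0; k < 4; k++) {
                    s += a[k * 4 + row] * b[col * 4 + k];
                }
                c[col * 4 + row] = s;
            }
        }
        return c;
    }

    // Steps 3 and 4: premultiply on the CPU, then set both uniforms.
    static void upload(GL3 gl, int prog,
                       float[] proj, float[] view, float[] world) {
        float[] viewProj      = mul(proj, view);      // once per frame
        float[] worldViewProj = mul(viewProj, world); // once per object

        gl.glUseProgram(prog);
        // transpose = false: the arrays are already column-major.
        gl.glUniformMatrix4fv(gl.glGetUniformLocation(prog, "WorldViewProj"),
                              1, false, worldViewProj, 0);
        gl.glUniformMatrix4fv(gl.glGetUniformLocation(prog, "World"),
                              1, false, world, 0);
    }
}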

My vertex shader looks something like this (it transforms objects from object space to clip space and performs some calculations for Lambertian diffuse shading):


#version 140

uniform mat4 WorldViewProj;
uniform mat4 World;
uniform vec3 lightPos;

in vec3 position;
in vec3 normal;

out vec3 outNormal;
out vec3 lightVector;

void main()
{
  // Object space -> clip space in a single multiply.
  gl_Position   = WorldViewProj * vec4(position, 1.0);

  // World-space position and normal for the lighting terms.
  vec3 worldPos = vec3(World * vec4(position, 1.0));
  outNormal     = normalize(vec3(World * vec4(normal, 0.0)));
  lightVector   = normalize(lightPos - worldPos);
}


This code produces an image something like the screenshot below:

[screenshot: the scene rendered with visible clipping artifacts]

I coded GLSL 1.30 and GLSL 1.20 versions of that vertex shader; the results were the same. Then I tried ftransform() with GLSL 1.30 and GLSL 1.20, and everything was OK (no clipping artifacts).

The last thing I tried with GL 3.1 and GLSL 1.40 was passing the World, View and Projection matrices to the vertex shader and doing the matrix multiplications there:


#version 140

uniform mat4 World;
uniform mat4 View;
uniform mat4 Proj;

in vec3 position;
in vec3 normal;

out vec3 outNormal;
out vec3 lightVector;

void main()
{
  // Three multiplications per vertex instead of one.
  gl_Position = (Proj * View * World) * vec4(position, 1.0);

  // rest is the same..
}


This code gives the correct output, but doing two matrix-matrix multiplications plus one matrix-vector multiplication per vertex is overkill.

I am using Windows XP Pro with an NVIDIA GTX 285 and the 182.52 drivers.

Am I missing something too obvious?

Thanks.

scg.

Just a quick thought: obviously the results of your CPU and GPU matrix multiplications differ.
One explanation could be a fault in your matrix multiplication implementation.
Did you make sure you used the same matrices when computing the matrix product and when setting the uniforms?

Then don't do it. Multiply those three matrices on the CPU and pass only one combined matrix. OpenGL before 3.1 does this for you automatically when you use the gl_ModelViewProjectionMatrix built-in uniform.

Also be aware that OpenGL's fixed-function pipeline uses the column-major matrix convention. Perhaps that is not the case in your matrix class, which may perform its operations using the row-major convention.
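
For example, in the column-major layout element (row r, column c) of a float[16] sits at index c*4 + r, so a translation matrix stores its offsets at indices 12-14. Here is a quick, purely hypothetical Java sanity check of the layout:

// Column-major float[16]: element (row r, col c) is at index c*4 + r.
final class ColumnMajorCheck {
    // Translation by (tx, ty, tz): offsets land at indices 12, 13, 14.
    static float[] translation(float tx, float ty, float tz) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[12] = tx;
        m[13] = ty;
        m[14] = tz;
        return m;
    }

    // out = m * (x, y, z, 1) using column-major indexing.
    static float[] transformPoint(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
            m[3] * x + m[7] * y + m[11] * z + m[15],
        };
    }

    public static void main(String[] args) {
        float[] p = transformPoint(translation(1f, 2f, 3f), 0f, 0f, 0f);
        // Prints 1.0 2.0 3.0 if the layout is really column-major.
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}

If your matrix class stores its data row-major instead, either transpose before uploading or pass transpose = true to glUniformMatrix4fv.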

S. Seegel: Yep, you were right. My matrix multiplication code was faulty. I found the mistake and corrected it. Everything works properly now. Thanks for the replies.

scg.