scg__

07-06-2009, 02:13 AM

Hello,

In our current project we use OpenGL via JOGL (Java bindings for OpenGL). I was asked to test GL 3.1 functionality. My problem is: when I provide the required matrices for 3D object transformation myself, I get some clipping artifacts. I know that ftransform() was removed in GLSL 1.40.

What i do is:

On the application side:

- Calculate the View and Projection matrices for the current frame.

- Calculate the World matrix of each 3D object for the current frame.

- Calculate ViewProj = Projection * View once per frame, and for each object calculate WorldViewProj = ViewProj * World.

- Pass World and WorldViewProj as uniforms to the vertex shader.
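For reference, the application-side steps above can be sketched like this (a minimal stand-in, not my actual code: the matrices and helper names are made up, and the arrays use the column-major float[16] layout that glUniformMatrix4fv expects when its transpose argument is false):

```java
// Sketch of the per-frame / per-object matrix setup described above.
// OpenGL uses column-major storage, so element (row, col) lives at m[col*4 + row].
public class MatrixMath {
    // result = a * b, both column-major 4x4 matrices
    static float[] multiply(float[] a, float[] b) {
        float[] r = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        return r;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    // Column-major translation: the translation vector sits in column 3.
    static float[] translation(float x, float y, float z) {
        float[] m = identity();
        m[12] = x; m[13] = y; m[14] = z;
        return m;
    }

    public static void main(String[] args) {
        float[] proj  = identity();               // stand-ins for the real matrices
        float[] view  = translation(0f, 0f, -5f);
        float[] world = translation(1f, 0f, 0f);

        float[] viewProj      = multiply(proj, view);       // once per frame
        float[] worldViewProj = multiply(viewProj, world);  // per object

        // The combined matrix should move the origin to (1, 0, -5):
        System.out.println(worldViewProj[12] + " " + worldViewProj[13] + " " + worldViewProj[14]);
    }
}
```

In JOGL the upload itself is gl.glUniformMatrix4fv(location, 1, false, worldViewProj, 0); the false transpose flag is what ties the shader's expectations to this column-major layout.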

My vertex shader looks like this (it transforms vertices from object space to clip space and does the setup for Lambertian diffuse shading):

#version 140

uniform mat4 WorldViewProj;
uniform mat4 World;
uniform vec3 lightPos;

in vec3 position;
in vec3 normal;

out vec3 outNormal;
out vec3 lightVector;

void main()
{
    gl_Position = WorldViewProj * vec4(position, 1.0);

    vec3 worldPos = vec3(World * vec4(position, 1.0));
    outNormal     = vec3(World * vec4(normal, 0.0));
    lightVector   = lightPos - worldPos;

    outNormal   = normalize(outNormal);
    lightVector = normalize(lightVector);
}

This code produces an image like this:

http://img30.imageshack.us/img30/6523/clippedcube.jpg

I coded GLSL 1.30 and GLSL 1.20 versions of that vertex shader; the results were the same. When I instead used ftransform() with GLSL 1.30 and 1.20, everything was fine (no clipping artifacts).

The last thing I tried with GL 3.1 and GLSL 1.40 was passing the World, View, and Projection matrices to the vertex shader separately and doing the matrix multiplications there:

uniform mat4 World;
uniform mat4 View;
uniform mat4 Proj;

in vec3 position;
in vec3 normal;

out vec3 outNormal;
out vec3 lightVector;

void main()
{
    gl_Position = (Proj * View * World) * vec4(position, 1.0);
    // rest is the same..
}

This code gives correct output, but doing two mat4 * mat4 multiplications plus one mat4 * vec4 multiplication per vertex is overkill.
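For what it's worth, the two approaches are mathematically identical, which is easy to check on the CPU (a small self-contained sketch; the matrix values are made up for the check). If the precomputed WorldViewProj gives different results on the GPU even though the math agrees here, the upload order or the transpose flag passed to glUniformMatrix4fv would be my first suspect:

```java
public class AssocCheck {
    // Column-major 4x4 * 4x4
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[16];
        for (int c = 0; c < 4; c++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    // Column-major 4x4 * vec4
    static float[] mulVec(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++)
            for (int k = 0; k < 4; k++)
                r[row] += m[k * 4 + row] * v[k];
        return r;
    }

    public static void main(String[] args) {
        // Arbitrary stand-in matrices (small integers, so all math is exact in float).
        float[] proj  = {1,0,0,0,  0,2,0,0,  0,0,-1,-1,  0,0,-2,0};
        float[] view  = {1,0,0,0,  0,1,0,0,  0,0,1,0,    0,0,-5,1};
        float[] world = {1,0,0,0,  0,1,0,0,  0,0,1,0,    3,0,0,1};
        float[] v = {1f, 1f, 1f, 1f};

        // Precomputed on the CPU, as in the first shader:
        float[] a = mulVec(mul(mul(proj, view), world), v);
        // Multiplied step by step, as in the second shader:
        float[] b = mulVec(proj, mulVec(view, mulVec(world, v)));

        System.out.println(java.util.Arrays.equals(a, b)); // both orders agree
    }
}
```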

I use WinXP Pro with an Nvidia GTX 285 and 182.52 drivers installed.

Am I missing something obvious?

Thanks.

scg.
