RealtimeSlave

06-01-2012, 02:23 PM

Hi there,

let's say I have world coordinates stored in a variable Position, and I want to calculate the Normalized Device Coordinates (NDC) for this Position on the CPU in C++ code.

Using a ModelViewProjectionMatrix that contains a perspective projection matrix, I would do it like this:

// Calculate the clip-space position
s_Position = ModelViewProjectionMatrix * glm::vec4(Position[0], Position[1], Position[2], 1.0f);

// Transform from clip space to normalized device coordinates via the "perspective divide": [-1,1]
// See: http://www.songho.ca/opengl/gl_transform.html
s_Position.x = s_Position.x / s_Position.w;
s_Position.y = s_Position.y / s_Position.w;
s_Position.z = s_Position.z / s_Position.w;

// Calculate the window depth coordinate, i.e. the value written to the depth buffer: [0,1]
s_Position.z = s_Position.z * 0.5f + 0.5f;

This should give me s_Position's x and y in the range [-1,1] and z in the range [0,1].

I hope that is correct so far.
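For reference, the sequence above can be sketched as a self-contained function. Plain float arrays stand in for the GLM types here, the column-major matrix layout matches the OpenGL/GLM convention, and the helper names are made up for this sketch:

```cpp
#include <cassert>
#include <cmath>

// Multiply a column-major 4x4 matrix (OpenGL/GLM layout) by a vec4.
static void mat4MulVec4(const float m[16], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = m[0 * 4 + row] * v[0] + m[1 * 4 + row] * v[1]
                 + m[2 * 4 + row] * v[2] + m[3 * 4 + row] * v[3];
    }
}

// World position -> NDC x,y in [-1,1] plus window-space depth in [0,1],
// mirroring the steps above: clip space, perspective divide, depth remap.
static void worldToNdc(const float mvp[16], const float pos[3], float ndc[3]) {
    const float world[4] = { pos[0], pos[1], pos[2], 1.0f };
    float clip[4];
    mat4MulVec4(mvp, world, clip);

    // Perspective divide: clip space -> NDC, each component in [-1,1].
    ndc[0] = clip[0] / clip[3];
    ndc[1] = clip[1] / clip[3];
    ndc[2] = clip[2] / clip[3];

    // Remap depth from [-1,1] to the default depth range [0,1].
    ndc[2] = ndc[2] * 0.5f + 0.5f;
}
```

With an identity MVP, for example, a point at the world origin comes out at NDC (0, 0) with depth 0.5, as expected for the middle of the default depth range.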

Now I want to achieve the same thing, but with a ModelViewProjectionMatrix that contains an orthographic projection matrix. In this case, however, w is always 1, so there is no "perspective divide".

So which step is performed between the vertex and fragment shader to get the NDC, and how can I achieve the same result on the CPU side?
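The claim that w stays 1 can be checked directly: building a standard glOrtho-style matrix (the bounds below are made-up example values) and looking at the w component of the transformed point shows it comes out as exactly 1, since the bottom row of an orthographic matrix is (0, 0, 0, 1):

```cpp
#include <cassert>
#include <cmath>

// Column-major orthographic matrix following the standard glOrtho formula.
static void makeOrtho(float l, float r, float b, float t, float n, float f,
                      float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f; // bottom row is (0, 0, 0, 1), so clip.w = world.w = 1
}

// w component of M * (x, y, z, 1) with column-major storage.
static float clipW(const float m[16], float x, float y, float z) {
    return m[3] * x + m[7] * y + m[11] * z + m[15] * 1.0f;
}
```

So dividing by w is simply a no-op under an orthographic projection, and the same divide-and-remap code from the perspective case gives correct NDC either way.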

Help is really appreciated!
