mast4as

01-07-2015, 03:02 PM

I clarify my question:

in the old fixed-function pipeline we used to pass the perspective projection matrix by setting GL_PROJECTION. Now I also understand that by multiplying a point in camera space by this matrix, we would end up with a point in homogeneous coordinates defined in clip space. Clipping would occur, and finally the points would be transformed back from homogeneous to cartesian coordinates by dividing the x, y and z coordinates by the point's w coordinate. Let me know if I don't get that right, but it seems roughly correct to me.

This seemed possible in the old pipeline because the multiplication of the vertex coordinates by the matrix was taken care of by the GPU. So in essence the GPU could multiply the point by the matrix, then do the clipping, then do the perspective divide.

But how does that work in the new pipeline, now that the vertex transform is done in the shader? I believe I understand the principle but just want someone to confirm:

- so the point in camera space is technically in cartesian space. However, after you have multiplied it by the projection matrix, it ends up being a vec4 ... in other words a point in homogeneous coordinates, which is in clip space. The point is then taken over by the GPU, which does the clipping and finally converts the vertex coordinates back to cartesian by dividing by w. Is that correct?
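To make sure I'm not fooling myself, here is a little sketch of those three steps as I understand them. The projection matrix is the standard symmetric OpenGL-style one, but the near/far/FOV values are just numbers I picked for the example:

```python
# Standard symmetric OpenGL-style perspective matrix (near = 1, far = 10,
# 90-degree FOV, square aspect) -- my own numbers, just for this sketch.
n, f = 1.0, 10.0
proj = [
    [1.0, 0.0,  0.0,                0.0],
    [0.0, 1.0,  0.0,                0.0],
    [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
    [0.0, 0.0, -1.0,                0.0],
]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# 1. Camera space (cartesian, w = 1) -> clip space (homogeneous).
#    In the new pipeline this is the part the vertex shader does.
p_camera = [2.0, 1.0, -5.0, 1.0]
p_clip = mat_vec(proj, p_camera)   # w becomes -z_camera = 5

# 2. Clipping happens HERE, against -w <= x, y, z <= w (fixed-function).

# 3. Perspective divide -> NDC (cartesian again, in [-1, 1] if visible).
x, y, z, w = p_clip
p_ndc = [x / w, y / w, z / w]
print(p_clip)   # [2.0, 1.0, 3.888..., 5.0]
print(p_ndc)    # [0.4, 0.2, 0.777...]
```

So the only thing that moved into the shader is step 1; steps 2 and 3 are still done by the GPU after the shader runs.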

And a final question, since I have this opportunity to reach experts.

- the "canonical viewing volume" is sometimes referred to as the unit cube. Two questions regarding this. Does the "canonical viewing volume" refer to the "cube" when points are in clip space (and what are the dimensions of that cube, if it is one?), or to the cube after the perspective divide (the cube with corners (-1,-1,-1) and (1,1,1))?

- isn't it wrong to call this the unit cube? (a unit cube has side length 1, not 2?)

- NDC coordinates generally refer to coordinates in the range [0,1]. Isn't the terminology NDC misused in the GPU world, where it refers to the space in which point coordinates are in the range [-1,1]?

- Finally, could someone confirm in a short answer WHY clipping occurs in "clip space" rather than after the perspective divide (in other words, in what we call NDC space in the GPU world)? Don't the planes of the volume still define a cube in NDC space? Is it only for arithmetic reasons? Someone told me that in clip space coordinates are defined as integers and that this simplifies the computation of the Sutherland algorithm for clipping. It would be great if someone could shed some light on this.
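For what it's worth, here is the little experiment that makes me suspect the answer has to do with the sign of w rather than with integers. I'm using a standard OpenGL-style projection matrix with near/far values I picked myself, so the exact numbers are my own assumption:

```python
# Standard symmetric OpenGL-style perspective matrix (near = 1, far = 10).
n, f = 1.0, 10.0
proj = [
    [1.0, 0.0,  0.0,                0.0],
    [0.0, 1.0,  0.0,                0.0],
    [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
    [0.0, 0.0, -1.0,                0.0],
]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A point BEHIND the camera (positive z in OpenGL camera space).
p_camera = [0.5, 0.0, 2.0, 1.0]
x, y, z, w = mat_vec(proj, p_camera)

# In clip space w = -z_camera = -2, so the test -w <= x <= w
# (i.e. 2 <= 0.5 <= -2) correctly rejects the point.
print(w)       # -2.0

# But if we divided first, the negative w would flip the sign of x,
# and x_ndc would look like it is inside [-1, 1].
print(x / w)   # -0.25
```

So dividing by w before clipping seems dangerous when w is negative (or zero), which would happen for geometry behind the camera; but I'd appreciate confirmation from someone who knows.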

Thank you so much.
