A problem about view frustum culling

I read a graphics book which says that view frustum culling occurs between the projection matrix transform and the perspective divide.
That means the points that go into view frustum culling are in nonhomogeneous coordinates.

Then, for example, suppose it does the culling with the Sutherland-Hodgman algorithm. The graphics books I have seen that describe this algorithm process 3D coordinates. Does that mean we need to divide by w to get the 3D coordinates? That would repeat the perspective divide that happens in the next step. Maybe my thinking is wrong, but I don't know where it is wrong.

Can you help me solve my problem? Thanks a lot!

[QUOTE=jxaa031757;1253259]I read a graphics book which says that view frustum culling occurs between the projection matrix transform and the perspective divide.
That means the points that go into view frustum culling are in nonhomogeneous coordinates.

Then, for example, suppose it does the culling with the Sutherland-Hodgman algorithm. The graphics books I have seen that describe this algorithm process 3D coordinates. Does that mean we need to divide by w to get the 3D coordinates? That would repeat the perspective divide that happens in the next step. Maybe my thinking is wrong, but I don't know where it is wrong.[/QUOTE]

I suspect you’re talking about two different use cases for “culling” here.

Let's be sure about what you're talking about. Is this culling or clipping? (Culling is rejecting out-of-frustum objects or primitives so we don't waste time rasterizing or otherwise processing them; clipping, which comes after that when rasterizing, potentially modifies primitives which overlap the frustum boundary so they end up completely on-screen.)

Assuming culling, and assuming primitive (e.g. triangle) culling like the GPU does, then this is typically performed in CLIP-SPACE (what you refer to as "between the projection matrix transform and the perspective divide"; a 4D homogeneous space).
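For example, here is a minimal sketch of that kind of trivial-reject test in C++ (the Vec4 struct is just an assumed clip-space vertex layout, not from any particular API): a triangle can be culled when all three of its vertices lie on the outside of the same clip plane.

[CODE]
#include <array>

struct Vec4 { float x, y, z, w; };   // assumed clip-space vertex

// Trivial-reject test: returns true if the whole triangle lies outside
// one of the six clip planes (-w <= x,y,z <= w) and can be culled.
bool cullTriangleClipSpace(const std::array<Vec4, 3>& v)
{
    bool allLeft = true, allRight = true;
    bool allBelow = true, allAbove = true;
    bool allNear = true, allFar = true;

    for (const Vec4& p : v) {
        allLeft  &= (p.x < -p.w);   // outside the x = -w plane
        allRight &= (p.x >  p.w);   // outside the x = +w plane
        allBelow &= (p.y < -p.w);
        allAbove &= (p.y >  p.w);
        allNear  &= (p.z < -p.w);
        allFar   &= (p.z >  p.w);
    }
    return allLeft || allRight || allBelow || allAbove || allNear || allFar;
}
[/CODE]

Note that this test is conservative: a triangle can be outside the frustum without being entirely outside any single plane, and such a triangle survives the test and is simply dealt with by clipping later.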

However, if we assume culling and assume whole object or group of objects culling (which is typically performed in the application prior to GPU submission with bounding volumes such as spheres), then this is typically performed in EYE-SPACE (a 3D space). One reason for this is that bounding volumes such as spheres in your OBJECT-SPACE or WORLD-SPACE are still spheres in EYE-SPACE (so long as you don’t use non-uniform scales or shears in your MODELVIEW transform), whereas when you apply the perspective projection transform they are not.
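As a minimal sketch of that second case, assuming the six frustum planes have already been derived in eye space with unit-length, inward-pointing normals (the struct names here are illustrative, not from any particular engine):

[CODE]
struct Plane  { float a, b, c, d; };          // a*x + b*y + c*z + d = 0, unit normal pointing into the frustum
struct Sphere { float cx, cy, cz, radius; };  // bounding sphere in eye space

// Returns true if the sphere lies entirely outside the frustum,
// i.e. completely behind at least one of the six planes.
bool cullSphereEyeSpace(const Sphere& s, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i) {
        // Signed distance from the sphere centre to the plane.
        float dist = planes[i].a * s.cx + planes[i].b * s.cy
                   + planes[i].c * s.cz + planes[i].d;
        if (dist < -s.radius)
            return true;   // entirely behind this plane: cull the object
    }
    return false;          // inside or intersecting the frustum: keep it
}
[/CODE]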

Yeah, thank you for replying to my question. Maybe I didn't describe my question very well; culling is what I wanted to ask about above.

I am writing a simple CPU-based graphics pipeline program, but I don't understand how to do culling in clip space for points which are nonhomogeneous after the projection transform.
Can you give a simple example, or a link, that culls a triangle primitive in clip space?

Thank you!

[QUOTE=jxaa031757;1253349]I am writing a simple CPU-based graphics pipeline program, but I don't understand how to do culling in clip space for points which are nonhomogeneous after the projection transform.
Can you give a simple example, or a link, that culls a triangle primitive in clip space?[/QUOTE]
You don't "cull" primitives in clip space, you clip them. Culling refers to discarding an entire primitive, while clipping constructs a primitive which is the intersection of the original primitive with the clip volume.

Also, clip coordinates are homogeneous coordinates; conversion to Euclidean coordinates (by dividing by w) results in normalised device coordinates (NDC). Clipping is performed in clip coordinates (i.e. before division by w).

A perspective transformation maps the view frustum (a pyramid with the top cut off) to the cube spanning [-1, +1] on each axis (once the division by w has been done). An orthographic transformation maps an arbitrary axis-aligned cuboid to the same cube. The advantage of clipping after the transformation is that the same equations work for either a perspective or an orthographic projection.

A point lies inside the cube if

-1 <= (x/w) <= 1
-1 <= (y/w) <= 1
-1 <= (z/w) <= 1

This can be re-arranged (assuming w > 0, which holds for points in front of the eye with the usual perspective projection) as:

-w <= x <= w
-w <= y <= w
-w <= z <= w
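In code, that containment test is just (a small sketch, with Vec4 standing in for a clip-space point):

[CODE]
struct Vec4 { float x, y, z, w; };   // clip-space point

// True if the point satisfies -w <= x <= w, -w <= y <= w, -w <= z <= w.
bool insideClipVolume(const Vec4& p)
{
    return -p.w <= p.x && p.x <= p.w
        && -p.w <= p.y && p.y <= p.w
        && -p.w <= p.z && p.z <= p.w;
}
[/CODE]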

If you have an edge from P1=(x1,y1,z1,w1) to P2=(x2,y2,z2,w2), any point on the edge can be expressed as a linear interpolation of the endpoints:

P = P1 + t*(P2-P1)

For the individual components:

x = x1 + t*(x2-x1)
y = y1 + t*(y2-y1)
z = z1 + t*(z2-z1)
w = w1 + t*(w2-w1)

This will intersect the x=w plane when

   x = w
=> x1 + t*(x2-x1) = w1 + t*(w2 - w1)
=> x1 - w1 = t*((w2 - w1) - (x2 - x1))
=> t = (x1 - w1)/((w2 - w1) - (x2 - x1))

Substituting t into the above equations gives the point of intersection of the edge with the plane. The equations for the other 5 planes are similar.
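Here is a small sketch of that step in C++ for the x = w plane (Vec4 is an assumed clip-space vertex type; the other five planes differ only in which component and which sign are used):

[CODE]
struct Vec4 { float x, y, z, w; };   // clip-space vertex

// P = P1 + t*(P2 - P1), applied component-wise.
Vec4 lerp(const Vec4& p1, const Vec4& p2, float t)
{
    return { p1.x + t * (p2.x - p1.x),
             p1.y + t * (p2.y - p1.y),
             p1.z + t * (p2.z - p1.z),
             p1.w + t * (p2.w - p1.w) };
}

// Parameter t at which the edge P1->P2 crosses the x = w plane:
// t = (x1 - w1) / ((w2 - w1) - (x2 - x1))
float intersectXEqualsW(const Vec4& p1, const Vec4& p2)
{
    return (p1.x - p1.w) / ((p2.w - p1.w) - (p2.x - p1.x));
}

// Usage: the intersection point is then
//   float t   = intersectXEqualsW(p1, p2);
//   Vec4  hit = lerp(p1, p2, t);
[/CODE]

A Sutherland-Hodgman style clipper runs the polygon through all six such plane tests in turn, keeping the vertices that are inside and inserting these intersection points wherever an edge crosses a plane.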

Significantly, the equations for other vertex attributes (e.g. colour, texture coordinates) are also similar (i.e. the same value of t is used for interpolating texture coordinates as for interpolating vertex coordinates). If you were to divide by w prior to clipping, this wouldn’t be true (you’d be interpolating texture coordinates etc in screen space rather than in world space, which gives the wrong result).
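As a small illustration of that point (the Vertex layout here is just an assumed example): once t has been computed from the clip-space positions, every attribute is interpolated with that same t.

[CODE]
struct Vec4 { float x, y, z, w; };

struct Vertex {
    Vec4  position;   // clip-space position
    float u, v;       // texture coordinates
    float r, g, b;    // colour
};

// Interpolate the whole vertex using the t computed from the
// clip-space positions; attributes use exactly the same t.
Vertex interpolateVertex(const Vertex& a, const Vertex& b, float t)
{
    Vertex out;
    out.position = { a.position.x + t * (b.position.x - a.position.x),
                     a.position.y + t * (b.position.y - a.position.y),
                     a.position.z + t * (b.position.z - a.position.z),
                     a.position.w + t * (b.position.w - a.position.w) };
    out.u = a.u + t * (b.u - a.u);
    out.v = a.v + t * (b.v - a.v);
    out.r = a.r + t * (b.r - a.r);
    out.g = a.g + t * (b.g - a.g);
    out.b = a.b + t * (b.b - a.b);
    return out;
}
[/CODE]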

Wow, your reply is very useful to me. Thanks!
I understood "nonhomogeneous coordinate" in the wrong way before; what I meant was a coordinate whose w != 1.
I think I need to learn a lot more about OpenGL and graphics.

Thank you again for your kind help!