Help to find which polygon is furthest from camera

I need to sort my polygons and draw them back to front because there will be cases where alpha transparent polygons (like windows in a house) will overlap each other in my software.

I have tried taking the nearest point of each polygon and calculating its distance from the camera with my calcdist function, but there are cases where that doesn’t work, such as close up when two polygons share the same nearest point. For example, where two walls meet at a corner, both walls report the same distance even though one is clearly behind the other from the camera’s position.

I am not sure what to do now, as I don’t think calculating the polygon’s distance from the camera is going to work. So my question is: is there a standard algorithm to determine which of the polygons in my code below (ppoints1 or ppoints2) is furthest and which is nearest, based on the current camera position and direction?

Thanks,
Greg



struct Point
{
	float x;
	float y;
	float z;
};

// Euclidean distance between two points (needs <cmath> for sqrt).
float calcdist(Point v1, Point v2)
{
	Point v;
	v.x = v1.x - v2.x;
	v.y = v1.y - v2.y;
	v.z = v1.z - v2.z;
	return sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}

GLdouble m[16];
float camera_pos[3];
float camera_dir[3];

// Assuming the modelview matrix holds only the viewing transform (no model
// transforms), the camera position in world space is -R^T * t, where R is
// the upper-left 3x3 rotation and t = (m[12], m[13], m[14]) is the translation.
glGetDoublev(GL_MODELVIEW_MATRIX, m);
camera_pos[0] = -(m[0] * m[12] + m[1] * m[13] + m[2] * m[14]);
camera_pos[1] = -(m[4] * m[12] + m[5] * m[13] + m[6] * m[14]);
camera_pos[2] = -(m[8] * m[12] + m[9] * m[13] + m[10] * m[14]);

// The camera looks down -Z in eye space, so the view direction in world
// space is the negated third row of R (column-major indices 2, 6, 10).
camera_dir[0] = -m[2];
camera_dir[1] = -m[6];
camera_dir[2] = -m[10];


Point ppoints1[] = {{-10.0f, 10.0f, 40.0f}, {-10.0f, 10.0f, 0.0f}, {10.0f, 0.0f, 0.0f}, {-10.0f, 0.0f, 40.0f}};

Point ppoints2[] = {{-10.0f, 10.0f, 0.0f}, {10.0f, 10.0f, 0.0f}, {10.0f, 0.0f, 0.0f}, {-10.0f, 0.0f, 0.0f}};



It all depends on the level of fidelity that you need.

The biggest insight is that you don’t need to sort your opaque polygons (triangles). Just flip on the Z buffer and throw them at the GPU in whatever order – let the hardware sort out the mess. This works great (ignoring ZCULL and EarlyZ optimizations for now).
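In legacy fixed-function GL (to match your snippet), the opaque pass is just something like the sketch below; drawOpaqueGeometry() is a placeholder for your own draw calls:

// Opaque pass: depth test and depth writes on, submit in any order.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
drawOpaqueGeometry();   // placeholder for your own opaque draw calls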

It’s only the alpha polygons (I’ll call them triangles from here on, since every area primitive you render ends up as triangles), and more specifically the alpha-blended ones (i.e. those rendered with GL_BLEND enabled), that you need to sort.
And specifically, sort them in eye-space.
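One simple way to get an eye-space sort key, reusing the modelview matrix m you already fetched: transform each polygon’s centroid into eye space and negate its z (eye space looks down -Z, so a bigger value means further away). A rough sketch using your Point struct:

// Eye-space depth of a polygon's centroid, usable as a sort key.
// Bigger value = further from the camera.
float eyeSpaceDepth(const Point* poly, int count, const GLdouble* mv)
{
	Point c = {0.0f, 0.0f, 0.0f};
	for (int i = 0; i < count; ++i)
	{
		c.x += poly[i].x;
		c.y += poly[i].y;
		c.z += poly[i].z;
	}
	c.x /= count;
	c.y /= count;
	c.z /= count;

	// Third row of the column-major modelview gives eye-space z.
	float ez = (float)(mv[2]*c.x + mv[6]*c.y + mv[10]*c.z + mv[14]);
	return -ez;
}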

Note that, in general, you don’t need a full depth sort. If you know something about which groups of alpha geometry can potentially overlap, you can sort only the groups that might overlap each other. But you may not know that, or you may not be able to assume such groups exist.

For a coarse but fast level of fidelity, put each set of connected alpha triangles into one group (call it a batch) and then just depth sort the batches. This doesn’t cover all cases, but it may be good enough for you.
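Something like this, as a rough sketch; AlphaBatch, drawBatch() and the way batches are stored are placeholders for however you actually organize your geometry, and eyeSpaceDepth() is the helper sketched above:

#include <algorithm>
#include <vector>

// One connected group of alpha triangles.
struct AlphaBatch
{
	std::vector<Point> verts;
	float depth;   // eye-space sort key, recomputed each frame
};

void drawAlphaBatches(std::vector<AlphaBatch>& batches, const GLdouble* mv)
{
	// Recompute each batch's sort key for the current camera.
	for (AlphaBatch& b : batches)
		b.depth = eyeSpaceDepth(b.verts.data(), (int)b.verts.size(), mv);

	// Sort back to front: furthest batch drawn first.
	std::sort(batches.begin(), batches.end(),
	          [](const AlphaBatch& a, const AlphaBatch& b) { return a.depth > b.depth; });

	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	glDepthMask(GL_FALSE);   // keep the depth test, but don't write alpha depths
	for (const AlphaBatch& b : batches)
		drawBatch(b);        // placeholder for your own draw call
	glDepthMask(GL_TRUE);
	glDisable(GL_BLEND);
}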

For instance, one case where this fails is a transparent sphere or box: you need a sub-sort within the batch (e.g. drawing back faces before front faces) to get the order right.

And worst case (intersecting alpha triangles), there is no way to sort the alpha triangles to get a complete furthest-to-nearest order, without splitting them.

So it really depends on your scene, what you know about it, what you can enforce upon it, and the level of fidelity that you need.

Note also that there is what’s called “screen door transparency” (aka alpha-to-coverage) which, while it can reduce the quality of the result, kinda lets you “cheat” and avoid the sort altogether. The more multisamples you use, the higher the quality, but also the higher the fill cost.
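In GL that’s just the following (assuming your framebuffer is multisampled); drawAlphaGeometry() is a placeholder:

// Alpha-to-coverage: the alpha value controls how many samples get covered,
// so no back-to-front sort is needed. Requires a multisampled framebuffer.
glEnable(GL_MULTISAMPLE);
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
drawAlphaGeometry();   // placeholder: draw the alpha geometry in any order
glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);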

I’m sure this transparency/alpha sorting/blending thing is touched on in a lot of graphics books. Real-Time Rendering, for example, discusses it along with other approaches to transparency, such as the A-buffer and depth peeling.

Thank you so much for the reply and information. I have learned a lot reading your post.

I just have one more question: you mentioned that in order to get a perfect back-to-front result for displaying overlapping alpha polys, they must be split. Is this something that an axis-aligned BSP tree could solve for me?

Sure thing.

I just have one more question: you mentioned that in order to get a perfect back-to-front result for displaying overlapping alpha polys, they must be split. Is this something that an axis-aligned BSP tree could solve for me?

Right. BSPs split polygons along half-space planes, which lets you get the “draw order” right for painter’s algorithm, alpha blending, or whatever.
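The traversal side of it looks roughly like this; BSPNode and drawNodePolygons() are placeholders (the polygons stored at a node are the ones lying in that node’s split plane):

// Back-to-front (painter's order) traversal of a BSP tree.
struct BSPNode
{
	float plane[4];   // split plane: ax + by + cz + d = 0
	BSPNode* front;   // subtree on the positive side of the plane
	BSPNode* back;    // subtree on the negative side
	// ...plus your own storage for the polygons coplanar with this node
};

void drawBackToFront(const BSPNode* node, const float eye[3])
{
	if (!node) return;

	// Which side of the split plane is the camera on?
	float side = node->plane[0]*eye[0] + node->plane[1]*eye[1]
	           + node->plane[2]*eye[2] + node->plane[3];

	if (side >= 0)   // camera on the front side: the back subtree is furthest
	{
		drawBackToFront(node->back, eye);
		drawNodePolygons(node);          // placeholder draw helper
		drawBackToFront(node->front, eye);
	}
	else             // camera on the back side: the front subtree is furthest
	{
		drawBackToFront(node->front, eye);
		drawNodePolygons(node);
		drawBackToFront(node->back, eye);
	}
}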

Also, consider something like depth peeling, which takes this split/sort problem and puts it on the GPU, doing the splitting/sorting per pixel. While not super cheap, it is GPU accelerated, and it saves you all the CPU trouble of splits/BSPs, which don’t cope well with dynamic geometry.

A more expensive method, but one with a higher degree of accuracy, is ray tracing. On the GPU it is faster than on the CPU, and to speed it up further you can use beam tracing.

Cases like intersecting triangles and the wall example you mentioned are easily solved, since ray tracing resolves visibility pixel by pixel.