Sorting objects by distance for correct blending

This is a bit of a tricky question, as I suspect the problem is more a technical C++ issue than an algorithm issue.

I understand that when drawing transparent objects it is desirable to sort them by distance from the viewer so that they are blended over each other in the correct sequence, i.e. draw the furthest object first, then the next furthest, and so on.

I have a C++ algorithm in place to do the sorting, using the standard template library. I have code that calculates the distance of an object from the viewer. The transparent objects are stored in a vector. The objects are taken out of this vector one by one and put into a map with the distance as the key, which leaves the objects sorted by distance. I then use a reverse iterator on the map to get the objects back out in distance order and put them back into the vector they came from.
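In outline, the approach described might look like the sketch below (the struct and distance function are stand-ins, not the poster's actual code). One thing worth flagging: `std::map::insert` silently ignores duplicate keys, so two objects at exactly the same distance would come back as one, which is consistent with objects disappearing one by one; `std::multimap` keeps both.

```cpp
#include <map>
#include <vector>

struct Object { float x, y, z; };

// Hypothetical distance function standing in for the poster's code.
// Squared distance orders the same as true distance, without the sqrt.
float distanceToViewer(const Object& o) { return o.x * o.x + o.y * o.y + o.z * o.z; }

// Sort transparent objects back-to-front via a distance-keyed container.
// NOTE: with std::map, objects at equal distances would overwrite/drop
// each other -- std::multimap keeps every entry.
void sortByDistance(std::vector<Object>& objects) {
    std::multimap<float, Object> byDistance;
    for (const Object& o : objects)
        byDistance.insert({distanceToViewer(o), o});

    objects.clear();
    // Reverse iteration: largest distance (furthest) comes out first.
    for (auto it = byDistance.rbegin(); it != byDistance.rend(); ++it)
        objects.push_back(it->second);
}
```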

The vector is iterated over in sequence when drawing. The problem is that my transparent objects start disappearing one by one until there is only one left. Not good. I have used the debugger to trace through the algorithm and it works properly. The problem, I think, could be some kind of weird concurrent modification issue, where the timer animation routine and the rendering routine are both trying to work on the same vector and the vector ends up in a bad state.

I am going to try storing in a plain array and perhaps that will fix the problem. It makes me wonder how others go about sorting their transparent objects.

Hi,
I have used several methods, and the best in my opinion are:

  1. Bucket sort: create 1024-2048 buckets (z slices) and put each primitive in the correct bucket. This is very fast and gives nice results.
  2. Radix sort: check Google for optimized float radix sort.
  3. Spatial hierarchy.
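The bucket-sort option (1) might look roughly like this; the names and the zNear/zFar depth bounds are my assumptions, not Ido's code. Each primitive is dropped into a slice by normalized depth, and drawing then walks the buckets from the furthest slice down to the nearest.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Primitive { float z; /* ... */ };  // z = view-space depth, larger = further

// Bucket ("z-slice") sort: O(n) distribution into a fixed number of depth
// slices. zNear/zFar are assumed to bound the primitives' depths.
std::vector<std::vector<Primitive>>
bucketByDepth(const std::vector<Primitive>& prims, float zNear, float zFar,
              std::size_t numBuckets = 1024) {
    std::vector<std::vector<Primitive>> buckets(numBuckets);
    float range = zFar - zNear;
    for (const Primitive& p : prims) {
        float t = (p.z - zNear) / range;  // normalize depth to [0, 1]
        std::size_t i = static_cast<std::size_t>(t * (numBuckets - 1));
        buckets[std::min(i, numBuckets - 1)].push_back(p);
    }
    // To draw back-to-front, iterate buckets from the last (furthest)
    // down to the first (nearest).
    return buckets;
}
```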

Ido

Remember that all iterators into a vector become invalid if an object is added to or removed from it.
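For example, erasing while iterating must continue from the iterator that `erase` returns; holding on to an old iterator across an `erase` or `push_back` is undefined behaviour. A minimal illustration of the safe idiom:

```cpp
#include <vector>

// Safe removal during iteration: erase() invalidates the erased iterator
// (and every iterator after it), so always continue from its return value.
void removeNegative(std::vector<int>& v) {
    for (auto it = v.begin(); it != v.end(); /* no ++ here */) {
        if (*it < 0)
            it = v.erase(it);  // erase returns the next valid iterator
        else
            ++it;
    }
}
```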

You say “concurrent modification”, so is your program threaded?

As an aside, STL maps are not efficient if you are constantly creating/updating them (if it's static data or there are few updates, then fine). They use a red-black tree underneath, which gets rebalanced on every change.

You would probably be better off with an STL list, traversing it until the object's Z is larger than the current position. Or throw them all in and use sort.
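The "throw them all in and use sort" option is a one-liner with `std::sort` and a comparator; the `Renderable` type here is an assumed stand-in:

```cpp
#include <algorithm>
#include <vector>

struct Renderable { float viewZ; /* ... */ };  // view-space depth, larger = further

// Back-to-front order: sort descending by depth so the furthest object
// is drawn first and blending composites correctly.
void sortBackToFront(std::vector<Renderable>& items) {
    std::sort(items.begin(), items.end(),
              [](const Renderable& a, const Renderable& b) {
                  return a.viewZ > b.viewZ;
              });
}
```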

Of course, with all performance changes, never assume anything. Measure then change.
–Stuart.

A few words about sorting:

  1. Do not compute distance - compute the dot product along the camera's Z axis instead. This is not only faster but also more accurate (the z-buffer is not “spherical”).

  2. After you have sorted by Z component, you can perform an additional sort. Instead of comparing the Z coordinates of the polygons’ centres, compare two polygons by testing whether one is on the clockwise or counter-clockwise side of the other. You can even use bubble sort this time, since most polygons are in the proper order anyway.

  3. Some polygons can be merged into groups that never occlude each other. Imagine car windows: if you first draw all the windows from the inside and then all the windows from the outside, you get the correct result. So instead of sorting all the polygons you only have two groups, which additionally stay in a constant relation to each other.

I did get 97.6% when I did Data Structures and Algorithms, so I thought I would put my knowledge to the test. I solved the problem by implementing my own special binary tree. The objects are drawn using an in-order traversal of that tree. The moral of the story? If you want a job done right, do it yourself. The performance seems OK. There might be scope for optimisation, but at least it works, which is the important thing.

As regards the distance, I am only using a very simple scheme of x difference + y difference + z difference. Because of the modular, object-oriented nature of my program, though, I could change the way the distance is calculated to a more accurate one. There is always going to be some inaccuracy in my system anyway, as I am only calculating the distance to the centre of actor objects and not surface by surface. This is sufficient for this piece of coursework, but I might have to find a more optimised way of doing things in the future.

I am not entirely sure how I would go about calculating the dot product down the z axis of the camera in relation to the coordinates of other objects.

cp - camera position
cz - z axis of the camera
x - tested point

To compute cz, simply create a vector (0, 0, 1) or (0, 0, -1) - depending on how your camera works - and rotate it by the camera angles.

cp and x do not require any additional comment.

The formula is simple:
z = dot(x - cp, cz);

This way of calculating the z coordinate fits well with the way depth buffering works.
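As a sketch, the formula above comes out to just a subtraction and a dot product (plain structs here, no math library assumed; cz is taken to be unit length):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// View-space depth of point x: the projection of (x - cp) onto the
// camera's forward axis cz. Positive means in front of the camera,
// negative means behind it, which is also a cheap culling test.
float viewDepth(const Vec3& x, const Vec3& cp, const Vec3& cz) {
    return dot(sub(x, cp), cz);
}
```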

My camera uses a pure matrix approach with a position, a forward vector and an up vector. It sounds like you are saying that I should create a distance vector as the difference in position between the camera and the object, and then take the cross product of that with the camera’s forward vector. I suppose if I preserved the positive and negative values of the distance, I could also use it to determine whether to draw a shape at all, based on whether it was behind the camera.

Yes, but it was actually a dot product, not a cross product :)
So in my equation, cz is actually your camera’s forward vector.

Dot product… right.
I think I can handle that.
My coursework is on hold again pending my Digital Image Processing exam. Lots to remember, but at least it is a lot more interesting than Advanced Database System.