LOD Selection

Hello, I'm searching for an efficient algorithm for level-of-detail (LOD) selection.

As I understand it, there are basically two ways to do it:

  1. The distance from the object to the point of view in world space.
  2. The size of the object in pixels in screen space.

The first way should be simple to calculate. I think the only open question is which point to use for the distance calculation: the center of the object or the nearest point of the bounding box.
The disadvantage of this approach is that you don't know how large the object will be on the screen. That depends on the size of the object and on the field of view. So when you zoom into the scene by changing the field of view, the distance to the object won't change, but its size on the screen will.
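To make the first way concrete, here is a minimal sketch of the distance-based selection I have in mind (the threshold values are made up for illustration; in practice they would be tuned per mesh or exposed as sliders):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Euclidean distance between two points in world space.
inline float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Pick an LOD index from hard distance thresholds (hypothetical values).
// Uses the bounding-box center as the reference point.
int selectLodByDistance(const Vec3& camera, const Vec3& objectCenter) {
    float d = distance(camera, objectCenter);
    if (d < 10.0f)  return 0;  // full detail
    if (d < 50.0f)  return 1;
    if (d < 200.0f) return 2;
    return 3;                  // lowest detail
}
```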

The second way, I think, would be the best, because you can select the level of detail depending on the object's size in pixels on the screen. This would be the most natural approach, and it is independent of the camera's field of view.
The problem I have is how to efficiently calculate the screen area of the object, or of its bounding box. You have to project all 8 corners of the bounding box and take the min/max values to calculate the screen-space area. I think this would be very expensive, especially when you have a lot of objects in the scene. You could use gluProject, but how expensive is that?
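One cheaper alternative I've been considering is to project only a bounding sphere instead of all 8 box corners. This is my own sketch, assuming a symmetric perspective projection with a known vertical field of view:

```cpp
#include <cmath>

// Approximate on-screen radius (in pixels) of a bounding sphere,
// avoiding a full 8-corner bounding-box projection. Assumes a
// symmetric perspective projection with vertical field of view fovY.
float projectedRadiusPixels(float sphereRadius,
                            float distanceToCamera,
                            float fovYRadians,
                            float screenHeightPixels) {
    // Half the viewport covers tan(fovY/2) * distance in world units.
    float worldPerHalfScreen = std::tan(fovYRadians * 0.5f) * distanceToCamera;
    return sphereRadius / worldPerHalfScreen * (screenHeightPixels * 0.5f);
}
```

This costs one tangent (which can be precomputed per frame), one division, and two multiplications per object, rather than 8 full matrix transforms.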

Does anyone know a good way to do this, or a good algorithm, maybe with source code?

Thanks in advance

Well, I'm using the first technique you mentioned since it's the simplest thing to do… but I have thought about using occlusion queries to do LOD determination and culling at the same time… with this extension you can retrieve the number of pixels drawn for an object (like a bounding box). Depending on this number you can do the LOD selection…

Use the first method and multiply the distance by two factors:

  • Object size:
    You can precalculate this for many objects; otherwise, simply take the maximum dimension of the bounding box.
  • Global multiplier:
    The global multiplier is the same for all objects. It should be calculated from
    — a user-defined factor (if the user has slow hardware)
    — the FOV (i.e. zoom)
    — the screen resolution (lower resolution => fewer triangles)
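Putting the idea above together in code might look roughly like this. The function name, the resolution normalization against 600 pixels, and the way the factors are combined are all illustrative assumptions, not a fixed recipe; tune them for your scene:

```cpp
#include <algorithm>
#include <cmath>

// Effective LOD metric: distance scaled by an object-size factor and a
// global multiplier, as suggested above. Higher value => coarser LOD.
float lodMetric(float distance,
                float maxBoundingBoxDimension,  // per-object, precalculated
                float userQualityFactor,        // user setting for slow hardware
                float fovYRadians,
                float screenHeightPixels)
{
    // Bigger objects should switch LOD later -> divide by their size.
    float sizeFactor = 1.0f / std::max(maxBoundingBoxDimension, 1e-6f);
    // A narrow FOV (zoom) and a high resolution both magnify objects on
    // screen, so both should push the metric down (keep detail longer).
    float globalMultiplier = userQualityFactor
                           * std::tan(fovYRadians * 0.5f)
                           * (600.0f / screenHeightPixels);
    return distance * sizeFactor * globalMultiplier;
}
```

The resulting metric can then be compared against the same fixed thresholds for every object.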

You can use the first method and take the size of the object into account. You don't need to be exact (i.e. transforming the bounding box to screen space); most of the time an approximation is enough. Choose the LOD level based on distance, size, and FOV. Just implement some sliders and adjust the values by testing.

Using occlusion queries seems nice, but it depends on your scenario. A flight simulator, for example, does not have many occluding objects, so it would get very expensive to do a bounding-box pass with queries. The other problem with the occlusion query alone is that when a character hides behind a box and only his head is visible, a low-res LOD is chosen and you get an ugly blocky head.
Distance always matters.

Lars

What about the radius of the bounding sphere divided by the homogeneous w at the object center?
It can also be scaled by a “tune” factor if you want.

This requires you to keep track of the current modelview and projection matrices. Modelview tracking can be optimized away under usual “camera”-like conditions, and if you have your object centers in world space. You’d then only need to construct an mvp matrix once per frame.

Good things about this (I think):
1) Perfect correlation between the computed “LOD fudge factor” and on-screen size
2) Doesn't depend on FOV
3) Doesn't depend on aspect ratio

Those are all pretty much equivalent once you add all the knobs that artists are going to want (like setting LOD distances differently for each mesh kind, etc.).

You don't necessarily need to change the LOD distances just because an object is bigger, as a bigger object may simply have more triangles to begin with.

The best way to do it is to pre-process some information, such as measuring the biggest triangle in the mesh, or the curviest surface, or whatever, and then derive LOD based on this data.

Or, for the uber-l33t, implement sliding window view-independent progressive meshes :)

Moving to math and algorithms…