mixing float and double (precision problem)

Hello,

I’m facing a precision problem with floats (due to a very large scene).

Assume I render an object with very large world-position coordinates (taking up most of the floating-point precision).
Let’s also assume the object is rather small but needs high precision (worst case).

We know that if the world-position coordinates are big enough, the world coordinates of the object’s geometric points will collapse during the model-to-world transformation (due to lack of precision).
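For illustration, here is a quick C++ check of how coarse single precision becomes at large magnitudes:

[code]
#include <cmath>
#include <cstdio>

int main() {
    // The gap between adjacent representable floats (the ULP) grows with
    // magnitude: near 1.0f it is about 1.2e-7, but near 10,000,000 it is
    // a whole unit, so fine object detail is rounded away entirely.
    for (float x : {1.0f, 1000.0f, 1000000.0f, 10000000.0f}) {
        float next = std::nextafter(x, 2.0f * x);
        std::printf("at %12.1f  float resolution = %g\n", x, next - x);
    }
    return 0;
}
[/code]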

I have read some posts about doing everything in double precision, but that doesn’t work for me (due to the very large memory consumption).

My question is simple:
What happens if the GL_MODELVIEW matrix is in double precision and the geometric points of the object are in single precision?
Will the transformed coordinates be in double precision (and thus there will be no truncation due to lack of precision)?

Does the graphics card internally handle floating point in 80-bit precision (as most FPUs do)?

Regards,
Jens

I think all cards, except maybe some pro cards, use 32-bit floats in the vertex shader.

Originally posted by Jens B:
Assume I render an object with very large world-position coordinates (taking up most of the floating-point precision).
Don’t. Center your objects roughly around (0,0,0) and use a matrix translation to put them “far away”.

glVertex3d and glLoadMatrixd do not tell the implementation to use double-precision calculations.

There’s no way you can force the use of doubles. If you happen to have some exotic graphics card that implements vertex operations with double-precision floating point, great. But most hardware uses single-precision floats. Better to work on your model data instead.
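A minimal sketch of that idea, with made-up names (Vec3, recenter, drawAt):

[code]
#include <vector>
#include <GL/gl.h>

struct Vec3 { double x, y, z; };

// Shift a model's vertices so they are centered on its own origin,
// returning the center so the true world position is not lost.
Vec3 recenter(std::vector<Vec3>& verts) {
    Vec3 c = {0.0, 0.0, 0.0};
    for (const Vec3& v : verts) { c.x += v.x; c.y += v.y; c.z += v.z; }
    c.x /= verts.size(); c.y /= verts.size(); c.z /= verts.size();
    for (Vec3& v : verts) { v.x -= c.x; v.y -= c.y; v.z -= c.z; }
    return c;
}

// At draw time the large translation lives in the matrix, not in the
// vertex data, which is now made of small numbers near (0,0,0).
void drawAt(const Vec3& origin) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslated(origin.x, origin.y, origin.z);
    // ... submit the re-centered vertices here ...
    glPopMatrix();
}
[/code]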

Originally posted by zeckensack:
There’s no way you can force the use of doubles. If you happen to have some exotic graphics card that implements vertex operations with double-precision floating point, great. But most hardware uses single-precision floats. Better to work on your model data instead.

Zeckensack is right, but depending on the size of your scene, you may need even more than he suggests. For very large scenes, once your models and/or terrain tiles are individually centered on (0,0,0), maintain the true coordinates of their local origins in double precision (just the local origins, not each vertex), and keep the camera position in double precision as well.

Subtract the camera position from the double-precision origins (or vice versa, depending on your displacement method), then upload the resulting camera-centric local origins as float matrices before drawing the geometry in single precision. This subtraction need not be done every frame, only as often as needed to prevent visible errors (if not every frame, apply the small delta between the current camera position and the last subtraction snapshot to compensate).

The above is simplified for ease of explanation. It’s possible I forgot a step, but without having the code in front of me, I’m pretty sure that’s what I’ve done in the past. If you can find the Performer Mailing List Archives (search Google), I believe there’s a post by Michael Jones explaining this in much more detail.
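A rough sketch of the camera-relative scheme described above (Tile, drawTile, and worldOrigin are illustrative names):

[code]
#include <GL/gl.h>

struct Vec3d { double x, y, z; };

// Each model/tile keeps only its local origin in double precision;
// its vertex data stays in single precision, relative to that origin.
struct Tile {
    Vec3d worldOrigin;
    // ... float vertex data relative to worldOrigin ...
};

void drawTile(const Tile& t, const Vec3d& cameraPos) {
    // The critical subtraction happens on the CPU in double precision;
    // the small camera-relative result is safe to hand to GL as floats.
    double dx = t.worldOrigin.x - cameraPos.x;
    double dy = t.worldOrigin.y - cameraPos.y;
    double dz = t.worldOrigin.z - cameraPos.z;

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(float(dx), float(dy), float(dz));
    // ... draw the tile's single-precision geometry here ...
    glPopMatrix();
}
[/code]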

We run into this problem a lot in the GIS world, since everything is in a geographically aligned coordinate system. You can try to keep everything centered around the model/terrain’s origin, but as previously stated, you will soon find that it may not be enough.

The solution (which I believe is the same as the previous one) is to maintain everything as doubles in your code. As the view updates, subtract your viewing position from your model/terrain’s coordinates. Set up your camera matrix with just the orientation (because everything is now centered around you, you don’t have to specify a position), and render the subtracted coordinates.
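A minimal sketch of the orientation-only camera setup (setupCamera and Vec3d are illustrative names):

[code]
#include <GL/glu.h>

struct Vec3d { double x, y, z; };

// Because the geometry has already been shifted so the eye sits at the
// origin, the view matrix carries orientation only: gluLookAt with the
// eye at (0,0,0) produces a pure rotation.
void setupCamera(const Vec3d& forward, const Vec3d& up) {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 0.0,                   // eye fixed at the origin
              forward.x, forward.y, forward.z, // camera-relative look direction
              up.x, up.y, up.z);
}
[/code]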

Here is a link that covers this issue in more detail and includes these fixes as well (you may need to log in).
http://www.gamasutra.com/features/20020712/oneil_01.htm

I too had this problem with GIS data, and I used a similar solution by adding another coordinate system to my engine’s coordinate system manager. Here is how it works:

Let :

RCS = Rendering Coordinate System, the OpenGL native coordinate system.

WCS = World Coordinate System, an east-north-up system that my data is native to.

The WCS is related to the RCS through a simple translation, selected so that the scene center is located at (0,0,0) in RCS.

UCS = UTM Coordinate System …

WCS is related to UCS in that it has a latitude designation and a longitude designation, and its east/north/up components are the easting/northing/altitude of the UCS.

GCS = Geodetic Coordinate System …

GCS is related to UCS through one of twenty-some earth geoid models (WGS 84, etc.).

So: I load my data, find its bounding volume, compute the scene center as the center of that bounding volume, and use this as the offset between the WCS and RCS systems. Voila! The problem is gone when I render.
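An illustrative sketch of that offset computation (computeSceneCenter and wcsToRcs are made-up names):

[code]
#include <algorithm>
#include <vector>

struct Vec3d { double x, y, z; };

// Scene center = center of the bounding volume of all loaded WCS points.
Vec3d computeSceneCenter(const std::vector<Vec3d>& wcsPoints) {
    Vec3d lo = wcsPoints.front(), hi = wcsPoints.front();
    for (const Vec3d& p : wcsPoints) {
        lo.x = std::min(lo.x, p.x);  hi.x = std::max(hi.x, p.x);
        lo.y = std::min(lo.y, p.y);  hi.y = std::max(hi.y, p.y);
        lo.z = std::min(lo.z, p.z);  hi.z = std::max(hi.z, p.z);
    }
    return { (lo.x + hi.x) * 0.5, (lo.y + hi.y) * 0.5, (lo.z + hi.z) * 0.5 };
}

// WCS -> RCS is a simple translation by that center, so everything handed
// to OpenGL ends up as small numbers around (0,0,0).
Vec3d wcsToRcs(const Vec3d& wcs, const Vec3d& center) {
    return { wcs.x - center.x, wcs.y - center.y, wcs.z - center.z };
}
[/code]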

Correction:
I think I understand what you’re trying to say more clearly now. You are using a fixed translation, but it’s fixed to that projection system’s center location. Why not just fix the translation to its upper-left point and use that to translate?

Old Post:
That would be similar to the translation fix proposed by zeckensack. Since you are familiar with GIS, suppose you were visualizing the whole world. This would center your scene around (0,0,0) (if using lat/long coordinates). Now what if you wanted to add an extremely high-resolution raster in a very small area, which is often the case? Since your offsets are constant, you will run into the same accuracy problem again, even with the translation fix in place.

I have used the fixed translation in the past, and it works nicely if you are only dealing with one raster with large world coordinates. However, once you start adding many more rasters at higher resolution, or dealing with things on a planet/universe scale, that logic falls apart. I think this is also stated in the real-time rendering paper from my previous link.

Thanks for the ideas. I currently use the method described by Cyranose, i.e. localizing everything around the camera, and hope that the objects in the frustum will not span too large a space (in my case, this is not a problem).

regards,
Jens