OpenGL game: float or double?

Hi,

I need to code a virtual world for a game, and I would like to know: what's the best way to represent a point in space, float or double? I'm stuck with the "precision vs. speed" dilemma! I need precision, and the program must be fast (of course!).

Thx!

I think float is the standard right now.

The only time I use a double is to see if a point is within a triangle on the triangle plane.

Keep in mind that the driver is probably using floats internally (on a PC).

-Blake

Heck, use doubles and let the industry adapt, instead of reverting to old floats …

Think about a world where a guy like Mr Carmack did what everybody else told him…

The future surely allows you to use doubles.

May your code never throw an exception and be free of bugs.
Good Bug Hunt.


Better yet. Define your own float type to be either float or double and try them both out.

#define USE_DOUBLES // Comment this line out to use float instead of double

#ifdef USE_DOUBLES
#define MYFLOAT double
#else
#define MYFLOAT float
#endif

And in your code at type specific locations:

#ifdef USE_DOUBLES
glVertex3d(…)
#else
glVertex3f(…)
#endif

Give us some benchmarking later.

[This message has been edited by Hull (edited 05-01-2001).]

I’m no hardware expert but you want to think about the space of your data structures versus speed of your program.

There are all sorts of optimising data structures on nVidia hardware (that most posters on this board use) to speed up your programs. It may well be that they treat floats differently to doubles internally, or that you are limited to a smaller data structure using doubles.

I don’t know much about the nVidia OpenGL extensions yet (haven’t used them) but there are a lot of posts on them on this board and a lot of experienced users of them here too. mcraighead’s the main man when it comes to hardware questions.

I have done some work using double-precision surveying coordinates and found that if the numbers are very large, all the matrix math in OpenGL becomes unpredictable. When the numbers are small (but still double precision) I can generate a smooth rotation, but once the numbers have more than 5 significant digits the same rotation becomes jittery. It's all down to OpenGL using single precision internally (on Windows).

That was quite important to know, thanks for that info.

Are there any specs to read about these kinds of precision losses and how they are handled in OpenGL? Anyone?

Originally posted by Hull:
Better yet. Define your own float type to be either float or double and try them both out. […]

I assume people are using C++ and not C these days, so I suggest taking advantage of overloading for more readable code. First define a simple library of new functions:

inline void glVertex3(float x, float y, float z)
{
glVertex3f(x, y, z);
}

inline void glVertex3(double x, double y, double z)
{
glVertex3d(x, y, z);
}

and so on. Then in your application:

typedef float MYFLOAT; // or double

glVertex3(x, y, z)

The compiler will automatically pick glVertex3f or glVertex3d depending on the argument type and inline the call. No need for #ifdefs and less headache. :)

If floats cause errors it is usually due to a crap algorithm. If you have a problem with numerical accuracy and switching to doubles makes it go away you haven’t actually solved the problem, just covered it up.

Originally posted by foobar:
If floats cause errors it is usually due to a crap algorithm. If you have a problem with numerical accuracy and switching to doubles makes it go away you haven’t actually solved the problem, just covered it up.

I would generally have to agree. If I tried to create an accurate model of the solar system, specified all my dimensions/distances in millimeters and my timescale in milliseconds since the year 0, and tried to advance the model to the current date, I could probably expect "some" precision problems.

Floats are really more than sufficient for most purposes. I just did a quick test on the precision of a 0-based coordinate system. My coords had to get up to 16777217 before the precision error became more than 0.5 units. This means I could model a world/system that is 530 miles across and have position/movement accurate to 0.5 inches. But you might say, "how would you model something like the earth, where things are more than 530 miles apart?" The answer: use local coordinate systems. I don't think there is any object on the planet that is more than 530 miles across that needs a resolution accurate to the inch. If you defined each city in its own coordinate system (scaled in inches), and placed each city into the global coordinate system (scaled in miles), you would have more than enough resolution for most needs. As for large land features, they might only need to be accurate to the foot or a 10th of a mile, so scale their coordinate systems accordingly.

And for those FEW rare cases, that's what doubles are for.

I had the same question as you. Except that I am trying to display real-world data, and I seem to lose precision when my values get larger than 10 million, even though I use glVertex3d() to send double-precision values into GL.

I am guessing that GL stores values with only single precision. In the posts I have read, people shun the use of double-precision values. This is where one can see the obvious difference between engineers with real-world experience and computer science majors without it. News flash: not everybody uses GL for games or other fictional world locations.

Not to make any of you feel bad, but the software I develop represents real world locations using real world units (UTM, State Plane, etc.). You cannot just say “move your city closer to the equator” so that GL can handle your coordinate system.

What it appears I will be required to do, and what you will probably also need to do, is keep track of a coordinate-system offset (x, y, z) and subtract it from each vertex before handing it to GL. That way the user can work with his data in a real coordinate system, while you work around these unfortunate GL limitations by giving GL values within single-precision tolerances.

Perhaps someone else knows a trick to force GL to work with double precision values.

>>>>>>>Perhaps someone else knows a trick to force GL to work with double precision values.
<<<<<<<

Nope, there is no forcing to be done; you'd have to change the implementation. Since all of the PC cards use 32-bit floats, you don't have the hardware option. You could maybe use Mesa. I'm not sure about Mesa, but perhaps a slight code modification will get you what you want.
Maybe you can specify long double (80 bit)

V-man

Originally posted by V-man:
Maybe you can specify long double (80 bit)
Except in VC++, where a long double is the same as a normal double (64 bits).

If you need a double with 80 bits or even more, then make your own class double80. It will behave like a real double, but will be damn slow, as you have to do every operation yourself ;)

Originally posted by Don’t Disturb:
Originally posted by V-man:
Maybe you can specify long double (80 bit)

Except in VC++, where a long double is the same as a normal double (64 bits).

AFAIK, VC++ 6.0 stores a long double in 64 bits in memory but uses the FPU in 80-bit mode.

To get back to the topic, I think you should send the data to OpenGL using float.
The loss of precision is greatest during heavy calculation, so you should only use higher precision where you have to.
When you load a 3D model and then send it directly to OpenGL (using the 3D card via default T&L or vertex programs to do the geometric computing), there is no use for higher precision than float, because the data will be forced into the 3D card's internal format (I think, but I'm not sure, that the GeForces compute floating-point values on 64 bits).
And if the 3D card uses single-precision float, then there's really no use sending the data in double precision.

I think you should only use double when you perform precision-sensitive operations on the data before sending it to the 3D card. For example, you could use double in the collision-detection system for higher precision, or when you perform transformations on the geometry yourself (you can see in Quake II and Unreal Tournament that 3D models have precision problems: when you watch closely, the models seem to "blob" sometimes when moving).

For the Solar System type environment, where precision matters, try doing all the calculations with doubles, but then reduce them to floats before sending them to the driver. This final rounding-down of doubles will be less apparent than using only floats to begin with.

BTW … there is an INT96 class on the net somewhere … I'm sure it tested the times to do various calculations using a normal integer, a float, a double and the 96-bit integer … if memory serves, there was bugger-all difference between doubles and floats!

Try various routines using floats and doubles, and time them - I think you'll find that the time saving over MANY iterations would not be that significant … not sure though, haven't tried it.

For what it’s worth, there is an interesting little program on nvidia’s web site which times how long it takes to execute various timing functions … the results are quite interesting … Here’s the output from my machine (PII 350)

Report file for timing the various timers.

*** Key number is the avg time.
The smaller this number, the faster the timer.

QueryPerformanceFrequency() freq = 0 1193182

method 0:
QueryPerfCntr…() 100 times
tot: 0 498
avg: 4.980000
avg time: 4.17371e-006
method 0:
QueryPerfCntr…() 500 times
tot: 0 2466
avg: 4.932000
avg time: 4.13349e-006
method 0:
QueryPerfCntr…() 1000 times
tot: 0 5147
avg: 5.147000
avg time: 4.31368e-006
method 0:
QueryPerfCntr…() 10000 times
tot: 0 49666
avg: 4.966600
avg time: 4.16248e-006

method 1:
GetTickCount() 100 times
tot: 0 9
avg: 0.090000
avg time: 7.54286e-008
method 1:
GetTickCount() 500 times
tot: 0 22
avg: 0.044000
avg time: 3.68762e-008
method 1:
GetTickCount() 1000 times
tot: 0 38
avg: 0.038000
avg time: 3.18476e-008
method 1:
GetTickCount() 10000 times
tot: 0 338
avg: 0.033800
avg time: 2.83276e-008

method 2:
TimeGetTime() 100 times
tot: 0 52
avg: 0.520000
avg time: 4.35809e-007
method 2:
TimeGetTime() 500 times
tot: 0 170
avg: 0.340000
avg time: 2.84952e-007
method 2:
TimeGetTime() 1000 times
tot: 0 336
avg: 0.336000
avg time: 2.816e-007
method 2:
TimeGetTime() 10000 times
tot: 0 3320
avg: 0.332000
avg time: 2.78248e-007

method 3:
Pentium internal high-freq cntr() 100 times
tot: 0 16
avg: 0.160000
avg time: 1.34095e-007
method 3:
Pentium internal high-freq cntr() 500 times
tot: 0 60
avg: 0.120000
avg time: 1.00571e-007
method 3:
Pentium internal high-freq cntr() 1000 times
tot: 0 114
avg: 0.114000
avg time: 9.55428e-008
method 3:
Pentium internal high-freq cntr() 10000 times
tot: 0 1101
avg: 0.110100
avg time: 9.22743e-008

WOW - I’ve just noticed I’ve been promoted to Frequent Contributor!

[This message has been edited by Shag (edited 11-09-2001).]

Use floats. That’s what all PC hardware uses; even high-end SGI kit only uses floats.

If you want to get double precision on 32 bit hardware it can easily be done. Floating point has a varying exponent, that’s why it’s called ‘floating’ point. This means that for small numbers the significant bits in the mantissa store more accurate spatial information than large numbers with a big exponent. If you store big numbers in a float (or send big doubles to floating point hardware) the scene and information in it will jump around. The key to solving this problem is to keep the numbers in the modelview matrix small. That way you always have a lot of accuracy.

So how do you do this? You have a big ol’ database and you want to travel everywhere in your world. The secret is to move the database to the eye, not the other way around. You use double precision in your flight model for moving around. When you are ready to draw you subtract the same big number from the eye transformation and the model transformations and they cancel each other out because +ve viewing xform on the modelview = -ve model xform on the modelview.

The other part to ensuring this works is that you store the vertices in the scene (terrain, tanks, planets, balrogs etc.) in 32 bit precision but for objects which move or are a long way from the database origin you store their location in double precision w.r.t. the global origin. You move all objects with double precision and subtract the eye xyz from their xyzs in double precision before casting to a float for the model transformation. If you are subtracting the eye pos from the objects you subtract the same number from the eye, leaving the eye at the origin, 0, 0, 0.

[This message has been edited by dorbie (edited 11-09-2001).]

Hey, I’m with Hull here. The arguments for float remind me of the people who thought I was strange for wanting more than 8.3 chars for filenames, where obviously the only reason I needed that was because my directory structure was wrong.

Use doubles; you will get nearly the same speed as with floats, some 1%-2% slower, but the precision is better, and the memory isn't important.
And if you want to use coordinates far from the origin (10 million and more), you have to do part of the calculation yourself (something like coords*matrix before passing to GL!!), because you don't gain any precision if your matrix holds large coordinates: the matrix loses precision when GL grabs it, and then the coords lose precision too, before they are even multiplied by the matrix …

[This message has been edited by T2k (edited 11-10-2001).]