use double precision inside GL

It sure would be nice if there were a way for GL to internally use double precision floating point values.

I am trying to display real-world data, and I lose precision when my values get larger than 10 million, even though I use glVertex3d() to send double-precision values to GL.

I am guessing that GL internally stores values with only single precision. In the posts I have read, people shun the use of double-precision values. This is where one can see the obvious difference between engineers with real-world experience and computer science majors without it. News flash: not everybody uses GL for games or other fictional world locations.

Not to make any of you feel bad, but the software I develop represents real-world locations using real-world units (UTM, State Plane, etc.). You cannot just say “move your city closer to the equator” so that GL can handle your coordinate system.

It appears that what I will be required to do is keep track of a coordinate-system offset (x, y, z) and subtract it from each vertex I give to GL. This way I can let the user work with his data in a real coordinate system, but handle these unfortunate GL limitations by giving GL values within single-precision tolerances. What a nightmare this will be!

Perhaps someone else knows an existing trick to force GL to work (INTERNALLY) with double precision values??

Hello,

hmmm. I’d argue that there is a strong distinction between internal data formats and graphics data formats. I concede that sometimes these two formats can coexist (in games, for example), but this is not always the case. Furthermore, the mere fact that some internal data structure does not have a DIRECT mapping to the graphics capabilities is not an argument for extending those capabilities.

You could have an argument for double precision if you were using some very high-resolution device and you could ~not~ get pixel accuracy, i.e. the distance between glVertex(x) and glVertex(x+delta) is > 1 pixel even for very small values of delta.

My argument is that the inability to map an internal data structure to the graphics h/w is not a compelling argument to change the graphics h/w. Boeing engineers have designed airplanes on graphics machines, and I’d wager that they deal with VERY high-precision internal models.

cheers,
John

Hmmm.

Firstly, several regulars on these boards get quite distressed when people post the same thing in both Beginner and Advanced forums. Posting on both of those and in Suggestions as well… yeesh.

Secondly, the internal precision used by an OpenGL implementation is not mandated by the spec, and I very much doubt it ever will be. Given that a lot of math is done in hardware these days, it’s quite hard to be flexible. I suspect that we’ll be using floats for a long time yet; the bandwidth consumed by vertex data is a much bigger problem than precision.

Thirdly, doubles are not a panacea. Floats give you about 6 decimal places of precision. You need a few more than that, so suddenly you’re a serious, experienced real-world engineer and everyone else is just a wet-behind-the-ears CS major. Newsflash: up the scale a bit more and suddenly doubles don’t give you enough precision either. Maybe OpenGL should make 256-bit math compulsory so that the folks doing galactic-scale simulation don’t have to work too hard?

At the end of the day this is not OpenGL’s problem, it’s the app’s data modelling problem. Basically you need a hierarchical scenegraph, you need to keep your scenegraph nodes spatially coherent and you need to be careful to take the shortest path from viewpoint node to rendered node when accumulating modelview transforms. Mildly tricky, but hardly rocket science. You certainly don’t need to be offsetting every vertex.
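To sketch the idea (the node structure and helper names below are made up for illustration, not from any particular library): store each node’s position in doubles relative to its parent, difference the node and viewpoint positions in double precision, and only then drop to floats for the per-object translate.

```c
/* Sketch of eye-relative transform accumulation over a trivial
 * scenegraph.  Every name here is illustrative.  Node positions are
 * stored in doubles relative to the parent node. */
#include <GL/gl.h>
#include <stddef.h>

typedef struct Node {
    struct Node *parent;
    double x, y, z;               /* offset from parent, world units */
} Node;

/* Accumulate a node's absolute position in double precision.  A real
 * scenegraph would walk the shortest path between the two nodes
 * instead of going all the way to the root. */
static void worldPosition(const Node *n, double *wx, double *wy, double *wz)
{
    *wx = *wy = *wz = 0.0;
    for (; n != NULL; n = n->parent) {
        *wx += n->x;  *wy += n->y;  *wz += n->z;
    }
}

/* Translate to a node as measured from the viewpoint node: the big
 * coordinates cancel in double precision, so the values handed to
 * OpenGL stay small.  No per-vertex offsetting required. */
static void translateToNode(const Node *node, const Node *eye)
{
    double nx, ny, nz, ex, ey, ez;
    worldPosition(node, &nx, &ny, &nz);
    worldPosition(eye,  &ex, &ey, &ez);
    glTranslatef((GLfloat)(nx - ex), (GLfloat)(ny - ey), (GLfloat)(nz - ez));
}
```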

Sorry if this comes across as hostile. It’s not meant that way, but this board gets soooo many posts demanding that OpenGL solve all their programming problems for them (sound, physics, input, maths, you name it) and 99% of the time the implied criticism is unjustified.

MikeC,

I take no offense at your response; in fact, I respect your boldness. I also apologize if my message came across as holier-than-thou. That was not my intent. I wanted to get some real answers to my problems. As I browsed similar posts, everyone had pretty much the same response, which sounded to me like: “why are you being foolish enough to use doubles to represent your data anyway?” I was trying to make the point that in real-world problems, single-precision floats cannot represent all data.

The fact of the matter is, I thought someone might be able to help me with my problem - which is one of the reasons I posted my original message.

As for my posting this question to the “suggestions for new GL features” list: I think it would be valuable to have a GL mode where double-precision floats are used for internal computations. I don’t think it would be good to drop single-precision floats altogether; I was thinking more of a way to request that GL use doubles (with floats remaining the default). But perhaps this would be asking too much, slow things down, or cause other problems.

Originally posted by BigD:
I was trying to make the point that in real-world problems, single-precision floats cannot represent all data.

No, and nobody’s claiming that they can. The point is that single-precision floats can represent all data well enough for a graphics API. OpenGL is there to help visualize your data, not to define it. If numerical issues are spoiling that visualization, something’s wrong with your scenegraph.

I was thinking more of a way to request GL to use doubles (albeit the default would be floats). But perhaps this would be asking too much, slow it down, or cause other problems.

Well, you could handle all the transform and projection math yourself, set all the OpenGL matrices to the identity and just hand OpenGL a bunch of homogeneous vertices for rasterization. A lot of people did this in the early days of consumer 3D hardware, and if you dig back a way there’s a fair amount of info out there on using OpenGL as a rasterization-only API. But it wouldn’t be much fun, and it’s really not necessary.
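For anyone curious, here’s a rough sketch of that approach, assuming a column-major 4x4 double-precision matrix like OpenGL’s own; the helper names are made up for illustration.

```c
/* Sketch: do the modelview-projection multiply in doubles on the CPU,
 * leave OpenGL's matrices at the identity, and submit clip-space
 * homogeneous vertices.  mvp is column-major like OpenGL's matrices;
 * all names are illustrative. */
#include <GL/gl.h>

static void transformVertex(const double mvp[16], const double in[3],
                            double out[4])
{
    int r;
    for (r = 0; r < 4; ++r)
        out[r] = mvp[0*4 + r] * in[0]
               + mvp[1*4 + r] * in[1]
               + mvp[2*4 + r] * in[2]
               + mvp[3*4 + r];        /* input w is 1 */
}

static void drawTriangle(const double mvp[16], const double verts[3][3])
{
    double clip[4];
    int i;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_TRIANGLES);
    for (i = 0; i < 3; ++i) {
        transformVertex(mvp, verts[i], clip);
        /* Only now drop to float; the large coordinates have already
         * cancelled out in double precision. */
        glVertex4f((GLfloat)clip[0], (GLfloat)clip[1],
                   (GLfloat)clip[2], (GLfloat)clip[3]);
    }
    glEnd();
}
```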

cheers,
Mike

First of all, BigD, as I understand it, this forum is for suggesting OpenGL features. The OpenGL spec already supports double-precision values at the API level (which is why you can use glVertex3d()), so this is not really a new feature. Your argument is with the chip makers, and some of them make most of their money in the games market. Others don’t, but you didn’t specify what chip you’re using.

I think that Mike’s idea is a good one. You could probably leave the projection to OpenGL, but do the other transformations yourself using doubles. Shouldn’t be too much of a hassle.
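A minimal sketch of that lighter variant (names illustrative, and assuming an affine modelview so the bottom row is 0 0 0 1):

```c
/* Sketch of the hybrid approach: apply a double-precision modelview on
 * the CPU, keep GL_MODELVIEW at the identity, and let OpenGL's own
 * projection matrix do the rest.  Names are illustrative. */
#include <GL/gl.h>

static void submitVertex(const double modelview[16], const double p[3])
{
    double e[3];
    int r;
    for (r = 0; r < 3; ++r)                 /* column-major, like GL */
        e[r] = modelview[0*4 + r] * p[0]
             + modelview[1*4 + r] * p[1]
             + modelview[2*4 + r] * p[2]
             + modelview[3*4 + r];
    /* Eye-space coordinates are small, so the float truncation here
     * is harmless. */
    glVertex3f((GLfloat)e[0], (GLfloat)e[1], (GLfloat)e[2]);
}
```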

If you’re using OpenGL to display real-world data that goes into the tens of millions, you should scale it WAAYYY the hell down before sending it to OpenGL for rasterization. There’s no reason to send in numbers that big, anyway…

And if you’re so concerned about precision, why not use a custom data type consisting of a 32-bit long for the integer part and another long for the fractional part, and convert it to a float in the process of scaling it down?
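Something like this, roughly (a hypothetical 32.32 fixed-point type; the names are made up):

```c
/* Hypothetical 32.32 fixed-point type along the lines suggested above:
 * one long for the integer part, another for the fraction. */
typedef struct {
    long          whole;   /* part of the number >= 1 */
    unsigned long frac;    /* fraction, in units of 2^-32 */
} Fixed32_32;

/* Combine and scale in double precision, truncating to float only
 * once at the end.  Note the final conversion still rounds to float's
 * ~24 bits of precision. */
static float toScaledFloat(Fixed32_32 v, double scale)
{
    double d = (double)v.whole + (double)v.frac / 4294967296.0;
    return (float)(d * scale);
}
```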

It doesn’t matter how much precision you use to store your data; as soon as you convert it to a float, you lose all the extra precision you stored. If an OpenGL implementation uses 32-bit IEEE floats internally (that is, a 23-bit stored mantissa, giving about 24 bits of effective precision), it’s mathematically impossible to get higher precision by storing the data in another way outside OpenGL.
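To make that concrete, here’s a tiny standalone illustration of what the conversion throws away:

```c
/* Demonstration: however precisely a value is stored beforehand, a
 * 32-bit IEEE float keeps only about 24 significant bits of it. */
#include <stdio.h>

int main(void)
{
    double d = 10000000.125;     /* ten million and a bit */
    float  f = (float)d;         /* what a float-based pipeline sees */

    printf("double: %.6f\n", d);             /* 10000000.125000 */
    printf("float : %.6f\n", (double)f);     /* 10000000.000000 */
    printf("lost  : %.6f\n", d - (double)f); /* 0.125000 */
    return 0;
}
```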

Bob, yeah, I didn’t get that comment myself. However, it’s still valuable to store with higher precision if you do some calculations before the conversion (such as doing coordinate changes in doubles instead of leaving them to OpenGL’s floats).

First, I apologize – this is a reply over a year after the original post (but I just became aware of this discussion group). And it comes after a posting to the beginner’s forum, but I didn’t know this was really the issue until I saw someone’s response to my posting. :0

As a practicing astronautical engineer looking to represent real-world data (but perhaps in the context of a game as well), I too would like to see at least double precision handled internally. I recognize that this may be difficult to do in hardware due to bandwidth limitations, but can it be made part of the standard for the software-only implementations of OpenGL? I am willing to sacrifice speed for accuracy, and perhaps folks like BigD might be willing to do the same. Right now, an attempt to use large distances results in significant rendering artifacts.

I have tried scaling, and it works fine for those large distances. But it messes up the relative sizing of near-field objects. A 1m object viewed from 30m away looks the same as a 1 km object viewed from 30 km away, and the object 10 million m away (i.e. the Earth) still looks OK. But, when I want to draw another 1m object 30m away from the other 1m object (i.e. the distance between two astronauts maneuvering around the International Space Station on EVAs), scaling doesn’t work because I am actually forced to draw two 1 km objects 30 m apart, meaning they almost completely overlap! In addition, scaling distances also changes the physics of the relative motion of the objects in space.

I am currently doing all calculations using doubles. But as Bob points out, that precision is lost when truncated to floats, resulting in serious rendering artifacts.

I can probably eventually figure out a workaround that solves my problems, but boy, it would sure be nice if doubles were used internally – it would solve a lot of problems and save a lot of time – and isn’t that the intention of new features? It would just make OpenGL’s internal data types consistent with the C language that is used to implement it. At least I’m not asking to have OpenGL calls consistent with FORTRAN calling conventions…

You can download Mesa, an open-source implementation of OpenGL, and modify it to use double precision (unless there are already compile options for it).

I have figured out a way to work around my problem, although it probably will not help in the space simulation. I just thought it might be good to mention it here in case someone else needs to do this.

I have started maintaining a double-precision data offset. I set this offset to the center of my data. When sending a point to GL, I subtract my offset from the point and give GL the modified point (my view volume also takes the data offset into account). In this way, I’m giving OpenGL coordinates that it can handle. Although my data can have large coordinate values, I am fortunate in that the bounds of my data are small enough for this method to work.
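In code, the idea is roughly this (the helper names are just for illustration):

```c
/* Sketch of the offset workaround described above; names are
 * illustrative.  The offset is chosen at the centre of the data. */
#include <GL/gl.h>

static double offsetX, offsetY, offsetZ;

static void setDataOffset(double cx, double cy, double cz)
{
    offsetX = cx;  offsetY = cy;  offsetZ = cz;
}

/* Subtract in double precision, so e.g. a coordinate of 10,000,123.25
 * reaches OpenGL as a float-friendly 123.25. */
static void vertexOffset3d(double x, double y, double z)
{
    glVertex3f((GLfloat)(x - offsetX),
               (GLfloat)(y - offsetY),
               (GLfloat)(z - offsetZ));
}
```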