With newer hardware is there any way to make the whole OGL stack double precision?

Specifically, I’m using an nVidia Fermi card (GF110 GPU) and I would like to get the whole stack running with double precision. That is, all the matrices (MODELVIEW, PROJECTION, etc.), immediate calls (glVertex3d), vertex buffers — EVERYTHING using doubles internally on the GPU. I realize this would no doubt cause a performance hit on such a “low-end” card (versus a Quadro/Tesla), but I have to think it’s not as bad as scaling and translating millions of vertices on the host CPU, which is what I currently have to do.

I’m doing scientific visualization and the data spans a range of values too large to fit in 32-bit floats, so what I do now is process the data on the host CPU to generate vertex data that fits in 32 bits, then send it to the card. Every time the scaling or translation changes, I have to redo all the calculations and resend the data to the card. This is very slow for large datasets and doesn’t take advantage of the GPU hardware at all, because all the time is spent on the host CPU.
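To make it concrete, here is a hypothetical sketch (not my actual code) of the kind of CPU-side pass I mean: rebase and rescale the double-precision source data so it fits in 32-bit floats before uploading it.

```cpp
// Hypothetical sketch of the CPU-side pass described above: rebase and
// rescale double-precision source data into 32-bit floats before upload.
#include <cstddef>
#include <vector>

struct Vec3d { double x, y, z; };

void rebuildVertexData(const std::vector<Vec3d>& src,
                       double cx, double cy, double cz,  // current translation
                       double scale,                     // current scale
                       std::vector<float>& dst)          // what actually gets uploaded
{
    dst.resize(src.size() * 3);
    for (std::size_t i = 0; i < src.size(); ++i) {
        dst[3 * i + 0] = static_cast<float>((src[i].x - cx) * scale);
        dst[3 * i + 1] = static_cast<float>((src[i].y - cy) * scale);
        dst[3 * i + 2] = static_cast<float>((src[i].z - cz) * scale);
    }
    // ...followed by glBufferData()/glBufferSubData() to resend everything.
}
```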

Is this even possible? I have been searching around for days but so far haven’t stumbled onto any solid answers. There is a lot of material on GPGPU/CUDA/shaders using double precision, and some vague references to glVertexAttribLPointer/glVertexAttribPointerARB (what about the transformation matrices?), but nothing about just telling the OpenGL stack to “use double precision internally”. Maybe it’s a question for the nVidia group, I don’t know.

I would appreciate any pointers.

The answer to your question is both yes and no.

You can make most of your “entire stack” use double precision. To supply vertex attribute data, you use glVertexAttribL*d for immediate mode or glVertexAttribLPointer for arrays. You can supply matrix uniforms via the glUniform*d and glUniformMatrix*d calls.
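As a rough illustration, here is a minimal sketch assuming a GL 4.1+ context (ARB_gpu_shader_fp64 and ARB_vertex_attrib_64bit); the names `vbo`, `program`, and `u_mvp` are placeholders for this example, not anything standard.

```cpp
// Minimal sketch: double-precision vertex attributes and a double-precision
// matrix uniform. Assumes a GL 4.1+ context is current and a program is linked.
#include <GL/glew.h>   // or whichever loader provides the GL 4.x entry points

void uploadDoublePrecisionData(GLuint vbo, GLuint program,
                               const GLdouble* vertices, GLsizeiptr byteCount,
                               const GLdouble mvp[16])
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, byteCount, vertices, GL_STATIC_DRAW);

    // Note the "L": plain glVertexAttribPointer with GL_DOUBLE converts the
    // data to float; glVertexAttribLPointer keeps the attribute a true double.
    glVertexAttribLPointer(0, 3, GL_DOUBLE, 0, (const void*)0);
    glEnableVertexAttribArray(0);

    // Double-precision modelview-projection matrix, column-major.
    glUseProgram(program);
    GLint mvpLoc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4dv(mvpLoc, 1, GL_FALSE, mvp);
}
```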

However, you cannot force OpenGL to do double-precision operations on the gl_Position output from the vertex shader. This means the viewport transform (glViewport/glDepthRange) will still use single-precision math, as will the depth buffer. There are also no double-precision image formats.

Also, you may have noticed that the above discussion refers to shader-based functionality. None of this double-precision support works with fixed-function OpenGL, so you cannot use the fixed-function matrices, fixed-function vertex attributes, or anything else fixed-function and still get double-precision math. You must use shaders.
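For illustration, a sketch of what a matching vertex shader might look like (GLSL 4.10 for the double-precision vertex input), kept here as a C++ string for glShaderSource; the attribute and uniform names match the placeholder names in the earlier sketch. The transform runs in doubles, and only the final write to gl_Position drops back to single precision, which is where the viewport/depth limitations above come in.

```cpp
// Sketch of a vertex shader that transforms in double precision.
const char* vertexShaderSrc = R"glsl(
    #version 410
    layout(location = 0) in dvec3 position;  // fed by glVertexAttribLPointer
    uniform dmat4 u_mvp;                      // fed by glUniformMatrix4dv

    void main()
    {
        dvec4 p = u_mvp * dvec4(position, 1.0lf);
        gl_Position = vec4(p);  // gl_Position is a vec4, so this converts to float
    }
)glsl";
```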

Thanks for the pointers. I wish there were a simple way to switch on double-precision mode using the fixed-function API. I always wondered what the point of all the gl*d functions was when they didn’t really use doubles, and I was hoping that some day, when the hardware caught up, we would get higher precision through the same old API (I learned OpenGL on SGI hardware when that’s all there was). I just need to get my data out into a simple visual context without frills, and in my case shaders are just more complexity for no good reason. But I digress; I’m getting old. :)

Thanks again!