GLSL double precision range



eodabash
03-12-2014, 04:33 PM
I'm trying to understand the limitations of double-precision data in shader calculations, in particular whether there is some min/max range of values that is supported by the language. We have this simple vertex shader that takes a dvec3 position attribute and uses dmat4 matrices to transform it.



#version 410

in dvec3 _pos;

uniform dmat4 _world;
uniform dmat4 _view;
uniform dmat4 _proj;

void main()
{
    // gl_Position is a plain vec4; GLSL has no implicit double-to-float
    // conversion, so the dvec4 result must be narrowed explicitly.
    gl_Position = vec4(_proj * _view * _world * dvec4(_pos, 1.0));
}


When we use "reasonable" values for the data (values that would be well suited to single precision), everything works as expected. When we test very large values, such as a vertex and eye position located at (FLT_MAX, y, z), nothing draws at all. This makes me wonder whether everything is just being cast to single precision on the hardware, as this calculation should be well within the limits of double precision. We've tested this on AMD and NVIDIA with the same result, which makes me think there's something in the GLSL double-precision spec that we're overlooking.

Brokenmind
03-13-2014, 02:12 AM
I've worked with double precision myself, but the values I calculated didn't leave the shader code via the output variables. This page (http://www.opengl.org/wiki/Built-in_Variable_%28GLSL%29) states that


out gl_PerVertex
{
vec4 gl_Position;
float gl_PointSize;
float gl_ClipDistance[];
};

which would support your assumption: The calculation must be cast to float in order to fit in the output vector.

Moreover, this thread (http://www.opengl.org/discussion_boards/archive/index.php/t-181359.html) claims that using double here is not even possible.

Edit: Actually, your example should work if the float-exceeding calculations are performed in dvecs and dmats (whose existence would make no sense if all values were cast to float anyway) and the results stored in gl_Position end up in a normal range, but I couldn't find anything definitive on that.

Aleksandar
03-13-2014, 03:36 AM
I'm trying to understand the limitations of double precision data in shader calculations. Particularly if there is some min/max range of values that is supported by the language.

Support for double precision is hardware dependent. AFAIK the NV GT200 was the first GPU with DP support (IEEE 754-1985), but many GPUs released after it didn't have DP (for example, the Radeon HD 7670, released in January 2012).

Starting with Fermi, the newer IEEE 754-2008 standard is fully supported. But be aware that transcendental functions are still SP unless emulated.

Why, then, doesn't your test work? Because you are using SP variables to convey the DP calculation results. Instead of writing to gl_Position, try writing to a DP transform feedback buffer, read the values back, and compare them to CPU values. There will be some deviation, since Intel has 80-bit internal DP, but the values should agree to within 1/2 lsb (or 1 lsb, depending on many factors) of normal 64-bit DP numbers.

eodabash
03-13-2014, 11:34 AM
Support for double precision is hardware dependent. AFAIK the NV GT200 was the first GPU with DP support (IEEE 754-1985), but many GPUs released after it didn't have DP (for example, the Radeon HD 7670, released in January 2012).

Starting with Fermi, the newer IEEE 754-2008 standard is fully supported. But be aware that transcendental functions are still SP unless emulated.

Why, then, doesn't your test work? Because you are using SP variables to convey the DP calculation results. Instead of writing to gl_Position, try writing to a DP transform feedback buffer, read the values back, and compare them to CPU values. There will be some deviation, since Intel has 80-bit internal DP, but the values should agree to within 1/2 lsb (or 1 lsb, depending on many factors) of normal 64-bit DP numbers.

My test has very large incoming vertex positions, but also a camera located far from the origin, so after the view matrix is applied the values should all be reasonably sized. I'm not really concerned about the precision lost when casting clip-space doubles to clip-space floats for the final result in gl_Position. Even if the picture were slightly wrong, there should still be some kind of picture.

I was thinking about doing a test with transform feedback to see what results are actually being written out. I suspect I'm going to find they're all garbage.

UPDATE:

I just checked some of the calculations I was trying to do on the CPU, and I can see now that the geometry I'm using is too small to be resolved at all at these magnitudes. FLT_MAX + 100 == FLT_MAX, for instance. So I think that probably explains everything :/