GLSL and floating point precision
08-20-2005, 07:03 AM
Hi everybody! As far as I understand, GLSL supports only single-precision floating point numbers. The NVIDIA 7800 series seems to support double precision (and even more?!) throughout the whole pipeline. Am I right? How can I take advantage of this feature using GLSL? Cg has a reserved type double; is there something similar in GLSL? How can I influence the precision at which the pipeline works?
Thanks for your help!
What exactly makes you think the GeForce 7xxx can use double precision? I haven't seen any paper saying this.
1) Throughout the pipeline, most graphics cards use only one precision. Some cards are able to run specific pixel-shader tasks at different precisions (usually two) to improve speed. You can tell the card which precision to use via the vendors' proprietary extensions (NV_fragment_program 1-3).
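To make the speed/accuracy tradeoff concrete, here is a minimal sketch (in Python rather than shader code, since the effect is independent of the shading language) that rounds the same value to IEEE-754 half precision and single precision, roughly the two per-instruction precisions those NV extensions let you pick between:

```python
import struct

def round_to(fmt, x):
    # Round a Python double to a narrower IEEE-754 width by packing and
    # unpacking it: 'e' = 16-bit half, 'f' = 32-bit single precision.
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

x = 0.1
half   = round_to('e', x)  # 16-bit: only ~3 decimal digits survive
single = round_to('f', x)  # 32-bit: ~7 decimal digits survive
print(half, single)
```

The half-precision result is visibly off from 0.1, which is why hardware only uses the cheaper precision where the error is acceptable (e.g. color math).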
2) If you have read that the GeForce 7xxx uses higher precision, that might be true, but only for certain parts of the pipeline where higher precision is important. You won't be able to change that precision yourself. Maybe it could be changed by drivers, but some of it will be pretty much hardcoded into the chip.
If I'm wrong, please tell me.
08-21-2005, 03:37 AM
There are some remarks about floating point precision in the tech specs of the card on the NVIDIA homepage, but I might have mixed something up. What about professional cards like the FireGL from ATI or the Quadro FX from NVIDIA? I am new to this field, and my concern is not graphics programming but scientific computing using the power of current graphics cards. If all cards only support single precision, they will be pretty useless for me.
At certain stages of the pipeline some cards may internally use higher precision, where that is necessary to guarantee good results. However, in the end all you get is single precision, which is already a lot; earlier generations of graphics cards had much less precision!
The "professional" cards are simply the same cards with all features enabled. A Quadro card is, hardware-wise, the same as a GeForce, but on the GeForce the features that are important for CAD applications (e.g. hardware-accelerated line drawing) are disabled.
Therefore they won't make a difference for precision.
Powered by vBulletin® Version 4.2.0 Copyright © 2013 vBulletin Solutions, Inc. All rights reserved.