Windows OpenGL drawing accuracy



Belgrad
11-16-2002, 07:13 AM
Is OpenGL restricted to float precision? If not, how can I force it to use double?


mikael_aronsson
11-16-2002, 09:10 AM
Hi!

There is support for both float and double in the API, but I guess you would like OpenGL to use double internally instead of float?
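
For example, both precisions exist side by side in the immediate-mode API (a minimal sketch; glVertex3f and glVertex3d are real entry points, the wrapper function is just illustration):

#include <GL/gl.h>

/* The API happily accepts doubles, but a typical driver converts
   them to float before they reach the hardware. */
void draw_point(double x, double y, double z)
{
    glBegin(GL_POINTS);
    glVertex3d(x, y, z);  /* double in the API, usually float inside */
    glEnd();
}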

I think most OpenGL hardware uses float, and you cannot change that; the reason, I guess, is that float is only half the amount of data to send down the pipeline.

I am not sure you would gain much with double anyway; the depth buffer is limited to 24 or 32 bits.

Mikael

Belgrad
11-16-2002, 09:55 AM
:((( Very bad news. It is essential for my project to be able to draw with double precision...


nexusone
11-17-2002, 03:56 AM
You can store your data in double format, then convert it to float when drawing.
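
Something like this (a rough sketch; the names and array layout are made up, only the GL calls are real):

#include <GL/gl.h>

#define NUM_VERTS 3  /* example size */

/* The authoritative copy stays in double precision... */
static double verts[NUM_VERTS][3];

/* ...and is cast to float only at draw time. */
void draw(void)
{
    int i;
    glBegin(GL_TRIANGLES);
    for (i = 0; i < NUM_VERTS; ++i)
        glVertex3f((GLfloat)verts[i][0],
                   (GLfloat)verts[i][1],
                   (GLfloat)verts[i][2]);
    glEnd();
}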

A float is more accurate than a double.


Originally posted by Belgrad:
:((( Very bad news. It is essential for my project to be able to draw with double precision...


Jambolo
11-17-2002, 11:00 AM
Originally posted by nexusone:
A float is more accurate than a double.

nexusone: Huh? Usually float has a 23-bit mantissa and double has a 52-bit or 64-bit mantissa. Look it up.

Belgrad: You must be doing something very unusual if you require the renderer to use double-precision. But you can always do the transformations yourself and just give OpenGL the final results.
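
Something along these lines (a sketch only; the function name and the assumption of a column-major 4x4 matrix and w = 1 points are mine):

#include <GL/gl.h>

/* Transform each vertex in double precision on the CPU, then hand
   the finished result to GL as float with the modelview at identity. */
void draw_transformed(const double m[16],     /* column-major 4x4 */
                      const double *v, int n) /* n xyz triples */
{
    int i;
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();               /* GL itself no longer transforms */
    glBegin(GL_POINTS);
    for (i = 0; i < n; ++i) {
        const double *p = &v[3 * i];
        double x = m[0]*p[0] + m[4]*p[1] + m[8]*p[2]  + m[12];
        double y = m[1]*p[0] + m[5]*p[1] + m[9]*p[2]  + m[13];
        double z = m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14];
        glVertex3f((GLfloat)x, (GLfloat)y, (GLfloat)z);
    }
    glEnd();
}

This way the precision-sensitive arithmetic happens in double, and only the final, screen-ready coordinates are rounded to float.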