double -> float vs. gl*d

Hi all,

Does anyone have any idea which is faster and why:

glVertex3f( float( double_x ), float( double_y ), float( double_z ) );

or

glVertex3d( double_x, double_y, double_z );

for any gl* function?

I am writing code for an FEA app, and of course all our numerical data are doubles. Which is faster, using the GL double functions or casting? This is on Windows with VS 7.1, btw.

Thanks

If you care about speed, then you shouldn't even be using immediate mode rendering in the first place. Speed and immediate mode are, sort of, orthogonal.

Anyway, if you really care, can't you just try it and see for yourself? It's not like one will be universally faster than the other. It will depend on the situation, and no one will be able to give you a reliable answer for your particular situation. Except for yourself, of course, who can actually try it, and not just guess.
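If you do want to measure it, here is roughly what I mean — just an untested sketch, where verts and N stand in for your own double array and vertex count:

#include <windows.h>
#include <GL/gl.h>

// Time the cast-to-float path; glFinish() makes sure the driver has
// actually consumed the data before we stop the clock.
double timeFloatPath( const double* verts, int N )
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency( &freq );
    QueryPerformanceCounter( &t0 );
    glBegin( GL_POINTS );
    for( int i = 0; i < N; ++i )
        glVertex3f( float( verts[3*i] ), float( verts[3*i+1] ), float( verts[3*i+2] ) );
    glEnd();
    glFinish();
    QueryPerformanceCounter( &t1 );
    return double( t1.QuadPart - t0.QuadPart ) / double( freq.QuadPart );
}

Then write the same loop with glVertex3d( verts[3*i], verts[3*i+1], verts[3*i+2] ) and compare the two times on your actual data.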

But my advice is: drop immediate mode and use vertex arrays or something if you care about speed.
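To show what I mean, a minimal vertex array sketch — assuming your data is already sitting in a flat float array:

#include <GL/gl.h>

// One call submits the whole batch; no per-vertex function call overhead.
void drawWithVertexArray( const float* floatVerts, int vertexCount )
{
    glEnableClientState( GL_VERTEX_ARRAY );
    glVertexPointer( 3, GL_FLOAT, 0, floatVerts );
    glDrawArrays( GL_TRIANGLES, 0, vertexCount );
    glDisableClientState( GL_VERTEX_ARRAY );
}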

Bob,

Thanks for the reply. I see what you are saying about immediate mode, and I will certainly look for places where vertex arrays and display lists can be used. But I guess I'm curious what the compiler is doing vs. what the rendering pipeline does (or should I say the graphics hardware).

For those interested, or if anyone has feedback on this, I found the following web site discussing conversion from float to int. Not quite double to float, but at least it's a start:

http://mega-nerd.com/FPcast/

I would benchmark the two as previously recommended, but my instincts tell me that float would be faster. Think about it: sending a million float vertices over the pipeline versus sending a million double vertices over the pipeline is a doubling of the required bandwidth.

Float : (4 bytes)(1 million verts)(3 coordinates) = 12 Million bytes

Double : (8 bytes)(1 million verts)(3 coordinates) = 24 Million bytes
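And if you go the float route, note that you only have to pay for the cast once, not every frame — a rough sketch with made-up names:

#include <vector>

// Convert the double data to floats once up front; after that, every
// frame only pushes the smaller float array down the pipeline.
std::vector<float> toFloats( const double* src, int count )
{
    std::vector<float> dst( count );
    for( int i = 0; i < count; ++i )
        dst[i] = float( src[i] );
    return dst;
}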

Unless you have a good reason to use doubles, of course.
If you calculate everything in doubles because you need the precision, performance-wise it's probably a better idea to let the driver/card handle any conversion.

I assume that the majority of current HW does not support double values (they are twice as big, and for ordinary use the increased precision is not necessary).

It is probable that on HW that does not support doubles, the double variant (or any other variant that is not native to the HW) simply converts/casts its input to the native format. This is probably done directly in the entry function, because that reduces the number of code paths that need to be implemented.

I assume the implementation probably looks something like this:

void glVertex3d( double x, double y, double z )
{
    glVertex3f( float( x ), float( y ), float( z ) );
}

>>I assume that the majority of current HW does not support double values.

I'm no hardware engineer, but I would guess you don't need 64-bit CPU registers to process 64-bit doubles. I would guess, yes, all graphics cards can do double precision, it just takes more CPU instructions to do it.

Originally posted by <tibit>:

>> I'm no hardware engineer, but I would guess you don't need 64-bit CPU registers to process 64-bit doubles. I would guess, yes, all graphics cards can do double precision, it just takes more CPU instructions to do it.

I meant the graphics card's GPU, not the CPU. There are two separate things: what the CPU can do (x86-based CPUs do have 80-bit float registers) and what the GPU can do (ATI 9500+ cards do not support doubles natively, according to ATI documentation). When I talk about what the card can do, I am talking about GPU capabilities, not what the driver can emulate in some way on the CPU.