Double vs Float vs Int for drawing

I want to know which is most preferred. It's just my uneducated guess, but since there is no such thing as a partial pixel, is using ints instead of floats or doubles just as accurate? I noticed most tutorials and books use floats. Any pros and cons? (Yes, I know the difference between floats, ints, and doubles.)

So floats are recommended because a float uses less memory than a double, since it is less precise. Am I right?

Are you talking about using them for vertex data or image data?

New DX11-class hardware may support doubles in buffers and possibly even in textures, but certainly no other hardware allows it. So you're pretty much restricted to floats and ints.

If you take a look at GF100 (aka Fermi) specification, you can read that:

GF100 implements the new IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single and double precision arithmetic.

What that means in practice we will see when the cards reach us. From the charts it seems the same units serve for both single-precision (SP) and double-precision (DP) operations. What I know for sure is that in GT200 there is one DP unit for every 8 FP32 SP ALUs; in other words, there are only 3 DP units in a cluster with 24 SP units. Of course, currently you cannot use the DP units in GL. But if you need DP values, for vertex positions for example, then you can encode each coordinate with two SP numbers. So even now you can achieve higher precision without direct support on the graphics card. The problem is that this solution requires a little math in the vertex shaders and splitting each DP coordinate into two SP numbers…

If you are using fixed functionality, then you are confined to lower precision. But even then there are benefits to using double precision for matrix calculations (and loading the result yourself instead of using GL functions for multiplication) if your scene is huge. Not only is the memory allocation smaller, but calculations can also be done faster with SPs. Maybe Fermi will change that. :wink:

I have a quick little question about floats and integers. Decided not to make a new thread.

Is it a big mistake when I use “GLfloat a = 1” instead of “GLfloat a = 1.0f”?

Not really, because of C’s type promotion rules: the compiler will figure it out for you.

For the OP: you should prefer floats, as the type is native to current (and previous) GPUs. There’s nothing to be really gained by using doubles, and in the worst case, because you never really know what’s going on inside your OpenGL implementation, you may even lose performance if your driver needs to go through a software stage (to convert an array of doubles to an array of floats in a VBO, for example).

Decided not to make new thread.

Threads are not an endangered species. You don’t have to resurrect a 6-month old thread to ask a question.

Is it a big mistake when I use “GLfloat a = 1” instead of “GLfloat a = 1.0f”?

Which version of GLSL? In versions 1.0–1.2 it would be an error (no implicit conversion). Version 1.3 and above allow the implicit conversion.

However, NVIDIA’s GLSL compiler will always accept this syntax, even in GLSL versions where it should cause an error, unless you’re compiling in strict mode.

It will probably give an annoying compile warning along the lines of “cast from int to float, possible loss of precision”… I think. If so, these kinds of warnings can clutter your build output and hide more serious warnings, so eliminate it!
Most would likely agree that the second form is better programming style.

So, a big mistake? No. Is there a (possibly) deeper concern lurking behind your question? :slight_smile: