0.0f vs 0f, what's the difference?

Most code I've downloaded puts a .0 after floating-point numbers even when it's not needed. However, getting rid of that .0 doesn't seem to cause any compilation or runtime error either. So why do people do that? There must be a reason…

Anyone?

‘f’ is not a valid suffix for integers; it only works for floating-point constants. A floating-point constant must have a ‘.’ (or an exponent) somewhere, so 0.f would work, but not 0f. Without the period, it is an integer constant.

If it works for you, then you’re either not using C or C++, or you have a broken compiler. MSVC 7.1, Intel 8, and GCC 3.4 all complain about 0f.

Originally posted by mpan3:
Most code I've downloaded puts a .0 after floating-point numbers even when it's not needed. However, getting rid of that .0 doesn't seem to cause any compilation or runtime error either. So why do people do that? There must be a reason…
What does OpenGL have to do with it?!

Anyway, consider this: 5/2 = 2, vs. 5.0/2.0 = 2.5.

Speaking from the point of view of the compiler:

0.0f is a float
0.0 is a double
0 is an int

float a = 0; /* Implicit conversion from int to float */

float b = 0.0f; /* No implicit conversion */

float c = 0.0; /* Implicit conversion from double to float */

Sorry, I made a mistake in the subject line; it's supposed to say 0, not 0f. Anyway, thanks for the answer. So 0.0f is a float while 0.0 is a double.