
View Full Version : 0.0f vs 0f, what's the difference?



mpan3
10-27-2004, 03:23 PM
Most code I've downloaded always puts a .0 after a floating-point number even when it's not needed. However, getting rid of that .0 doesn't seem to cause any compilation or runtime error either. So why do people do that? There must be a reason...

mpan3
10-27-2004, 03:26 PM
Anyone?

Bob
10-29-2004, 03:54 AM
'f' is not a valid suffix for integers; it only works for floating-point constants. A floating-point constant must have a '.' (or an exponent) somewhere, so 0.f would work, but not 0f. Without the period, it is an integer constant.

If it works for you, then you're either not using C or C++, or you have a broken compiler. MSVC 7.1, Intel 8 and GCC 3.4 all complain about 0f.

1024
10-29-2004, 07:21 AM
Originally posted by mpan3:
Most code I've downloaded always puts a .0 after a floating-point number even when it's not needed. However, getting rid of that .0 doesn't seem to cause any compilation or runtime error either. So why do people do that? There must be a reason...
What does this have to do with OpenGL!?

Anyway, consider this: 5/2 = 2, versus 5.0/2.0 = 2.5.

nigels
10-29-2004, 07:45 PM
Speaking from the point of view of the compiler:

0.0f is a float
0.0 is a double
0 is an int

float a = 0; /* Implicit conversion from int to float */

float b = 0.0f; /* No implicit conversion */

float c = 0.0; /* Implicit conversion from double to float */

mpan3
11-01-2004, 08:20 PM
Sorry, I made a mistake in the subject line; it's supposed to say 0, not 0f. Anyway, thanks for the answers. So 0.0f is a float while 0.0 is a double.