No implicit conversion between int literals and float?

NVIDIA’s drivers have traditionally displayed a warning when an int literal is used where a float is called for. However, I’ve seen at least one newer driver (NVIDIA’s driver for Mac OS X 10.6) that now flags these as errors rather than warnings. For example:


float x = 5; // Error, but used to be a warning
float y = 5.0; // OK

I’m having trouble figuring out why this is an error. The GLSL spec states pretty clearly that an implementation isn’t even required to actually support ints. It also states that implicit conversion between types is done when necessary, and specifically cites conversion from int to float (section 4.1.10).
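For what it’s worth, an explicit constructor sidesteps the question entirely, since constructor-style conversion is spelled out separately from implicit conversion and is accepted by every GLSL version:


float x = float(5); // explicit conversion: accepted everywhere
float y = 5.0;      // float literal: also accepted everywhere
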

So my real question is, does anyone know an easy way to disable this error, or an easy way to convert hundreds of int literals in dozens of files to float literals?
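On the bulk-conversion side, a small script can do most of the mechanical work. Below is a rough sketch (not a real GLSL parser, just a regex pass): it appends `.0` to bare integer literals, skipping preprocessor lines and literals inside `[]` subscripts. The function name and the guards are my own invention, and the output would still need a manual review pass (loop counters and genuine int math must stay ints).

```python
import re

# Match an integer literal not already part of a float (5.0) or identifier (x5).
INT_LITERAL = re.compile(r'(?<![\w.])(\d+)(?![\w.])')

def floatify_line(line):
    """Rewrite bare int literals on one line of GLSL as float literals (5 -> 5.0)."""
    if line.lstrip().startswith('#'):        # leave #version, #define, etc. alone
        return line
    out, last = [], 0
    for m in INT_LITERAL.finditer(line):
        before = line[:m.start()]
        # Crude guard: don't touch literals inside [] subscripts.
        if before.count('[') > before.count(']'):
            continue
        out.append(line[last:m.start()] + m.group(1) + '.0')
        last = m.end()
    out.append(line[last:])
    return ''.join(out)

print(floatify_line('float x = 5;'))        # float x = 5.0;
print(floatify_line('gl_Color[0] = 1;'))    # gl_Color[0] = 1.0;
```

Running this over the files with something like `fileinput` (and diffing before committing) is the obvious next step; I wouldn’t trust it unreviewed.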

Did you enable a GLSL version that allows implicit conversion from int to float?

My spec references were to the 1.20 spec – as I read it, implicit conversion should already be allowed there.

If I understand you correctly, implicit conversion isn’t available until some later version. Which version?

And… why is conversion not performed in v1.20? Where does the spec identify this as an error and prohibit the conversion?

My spec references were to the 1.20 spec – as I read it, implicit conversion should already be allowed there.

Are you actually compiling it as a 1.20 shader, with the #version definition at the top of the shader file?
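If the directive is missing, the shader is compiled as GLSL 1.10, which has no implicit int-to-float conversion – that alone would explain the error. Assuming that’s the issue, the fix is just:


#version 120            // must be the first line, before any other statements
float x = 5;            // with 1.20's implicit conversions, this should compile
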

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.