After I do a normalization cube map lookup, I unpack the value with:
MAD result, texValue, 2, -1;
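For context, a minimal sketch of the lookup plus unpack - register names and the texture-unit binding here are my assumptions, not the actual program:

```
!!ARBfp1.0
# Sketch only; names and bindings are illustrative.
TEMP texValue, normal;
# Normalization cube map lookup (cube map assumed bound to unit 0).
TEX texValue, fragment.texcoord[0], texture[0], CUBE;
# Unpack from the [0,1] texture range to [-1,1].
MAD normal, texValue, 2.0, -1.0;
MOV result.color, normal;
END
```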
However, I am finding that on the latest drivers the results are wrong:
GeForce FX 5600 - The instruction can behave as if it is ignored.
ATI 9800 - Negative values are messed up.
GeForce 6800 - No problems.
Intel integrated - No problems.
However, if I remove the OPTION ARB_precision_hint_fastest; line on Nvidia, I get the expected results. (I have not tried this on ATI, but I assume ATI mostly ignores this option.)
If I change the code to be:
MAD result, texValue, 2.0001, -1;
I get expected results on all cards.
(If I do the same operation on something that does not come from a texture, I also seem to get the expected results.)
I am assuming Nvidia/ATI have an "optimization" in place that recognizes the MAD x, x, 2, -1; type of instruction and replaces it (i.e. for unpacking bump maps etc., which seem to work fine). However, with a cube map source the results are just wrong.
It could be that I am doing something wrong, as it is strange that both Nvidia and ATI have a similar bug. Just wondering if anyone else has the same problem?
Driver versions: ATI - Cat 4.8 (will try 4.10 soon)
Nvidia - 66.81