GL vector component conversions

I am baffled by the signed vector component conversions into float vector components below:

ubyte    c / (2^8 − 1)
byte     (2c + 1) / (2^8 − 1)
ushort   c / (2^16 − 1)
short    (2c + 1) / (2^16 − 1)
uint     c / (2^32 − 1)
int      (2c + 1) / (2^32 − 1)

For unsigned types, apparently no negative vector components are possible. But for signed ones, GL multiplies by 2, giving the interval [-2^n, 2^n - 2] of possible values, adds 1, giving [-(2^n - 1), 2^n - 1], and divides by (2^n - 1) for the interval [-1, 1].
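To make sure I read the table right, here is how I would write the decode side in plain C (just my reading of the formulas above, not what the driver actually does; c is the raw integer component from the vertex array):

```c
#include <stdint.h>

/* Unsigned: c / (2^n - 1) maps [0, 2^n - 1] onto [0, 1]. */
static float decode_ubyte(uint8_t c)   { return c / 255.0f; }
static float decode_ushort(uint16_t c) { return c / 65535.0f; }

/* Signed: (2c + 1) / (2^n - 1) maps [-2^(n-1), 2^(n-1) - 1] onto [-1, 1];
 * e.g. for shorts, -32768 -> -1.0 and 32767 -> +1.0 exactly. */
static float decode_byte(int8_t c)     { return (2.0f * c + 1.0f) / 255.0f; }
static float decode_short(int16_t c)   { return (2.0f * c + 1.0f) / 65535.0f; }
```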

Somehow the lighting I get from using signed shorts isn’t as sharp as what I get from using floats directly (IMO 16 bits ought to be enough, but the difference is very noticeable). Maybe there’s a catch somewhere with rounding? Are you rounding when you convert the floats to signed ints in your code? Do the signed ints give you satisfactory results in your apps?

Currently I use the inverse of the GL formulas to derive the signed short components; I’ve tried it both with and without rounding. Maybe I should use some custom normal compression scheme and decompress in a shader?
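For what it’s worth, the inverse with rounding that I mean looks roughly like this (a sketch, not my exact code; roundf is C99, and the clamp is just defensive):

```c
#include <math.h>
#include <stdint.h>

/* Inverse of f = (2c + 1) / (2^16 - 1): c = (f * 65535 - 1) / 2,
 * rounded to the nearest short and clamped to the representable range. */
static int16_t encode_short(float f)
{
    float c = roundf((f * 65535.0f - 1.0f) * 0.5f);
    if (c < -32768.0f) c = -32768.0f;
    if (c >  32767.0f) c =  32767.0f;
    return (int16_t)c;
}
```

Without the roundf, a plain cast truncates towards zero instead, which can shift a component by up to one step of the 16-bit grid.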

But for signed ones, GL multiplies by 2, giving the interval [-2^n, 2^n - 2] of possible values, adds 1, giving [-(2^n - 1), 2^n - 1], and divides by (2^n - 1) for the interval [-1, 1].

No. It’s simple two’s complement. The range of values is [-2^(n-1), 2^(n-1) - 1].

The conversion from floats on [-1, 1] works in exactly the same way as it does for unsigned integers.

An interesting perspective. You mean because unsigned goes from 0 to 2^n - 1 and signed from -2^(n-1) to 2^(n-1) - 1, and multiplying by 2 removes the sign bit and shifts negative values “downwards” and positive ones “upwards”? I imagine the variable c would be converted to a float on the GPU.

Still, for unsigned integers, I don’t see how there could be negative vector components.
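Just to check the doubling-and-shifting reading above against actual numbers, here is plain arithmetic on a few shorts, nothing GL-specific:

```c
#include <stdio.h>

int main(void)
{
    /* (2c + 1) / (2^16 - 1) at the two's-complement extremes and around zero. */
    const int samples[] = { -32768, -1, 0, 32767 };
    for (int i = 0; i < 4; ++i) {
        const int c = samples[i];
        printf("c = %6d  ->  %+.7f\n", c, (2.0 * c + 1.0) / 65535.0);
    }
    return 0;
}
```

This puts -32768 at exactly -1, 32767 at exactly +1, and straddles zero at ±1/65535, whereas the unsigned formula c/(2^16 − 1) never leaves [0, 1].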