Dividing 128 by 256 gives the same result as dividing 127 by 256

Hi,

In the vertex shader below I encode an integer value into one color component. I have noticed that the 0-127 range works as expected, but from 128 upward I need to add 1 in order to compute the correct color value. This behavior is consistent throughout the 128-255 range.

Can anyone shed any light on this pesky matter?


out vec4    varying_Color; 
void main(void) 
{
	...
	// float r is a value between 0 and 255 inclusive
	...	
	if (r>127) { 
		r++;      // why is this needed
	}
	varying_Color = vec4(r/256,0,0,1);
}

GLCapabilities: GLCaps[wgl vid 8 arb: rgba 8/8/8/8, trans-rgba 0x0/0/0/0, accum-rgba 16/16/16/16, dp/st/ms 24/0/0, dbl, mono , hw, GLProfile[GL4/GL4.hw], on-scr[.]]
INIT GL IS: jogamp.opengl.gl4.GL4bcImpl
GL_VENDOR: NVIDIA Corporation
GL_RENDERER: GeForce GTX 570/PCIe/SSE2
GL_VERSION: 4.6.0 NVIDIA 388.13

If you want to map the range [0,255] to [0,1], you need to divide by 255, not by 256. If you’re storing the output in an 8-bit unsigned normalised texture, conversion from floating point will multiply by 255.
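For reference, here is a minimal sketch of the corrected shader. The value of r is hard-coded purely to keep the sketch self-contained (in the real shader it is computed as in the original post), and #version 400 is assumed to match the GL4 profile shown in the log.

#version 400 core

out vec4    varying_Color;

void main(void)
{
	// stand-in for the integral value in [0,255] computed by the original shader
	float r = 200.0;

	// Divide by 255.0, not 256.0: [0,255] then maps exactly onto [0,1], and the
	// 8-bit fixed-point conversion (multiply by 255, round to nearest) recovers r,
	// so the "if (r > 127) r++" workaround is no longer needed.
	varying_Color = vec4(r / 255.0, 0.0, 0.0, 1.0);

	gl_Position = vec4(0.0, 0.0, 0.0, 1.0);	// placeholder; the real shader sets this
}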

Dividing by 256 and then multiplying by 255 scales every value down slightly (by r/256). For values below 128 the error is less than 1/2, so rounding to the nearest value gives the original back and there is no difference overall. For values above 128 the error is greater than 1/2, so the stored value ends up one less than expected; e.g. 200*255/256 = 199.21875, which rounds to 199. For 128 itself, 128*255/256 = 127.5, which could be rounded either way.

Note that the standard doesn’t require rounding to the nearest value, but states that it is preferred. All implementations seem to do this.

If you need to store exact values in a texture, consider using an unsigned-integer format such as GL_RGBA8UI instead; this requires OpenGL 3.0 or later.
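If you go the integer route, the shader side could look roughly like this. It is only a sketch of a fragment shader rendering into a GL_RGBA8UI colour attachment; the names varying_Value and out_Color are made up for illustration.

#version 400 core

// Integer outputs are written verbatim: no normalisation and no rounding,
// so the value lands in the texture exactly as produced.
flat in uint varying_Value;	// integer passed down from the vertex shader ('flat' is required)
out uvec4 out_Color;

void main(void)
{
	out_Color = uvec4(varying_Value, 0u, 0u, 1u);
}

On the application side the attachment would be allocated with internal format GL_RGBA8UI and read back using format GL_RGBA_INTEGER and type GL_UNSIGNED_BYTE.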

Thanks, dividing by 256 instead of 255 was indeed incorrect. I’m humbled by my own stupidity.
