Hi,
I have some data in the range of roughly -3000 to 3000. What I want to do is load it into a texture and display it, applying some transformations to it (right now it’s just a linear mapping).
I believe all textures you load are scaled and converted to [0…1] by OpenGL. I’m going by this diagram I found after digging through the specs: spec 1.1. So I scale my data to [0…1] and convert it to float myself before passing it in as a texture.
The problem is I seem to be losing precision or level of detail: pixels with values that are close to each other get painted the same. If I decrease the range I scale to, say, -1000 to 1000 (clipping the rest), things look better. However, I thought float should have enough precision to handle the wider range.
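For concreteness, the CPU-side scaling I do is essentially this (a simplified sketch in C; the function name `normalize_sample` and the hard-coded range are just for illustration, the real code runs over the whole volume before upload):

```c
/* Map a raw sample in [lo, hi] to [0, 1], clamping out-of-range values.
   Sketch of the scaling step done before the data is handed to OpenGL. */
static float normalize_sample(float v, float lo, float hi)
{
    if (v <= lo)
        return 0.0f;
    if (v >= hi)
        return 1.0f;
    return (v - lo) / (hi - lo);
}
```

So a raw value of 0 in the -3000…3000 range comes out as 0.5 in the texture.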
Here’s my vertex shader:
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Here’s the fragment shader:
uniform sampler3D tex;
uniform float center;
uniform float window;
/* The texture is expected to be stored as luminance, so the L value is replicated into the RGB channels */
void main()
{
    vec4 color = texture3D(tex, gl_TexCoord[0].stp);
    // map colours to the line y = (1.0/window) * (x - x0)
    // where x0 = center - window/2
    float x0 = center - window / 2.0;
    float maxX = x0 + window;
    float lum = color.r;
    if (lum <= x0)
    {
        lum = 0.0;
    }
    else if (lum >= maxX)
    {
        lum = 1.0;
    }
    else
    {
        lum = (1.0 / window) * (lum - x0);
    }
    gl_FragColor = vec4(lum, lum, lum, 1.0);
}
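To rule out a math error, here is the same window/level mapping written as a plain C function (a hypothetical CPU-side mirror of the fragment shader above, just to sanity-check the transfer function):

```c
/* CPU-side mirror of the fragment shader's window/level mapping:
   values at or below x0 clamp to 0, values at or above x0 + window
   clamp to 1, and values in between map linearly. */
static float window_level(float lum, float center, float window)
{
    float x0 = center - window / 2.0f;
    float maxX = x0 + window;
    if (lum <= x0)
        return 0.0f;
    if (lum >= maxX)
        return 1.0f;
    return (1.0f / window) * (lum - x0);
}
```

With center = 0.5 and window = 1.0 this is the identity on [0, 1], which matches what I expect the shader to do.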
Thanks for any input on the issue.