Hey,
I am developing an application to visualize CT data (volume size 512x512x128) using OpenGL.
For the implementation I used a 3D texture, and it works just fine on volumes coded on 8 bits.
My problem is that the real data are stored on 16 bits (16-bit gray levels). I want to be able to change the lookup table so that I only see the texture's pixels that belong to a specific intensity range.
I found some material about the color map (paletted texture) extension, but I have an ATI graphics card, so this extension is not supported. Therefore I guess I have to use the ARB_fragment_program extension (but I am not quite sure).
Could anyone give me hints about how to set up a 16-bit texture lookup table? Or any other way to reach my goal?
Thanks
If you have a Radeon 9500 or better, you can use LUMINANCE16 as the internal format for your 3D texture. Store your palette in a second 1D or 2D texture and use a dependent texture lookup to access the transfer function. If you have an older card you are out of luck; the only option is to quantize your data to 8 bits.
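As a sketch of that 8-bit fallback (a NumPy example of my own, not code from the poster; the window parameters are illustrative), you can either drop the low byte or, better, apply an intensity window first so the range of interest keeps its precision:

```python
import numpy as np

def quantize_to_8bit(volume_u16, lo=0, hi=65535):
    """Map 16-bit samples into [0, 255], clamping to the [lo, hi] window.

    volume_u16 : uint16 array (e.g. shape (128, 512, 512))
    lo, hi     : intensity window to preserve; values outside are clamped.
    """
    v = volume_u16.astype(np.float32)
    v = (v - lo) / float(hi - lo)      # normalize the window to [0, 1]
    v = np.clip(v, 0.0, 1.0)           # clamp everything outside the window
    return (v * 255.0 + 0.5).astype(np.uint8)
```

With `lo`/`hi` set to the Hounsfield-style range you care about, the 8-bit volume still resolves that range well even though the rest of the data is clipped.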
You have to learn how to use ARB_fragment_program or GLSL to do the dependent texture lookup.
Here’s some example GLSL code:
uniform sampler3D volume;
uniform sampler2D transferfunction;
varying vec3 texCoord0;
void main()
{
    vec4 data = texture3D(volume, texCoord0);
    gl_FragColor = texture2D(transferfunction, data.xx);
}
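To show only a specific intensity range, as the original question asks, the transfer-function texture can be filled on the CPU so that entries inside the window are opaque white and everything else is transparent black. A sketch of that fill (my own NumPy example; the function name and the 2048-entry size are illustrative):

```python
import numpy as np

def make_range_lut(size=2048, lo=0.25, hi=0.75):
    """Build an RGBA lookup table that passes only [lo, hi] (normalized).

    Entries inside the window become opaque white; everything else is
    transparent black. Upload the result as the transfer-function texture.
    """
    lut = np.zeros((size, 4), dtype=np.uint8)
    x = np.linspace(0.0, 1.0, size)        # normalized intensity per entry
    inside = (x >= lo) & (x <= hi)
    lut[inside] = (255, 255, 255, 255)
    return lut
```

Re-uploading this small texture whenever the user moves the range sliders is cheap, so the windowing stays interactive without touching the volume data.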
PS: I’m sure you will figure out how to support palettes with more than 11 bits of precision (texture width is limited to 2048, i.e. 11 bits, on ATI).
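One common way past that 2048-entry limit (my own suggestion, not something the reply spells out) is to split the 16-bit value into its low and high bytes and use them as the two coordinates of a 256x256 transfer-function texture. The index arithmetic is just byte extraction:

```python
def split16(value):
    """Split a 16-bit palette index into (column, row) of a 256x256 table."""
    low = value & 0xFF           # column: low byte
    high = (value >> 8) & 0xFF   # row: high byte
    return low, high

def join16(low, high):
    """Recombine the two bytes into the original 16-bit index."""
    return (high << 8) | low
```

In the shader this corresponds to storing the two bytes in two channels of the volume texture and sampling the 2D transfer function with them instead of with `data.xx`.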
Thanks Klaus, I will try your suggestion.
I kind of knew I would have to go through ARB_fragment_program or GLSL, but at least I am now sure that it leads to a solution.