Hello,
I have been trying to write a volume renderer and am using Cg for the shaders. Instead of calculating gradients on the fly, I am trying to pre-calculate them, store them in the volume texture (rgb - gradient, a - isovalue/density), and then pass it on to the fragment shader.
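For reference, this is roughly how I pre-calculate the gradients, using central differences with the border voxels clamped (a sketch of my approach; the helper names here are just for illustration, assuming the density volume is a flat array laid out x-fastest):

```cpp
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };

// Fetch a density sample, clamping coordinates to the volume border so
// edge voxels effectively fall back to one-sided differences.
inline float sample(const std::vector<float>& vol,
                    int w, int h, int d, int x, int y, int z) {
    x = x < 0 ? 0 : (x >= w ? w - 1 : x);
    y = y < 0 ? 0 : (y >= h ? h - 1 : y);
    z = z < 0 ? 0 : (z >= d ? d - 1 : z);
    return vol[(std::size_t)z * w * h + (std::size_t)y * w + x];
}

// Central-difference gradient at voxel (x, y, z): each component is
// half the difference of the two neighbouring samples along that axis.
Vec3 centralDifference(const std::vector<float>& vol,
                       int w, int h, int d, int x, int y, int z) {
    Vec3 g;
    g.x = 0.5f * (sample(vol, w, h, d, x + 1, y, z) - sample(vol, w, h, d, x - 1, y, z));
    g.y = 0.5f * (sample(vol, w, h, d, x, y + 1, z) - sample(vol, w, h, d, x, y - 1, z));
    g.z = 0.5f * (sample(vol, w, h, d, x, y, z + 1) - sample(vol, w, h, d, x, y, z - 1));
    return g;
}
```

The resulting components are signed, which is where my storage problem below comes from.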
The gradient components may be negative, and hence I am storing them in a vector of integers. To upload it as a texture, I am using the following:
glBindTexture(GL_TEXTURE_3D, texName);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage3D(GL_TEXTURE_3D,
             0,            // mipmap level
             GL_RGBA8,     // internal format
             volume_w, volume_h, volume_d,
             0,            // border
             GL_RGBA,      // pixel format
             GL_INT,       // pixel type
             volDataPtr);
where volDataPtr refers to the vector of integers. The reason I am not using GL_UNSIGNED_BYTE (instead of GL_INT) is that the gradient values may be negative (and GL_UNSIGNED_BYTE only covers 0 - 255). Using GL_BYTE doesn't help either, as the negative values may be less than -128 (and the positive values greater than 127). Hence, using GL_INT seems logical to me.
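If I were forced to stay with GL_UNSIGNED_BYTE, I believe I would have to scale-and-bias each component into [0, 255] before upload and undo it in the fragment shader with something like g = fetched * 2.0 - 1.0. A sketch of that packing (maxMag is an assumed bound on the gradient magnitude that I would have to pick myself):

```cpp
#include <cmath>
#include <cstdint>

// Scale-and-bias a signed gradient component into an unsigned byte:
// g/maxMag maps to [-1, 1], which is then remapped to [0, 255].
// The shader reverses this after the normalized texture fetch.
std::uint8_t packComponent(float g, float maxMag) {
    float n = g / maxMag;                 // normalize to [-1, 1]
    if (n < -1.0f) n = -1.0f;             // clamp out-of-range values
    if (n >  1.0f) n =  1.0f;
    return (std::uint8_t)std::lround((n * 0.5f + 0.5f) * 255.0f);
}
```

I would prefer to avoid the extra quantization if GL_INT can be made to work directly.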
However, this doesn't work, and I get a blank screen for my volume (GL_UNSIGNED_BYTE works, but then I cannot store the gradients). Also, I am reading the data in from a .raw file and am properly typecasting it to integer values before storing it in the vector.
Can anyone suggest what I may be doing wrong and/or where exactly to look?
Thanks,