
View Full Version : using GL_LUMINANCE16 without loss of data



COZE
11-06-2002, 02:03 AM
I'm using a 3D texture to calculate sections of volume data.
I do something like:
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16, tWidth, tHeight, tDepth, 0, GL_LUMINANCE, GL_SHORT, pImage);

I map this texture onto a single quad and then I do glReadPixels(0, 0, Vwidth, Vheight, GL_LUMINANCE, GL_SHORT, (GLshort*)result_image);

The result looks like the 16-bit data is converted to 8 bits, processed, and then converted back to short.
Does anyone have an idea how to get the correct data back? I can't use color index mode because I'm using a pbuffer.

Bob
11-06-2002, 02:44 AM
Do you have a frame buffer with 16 bits per channel? Are you sure the texture is actually uploaded as a 16-bit luminance texture, and not as an 8-bit one? You see, the internal format of the texture is just a hint; you can't be sure you get what you ask for. The only thing you can be sure of is that you get a luminance format, but you have little control over the number of bits.
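
To see what you actually got, you can query the bound texture and the frame buffer directly. Something along these lines (a rough sketch, assuming the 3D texture is currently bound; the variable names are just illustrative):

GLint lumBits = 0, redBits = 0;
/* bits the driver actually allocated for level 0 of the bound 3D texture */
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_LUMINANCE_SIZE, &lumBits);
/* bits per color channel of the current draw buffer (the pbuffer in your case) */
glGetIntegerv(GL_RED_BITS, &redBits);

If either of those comes back as 8, that would explain the truncation you are seeing.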

zed
11-06-2002, 11:46 AM
What internal format you ask for is not necessarily what you get given (if it's important, check afterwards what you actually got).
IIRC with nvidia cards only about 5 of the 50 or so internal formats are supported :) and LUMINANCE16 isn't one of them.
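
For example, you can ask the driver up front with a proxy texture, without creating anything (a rough sketch, reusing the sizes and types from the first post):

GLint proxyWidth = 0, proxyLumBits = 0;

/* ask what the driver would do with this format and size, without creating the texture */
glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_LUMINANCE16, tWidth, tHeight, tDepth, 0, GL_LUMINANCE, GL_SHORT, NULL);

/* a width of 0 means the format/size combination was rejected; otherwise check the bits you would get */
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &proxyWidth);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_LUMINANCE_SIZE, &proxyLumBits);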

COZE
11-11-2002, 03:44 AM
And how can I check what internal formats are supported by my card?
In the PixelFormatDescriptor I can only ask for 8 bits per color, so my frame buffer cannot use 16 bits :(
The result looks like there is some overflow.
I tried to distribute the data into 2 bytes, like 8 bits in R, 8 bits in G and B = 0, use the 3D texture in RGB mode and then rebuild the 16-bit data from the result. It was strange, but better than LUMINANCE16. So I really don't know what to do, because all I need :) is to set correct 16-bit data, interpolate it, and get 16-bit data back.
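
In case it helps, this is roughly how I do the split and the rebuild (just a sketch; data16, result_rgb, result16 and nSamples are made-up names for my buffers):

/* pack: R = high byte, G = low byte, B unused */
GLubyte *rgb = (GLubyte*)malloc(nSamples * 3);
int i;
for (i = 0; i < nSamples; ++i) {
    rgb[i*3 + 0] = (GLubyte)(data16[i] >> 8);
    rgb[i*3 + 1] = (GLubyte)(data16[i] & 0xFF);
    rgb[i*3 + 2] = 0;
}
/* ...upload with GL_RGB8 / GL_UNSIGNED_BYTE, draw the quad, glReadPixels with GL_RGB... */

/* rebuild: value = high byte * 256 + low byte */
for (i = 0; i < nSamples; ++i)
    result16[i] = (GLushort)((result_rgb[i*3 + 0] << 8) | result_rgb[i*3 + 1]);

I guess part of the problem is the filtering: the interpolated high byte gets rounded back to 8 bits in the frame buffer, and that rounding error is worth up to about 128 counts in the rebuilt 16-bit value, which would explain the strange results. With GL_NEAREST filtering the round trip should at least be exact.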