using GL_LUMINANCE16 without loss of data

I'm using a 3D texture to calculate sections of volume data.
I do something like:
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16, tWidth, tHeight, tDepth, 0, GL_LUMINANCE, GL_SHORT, pImage);

and I map this texture onto a single quad and then I do glReadPixels(0, 0, Vwidth, Vheight, GL_LUMINANCE, GL_SHORT, (GLshort*)result_image);

The result looks as if the 16-bit data is converted to 8 bit, calculated, and then converted back to short.
Does anyone have an idea how to get correct data back? I can't use Color Index mode because I'm using a pbuffer.
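For completeness, roughly the whole path I mean (a minimal sketch, assuming an existing pbuffer context, immediate-mode drawing, default matrices and linear filtering; sliceZ is just a placeholder for the slice coordinate I sample):

glEnable(GL_TEXTURE_3D);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// upload the volume; GL_SHORT means the data is treated as signed,
// GL_UNSIGNED_SHORT would be the type for unsigned 16-bit samples
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16, tWidth, tHeight, tDepth,
             0, GL_LUMINANCE, GL_SHORT, pImage);

// draw one slice of the volume onto a full-viewport quad
glBegin(GL_QUADS);
    glTexCoord3f(0.0f, 0.0f, sliceZ); glVertex2f(-1.0f, -1.0f);
    glTexCoord3f(1.0f, 0.0f, sliceZ); glVertex2f( 1.0f, -1.0f);
    glTexCoord3f(1.0f, 1.0f, sliceZ); glVertex2f( 1.0f,  1.0f);
    glTexCoord3f(0.0f, 1.0f, sliceZ); glVertex2f(-1.0f,  1.0f);
glEnd();

// read the interpolated slice back
glReadPixels(0, 0, Vwidth, Vheight, GL_LUMINANCE, GL_SHORT, (GLshort*)result_image);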

Do you have a frame buffer with 16 bits per channel? Are you sure the texture is actually uploaded as a 16-bit luminance texture, and not as an 8-bit one? You see, the internal format of the texture is just a hint; you can't be sure you get what you ask for. The only thing you can be sure about is that you get a luminance format, but you have little control over the number of bits.
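You can check the framebuffer depth with something like this (rough sketch, assuming a legacy GL context where these queries are still available):

GLint redBits = 0, greenBits = 0, blueBits = 0;
glGetIntegerv(GL_RED_BITS,   &redBits);
glGetIntegerv(GL_GREEN_BITS, &greenBits);
glGetIntegerv(GL_BLUE_BITS,  &blueBits);
// if these come back as 8, glReadPixels can never return more than
// 8 bits of real precision per channel, whatever the texture holds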

The internal format you ask for is not necessarily what you get given (if it's important, check afterwards what you actually got).
IIRC NVIDIA cards only support about 5 of the 50 or so internal formats, and LUMINANCE16 isn't one of them.
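Something along these lines should tell you what you actually got (just a sketch, assuming the texture is currently bound to GL_TEXTURE_3D; the proxy test lets you ask before uploading anything):

GLint internalFormat = 0, lumBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_LUMINANCE_SIZE, &lumBits);
// lumBits is the number of luminance bits the driver actually allocated

// optional: test the format up front with the proxy target;
// an unsupported combination reports a width of 0
GLint proxyWidth = 0;
glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_LUMINANCE16, tWidth, tHeight, tDepth,
             0, GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &proxyWidth);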

And how can I check what internal formats are supported by my card?
In the PixelFormatDescriptor I can only use 8 bits per color, so my frame buffer cannot use 16 bits.
The result looks like it has some overflow.
I tried to distribute it into 2 bytes, 8 bits in R, 8 bits in G and B = 0, use the 3D texture in RGB mode and then rebuild the 16-bit data from the result. It was strange, but better than luminance 16. So I really don't know what to do, because I just need to set correct 16-bit data, interpolate it, and get the 16-bit data back.
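A rough sketch of what I mean by the split (hypothetical helper names, assuming unsigned 16-bit samples; I suspect it looks strange because interpolating the two bytes separately loses the carries between them, so it is not the same as interpolating the 16-bit value):

#include <stddef.h>  /* size_t */

// pack: high byte -> R, low byte -> G, B unused
void split16ToRGB(const unsigned short *src, unsigned char *rgb, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        rgb[3*i + 0] = (unsigned char)(src[i] >> 8);
        rgb[3*i + 1] = (unsigned char)(src[i] & 0xFF);
        rgb[3*i + 2] = 0;
    }
}

// unpack the glReadPixels result back into 16-bit values
void mergeRGBTo16(const unsigned char *rgb, unsigned short *dst, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        dst[i] = (unsigned short)((rgb[3*i + 0] << 8) | rgb[3*i + 1]);
}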