Regarding GL_SHORT....

Hi all,
I am trying to make a scrolling display using OpenGL on Windows.
I am able to get the display if I pass GL_UNSIGNED_BYTE as the pixel type to the glTexSubImage2D function, but if I pass GL_SHORT as the pixel type I get no display. My pixels are 10 or 12 bits wide, and I want to display all 10 or 12 bits. Is it possible?

I am able to get the display if I pass GL_UNSIGNED_BYTE as the pixel type to the glTexSubImage2D function, but if I pass GL_SHORT as the pixel type I get no display.

What do you mean by that exactly? Are you saying that the texture doesn’t show up? What colors do you get? Is it pure white or something else?

Does OpenGL give an error?
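You can check with glGetError right after the upload call; something like this (just a sketch, with placeholder names in place of your own):

    #include <stdio.h>
    #include <windows.h>   /* needed before GL/gl.h on Windows */
    #include <GL/gl.h>

    /* Upload one sub-image and report any GL error. */
    static void upload_and_check(GLsizei width, GLsizei height,
                                 const GLushort *pixels)
    {
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);
        GLenum err = glGetError();   /* GL_NO_ERROR (0) means the call was accepted */
        if (err != GL_NO_ERROR)
            printf("glTexSubImage2D error: 0x%x\n", (unsigned)err);
    }

A bad format/type combination typically shows up as GL_INVALID_ENUM or GL_INVALID_OPERATION, so this is the first thing to rule out.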

My pixels are 10 or 12 bits wide, and I want to display all 10 or 12 bits. Is it possible?

Not really. What exactly is the format of your data? That is, what bits are red, green, blue, and alpha?

Thank you for your reply.
I am trying to display a single-band image only, i.e. mono, not color. When I put GL_SHORT, the texture shows up but I get a totally black image.
My pixels are 10 bits wide, so I passed a short int buffer pointer (on my machine a short int is 2 bytes) to the glTexSubImage2D function:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageW, imageH, GL_LUMINANCE, GL_UNSIGNED_SHORT, td_Src); (td_Src is the short int buffer pointer.)
In the data set I tested, the pixel minimum value is 120 (decimal) and the maximum value is 540 (decimal).

My pixels are 10 bits wide, so I passed a short int buffer pointer (on my machine a short int is 2 bytes) to the glTexSubImage2D function.

Pixels must be byte-aligned for OpenGL to read them. The individual colors within a pixel don’t have to be (for things like GL_UNSIGNED_SHORT_4_4_4_4), but each pixel must be byte-aligned at a minimum.
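For example, with GL_UNSIGNED_SHORT_4_4_4_4 each component is only 4 bits, but the whole pixel still occupies exactly one 16-bit (two-byte) unit. A sketch of how such a pixel gets packed:

    #include <windows.h>
    #include <GL/gl.h>

    /* Pack 4-bit R,G,B,A components into one 16-bit RGBA pixel, laid out
       the way GL_UNSIGNED_SHORT_4_4_4_4 expects: R in the high nibble,
       A in the low one. */
    static GLushort pack4444(unsigned r, unsigned g, unsigned b, unsigned a)
    {
        return (GLushort)(((r & 0xFu) << 12) | ((g & 0xFu) << 8) |
                          ((b & 0xFu) << 4)  |  (a & 0xFu));
    }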

Even if you pad your 10-bit values out to 16 bits, that’s not going to help, because the 16-bit values will be normalized as 16-bit values. If you store 540, it will be read as 540/65535, which is rather small (close to black). So you need to adjust your data so that it fills the range properly.
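Something like this before uploading would do it (a sketch; it assumes your 10-bit samples sit in the low bits of each 16-bit element):

    #include <stddef.h>
    #include <windows.h>
    #include <GL/gl.h>

    /* Stretch 10-bit samples (0..1023) to the full 16-bit range (0..65535)
       so that GL_UNSIGNED_SHORT normalization gives usable brightness.
       Replicating the top bits into the bottom maps 1023 exactly to 65535. */
    static void expand_10_to_16(GLushort *pixels, size_t count)
    {
        size_t i;
        for (i = 0; i < count; ++i) {
            GLushort v = pixels[i] & 0x03FF;         /* keep the 10-bit sample */
            pixels[i] = (GLushort)((v << 6) | (v >> 4));
        }
    }

Alternatively, if I remember right, the fixed-function pixel transfer can do the scaling for you (glPixelTransferf with GL_RED_SCALE, GL_GREEN_SCALE, and GL_BLUE_SCALE set to 64.0), but converting the buffer once yourself is simpler to debug.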