Texture defining

It would be cool if you could define textures in formats other than 32 or 24 bit.
For example, 16 bit or so…

Also, why are pixel operations so terribly slow? Just because GL is 3D doesn’t mean it shouldn’t cover 2D.

You CAN already use 16 bit textures, among a lot of other formats.

For 2D I recommend using texturing instead of pixel operations. OpenGL treats 2D as a special case of 3D.
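
For instance, something like this (just a sketch of the idea; the texture is assumed to be created elsewhere, and an orthographic projection set up so one unit equals one pixel):

#include <GL/gl.h>

/* Sketch: draw a 2D image as a textured quad instead of calling
   glDrawPixels. Assumes "tex" already holds the image and the current
   projection is orthographic with 1 unit = 1 pixel. */
void draw_image(GLuint tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}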

OpenGL already supports 16 bit textures. GL_RGBA4 is a 16 bit RGBA texture, GL_LUMINANCE16 is a 16 bit luminance texture, GL_RGBA16 is a 64 bit texture with 16 bits per component.

There are other types, but I don’t remember their exact names. It’s something like GL_RGB5_A1, which is a 16 bit RGBA texture, and GL_R5_G6_B5, which is also a 16 bit texture format, but only RGB.
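
Requesting a 16 bit internal format could look like this (a sketch; the internal format is only a request to the driver, and the client data here is still ordinary 8 bit RGBA):

#include <GL/gl.h>

/* Sketch: request a 16 bit internal format. The third argument of
   glTexImage2D only tells the driver how to store the texture; the
   pixels passed in are still plain 8 bit RGBA. */
void upload_rgba4(GLsizei w, GLsizei h, const GLubyte *rgba)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    /* other 16 bit internal formats: GL_RGB5_A1, GL_RGB5, GL_LUMINANCE16 */
}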

On the other hand, explicitly specifying a texture format can be bad for performance. If you render an image and want to copy the result into a texture, your best bet is to let the driver deal with the internal format. The driver can easily decide which format is best performance-wise. If you explicitly request a 16 bit texture but use a 32 bit framebuffer, each pixel must be converted from 32 bit to 16 bit, and this is a serious performance killer. If you instead let the driver choose the best format, in this case a 32 bit texture, no conversion is needed and updates will be very fast.
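
A sketch of that copy with a generic internal format (assuming a 32 bit RGBA framebuffer and that the target texture is already bound):

#include <GL/gl.h>

/* Sketch: copy the lower-left w x h pixels of the framebuffer into the
   currently bound texture. The generic GL_RGBA internal format lets the
   driver match the framebuffer layout, so no per-pixel conversion is
   needed. Forcing GL_RGBA4 here on a 32 bit framebuffer would make
   every copy convert every pixel. */
void grab_framebuffer(GLsizei w, GLsizei h)
{
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, w, h, 0);
}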

And concerning pixel operations: one big thing to remember is the same as with texture updates. Make sure your data matches the format of the framebuffer. If you try to read a 32 bit framebuffer into an array of 16 bit data, every pixel must be converted, which leads to sucky performance due to bad programming.

Proper use of glRead/DrawPixels, along with a good driver of course, isn’t that bad at all. It’s very easy to use it wrong and say, “this sucks”.
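
For example, reading back a 32 bit RGBA framebuffer with a matching format and type (a sketch; the caller frees the returned buffer):

#include <GL/gl.h>
#include <stdlib.h>

/* Sketch: read back a 32 bit RGBA framebuffer. GL_RGBA / GL_UNSIGNED_BYTE
   matches that layout, so the driver can transfer the pixels without
   converting them. Requesting a 16 bit type here instead would force a
   per-pixel conversion. */
GLubyte *read_back(GLsizei w, GLsizei h)
{
    GLubyte *pixels = (GLubyte *)malloc((size_t)w * h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}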

Originally posted by Bob:
OpenGL already supports 16 bit textures. GL_RGBA4 is a 16 bit RGBA texture, GL_LUMINANCE16 is a 16 bit luminance texture, GL_RGBA16 is a 64 bit texture with 16 bits per component.

Sorry Bob, but as far as I know, GL_RGBA4 and the others are internal format types. When you load a texture with glTexImage you must supply data as specified by the format and type parameters, and there is no option other than byte, float, etc. for your data.

OK, I thought this was part of core OpenGL (in 1.2 at least, not sure about 1.3), but in any case there is an extension, GL_EXT_packed_pixels, which will do the job for you.

Read more about it in the extension registry.
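
With packed pixels, submitting real 16 bit data would look roughly like this (a sketch assuming OpenGL 1.2 headers; with only the extension, the type token carries an _EXT suffix):

#include <GL/gl.h>

/* Sketch: upload 16 bit client data using a packed pixel type
   (core in OpenGL 1.2, or GL_UNSIGNED_SHORT_4_4_4_4_EXT with
   GL_EXT_packed_pixels). Each GLushort holds one RGBA pixel with
   4 bits per component. */
void upload_packed(GLsizei w, GLsizei h, const GLushort *pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, pixels);
}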

There’s an extension for submitting data in formats other than 24 or 32 bit. I think it’s called GL_EXT_packed_pixels. I believe the extension is also part of OpenGL 1.2.

j