16-bit textures...

This is my first post, but here goes:

I’m having a problem with 16-bit textures in the 5551 RGBA bit format. I used a 24-bit version of the texture, a simple RAW file, and everything worked fine. However, when I converted the file to a 16-bit texture, I could not get OpenGL to render it. I know the 16-bit texture itself is fine; the file size checks out. Here’s what I’m doing:

void* LoadRAW(char* pTexture, int nNumber, int nDepth)
{
    // Open the RAW file and read the whole 128x128, 2-bytes-per-texel image.
    FILE* pFile = fopen(pTexture, "br");
    char* pData = new char[128*128*2];
    fread(pData, sizeof(char), 128*128*2, pFile);
    fclose(pFile);

    // Create the texture object and submit the data.
    glGenTextures(1, &texture[nNumber]);
    glBindTexture(GL_TEXTURE_2D, texture[nNumber]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGB5_A1, GL_UNSIGNED_BYTE, pData);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    return pData;
}

Ignore the return value; it isn’t used. This code isn’t clean, but you get the idea. I believe the problem is in the parameters passed to glTexImage2D, but I’m not sure. My screen resolution is set to 16-bit, so that is not the problem. If anyone can help… please.

Originally posted by jtwoods:
FILE* pFile = fopen(pTexture, "br");

I have never used fopen with "br" as the second parameter, only "rb", and I’ve never seen anything in my help file suggesting the order could be reversed. So I was wondering…
I rebuilt my texture loader function with "br" and… it doesn’t work any more.
So was it just an error in your post, or is your code actually like this?

Hope this helps.

Moz

No, as I was posting the topic I was playing around with the code. I thought I had changed it back to the original before I pasted, but I guess I didn’t catch that part. In the actual code it is "rb"; "br" causes an exception. So yes, the fopen does use "rb", and the file is loaded properly.

The problem seems to be this call:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGB5_A1, GL_UNSIGNED_BYTE, pData);

The second GL_RGB5_A1 is not allowed. The first one is correct, but for the second you probably need GL_RGB. The first format parameter tells how the data is stored in memory; I’m not really sure what the second means, but as I understood the Red Book, it refers to the type of display you have, so GL_RGB should be fine.

->
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGB, GL_UNSIGNED_BYTE, pData);

Kilam.

I just tried your suggestion; it didn’t work, just a driver error. I’m wondering if it’s some problem specific to the combination of Radeon, OpenGL, and 16-bit textures… if anyone has any idea how to make OpenGL recognize 16-bit textures…

I’m not sure about this, but I"m gonna take a stab at my problem: Could the problem have something to do with pixel packing alignment? glPixelStorei() seems to change something about pixel storage alignment, but I’m not sure. Could it be that I have to use this or a similar function to make OpenGL realize my color components of my textures are not individual bytes, but bits within a short (16-bit) value. Please reply, thanks!
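Something like this is what I have in mind (just an untested sketch; as far as I can tell from the docs, GL_UNPACK_ALIGNMENT only controls how rows of the client data are padded, not how components are packed):

// Untested sketch of what I mean. GL_UNPACK_ALIGNMENT (default 4) only
// controls how the start of each row of the client-side data is aligned;
// it says nothing about colour components being packed into a 16-bit short.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
// ...then the same glTexImage2D call as before.

Please reply, thanks!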

> Could it be that I have to use this or a similar function to make OpenGL realize that the color components of my textures are not individual bytes, but bits within a 16-bit short?

Only in OpenGL 1.2 or with the GL_EXT_packed_pixels extension.
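A quick way to test for it (rough sketch; the helper name is just for illustration, and it assumes a GL context is already current):

#include <cstring>
#include <GL/gl.h>

// Rough check: either the extension string mentions GL_EXT_packed_pixels
// or the GL version string reports 1.2 or later.
bool HasPackedPixels()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    const char* ver = (const char*)glGetString(GL_VERSION);
    if (ext && strstr(ext, "GL_EXT_packed_pixels") != 0)
        return true;
    // Version string looks like "1.2.x ..."; accept anything from 1.2 up.
    return ver && (ver[0] > '1' || (ver[0] == '1' && ver[2] >= '2'));
}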

The first GL_RGB5_A1 is an internal image format, and the second is the pixel format of the submitted image.

If you use GL_RGB for the second you will get a driver error, because the driver tries to read 3 × width × height bytes (128 × 128 × 3 = 49,152), which is more than the 128 × 128 × 2 = 32,768 bytes you actually submitted.

It may have something to do with the supported input texture formats, so check that out.

You can also try using an RGBA texture as input and keep the internal format as GL_RGB5_A1. The driver will keep a 5551 version of the texture object itself, so you can delete the original RGBA data after the upload. The only issue then is the dithering quality the driver uses when it converts the format.
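Roughly like this (just a sketch; LoadAsRGBA stands in for whatever loader produces 32-bit RGBA data):

// Sketch: submit plain 32-bit RGBA data but ask the driver to store the
// texture internally as 5551. Once glTexImage2D returns, the driver keeps
// its own converted copy, so the client-side buffer can be freed.
unsigned char* rgbaData = LoadAsRGBA(pTexture);   // hypothetical 32-bit loader
glBindTexture(GL_TEXTURE_2D, texture[nNumber]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgbaData);
delete [] rgbaData;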

Kilam was right, this isn’t allowed:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGB5_A1, GL_UNSIGNED_BYTE, pData);

use this instead
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, pData);

Only four things can go in that format parameter: GL_RGB, GL_RGBA, GL_LUMINANCE, GL_LUMINANCE_ALPHA.

BTW, I wrote a program yesterday to enumerate all the internal texture formats and see what the driver actually gives you. I’ll post it here today at http://members.nbci.com/myBollux; I’ve just got to make a web page first.

Reading your original question again (it helps): you might have to alter the data when you load it in, i.e. convert the 16-bit texture data to 24/32-bit before passing it to glTexImage(…).
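A quick sketch of what I mean (the function name is made up; it assumes the file packs red in the top 5 bits and alpha in the lowest bit, the same layout GL_UNSIGNED_SHORT_5_5_5_1 uses):

// Expand 5551 shorts into 32-bit RGBA bytes before the upload.
unsigned char* Expand5551To8888(const unsigned short* src, int w, int h)
{
    unsigned char* dst = new unsigned char[w * h * 4];
    for (int i = 0; i < w * h; ++i)
    {
        unsigned short p = src[i];
        dst[i*4 + 0] = (unsigned char)(((p >> 11) & 0x1F) << 3);   // R: 5 bits -> 8 bits
        dst[i*4 + 1] = (unsigned char)(((p >>  6) & 0x1F) << 3);   // G
        dst[i*4 + 2] = (unsigned char)(((p >>  1) & 0x1F) << 3);   // B
        dst[i*4 + 3] = (p & 0x01) ? 255 : 0;                       // A: 1 bit
    }
    return dst;
}

// Then upload the expanded data:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0,
//              GL_RGBA, GL_UNSIGNED_BYTE, expanded);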

Originally posted by zed:
Kilam was right, this isn’t allowed:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGB5_A1, GL_UNSIGNED_BYTE, pData);

use this instead
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, pData);

Only four things can go in that format parameter: GL_RGB, GL_RGBA, GL_LUMINANCE, GL_LUMINANCE_ALPHA.

Yes, but the data type can be changed.

If GL_EXT_packed_pixels is available:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 128, 128, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1_EXT, pData);

Holy S***! It works… the GL_UNSIGNED_SHORT_5_5_5_1_EXT bit did it! However, the color looks pretty bad, worse than 16-bit should look, so I’m wondering about that a little; but that may have to do with my algorithm for converting 32-bit to 16-bit, so it’s not necessarily OpenGL. Thanks to everyone who answered my post, I finally have 16-bit textures!!!
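In case it helps anyone else, here is roughly what the 32-to-16 packing should look like for GL_UNSIGNED_SHORT_5_5_5_1 (just a sketch, the function name is made up): red goes into the top five bits and alpha into the lowest bit, and if the shifts are off the colours come out looking wrong, which may be what happened in my converter.

// Pack 32-bit RGBA bytes into 5551 shorts, laid out the way
// GL_UNSIGNED_SHORT_5_5_5_1 expects: RRRRR GGGGG BBBBB A.
unsigned short* Pack8888To5551(const unsigned char* src, int w, int h)
{
    unsigned short* dst = new unsigned short[w * h];
    for (int i = 0; i < w * h; ++i)
    {
        unsigned short r = src[i*4 + 0] >> 3;   // keep the top 5 bits
        unsigned short g = src[i*4 + 1] >> 3;
        unsigned short b = src[i*4 + 2] >> 3;
        unsigned short a = src[i*4 + 3] >> 7;   // keep the top bit
        dst[i] = (unsigned short)((r << 11) | (g << 6) | (b << 1) | a);
    }
    return dst;
}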