How to read 16-bit Targa files?

I want to use 16-bit Targa files (5-bit R, G, B, 1-bit alpha) for some reason. Is there any way of doing this, or am I totally wrong about it? Because I am actually a beginner in OpenGL.

Originally posted by urban debugger:
[…] Because I am actually a beginner in OpenGL.

So why are you posting your frequently asked question in an advanced forum?

Originally posted by AdrianD:
So why are you posting your frequently asked question in an advanced forum?

I don't know whether the topic is advanced or not. I thought it was, but if it was not appropriate to post here, I'm sorry.

What is the problem, really? Is it actually reading the file from disk, or do you have a problem uploading the image data as a texture?

The problem is loading the data as a texture in "16-bit" format.

But what part of the texture loading are you talking about? Can you load the image from file, but you can’t upload it as a texture? Or don’t you even know how to get the image from the file?

Originally posted by urban debugger:

Originally posted by AdrianD:
So why are you posting your frequently asked question in an advanced forum?

I don't know whether the topic is advanced or not. I thought it was, but if it was not appropriate to post here, I'm sorry.

Then why don't you search for "targa" in the beginner and advanced forums? I got 22 hits in beginner, 18 in advanced, and image loading code in about 5 minutes.

I have no difficulties handling Targas that are 24 or 32 bits. I can open them and display them as textures, but I can't upload 16-bit Targas as textures.

(There are 5 bits each for R, G and B, and 1 bit for the alpha channel…)

If you can get 24- or 32-bit to work, why don't you just use 24- or 32-bit files instead of 16-bit, load them into memory, downshift/scale the color/alpha bits down to 16 bits, and create a 16-bit texture? I'm assuming your goal is to use 16-bit textures? That point is not very clear in your original question.

You can use 1-bit files to create 32-bit textures (very wasteful, of course), or 32-bit files to create 1-bit textures (very lossy). So you really need to state your end goal in more detail.
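A minimal sketch of that downconversion, assuming the source image is already in memory as 8-bit-per-channel RGBA and that you want the GL_UNSIGNED_SHORT_5_5_5_1 bit layout (R in the top five bits, alpha in bit 0); the function name and buffer handling here are just for illustration:

```c
#include <stdlib.h>

/* Illustrative helper: pack 8-bit RGBA pixels into 16-bit 5-5-5-1 values
   (R in bits 15-11, G in 10-6, B in 5-1, A in bit 0).
   The caller owns the returned buffer. */
unsigned short *rgba8_to_rgb5a1(const unsigned char *src, int pixelCount)
{
    int i;
    unsigned short *dst = malloc(pixelCount * sizeof(unsigned short));
    if (!dst)
        return NULL;

    for (i = 0; i < pixelCount; ++i) {
        unsigned short r = src[i * 4 + 0] >> 3;   /* 8 bits -> 5 bits */
        unsigned short g = src[i * 4 + 1] >> 3;
        unsigned short b = src[i * 4 + 2] >> 3;
        unsigned short a = src[i * 4 + 3] >> 7;   /* 8 bits -> 1 bit  */
        dst[i] = (unsigned short)((r << 11) | (g << 6) | (b << 1) | a);
    }
    return dst;
}
```

The result can then be handed to glTexImage2D with the packed-pixel type discussed further down the thread.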

Thanks IT, it seems I'll do it that way, although it is not the optimum solution (or is it?).

BTW: I searched for targa, but everyone opens 32-, 24- and even 8-bit, just not 16-bit.

Here is a page on TARGAs that I found particularly easy to follow:
http://astronomy.swin.edu.au/~pbourke/dataformats/tga/

It even has a link to sample C source code for loading 16-, 24- or 32-bit compressed or uncompressed TARGAs.

You can use packed pixel formats to upload a 16-bit RGB5A1 image (and other formats as well; look in the specification for more information). Try with format GL_RGBA and type GL_UNSIGNED_SHORT_5_5_5_1.
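A minimal upload sketch along those lines, assuming the 16-bit pixels have already been read from the TGA into a tightly packed buffer and that a texture object is already bound (width, height and pixels are placeholder names):

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* packed-pixel enums, if your gl.h is older than 1.2 */

/* Upload a 16-bit 5-5-5-1 image as a texture. GL_RGB5_A1 asks the driver
   to keep roughly 16 bits internally; pixels points to width*height
   unsigned shorts. */
void upload_rgb5a1(GLsizei width, GLsizei height, const GLushort *pixels)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);  /* rows are arrays of shorts */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, pixels);
}
```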

Just remembered that TGA stores the image data in BGR order. Not sure about the order in 16-bit TGAs. Just try all combinations of the following constants.

format:
GL_RGBA
GL_BGRA

type:
GL_UNSIGNED_SHORT_5_5_5_1
GL_UNSIGNED_SHORT_1_5_5_5_REV
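For reference, a sketch of how those combinations differ at the bit level; the decode_tga16 helper is an illustrative name only, and the assumption that 16-bit TGA pixels are stored as ARRRRRGGGGGBBBBB (attribute/alpha bit on top) should be verified against your own files:

```c
/* Bit layouts (MSB..LSB) the two types give for a GL_RGBA pixel:
 *   GL_UNSIGNED_SHORT_5_5_5_1:     RRRRR GGGGG BBBBB A
 *   GL_UNSIGNED_SHORT_1_5_5_5_REV: A BBBBB GGGGG RRRRR
 * With GL_BGRA the R and B fields swap places, so GL_BGRA +
 * GL_UNSIGNED_SHORT_1_5_5_5_REV comes out as A RRRRR GGGGG BBBBB,
 * which is the layout 16-bit TGAs normally use. */

/* Illustrative helper: decode one raw 16-bit TGA pixel into 8-bit
   channels, handy for checking which GL combination matches your data. */
static void decode_tga16(unsigned short p,
                         unsigned char *r, unsigned char *g,
                         unsigned char *b, unsigned char *a)
{
    *a = (p & 0x8000) ? 255 : 0;                      /* bit 15: attribute/alpha */
    *r = (unsigned char)(((p >> 10) & 0x1F) * 255 / 31);
    *g = (unsigned char)(((p >>  5) & 0x1F) * 255 / 31);
    *b = (unsigned char)(( p        & 0x1F) * 255 / 31);
}
```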

I should point out that (AFAIK) 16bit TARGA images are 5-6-5 not 5-5-5-1.

I have a TGA library here: http://nitrogl.cjb.net
It can read:
32bit (RLE & uncompressed)
24bit (RLE & uncompressed)
16bit (RLE & uncompressed)
8bit (RLE & uncompressed)

It hasn't been updated in ages (neither has my page), but it works.

Thank you so much Bob, this will solve all of my problems, but I've got one more thing: I read about GL_UNSIGNED_SHORT_5_5_5_1 in the spec, but the compiler says it is undeclared. I looked into gl.h and it is not there. Do I have to include something else?

Edit: does that mean 16-bit Targas do not support an alpha channel, Nitro?


Originally posted by NitroGL:
I should point out that (AFAIK) 16bit TARGA images are 5-6-5 not 5-5-5-1.

No, they are 5-5-5-1.

Download the latest glext.h from here.
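In the meantime, a small sketch of working around the missing declarations; the hex values are the standard enums from the OpenGL 1.2 packed-pixels additions, but double-check them against a current glext.h:

```c
#include <GL/gl.h>

/* The stock Windows gl.h only covers OpenGL 1.1, so the packed-pixel
   enums introduced in 1.2 are missing. Either include a recent
   <GL/glext.h> after gl.h, or define the values yourself: */
#ifndef GL_UNSIGNED_SHORT_5_5_5_1
#define GL_UNSIGNED_SHORT_5_5_5_1      0x8034
#endif
#ifndef GL_UNSIGNED_SHORT_1_5_5_5_REV
#define GL_UNSIGNED_SHORT_1_5_5_5_REV  0x8366
#endif
#ifndef GL_BGRA
#define GL_BGRA                        0x80E1
#endif
```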