glTexImage2D parameters

Hello.

I’m having trouble getting the right parameters
to glTexImage2D for certain textures.

For a 16-bit per-pixel greyscale texture, I’m doing:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, gx->w, gx->h, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);

…which looks right but clearly isn’t.

I don’t quite understand where I’ve gone wrong.

To clarify, I have verified that the texture pixels
really are packed how I think they are, so it’s not
a problem with the data.

Post more code related to the image, and the image itself. It’s not really easy to help when we have no clue what the image is supposed to look like, and the single line of code is in itself not wrong in any way. It can only be wrong together with the image data.

I apologise, the pixels WEREN’T packed correctly -
the method I was using to check them was producing
incorrect results.

For reference anyway:

The image is:

http://img266.imageshack.us/img266/139/gsa16ug2.png

There really isn’t much more code than that. I’m
loading the image in using a PNG library, and a raw
dump of the pixel data shows exactly what I’d expect
(exporting the pixel data to a TIFF with libtiff
produces expected results too).

A dump of bytes from the start of the image shows
the packing:

00000000  4d 4d 4d 96 96 96 1c 1c  1c b3 b3 b3 69 69 69 e3  |MMM.........iii.|
00000010  e3 e3 00 00 00 ff ff ff  ff fd f6 f0 e9 e2 db d4  |................|
00000020  ce c7 c0 ba b3 ac a5 9f  98 91 8a 83 7d 76 6f 69  |............}voi|
00000030  62 5b 55 4e 47 41 39 33  2c 26 1f 18 11 0a 04 00  |b[UNGA93,&......|
00000040  dd dd dd dd dd dd dd dd  dd dd dd dd dd dd dd dd  |................|
00000050  dd dd dd dd dd dd dd dd  dd dd dd dd dd dd dd dd  |................|
00000060  dd dd dd dd dd dd dd dd  dd dd dd dd dd dd dd dd  |................|
00000070  dd dd dd dd dd dd dd dd  dd dd dd dd dd dd dd dd  |................|

I’ll try harder next time. :)

No, it’s still not working properly.

Let me rephrase the question:

Given greyscale image data, packed two bytes per
pixel, what parameters do I pass to glTexImage2D()
to create a texture that’s closest to the original
format?

The direct answer to the direct question you asked is: GL_LUMINANCE and GL_UNSIGNED_SHORT.

As I said in my first post, that line of code itself is not wrong. The code can only be wrong together with something else which doesn’t match.

From the look of it, the texture seems to be uploaded with rows interleaved with some constant gray color. Odd rows (counting from the top) correspond to the image and even rows to a constant gray color. Check the hex dump you posted; that series of 0xDD matches the color of the even rows very closely.

That behaviour could just as well come from an incorrect setup of the PNG library, which is the reason I asked for more code in my first post. Given the hex dump, the result you get makes perfect sense to me. The problem is likely elsewhere.

What bob said; all those ‘dd’ in the hex dump do not look right.
You might also have a Pixel Store Alignment problem.
See points 7 and 8 here:
http://www.opengl.org/resources/features/KilgardTechniques/oglpitfall/
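
For example, something like this before the upload rules out the alignment issue (just a sketch, assuming your rows are tightly packed and reusing the names from your own call):

/* The default GL_UNPACK_ALIGNMENT is 4; tightly packed rows of a
   one- or three-component texture are often not 4-byte aligned. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, gx->w, gx->h, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);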

Thanks for the suggestions both of you.

I’ve looked into it further and even with other
images I still get the same results. The only way
I can describe it is that it seems as if OpenGL is
using GL_UNSIGNED_BYTE, no matter what pixel store
alignment is set and no matter what is passed to
glTexImage2D. I will try and post the source soon,
when I’ve tested it across a few operating
systems.

I neglected to mention that I’m currently using
Mesa software GL, as nvidia just dropped support
for my card. This MAY be the cause of the problem,
as it only happens with textures that use more
than one byte per channel (64-bit RGBA, 48-bit
RGB, 16-bit greyscale).
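
For reference, the parameter combinations I would
expect those formats to need look roughly like
this (a sketch only, reusing the same width,
height and pixel pointer as before):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,      gx->w, gx->h, 0, GL_RGBA,      GL_UNSIGNED_SHORT, pixels); /* 64-bit RGBA */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,       gx->w, gx->h, 0, GL_RGB,       GL_UNSIGNED_SHORT, pixels); /* 48-bit RGB */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, gx->w, gx->h, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels); /* 16-bit grey */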

Well, I’ve verified that it’s not Mesa, at least…

Is the problem that OpenGL seems to convert it to 8-bit luminance? In that case, hint to the driver that you really want 16 bits:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, gx->w, gx->h, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);
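
You can also ask the driver what it actually allocated afterwards. Just a sketch (the internal format is only a hint, so the driver may still pick 8 bits):

GLint lum_bits = 0;
/* How many bits did the driver really keep for the luminance channel?
   A value of 8 here means the texture was stored at 8 bits per texel. */
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_LUMINANCE_SIZE, &lum_bits);
printf("stored luminance bits: %d\n", lum_bits);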

I have uploaded a snapshot of the source here:

http://d.turboupload.com/d/1543126/gltexload_20070219-114833.tar.bz2.html

MD5 (gltexload_20070219-114833.tar.bz2) = 99a442f60a921fd211fad993c9bd09b4

It requires a POSIX build environment, but other
than that should be portable to anything. It uses
SDL, libpng and OpenGL.

Just type ‘make’ (if you’re on OS X, ‘make &&
make osx’) and run ./test1 (or run OSX/test1.app).
Press the up and down arrows to cycle through
the loaded textures, you’ll see that some of
them are corrupted. Press escape to exit.

Indexed alpha textures don’t currently work.
The problem I’m having is with textures that use
two bytes per channel or element (see gs32.png
and rgb64.png in particular).

I forgot to add:

pngload.c is an independently developed wrapper
around libpng, I just imported the source file
directly. If anybody would like that package
(complete with unit tests), I’ll
upload it as well.

I’m not going to go through all that code, for time reasons, but a quick look for immediate pitfalls didn’t turn up anything obvious.

However, one thing I did notice is that something is wrong with the hex dump you posted. The image is 64 pixels wide, and there are 64 bytes in the dump before the series of 0xDD starts. Those 64 bytes cover exactly one row of the image shown and, by visual inspection, match the first row.

What does this mean? Even if the image is, as you claim, 16 bits per component, the data you pass to glTexImage has already been reduced to 8 bits per pixel: 64 bytes of data, 64 pixels, so it must be one byte per pixel.

It also means you are actually passing GL_UNSIGNED_BYTE instead of GL_UNSIGNED_SHORT. The data is bytes, and it is clearly being read correctly, so OpenGL can’t be reading it as shorts.

So I suggest you take a moment with your debugger and watch EXACTLY what happens at each step when loading a 16-bit greyscale image. From what you have posted, the image is, at some point, downsampled incorrectly to 8 bits per component.
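
For example, something as simple as this right before the upload would show what OpenGL really receives (the variable names here are made up; substitute your own):

/* Hypothetical names: print and check the exact arguments just before the upload. */
fprintf(stderr, "glTexImage2D: internal=0x%04x format=0x%04x type=0x%04x (%dx%d)\n",
        internal_format, pixel_format, pixel_type, gx->w, gx->h);
assert(pixel_type == GL_UNSIGNED_SHORT); /* GL_UNSIGNED_SHORT is 0x1403, GL_UNSIGNED_BYTE is 0x1401 */
glTexImage2D(GL_TEXTURE_2D, 0, internal_format, gx->w, gx->h, 0, pixel_format, pixel_type, pixels);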

You wouldn’t BELIEVE what the problem was.

I’ve stepped through this program in a debugger
over 100 times over the course of two days.

That table in gltex_load.c - the for loop checked
for the type of texture, but didn’t check that the
bpp matched. So, even though I was setting the
right values for the surfaces, the wrong ones
were being passed to glTexImage2D().
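
In other words, the lookup was effectively doing
something like this (names made up, this isn’t
the real table from gltex_load.c), except that
the bpp comparison was missing:

/* Hypothetical reconstruction of the bug, not the actual code: */
for (i = 0; i < n_entries; i++) {
    if (table[i].type == surface_type &&
        table[i].bpp  == surface_bpp) {  /* <-- this second check is what was missing */
        internal_format = table[i].internal_format;
        pixel_format    = table[i].pixel_format;
        pixel_type      = table[i].pixel_type;
        break;
    }
}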

I can’t BELIEVE I missed it. Repeatedly.

Thanks for your patience, everyone.