Post more code related to the image, and the image itself. It’s not easy to help when we have no clue what the image is supposed to look like, and the single line of code is in itself not wrong in any way. It can only be wrong together with the image data.
There really isn’t much more code than that. I’m
loading the image using a PNG library, and a raw
dump of the pixel data shows exactly what I’d expect
(exporting the pixel data to a TIFF with libtiff
produces the expected results too).
A dump of bytes from the start of the image shows
the packing:
Given greyscale image data, packed two bytes per
pixel, what parameters do I pass to glTexImage2D()
to create a texture that’s closest to the original
format?
The direct answer to the direct question you asked is: GL_LUMINANCE and GL_UNSIGNED_SHORT.
As I said in my first post, that line of code itself is not wrong. The code can only be wrong together with something else which doesn’t match.
From the look of it, the texture seems to be uploaded with rows interleaved with some constant gray color. Odd rows (counting from the top) correspond to the image and even rows to some constant gray color. Check the hex dump you posted; that series of 0xDD closely matches the color in the even rows.
That behaviour could just as well come from an incorrect setup of the PNG library, which is why I asked for more code in my first post. Given the hex dump, the result you get makes perfect sense to me. The problem is likely elsewhere.
I’ve looked into it further and even with other
images I still get the same results. The only way
I can describe it is that it seems as if OpenGL is
using GL_UNSIGNED_BYTE, no matter what pixel store
alignment is set and no matter what is passed to
glTexImage2D. I will try and post the source soon,
when I’ve tested it across a few operating
systems.
I neglected to mention that I’m currently using
Mesa software GL, as nvidia just dropped support
for my card. This MAY be the cause of the problem,
as it only happens with textures that use more
than one byte per pixel (64 bit RGBA, 48 bit
RGB, 16 bit greyscale).
It requires a POSIX build environment, but other
than that should be portable to anything. It uses
SDL, libpng and OpenGL.
Just type ‘make’ (if you’re on OS X, ‘make &&
make osx’) and run ./test1 (or run OSX/test1.app).
Press the up and down arrows to cycle through
the loaded textures; you’ll see that some of
them are corrupted. Press escape to exit.
Indexed alpha textures don’t currently work.
The problem I’m having is with textures that use
two bytes per channel or element (see gs32.png
and rgb64.png in particular).
pngload.c is an independently developed wrapper
around libpng, I just imported the source file
directly. If anybody would like that package
(complete with unit tests), I’ll
upload it as well.
I’m not going to go through all that code, for time reasons, but a quick look for the usual pitfalls didn’t turn up anything obvious.
However, one thing I noticed now is something odd about the hex dump you posted. The image is 64 pixels wide, and there are 64 bytes in the dump before the run of 0xDD begins. That pixel data occupies exactly one row in the image shown and matches the first 64 bytes of the image (by visual inspection only).
What does this mean? If the image is, as you claim, 16 bits per component, then the data you pass to glTexImage2D has been downsampled to 8 bits per component. 64 bytes of data, 64 pixels: it must be one byte per pixel.
It also means you are actually passing GL_UNSIGNED_BYTE instead of GL_UNSIGNED_SHORT. The data is bytes, and it is being read correctly, so OpenGL can’t be reading it as shorts.
So I suggest you take a moment with your debugger and watch EXACTLY what happens at each step when loading a 16-bit greyscale image. From what you have posted, the image is, at some point, downsampled incorrectly to 8 bits per component.
I’ve stepped through this program in a debugger
over 100 times over the course of two days.
That table in gltex_load.c was the culprit:
the for loop checked for the type of texture,
but not that the bpp matched. So even though
I was setting the right values for the surfaces,
the wrong ones were being passed to glTexImage2D().