16-Bit Monochrome Texture

I cannot seem to get OpenGL to display 16-bit monochrome textures. I fill the buffer properly and use GL_UNSIGNED_SHORT as the pixel type argument to gluBuild2DMipmaps or glTexImage2D. Still, the data gets truncated to one byte when the texture is built, which limits my data to 256 values and loses precision. Any ideas?

Do you specify GL_LUMINANCE16 as the internal format parameter of glTexImage2D?
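For reference, a minimal sketch of the kind of upload being suggested, assuming a tightly packed width x height array of unsigned shorts (the variable names and sizes here are just placeholders):

/* assumes <GL/gl.h> is included */
GLsizei width = 256, height = 256;     /* example size */
GLushort pixels[256 * 256];            /* filled with 16-bit luminance values elsewhere */
GLuint tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 2); /* rows of GLushort start on 2-byte boundaries */
glTexImage2D(GL_TEXTURE_2D,
             0,                  /* mip level */
             GL_LUMINANCE16,     /* internal format: request 16 bits per texel */
             width, height,
             0,                  /* border */
             GL_LUMINANCE,       /* external format: one channel */
             GL_UNSIGNED_SHORT,  /* external type: 16-bit values */
             pixels);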

Also, what video card, driver and OS?

I wouldn't be surprised if the GLU functions didn't know about 16-bit formats; they are ancient (and all software).

What are your graphics card and driver version? Are you sure the card actually supports 16-bit internal texture formats? Which internal and external formats are you passing to glTexImage2D?

The GLU function is defined as

GLint gluBuild2DMipmapLevels(GLenum target, GLint internalFormat,
                             GLsizei width, GLsizei height,
                             GLenum format, GLenum type,
                             GLint userLevel, GLint baseLevel, GLint maxLevel,
                             const void *data);

Notice the second parameter (internalFormat). It gets passed to glTexImage2D directly.
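So a GLU build that keeps the 16-bit request would look something like this (a sketch, reusing the placeholder image variables from the snippet above; requires <GL/glu.h>):

/* GLU forwards internalFormat (here GL_LUMINANCE16) straight to glTexImage2D
   for every mip level it generates. */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_LUMINANCE16, width, height,
                  GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);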

Try glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPONENTS, &comp);

but there is no guarantee it will be honest.
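Spelled out, and with a second query for the stored bit depth (GL_TEXTURE_COMPONENTS is the legacy name for GL_TEXTURE_INTERNAL_FORMAT), the check might look like this:

/* Ask what the driver actually allocated for level 0 of the bound texture. */
GLint comp = 0, lumBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPONENTS, &comp);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_LUMINANCE_SIZE, &lumBits);
/* lumBits == 16 suggests the request was accepted, but as said above,
   the driver is not obliged to be honest about it. */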

Thanks for the replies, all. Yes, I have tried GL_LUMINANCE16 to no avail. I also did the glGetTexLevelParameteriv check; the answer was 16, which I take to mean it should be supported. However, I haven't been successful. I've tried both glTexImage2D and gluBuild2DMipmaps with all of the available internal format combinations, using GL_UNSIGNED_SHORT as the type parameter, and nothing I do keeps the second byte of data from being truncated. I just want to support 0-65535 shades of monochrome data rather than 0-255. If anyone has any ideas of how else to accomplish this, I would appreciate it. I have tried a number of different platforms, including SGI workstations, Linux and Windows, on both a GF2 and a GF4, and nothing seems to work. Maybe this is an OpenGL limitation?

Just so we know:

How do you know your data is being truncated? Are you doing a fancy shader that gets bad results, or are you doing a glGetTexImage and comparing the results? (My thinking is that maybe it is how you are using the texture that clamps the texture lookup…)
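For example, a readback comparison along these lines (just a sketch, reusing the placeholder image from the upload snippet above) would show whether the upload itself lost precision, independent of how the texture is displayed:

/* Read level 0 back as 16-bit values and compare against what was uploaded. */
GLushort readback[256 * 256];
glGetTexImage(GL_TEXTURE_2D, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, readback);
if (readback[0] != pixels[0])
    printf("precision lost: uploaded 0x%04X, read back 0x%04X\n",
           pixels[0], readback[0]);   /* needs <stdio.h> */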

It could be that he did a test by rendering a textured quad, which doesn't prove much if your framebuffer is 8 bits per component.

That's one possible mistake.

I am determining that the data is truncated by starting with a bitmap that alternates 0xFF in one byte and 0x00 in the other byte across the whole image. The texture generated from it is all white, which tells me the two bytes are being evaluated as a single byte (i.e. 0xFF). If they were being evaluated as two bytes, I would expect it to be quite dark (255/65535). Does this make sense? Also, to make sure it was not white for some other reason, a buffer filled with 0x00 and 0xA9 yielded a somewhat greyish color. Does this seem like a valid test?

That is probably not a good test, as whatever ends up in the high byte will mostly determine the color (i.e. you may have your byte ordering wrong):

0xFF 0x00 = whitish
0xA9 0x00 = greyish

Anyway, depending on how you set your bytes, this may come out differently.

Try just creating an array of unsigned shorts and filling each element with 0x00FF. Then access it in an ARB/GLSL shader and multiply it by 128; you should then see some grey.

(Also, does OpenGL have a byte-ordering setting? I don't know.)
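On the byte-ordering question: yes, the pixel-store state has a swap flag (GL_UNPACK_SWAP_BYTES). A rough sketch of the suggested test, leaving the shader part out:

/* Fill every texel with 0x00FF, i.e. 255/65535 -- nearly black in 16 bits. */
GLushort grey[256 * 256];
int i;
for (i = 0; i < 256 * 256; ++i)
    grey[i] = 0x00FF;

/* If the source bytes are in the wrong order for this platform,
   GL can swap them while unpacking. */
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);  /* set GL_TRUE if the data is byte-swapped */

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 256, 256, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, grey);

/* With a real 16-bit texture this samples as ~0.0039 (multiply by 128 in a
   shader to see mid-grey). If it shows up as plain white, only the 0xFF
   byte is being used. */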

Originally posted by boxsmiley:
I have tried a number of different platforms, including SGI workstations, Linux and Windows, on both a GF2 and a GF4, and nothing seems to work. Maybe this is an OpenGL limitation?
No, it's a hardware limitation.

No matter what you do, a GeForce 2/3/4/5 is not able to display more than 8 bits per channel, because the RAMDAC only uses an 8-bit digital-to-analog converter for each channel.
I'm not 100% sure, but I would be surprised if the SGI RAMDACs actually sample 16 bits per channel.

So it depends on the framebuffer format and on whether the RAMDAC is able to convert it.

The GeForce 6 can use an FP16 framebuffer if I am not mistaken (I don't have one), but I don't know how the RAMDAC handles such a framebuffer.

This has nothing to do with the RAMDAC:
Internal precision != display precision

LUMINANCE16 textures work on GeForce 3/4/5/6 (since driver 40-something) and ATI Radeon R2xx/R3xx, including filtering.

  • Klaus

Originally posted by Klaus:
This has nothing to do with the RAMDAC:
Internal precision != display precision

I am aware of that, but since he also wants to create mipmaps, I assumed he wants to use the texture as a diffuse texture and not as a LUT of some sort. Using a 16-bit diffuse texture on hardware that is limited to 8 bits per channel of output is IMHO pretty useless unless you do a whole lot of blending.

Originally posted by Klaus:
LUMINANCE16 textures work on GeForce 3/4/5/6 (since driver 40-something) and ATI Radeon R2xx/R3xx, including filtering.

AFAIK the GeForce 3 and 4 only have an internal precision of 10 bits per channel. I don't know about the ATIs, though. Following your argument: driver caps != hardware caps.

Anyway, this discussion doesn't help boxsmiley, does it?

AFAIK the GeForce 3 and 4 only have an internal precision of 10 bits per channel. I don't know about the ATIs, though. Following your argument: driver caps != hardware caps.
That's not true: 10 bits in the register combiners, probably 32 bits in the texture shaders. GeForce 3/4 do have HILO texture formats (2x16-bit channels) and support LUMINANCE16 textures (1x16-bit channel). Of course, as V-Man already pointed out, if you render high-precision textures into a low-precision on-screen buffer you will lose your precision.

Anyhow, the answer to the original question is: yes, 16-bit monochrome textures work on GeForce 3/4/5/6 and ATI Radeon R2xx/R3xx. They might not work with gluBuild2DMipmapLevels, so build the mipmap levels yourself, and render offscreen into a floating-point buffer for high-precision output.
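A sketch of building the levels by hand, assuming a hypothetical halve_image() helper that box-filters each level down on the CPU (memory management omitted, placeholder image variables reused from above):

/* Upload every mip level explicitly so each one keeps GL_LUMINANCE16. */
GLushort *src = pixels;
GLsizei w = width, h = height;
GLint level = 0;

for (;;) {
    glTexImage2D(GL_TEXTURE_2D, level, GL_LUMINANCE16, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_SHORT, src);
    if (w == 1 && h == 1)
        break;
    src = halve_image(src, &w, &h);  /* hypothetical 16-bit 2x2 box filter */
    ++level;
}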

  • Klaus

I should point out that only the R3xx cores and up support textures with more than 8 bits per channel.

Originally posted by Klaus:
That's not true: 10 bits in the register combiners, probably 32 bits in the texture shaders.

So I am wrong because you are guessing I am? That's rich.

Originally posted by Klaus:
Render offscreen into a floating-point buffer for high-precision output.

You mean the floating-point buffer that a GF3/4, an R2xx, and the SGIs I know of don't have?

Klaus, I think this debate is pointless because it doesn't help boxsmiley at all. If you need to have the last word on this, so be it; I am done with you.

Klaus, I think this debate is pointless because it doesn't help boxsmiley at all. If you need to have the last word on this, so be it; I am done with you.
Keep cool.

So I am wrong because you are guessing I am? That's rich.
Texture shaders have IEEE 32-bit floating-point precision (Texture Shaders).

You mean the floating-point buffer that a GF3/4, an R2xx, and the SGIs I know of don't have?
As GeForce 3/4 do not support floating-point textures, you cannot output with this precision. But you can still use it for intermediate results.

I should point out that only the R3xx cores and up support textures with more than 8 bits per channel.
Thanks, NitroGL … I think you are right.

  • Klaus

Originally posted by Klaus:
Texture shaders have IEEE 32-bit floating-point precision (Texture Shaders).

sigh

This is only true for the GeForce FX and later series. It does not apply to GF3/4s, or even the ATI 3x00 series (which uses only 24 bits of an IEEE single float to represent floating-point values between 0.0f and 1.0f; I am not saying that this isn't sufficient).

In fact, the document you linked to does not make any comment at all with regard to the required precision of a fragment shader; it only states that you can combine two 8-bit channels to form a 16-bit integer (high byte + low byte) normal map on GF3 and later hardware. In my book this is just a LUT.

Originally posted by Klaus:
As GeForce 3/4 do not support floating-point textures, you cannot output with this precision.

Exactly what I said. Given the available texture reads and the internal precision of a GF3/4, 16-bit diffuse textures are a no-go on GF3/GF4-class hardware.

/sigh

I’ll rest my case.

Texture shaders are full float precision but obviously not IEEE 754 standard compliant, since that would be pretty useless for graphics (what does r=g=b=NaN look like?). I think they use the s24e8 format though, so in that sense they use an "IEEE" format.

Look where Klaus led this discussion to.

Why does it always have to be that A asks a question, then B, C and D make suggestions based on the more or less vague information, and then M enters saying that D is wrong without contributing anything else? Then, after some debate, M realizes that he may have been wrong after all and gives the discussion a whole new direction.

If you would take the time to actually READ what boxsmiley wrote, you would realize that he also tried to run his application on a GF2 and on SGI workstations. Honestly, it's been some time since I worked with SGI, but I can't recall a single one that had fragment shaders, and I think we all agree that a GF2 doesn't have them either (nor does it support more than 8 bits per channel). This pretty much takes any fancy fragment shader out of the equation and leaves just plain OpenGL 1.2/1.3 functionality, right?

Looks like M did not read carefully but was more than willing to jump to a wrong conclusion, and of course Q jumped in to say that M is right. :rolleyes:

Just for the record: this is boxsmiley’s very first sentence (for all you geniuses who did not even bother to read):

“I cannot seem to get OpenGL to display 16-bit monochrome textures.”

I don't know what reality you are living in, but in mine this has a lot to do with the RAMDAC.

Can we get back on topic now?

Personally, I did not fully understand what boxsmiley did.

He said he can’t display it, but he did not mention the backbuffer/frontbuffer precision.

I took a guess and assumed it is 32-bit (RGBA8888).

Certainly, if you can create a higher-precision back buffer and front buffer, and have a nice RAMDAC to go with them, then you will be able to display that texture.

Otherwise, use a high-precision p-buffer and glReadPixels from it (if your card supports it).
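For the readback route, something along these lines (a sketch; the platform-specific p-buffer setup is left out, and the size is a placeholder):

/* With the high-precision drawable current, read the result back as
   16-bit luminance instead of letting an 8-bit front buffer clamp it. */
GLushort result[256 * 256];
glReadPixels(0, 0, 256, 256, GL_LUMINANCE, GL_UNSIGNED_SHORT, result);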