16 Bit Monochrome Texture



boxsmiley
09-07-2004, 09:26 PM
I cannot seem to get OpenGL to display 16-bit monochrome textures. I fill the buffer properly and use GL_UNSIGNED_SHORT as the pixel type argument to gluBuild2DMipmaps or glTexImage2D. Still, the data gets truncated at the one-byte level when the texture is built. This leaves my data capable of only 256 values and loses precision. Any ideas?

sqrt[-1]
09-07-2004, 09:48 PM
Do you specify the internal format parameter of glTexImage2D as GL_LUMINANCE16?
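Something like this is what I would expect to work (just a sketch, untested; width/height/pixels stand for whatever your app already has):

#include <GL/gl.h>

void upload_lum16(GLsizei width, GLsizei height, const GLushort *pixels)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);    /* 16-bit rows, 2-byte aligned */
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_LUMINANCE16,              /* requested internal format */
                 width, height, 0,
                 GL_LUMINANCE,                /* one channel in client memory */
                 GL_UNSIGNED_SHORT,           /* 16 bits per value */
                 pixels);
}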

Also, what video card, driver and OS?

jwatte
09-08-2004, 07:34 PM
I wouldn't be surprised if the GLU functions didn't know about 16-bit formats; they are ancient (and all software).

What is your graphics card and driver version? Are you sure the card actually supports 16-bit internal texture formats? Which internal and external formats are you passing to TexImage()?

V-man
09-09-2004, 06:42 AM
The GLU function is defined as

GLint gluBuild2DMipmapLevels(GLenum target, GLint internalFormat,
                             GLsizei width, GLsizei height,
                             GLenum format, GLenum type,
                             GLint userLevel, GLint baseLevel, GLint maxLevel,
                             const void *data);

Notice the second parameter (internalFormat); it gets passed to glTexImage2D directly.
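So asking glu for a 16-bit internal format looks something like this (sketch, untested; width/height/pixels are whatever you already have):

gluBuild2DMipmaps(GL_TEXTURE_2D,
                  GL_LUMINANCE16,      /* internalFormat: the second parameter */
                  width, height,
                  GL_LUMINANCE, GL_UNSIGNED_SHORT,
                  pixels);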

Try:

GLint comp;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPONENTS, &comp);

but there is no guarantee it will be honest.

boxsmiley
09-09-2004, 03:19 PM
Thanks for the replies, all. Yes, I have tried GL_LUMINANCE16 to no avail. I also did the glGetTexLevelParameteriv thing; the answer was 16, so I guess this means it should be supported. However, I haven't been successful. I've tried both glTexImage2D and gluBuild2DMipmaps with all of the different available combinations of internal format, using GL_UNSIGNED_SHORT as the type param, and nothing I do can keep the second byte of data from being truncated. I just would like to support 0-65535 different shades of monochrome data rather than 0-255. If anyone has any ideas of how else to accomplish this I would appreciate it. I have tried a number of different platforms, including SGI workstations, Linux and Windows, on both GF2 and GF4, and nothing seems to work. Maybe this is an OpenGL limitation?

sqrt[-1]
09-09-2004, 03:37 PM
Just so we know:

How do you know your data is being truncated? Are you doing a fancy shader that gets bad results, or are you doing a glGetTexImage and comparing the results? (My thinking is that maybe it is how you are using the texture that clamps the texture lookup...)
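A glGetTexImage check would look something like this (rough sketch; assumes the texture is currently bound and that pixels/width/height are the same ones used for the upload):

GLushort *check = (GLushort *)malloc(width * height * sizeof(GLushort));
glGetTexImage(GL_TEXTURE_2D, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, check);
printf("uploaded 0x%04X, read back 0x%04X\n", pixels[0], check[0]);  /* compare first texel */
free(check);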

V-man
09-09-2004, 04:50 PM
It could be that he did a test by rendering a textured quad, which doesn't prove much if your buffer is 8 bits per component.

That's one possible mistake.

boxsmiley
09-09-2004, 06:20 PM
I am determining that the data is truncated by beginning with a bitmap that alternates 0xFF in one byte and 0x00 in the other byte for the whole bitmap. At this point the texture generated is all white. This tells me that the two bytes are only being evaluated as a single byte (i.e., 0xFF). On the other hand, if it were being evaluated as two bytes, I would expect it to be quite dark (255/65535). Does this make sense? Also, in order to make sure that it was not white for some other reason, a buffer filled with 0x00 and 0xA9 yielded a somewhat grayish color. Does this seem like a valid test?

sqrt[-1]
09-09-2004, 07:53 PM
That is probably not a good test, as whatever the first byte is will mostly determine the color (i.e., you may have your byte ordering wrong):

i.e. 0xFF 0x00 = whiteish
0xA9 0x00 = greyish

Anyway, depending on how you set your bytes this may be different.

Try just creating an array of unsigned 16-bit values and filling them with 0x00FF. Then access it in an ARB/GLSL shader and multiply it by 128. You should then see some grey.

(Also, does OpenGL have a byte-ordering setting? I don't know.)
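For what it's worth, there is a pixel-store flag for exactly that; a minimal, untested sketch, in case the source data has the other endianness:

glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE);  /* swap the two bytes of each GL_UNSIGNED_SHORT while unpacking */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE); /* back to the default */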

lgrosshennig
09-10-2004, 05:47 AM
Originally posted by boxsmiley:
I have tried a number of different platforms, including SGI workstations, Linux and Windows, on both GF2 and GF4, and nothing seems to work. Maybe this is an OpenGL limitation?

No, it's a hardware limitation.

No matter what you do, a GeForce 2/3/4/5 is not able to display more than 8 bits per channel, because the RAMDAC only uses an 8-bit digital-to-analog converter for each channel.
I'm not 100% sure, but I would be surprised if the SGI RAMDACs actually sample 16 bits per channel.

So it depends on the framebuffer format and on whether the RAMDAC is able to convert it.

The GeForce 6 can use an FP16 framebuffer if I am not mistaken (I don't have one), but I don't know how the RAMDAC handles such a framebuffer.

Klaus
09-10-2004, 09:34 AM
This has nothing to do with the RAMDAC:
Internal precision != display precision

LUMINANCE16 textures work on GeForce 3/4/5/6 (since driver 40 something) and ATI Radeon R2xx/R3xx including filtering.

- Klaus

lgrosshennig
09-10-2004, 01:44 PM
Originally posted by Klaus:
This has nothing to do with the RAMDAC:
Internal precision != display precision
I am aware of that, but since he also wants to create mipmaps, I assumed he wants to use the texture as a diffuse texture and not as a LUT of some sort. Using a 16-bit diffuse texture on hardware that is limited to 8 bits per channel of output is IMHO pretty useless unless you do a whole lot of blending.


Originally posted by Klaus:
LUMINANCE16 textures work on GeForce 3/4/5/6 (since driver 40 something) and ATI Radeon R2xx/R3xx including filtering.
AFAIK the GeForce 3 and 4 only have an internal precision of 10 bits per channel. I don't know about the ATIs, though. Following your argument: driver caps != hardware caps.

Anyway, this discussion doesn't help boxsmiley, does it?

Klaus
09-10-2004, 01:58 PM
AFAIK the GeForce 3 and 4 only have an internal precision of 10 bits per channel. I don't know about the ATIs, though. Following your argument: driver caps != hardware caps.

That's not true. 10 bits in the register combiners, probably 32 bits in the texture shaders. GeForce 3/4 do have HILO texture formats (2x16-bit channels) and support LUMINANCE16 textures (1x16-bit channel). Of course, as V-man already pointed out, if you render high-precision textures into a low-precision on-screen buffer you will lose your precision.

Anyhow, the answer to the original question is: yes, 16-bit monochrome textures work on GeForce 3/4/5/6 and ATI Radeon R2xx/R3xx. It might not work through gluBuild2DMipmapLevels, so build the mipmap levels yourself, and render offscreen into a floating-point buffer for high-precision output.
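By building the mipmap levels yourself I mean roughly this kind of loop - just a sketch I have not run, assuming a square power-of-two LUMINANCE16 image and a simple 2x2 box filter (upload_lum16_mipmaps is a made-up name):

void upload_lum16_mipmaps(GLsizei size, GLushort *pixels)
{
    GLint level = 0;
    GLsizei x, y;

    /* level 0 at full resolution */
    glTexImage2D(GL_TEXTURE_2D, level, GL_LUMINANCE16, size, size, 0,
                 GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);

    while (size > 1) {
        GLsizei half = size / 2;
        for (y = 0; y < half; ++y)
            for (x = 0; x < half; ++x) {
                unsigned int sum = pixels[(2*y)   * size + 2*x]
                                 + pixels[(2*y)   * size + 2*x + 1]
                                 + pixels[(2*y+1) * size + 2*x]
                                 + pixels[(2*y+1) * size + 2*x + 1];
                pixels[y * half + x] = (GLushort)(sum / 4);  /* 2x2 average, reusing the buffer in place */
            }
        size = half;
        ++level;
        glTexImage2D(GL_TEXTURE_2D, level, GL_LUMINANCE16, size, size, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);
    }
}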

- Klaus

NitroGL
09-10-2004, 02:11 PM
I should point out that only the R3xx cores and up support textures with more than 8 bits per channel.

lgrosshennig
09-10-2004, 02:51 PM
Originally posted by Klaus:
That's not true. 10 bits in the register combiners, probably 32 bit in the texture shaders.
So I am wrong because you are guessing I am? That's rich.


Originally posted by Klaus:
Render offscreen into a floating point buffer for high precision output".
You mean the floating-point buffer that a GF3/4, an R2xx and the SGIs I know of don't have?

Klaus, I think this debate is pointless because it doesn't help boxsmiley at all. If you need to have the last word on this, so be it; I am done with you.

Klaus
09-10-2004, 03:11 PM
Klaus, I think this debate is pointless because it doesn't help boxsmiley at all. If you need to have the last word on this, so be it; I am done with you.

Keep cool.


So I am wrong because you are guessing I am? That's rich.

Texture shaders have IEEE 32-bit floating-point precision (Texture Shaders (http://developer.nvidia.com/attach/6370)).


You mean the floating-point buffer that a GF3/4, an R2xx and the SGIs I know of don't have?

As GeForce 3/4 do not support floating-point textures, you cannot output with this precision. But you can still use this precision for intermediate results.


I should point out that only the R3xx cores and up support textures with more than 8 bits per channel.

Thanks, NitroGL ... I think you are right.

- Klaus

lgrosshennig
09-10-2004, 07:04 PM
Originally posted by Klaus:
Texture shaders have IEEE 32 bit floating point precision ( Texture Shaders (http://developer.nvidia.com/attach/6370) ).
*sigh*

This is only true for the GeForce FX++ series. It does not apply to GF3/4s or even the ATI 3x00 series (which uses only 24 bits of an IEEE single float to represent floating-point values between 0.0f and 1.0f; I am not saying that this isn't sufficient).

In fact, the document you linked to does not make any comment at all with regard to the required precision of a fragment shader; it only states that you can combine two 8-bit channels to form a 16-bit integer (high byte + low byte) normal map on GF3++ hardware. In my book this is just a LUT.


Originally posted by Klaus:
As GeForce3/4 do not support floating point textures, you cannot output with this precision.
Exactly what I said. Given the available texture reads and the internal precision of a GF3/4, 16-bit diffuse textures are a no-no on GF3/GF4-class hardware.

*/sigh*

I'll rest my case.

harsman
09-12-2004, 09:06 AM
Texture shaders are full float precision but obviously not IEEE 754 standard compliant, since that would be pretty useless for graphics (what does r=g=b=NaN look like?). I think they use the s24e8 format though, so in that sense they use an "IEEE" format.

lgrosshennig
09-12-2004, 12:57 PM
Look where Klaus led this discussion to.

Why does it always have to be that A asks a question, then B, C and D make suggestions based on the more or less vague information, and then M enters saying that D is wrong without contributing anything else? Then, after some debate, M realizes that he may have been wrong after all and gives the discussion a whole new direction.

If you would take the time to actually READ what boxsmiley wrote, you would realize that he also tried to run his application on a GF2 and on SGI workstations. Honestly, it's been some time since I worked with SGI, but I can't recall a single one that had fragment shaders, and I think we all agree that a GF2 doesn't have them either (nor does it support more than 8 bits per channel). This pretty much erases any fancy fragment shader out of the equation and leaves just plain OpenGL 1.2/1.3 functionality, right?

Looks like M did not read carefully but was more than willing to jump to a wrong conclusion, and of course Q jumped in to say that M is right. :rolleyes:

Just for the record: this is boxsmiley's very first sentence (for all you geniuses who did not even bother to read):

"I cannot seem to get opengl to display 16Bit monochrome textures."

I don't know what reality you are living in, but in mine this has a lot to do with the RAMDAC.

Can we get back to topic now?

V-man
09-12-2004, 05:18 PM
Personally, I did not fully understand what boxsmiley did.

He said he can't display it, but he did not mention the backbuffer/frontbuffer precision.

I took a guess and assumed it is 32-bit (RGBA8888).

Certainly, if you can create a higher-precision backbuffer and frontbuffer and have a nice RAMDAC to go with them, then you will be able to display that texture.

Otherwise, use a p-buffer with high precision and glReadPixels on it (if your card supports it).
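i.e. after rendering into it, read it back at 16 bits per value, something like this (sketch; result/width/height are made-up names):

GLushort *result = (GLushort *)malloc(width * height * sizeof(GLushort));
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_SHORT, result);
/* the values arrive as 16-bit, but the real precision is whatever the
   buffer you rendered into actually stores */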

Humus
09-12-2004, 05:45 PM
AFAIK no hardware supports any higher precision than RGB10_A2 as a displayable format. Or maybe there are some pro cards that can do more?

harsman
09-13-2004, 05:37 AM
lgrosshennig, I just pointed out that you were wrong with regards to the precision of texture shaders on the GeForce 3/4 (not MX). Might be useful knowledge, since it makes the texture shaders (especially in conjunction with high-precision texture formats) much more useful. You are obviously correct in saying that there is no possibility to display a 16-bit monochrome texture directly on most hardware, but I honestly don't see what you're so upset about.

Klaus
09-13-2004, 06:42 AM
Personally, I did not fully understand what boxsmiley did.
I think nobody really does. Maybe he can tell us?

I doubt, though, that he was talking about on-screen display, because he hopefully knows that his monitor cannot display more than 8-bit grayscale (unless he has one of those new HDR displays).

BTW: As he was also talking about the GF4 - you could output 16-bit grayscale with a GF4 without any special RAMDAC (rough sketch of the lookup table after the list):
1. Fetch a filtered 16-bit sample from a LUMINANCE16 texture in the texture shaders.
2. Look up into a 256x256 dependent texture to split the sample into high and low bytes.
3. Output the low byte in, e.g., the red channel and the high byte in, e.g., the green channel.
4. Find some display system that puts the two bytes back together from the DVI interface and brings it to the screen with 16-bit precision.
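The 256x256 dependent texture itself would be built roughly like this (sketch only; exactly how the filtered 16-bit sample becomes the (s,t) lookup coordinate is texture-shader specific and not shown here):

static GLubyte lut[256][256][3];   /* [t][s][rgb]; s indexes the low byte, t the high byte */
int s, t;

for (t = 0; t < 256; ++t)
    for (s = 0; s < 256; ++s) {
        lut[t][s][0] = (GLubyte)s;  /* red   = low byte  */
        lut[t][s][1] = (GLubyte)t;  /* green = high byte */
        lut[t][s][2] = 0;
    }
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, lut);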

Klaus

boxsmiley
09-13-2004, 06:48 PM
Thanks everyone for the replies. I think it was a hardware limitation, as numerous people have guessed. I was trying to DISPLAY 16 full bits of grayscale data (0-65535) as a texture. I have resigned myself to performing a sort of dynamic range adjustment in order to scale it into the 8-bit limit. I was just convinced that if I could have 32-bit color (as someone said, RGBA 8,8,8,8) there must be a way of using only half that storage and devoting it all to monochrome. I guess from the posts that this is a monitor problem. Anyway, thanks for trying to help. BTW, I am kind of limited to vanilla OpenGL 1.3, so any talk of shaders and such is not useful in this particular application. Thanks for the education, people. Cheers, boxsmiley.
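In case anyone else ends up doing the same, the range adjustment I mean is basically this (rough sketch; lo and hi are whatever window I pick out of the data, window_to_8bit is just a made-up name):

#include <stddef.h>

/* Scale/clamp a 16-bit buffer into 8 bits for an ordinary GL_UNSIGNED_BYTE texture. */
void window_to_8bit(const GLushort *src, GLubyte *dst, size_t count,
                    unsigned lo, unsigned hi)
{
    size_t i;
    double scale = 255.0 / (double)(hi - lo);

    for (i = 0; i < count; ++i) {
        int v = (int)src[i] - (int)lo;
        if (v < 0) v = 0;                            /* clamp below the window */
        if (v > (int)(hi - lo)) v = (int)(hi - lo);  /* clamp above the window */
        dst[i] = (GLubyte)(v * scale + 0.5);
    }
}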

al_bob
09-13-2004, 11:30 PM
I guess from the posts that this is a monitor problem.

It's not really a monitor problem, but a RAMDAC problem. See if setting up an RGB10_A2 framebuffer helps; you get 2 more bits per color channel.

Otherwise, I can suggest some creative dithering, either by changing the texture itself or with a fragment program.
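The texture-side version of that could be as simple as an ordered dither during the 16-to-8-bit conversion (sketch; dither16to8 is a made-up helper, called per texel with its x/y position):

/* 4x4 Bayer matrix: thresholds spread evenly over the tile */
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

GLubyte dither16to8(GLushort v, int x, int y)
{
    unsigned hi  = v >> 8;                                     /* the 8 bits we keep */
    unsigned lo  = v & 0xFF;                                   /* the 8 bits we would lose */
    unsigned thr = (unsigned)(bayer4[y & 3][x & 3] * 16 + 8);  /* per-pixel threshold, 8..248 */

    if (lo > thr && hi < 255)
        ++hi;                        /* round up on some pixels, down on others */
    return (GLubyte)hi;
}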

lgrosshennig
09-16-2004, 11:21 AM
I owe you guys an apology, this is especially true for Klaus and Harsman.

I have been in a pretty fooked up mood lately (it's a girlfriend thing) and I was unreasonable and took it out on you instead of blaming myself, as I should have.

This is of course no excuse for my extremely rude behavior, but I do realize it was my mistake.

I am really sorry, guys.

Regards,
LG

dorbie
09-16-2004, 07:51 PM
Let's hit a few basic points here:

The RAMDACs on high-end modern PC cards have at least 10-bit precision, and they often need it for things like quality gamma correction.

Your monitor (a CRT) is an analog device; it can support better than 8 bits, and you could probably see this. Exactly how much you can see, and under what conditions, is an open question, but it depends on factors like scene content, viewing conditions, monitor gamma response and any applied gamma correction.

AFAIK there are limits to what any windowing system's desktop frontbuffer can display, especially with a render-to-window, copy-on-swap approach. I think you're basically limited to the display depth of the desktop unless they've got a more sophisticated MUX scheme, which I think is limited to workstations.

This is a changing situation which seems to get updated every time you look at a new graphics card.

So if you want high precision you probably have to read the backbuffer before the swap. If you want more out of the frontbuffer, your best bet is probably to go to fullscreen rendering and keep your fingers crossed, but that may not do it; it's just my best guess.

If any of this is out of date or needs correction I'd very much appreciate an update.

Christian Schüler
09-19-2004, 07:37 AM
I just want to add that you need at least 16 bits of framebuffer precision if you want a framebuffer in linear color space (de-gamma'd), so that alpha blending finally works as expected.
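A quick back-of-the-envelope check of that (my own numbers, assuming a plain gamma of 2.2 rather than the exact sRGB curve):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* How fine must a linear (de-gamma'd) framebuffer be to still separate
       the two darkest non-black steps of an 8-bit gamma-2.2 display? */
    double lin1 = pow(1.0 / 255.0, 2.2);   /* ~5.1e-6 */
    double lin2 = pow(2.0 / 255.0, 2.2);   /* ~2.3e-5 */
    double bits = -log2(lin2 - lin1);      /* bits needed to resolve that step */

    printf("linear step: %g -> about %.1f bits\n", lin2 - lin1, bits);
    /* prints roughly 1.8e-05 -> about 15.8 bits, i.e. ~16 bits of linear
       framebuffer precision just to keep the darkest steps distinct */
    return 0;
}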