
Solid color not so solid on nVidia??



andras
01-12-2006, 05:30 PM
Hey, this is very interesting, watch this:

I have an FBO with a 16-bit (565) RGB texture attached to its color0 attachment. First, I clear the texture using glClear() to an arbitrary color, say (0.1, 0.2, 0.0, 0.0). When I read it back with glReadPixels, the result is a solid 0x19a0 for every pixel. This is fine; it's the 565 equivalent of that color.

Now, I render into this texture with a one-line shader, "gl_FragColor = vec4(0.1, 0.2, 0.0, 0.0);", which should output the same solid color. Blending is disabled; everything I could think of is disabled. Yet when I read back the pixels again, the values are either 0x19a0, 0x21a0 or 0x1980. These look like very different numbers, but in 565 format the difference is just +/-1 bit per channel, so it's invisible to the eye!
Unfortunately, I have to use this read back data for some precise computation, where this error is not acceptable!
Note that this does not happen with pure white and pure black, just in between.
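The three readback values above really do differ by only one LSB in a single channel each; a quick sketch (Python used here just for the bit arithmetic) decodes the packed hex values:

```python
# Decode a packed r5g6b5 value into its (R, G, B) integer channel levels.
def unpack565(v):
    return ((v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F)

for v in (0x19A0, 0x21A0, 0x1980):
    print(hex(v), unpack565(v))
# 0x19a0 -> (3, 13, 0)
# 0x21a0 -> (4, 13, 0)
# 0x1980 -> (3, 12, 0)
```

So the shader output wobbles between neighboring quantization levels (R 3/4, G 12/13) around the exact clear value 0x19a0.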

Could anyone from nVidia enlighten me what's going on here? Is there anything I could do, to get the precise solid values?

I'm running GF6600 with 81.95 drivers on XP SP2.

Thanks,

Andras

tamlin
01-12-2006, 07:40 PM
Just an idea, but what if you use fp numbers that are closer to the color values you expect?

(take the equal signs with a grain of salt)
For the 3/4 difference in the 5-bit channel:
1/31*3 = 0.096774194
1/31*4 = 0.129032258

For the 12/13 in the 6-bit channel:
1/63*12 = 0.19047619
1/63*13 = 0.206349206
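Those fractions can be checked mechanically: with round-to-nearest quantization, the input (0.1, 0.2, 0.0) lands on levels 3/31 and 13/63, which packs to exactly the 0x19a0 that glReadPixels returned. A sketch (the helper names `quantize`/`pack565` are made up for illustration; the hardware's rounding mode may of course differ):

```python
# Round a float channel in [0,1] to the nearest n-bit level,
# then pack R5, G6, B5 into one 16-bit value.
def quantize(x, bits):
    return round(x * ((1 << bits) - 1))

def pack565(r, g, b):
    return (quantize(r, 5) << 11) | (quantize(g, 6) << 5) | quantize(b, 5)

print(quantize(0.1, 5), quantize(0.2, 6))   # 3 13
print(hex(pack565(0.1, 0.2, 0.0)))          # 0x19a0
```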

Relic
01-12-2006, 10:01 PM
Sounds like enabled dithering.

andras
01-13-2006, 04:35 AM
Uhh, dithering is enabled by default? I didn't know that! That's a surprise! :) Well, that explains it.. thanks!
I thought it looked like dithering, but I was so confident that I'd never enabled it that I didn't even try disabling it!
Oh boy, sorry for the beginner question..

Hampel
01-16-2006, 02:09 AM
But should dithering not be used in both cases then? :confused:

mikeman
01-18-2006, 03:55 AM
Originally posted by Hampel:
But should dithering not be used in both cases then? :confused:
I really doubt OpenGL would use dithering when clearing the buffers.

Hampel
01-18-2006, 06:36 AM
Oh, my mistake: I thought of a fixed-function vs. shader problem...

Relic
01-18-2006, 07:21 AM
Don't get yourself distracted by false claims.
Shaders don't run during a glClear.
Dithering is specified to affect glClear.
But there are a lot of different ways to fill memory faster with a solid color. I have a faint memory that dual-ported VRAM chips could even fill themselves.
If you need a dithered background use glRect (and, for kicks, compare the performance).
Dithering is only done in highcolor as far as I have seen and who needs highcolor today? Bah. ;)
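The +/-1 LSB pattern from the first post is what a simple ordered dither produces: an in-between level like 0.2 * 63 = 12.6 becomes a position-dependent mix of 12s and 13s. A toy sketch (the 2x2 Bayer thresholds below are an illustration, not nVidia's actual dither pattern):

```python
# Toy 2x2 ordered (Bayer) dither of a single 6-bit channel.
# Thresholds are illustrative only, not the real hardware pattern.
BAYER2 = [[0.25, 0.75],
          [1.00, 0.50]]

def dither(x, bits, px, py):
    ideal = x * ((1 << bits) - 1)   # e.g. 0.2 * 63 = 12.6
    base = int(ideal)               # 12
    frac = ideal - base             # 0.6
    return base + (1 if frac >= BAYER2[py % 2][px % 2] else 0)

print({dither(0.2, 6, x, y) for y in range(2) for x in range(2)})  # {12, 13}
print({dither(0.0, 6, x, y) for y in range(2) for x in range(2)})  # {0}
```

With 0.0 or 1.0 the fractional part is zero, so every pixel gets the same level; that matches the observation above that pure black and pure white read back cleanly.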

andras
01-20-2006, 12:15 PM
Ok, so I've disabled dithering, and set the texture min and mag filters to nearest (no anisotropic), but sometimes I still get different values than those in the source texture. Granted, I stretch them, warp them, etc.. but this should still not change the colors! I'm sure I've just forgotten to disable something.. Any idea what?

EDIT: In case someone is wondering what the heck I'm doing: I store signed 16-bit int values in an r5g6b5 texture. I would use a one-channel luminance format if I could, but you cannot render into one-channel textures, so I have to use RGB; but then filtering/dithering messes up the original values..

V-man
01-21-2006, 04:42 PM
Ok, so I've disabled dithering, and set the texture min and mag filters to nearest
If you mean you are binding the FBO and using it as a texture, then it is still conceivable that the values will change even if your render target is another 565 FBO and you are attempting to just dump the values into it. The pipeline runs in floats, so perhaps there is a conversion happening from 565 to float 32-32-32-32 or 16-16-16-16 and then back to 565.


I store signed 16-bit int values in an r5g6b5 texture
Try floats.

andras
01-21-2006, 06:36 PM
Floating point does not mean it's random. The exact same r5g6b5 values will be converted to the exact same float and back. There should be no problems at all.
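That claim can be verified exhaustively: converting every 5- and 6-bit level to a float in [0,1] and back with round-to-nearest reproduces the original integer, so the float pipeline by itself loses nothing (a sketch assuming round-to-nearest at both ends):

```python
# Exhaustive check: integer level -> float in [0,1] -> back to integer.
def roundtrip_ok(bits):
    m = (1 << bits) - 1
    return all(round((v / m) * m) == v for v in range(m + 1))

print(roundtrip_ok(5), roundtrip_ok(6))  # True True
```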

Also, I can't use floating point, because you cannot attach a one-channel floating-point texture to an FBO (yet).

I could use floating-point RGBA and use only the R component, but that has ridiculous size requirements (8x bigger than 16-bit RGB). I would rather stick with r5g6b5 if possible. And I think it is possible; I'm just probably doing something wrong..

shelll
01-22-2006, 01:56 AM
On nVidia cards you can attach single-channel textures (GL_FLOAT_R16_NV, GL_FLOAT_R32_NV) to an FBO.

andras
01-22-2006, 05:20 AM
Yes, but there's a catch. You have to use texture rectangles, which means that you can't use GL_REPEAT..

EDIT: hmm, wait a second, would it be possible to emulate GL_REPEAT from the fragment shader?

EDIT2: oh, I just realized that I only need repeat for the source texture, for which I guess I could use the normal ARB float format, right? I'll give that a try! But I'd still like to know why the r5g6b5 method didn't work!
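Emulating GL_REPEAT in the shader comes down to keeping only the fractional part of the normalized coordinate before scaling to texel units, i.e. fract(uv) * texSize for a texture rectangle. The wrap arithmetic, sketched in Python (in GLSL this would be the built-in fract()):

```python
import math

# GL_REPEAT semantics: drop the integer part of the normalized coordinate
# (u - floor(u), which also handles negatives), then scale to texel units
# as texture rectangles are addressed in.
def repeat_coord(u, size):
    return (u - math.floor(u)) * size

print(repeat_coord(1.25, 256))   # 64.0
print(repeat_coord(-0.25, 256))  # 192.0
```

Note that u - floor(u) wraps negative coordinates the same way GL_REPEAT does, which a simple modulo of the texel index would get wrong in C-like languages.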

andras
01-22-2006, 06:27 AM
Also, it would be nice if it worked on ATI boards too. Can I attach 1 component floats to FBO on ATI?