Mach banding

I never realized that you could get such ugly-looking graphics, even at 32 bpp. All this time I thought the video card wasn't using enough precision or something, until I wrote my own software rendering code (it calculates lighting and everything else in software).

A white triangle will look shaded from light gray to dark gray at a specific angle of rotation, and the Mach banding becomes very visible.

I took a screenshot and checked the color values ->

The bands went something like this:

RGB (89, 89, 89)
RGB (90, 90, 90)
RGB (91, 91, 91)

A difference of 1 in each component, and it was clearly visible!
Does GL do dithering for 32 or 24 bpp?

The V-man is very upset now!

I am not sure what the default setting for dithering is, but why not try it out yourself?
Just call glDisable(GL_DITHER) before your rendering code and see what happens…
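
Something like this, just as a rough sketch (render_scene() here is a stand-in for your own drawing code, not a GL call):

    #include <GL/gl.h>

    extern void render_scene(void);   /* hypothetical: whatever draws the lit white triangle */

    /* Draw one frame with dithering forced on or off so the two can be compared. */
    void draw_frame(int use_dither)
    {
        if (use_dither)
            glEnable(GL_DITHER);      /* the GL spec lists dithering as enabled initially */
        else
            glDisable(GL_DITHER);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene();
    }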

I’m pretty sure GL_DITHER is enabled by default.
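
You can also ask the driver directly if you want to be sure:

    GLboolean dithered = glIsEnabled(GL_DITHER);   /* should report GL_TRUE on a fresh context */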

In my experience, GL_DITHER is enabled by default. However, I still wonder what GL_DITHER really does (apart from the performance loss).

Jan.
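
As I understand it, dithering just trades spatial resolution for apparent color resolution: when the ideal shade falls between two representable 8-bit values, neighbouring pixels alternate between the two levels so the average over a small area approximates the in-between value. A toy sketch of the idea (the 2x2 threshold pattern is made up for the example; what the hardware actually does is implementation-defined):

    /* Toy ordered dither for one 8-bit channel.  'ideal' is the unquantised
       shade (e.g. 89.5); x,y are the pixel's screen coordinates.  Pixels whose
       threshold is below the fractional part get pushed up one level, so the
       area average approximates the ideal value. */
    unsigned char dither_channel(float ideal, int x, int y)
    {
        static const float threshold[2][2] = {
            { 0.00f, 0.50f },
            { 0.75f, 0.25f }   /* assumed 2x2 Bayer-style pattern, not the real HW one */
        };
        if (ideal >= 255.0f)
            return 255;
        float frac = ideal - (float)(int)ideal;
        int   up   = frac > threshold[y & 1][x & 1];
        return (unsigned char)ideal + (unsigned char)up;
    }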

The problem is that you’ve got a pure greyscale going, which limits you to 256 shades of grey at 32bpp. You’d find the same thing if you tried doing a solid red, green or blue.
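
To put rough numbers on it (the 89..91 values are the ones from your screenshot; the 300-pixel width is just an assumed figure):

    /* Back-of-the-envelope: why the bands are so wide. */
    int band_width_pixels(void)
    {
        int levels_used  = 91 - 89 + 1;   /* only 3 of the 256 grey levels appear */
        int triangle_px  = 300;           /* assumed on-screen width of the face  */
        return triangle_px / levels_used; /* roughly 100-pixel-wide bands         */
    }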

Wonder when 64/128bpp graphics will be mainstream… Sorry to mention the ‘D’ word, but does anyone know if DirectX9 requires floating point color /display/ formats, or is that just for internal computation?

Sorry V-man, I have no good solutions for you.

– Jeff

Not sure about DX9. From the specs of the ATi Radeon 9700 (aka R300) I strongly suspect that the frame buffer is only 10-bits-per-component (32- or 40-bits-per-pixel). The floating-point buffers are only available for internal rendering I think.

It says: “High precision 10-bit per channel frame buffer support”. You’d think if it was genuinely 64-/128-bit FP they’d be shouting from the rooftops about it.

My experience has been that I haven't noticed anything with dithering at 24 bpp. I have yet to try it in this situation (I'm working on other parts).

Even if it counts as 256 shades, it is very noticeable. I can email you a screenshot if you want. There's only a single value change per band, and it looks like a jump of 50 or something.

Dithering will lead to extreme complications.

V-man

The R300 does floating point in the fragment shader. The displayable framebuffer maxes out at 10 bits per component (as nutball said), but I think at the cost of reducing alpha to 2 bits (10:10:10:2).
I think there is a floating-point drawable mode, but not a displayable one, supposedly for readback (so you can do multipass things, or "accelerated nuclear simulation using your GPU" type of thing).
Note that glReadPixels has GL_FLOAT as one of the possible types…
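
Roughly like this, I think (note that reading an ordinary 8-bit framebuffer this way still only gives you 256 distinct values per channel, just converted to float; whether a true float drawable is exposed is up to the driver):

    #include <GL/gl.h>
    #include <stdlib.h>

    /* Read back a rectangle of the framebuffer as RGB floats; caller frees. */
    float *read_region_float(int x, int y, int w, int h)
    {
        float *pixels = malloc((size_t)w * (size_t)h * 3 * sizeof *pixels);
        if (!pixels)
            return NULL;
        glPixelStorei(GL_PACK_ALIGNMENT, 1);                 /* tightly packed rows */
        glReadPixels(x, y, w, h, GL_RGB, GL_FLOAT, pixels);  /* float readback      */
        return pixels;
    }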

I could swear that D3D 9 required full floating-point frame buffer functionality. I remember the Anandtech article raving about the 9700’s floating-point frame buffer capabilities, so I will assume that it has full-fledged 128-bit floating-point framebuffers.

I remember hearing about a card that had a 10:10:10:2 framebuffer, but I don’t believe it was the Radeon 9700. I thought it was Matrox’s new thing.

Even if the floating-point buffer is internal, that’s fine. You render the entire display into the floating-point buffer, and then use a single pass over the actual framebuffer to do your post-process effects (saturation, gamma correction, etc).
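
Something along these lines for the final pass (this assumes the scene has already been rendered into a high-precision buffer that is now bound as the texture hdr_tex; how you create and render into such a buffer is vendor/extension specific, so it's omitted here):

    #include <GL/gl.h>

    /* One screen-aligned quad copies the high-precision result into the
       ordinary 8-bit framebuffer; saturation/gamma/tone mapping would be
       applied in this pass (e.g. via a fragment program). */
    void final_pass(GLuint hdr_tex)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, hdr_tex);

        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
        glEnd();
    }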