1-bit rendering



Decimal Dave
01-04-2002, 08:46 PM
Ok, this is a weird problem that intuitively seems easy, but I'm having a really tough time figuring it out.

What I need is to render what starts out as a full color scene and display it as a 1-bit dithered image. The overall appearance would be reminiscent of photographs appearing on ancient B&W hardware. I want to avoid any per-pixel operations because this needs to be done in real time.

If 1-bit is too difficult, I might be able to get by with just a grayscale image...though I can't find a way to do that either.

Has anyone ever attempted this in OpenGL?

Dodger
01-04-2002, 09:58 PM
I'd say the only way is to read back the frame buffer with glReadPixels and do the dithering yourself. The best-looking result will probably be error diffusion (Floyd-Steinberg): basically you compute the brightness of each RGB pixel, threshold it to black or white at 50%, and then compute the error that this threshold has introduced. Then add this error, weighted, into the thresholding of the other pixels, for example like this:

  x   21%  7%
 19%  15%

if x is the pixel you just converted to b/w, add 21% of the error into the thresholding of the next pixel to the right, 19% into the pixel below, 15% into the one below and to the right, and 7% into x+2. These values are off the top of my head, but they should work fairly reasonably... it's been a while since I've done this ;)
You'll need a variable and 3 buffers the size of one line of your image to store the error values for each pixel.

Store the dithered b/w values as 0 and 255 in a greyscale image and either draw it back in the framebuffer with glDrawPixels or copy it into a texture (glCopyTexSubImage2D) and render a screen filling quad.
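A minimal CPU-side sketch of that pass (plain C; the function name is mine, single-channel 8-bit input is assumed, and it uses the illustrative weights above rather than the canonical Floyd-Steinberg 7/16, 5/16, 3/16, 1/16 coefficients):

```c
#include <stdlib.h>
#include <string.h>

/* Dither an 8-bit grayscale image to 0/255 in place by error diffusion.
   Weights follow the layout described above (right, x+2, below,
   below-right); they are illustrative, not the canonical coefficients. */
void dither_error_diffusion(unsigned char *img, int w, int h)
{
    /* error accumulators for the current and the next scanline */
    float *cur  = calloc(w + 2, sizeof(float));
    float *next = calloc(w + 2, sizeof(float));
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float v   = img[y * w + x] + cur[x];
            float out = (v < 128.0f) ? 0.0f : 255.0f;  /* threshold at 50% */
            float err = v - out;                       /* error introduced */
            img[y * w + x] = (unsigned char)out;
            cur[x + 1]  += err * 0.21f;  /* next pixel to the right */
            cur[x + 2]  += err * 0.07f;  /* x+2                     */
            next[x]     += err * 0.19f;  /* pixel below             */
            next[x + 1] += err * 0.15f;  /* below and to the right  */
        }
        /* advance one scanline: next becomes current */
        memcpy(cur, next, (w + 2) * sizeof(float));
        memset(next, 0, (w + 2) * sizeof(float));
    }
    free(cur);
    free(next);
}
```

In practice you would first collapse the glReadPixels RGB data to luminance, run this, and then glDrawPixels the result as GL_LUMINANCE (or copy it into a texture as described above).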

Error diffusion has the advantage of a heterogeneous point distribution (the number of representable grey values is not restricted by the size of the pattern, as opposed to ordered dither, which means fine gradients are represented more naturally) and will show fine details in the image very reasonably. The disadvantage is that the images can look grainy.
Play around a little with the distribution values to find the ones that best match the type of image you're rendering (if you're clever, make the error diffusion adaptive to the image... a little tricky but possible ;))

glReadPixels will most likely be the biggest bottleneck in this app. Try to keep the screen resolution as low as visually acceptable to keep performance up.

Hope that helped.

dorbie
01-04-2002, 11:19 PM
Hmm, this seems like it'd be slow in pure software, and there is potential to accelerate this operation, keeping everything in hardware, on some implementations.

Perhaps you want to draw the scene, then draw the dither pattern's +ve deltas in 2D over the screen. This is a classic way of applying a position-based threshold to fragments. The dither pattern would be your chosen pattern of values between 0 and 1; you just drawpixels or texture it on with a GL_ONE, GL_ONE blendfunc to add the dither pattern in.

Next you want to do a threshold test on the framebuffer fragments. It wouldn't be error diffused, but it might be fairly quick. So how could you do this test in OpenGL?

One quick way would be to read it back through the graphics pipeline with a scale & bias.

You could just use glPixelTransfer with a copypixels.
http://www.eecs.tulane.edu/www/graphics/doc/OpenGL-Man-Pages/glPixelTransfer.html

I dunno if this degree of scale & bias would be legal, or what would happen with the clamping. Because of the order of operations, you need to scale by about 255 and then bias by about -245 with an 8-bit visual; perhaps you could transfer twice, reversing the scale/bias order, to keep the numbers a bit more sane. If this doesn't work, it's also possible on more advanced implementations that have the ARB imaging extensions: a color matrix transformation and a copypixels through the OpenGL imaging pipeline, if that's available.

You could do the bias with a quick blend in the framebuffer then scale afterwards. This might be fast using a couple of pbuffers and render to texture instead.
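Dorbie's two passes can be emulated on the CPU to see what the math does. This is an illustrative sketch only: the function name, the 4x4 Bayer-style delta pattern, and the exact 255/-245 numbers (taken from the rough figures above) are all assumptions.

```c
/* Emulate dorbie's two framebuffer passes on an 8-bit grayscale buffer:
   pass 1 adds a small tiled pattern of "positive deltas" with saturation
   (what a GL_ONE, GL_ONE blend does), pass 2 scales by ~255 and biases
   by ~-245 with clamping (what glPixelTransfer + glCopyPixels would do),
   which crushes every pixel to 0 or 255. */
void dither_threshold(unsigned char *img, int w, int h)
{
    static const int pattern[4][4] = {   /* Bayer 4x4, scaled to 0..240 */
        {   0, 128,  32, 160 },
        { 192,  64, 224,  96 },
        {  48, 176,  16, 144 },
        { 240, 112, 208,  80 },
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int v = img[y * w + x] + pattern[y & 3][x & 3]; /* pass 1 */
            if (v > 255) v = 255;                           /* saturating add */
            v = (v - 245) * 255;                            /* pass 2: scale & bias */
            if (v < 0) v = 0;                               /* pipeline clamp */
            if (v > 255) v = 255;
            img[y * w + x] = (unsigned char)v;
        }
}
```

Because the delta pattern tops out at 240, a black input stays black and a white input stays white; everything in between flips per pixel depending on where it lands in the tile.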

Humus
01-05-2002, 05:08 AM
At least to get a grayscale image, the simplest and fastest way has to be to render the whole scene to a pbuffer, then draw this texture in 2D to cover the whole screen and do a DOT3 with the constant color (0.30, 0.59, 0.11), since grayscale = 0.30*R + 0.59*G + 0.11*B.
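The per-pixel arithmetic behind that DOT3, for reference (a trivial CPU sketch; the function name is mine):

```c
/* grayscale = 0.30*R + 0.59*G + 0.11*B, rounded to the nearest 8-bit value */
unsigned char luminance(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)(0.30f * r + 0.59f * g + 0.11f * b + 0.5f);
}
```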

Decimal Dave
01-05-2002, 08:30 AM
Well, I do want a really grainy effect... thresholding independent pixels will not do this, and it sounds like diffusion will involve looking at adjacent pixels in order to determine the color of any given pixel. I want this to be as fast as possible, and performing complex spatial filters is probably something I want to avoid.

To get a diffused-looking effect performed one pixel at a time, could I simply find the grayscale luminosity of the pixel and use it to randomly choose a color, weighted like this: a luminosity of 255 has a 0% chance of being black, a luminosity of 127 has a 50% chance, a luminosity of 0 has a 100% chance, and so on?

[This message has been edited by Decimal Dave (edited 01-05-2002).]
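Dave's weighted coin flip, as a CPU sketch (illustrative only; a real-time version would need a per-pixel noise source):

```c
#include <stdlib.h>

/* Stochastic threshold: a pixel of luminosity lum comes out white with
   probability lum/255 and black otherwise, so flat regions average out
   to the right shade over many pixels. */
unsigned char dither_random(unsigned char lum)
{
    return (rand() % 255 < lum) ? 255 : 0;
}
```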

jwatte
01-05-2002, 09:25 AM
Dave,

The "random" dither is basically what dorbie is suggesting.

To render a scene in grayscale, the easiest way is to just upload all textures as grayscale, and make sure all the colors you specify (light, vertex colors) are also shades of gray. The DOT3 trick mentioned above will also work, if you have spare texture ops to set that up with.

Then use CopyTexSubImage to grab your scene into a texture, and re-render the texture with a noise texture applied (either modulate, add, or signed_add), setting the texturing up so that passing pixels get written white and the others stay black (for instance by clearing to black and then using alpha test, perhaps?). You probably need register combiners and two texturing stages, or extended texture_env_combine and two stages, to pull this off, but that should be fairly generally available.

You can re-use the same noise texture, and just jitter the U/V for it a bit for each frame.

dorbie
01-05-2002, 06:23 PM
Decimal,

my proposal will give a grainy effect; the dither deltas added before the threshold test are key. You are adding noise to the image so that when you threshold later, the tone over an area can be reconstructed. The shape of the dither is completely programmable. It's not really random, and it's not error diffused; it's just a classic dither pattern, and you get to choose what dither pattern you like.

If you want a classic dither (with any dither pattern distribution) what I've described will supply it.

[This message has been edited by dorbie (edited 01-05-2002).]

Tom Nuydens
01-10-2002, 04:18 PM
I thought this was an interesting problem, and decided to take a shot at it. I came up with a trick involving texture shaders, and have a reasonable "grainy black&white" rendering going on a GeForce3. It's not perfect, and it is by no means a real error diffusion or anything, but it looks "vintage" to me :)

Here's a screenshot: http://www.delphi3d.net/misc/images/onebit.jpg -- note that the objects are much more clearly recognizable when they're moving.

However, it currently only works if the scene is grayscale to begin with. I tried using Humus' suggestion of doing a Dot3 with a constant color (0.30, 0.59, 0.11), but I can't quite get that to behave. I believe this is because Dot3 does signed math, which is not really what I need. To resolve this, I could scale and bias the constant color all I want -- but doing the same for the incoming fragment colors is a different story. Any ideas?

Thanks,

-- Tom

mcraighead
01-10-2002, 04:22 PM
Yes, this is one of the oddities of the dot3 extension -- you can't use it to perform dot products on anything other than vectors compressed into [0,1]. In register combiners parlance, it is equivalent to EXPAND_NORMAL plus a dot product. You want UNSIGNED_IDENTITY instead.

It so happens that the vast majority of uses for dot products are lighting computations, but this is certainly a weakness.

- Matt

dorbie
01-10-2002, 04:26 PM
Very nice Tom.

Humus
01-10-2002, 08:07 PM
Originally posted by Tom Nuydens:
I thought this was an interesting problem, and decided to take a shot at it. I came up with a trick involving texture shaders, and have a reasonable "grainy black&white" rendering going on a GeForce3. It's not perfect, and it is by no means a real error diffusion or anything, but it looks "vintage" to me :)

Here's a screenshot: http://www.delphi3d.net/misc/images/onebit.jpg -- note that the objects are much more clearly recognizable when they're moving.

However, it currently only works if the scene is grayscale to begin with. I tried using Humus' suggestion of doing a Dot3 with a constant color (0.30, 0.59, 0.11), but I can't quite get that to behave. I believe this is because Dot3 does signed math, which is not really what I need. To resolve this, I could scale and bias the constant color all I want -- but doing the same for the incoming fragment colors is a different story. Any ideas?

Thanks,

-- Tom

Very cool indeed! :)
I didn't really think of the signed issue; I haven't actually tried rendering grayscale this way, it was just an idea that popped up. Anyway, I don't think it should be hard to solve, especially in a texture/fragment shader. Otherwise, with the dot3 extension, you could, after having rendered the scene into a texture, draw a colored quad with the constant color (0.5, 0.5, 0.5) and glBlendFunc(GL_ONE, GL_SRC_COLOR). After that the dot3 operation should work.

[This message has been edited by Humus (edited 01-10-2002).]
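In plain arithmetic, the trick Humus describes checks out (a CPU sketch with normalized [0,1] colors; the function names are mine): the blend computes d' = 0.5 + 0.5*d, compressing [0,1] into [0.5,1], and the Dot3 hardware's EXPAND_NORMAL step, 2*x - 1, undoes it exactly.

```c
/* glBlendFunc(GL_ONE, GL_SRC_COLOR) with source color 0.5:
   result = 1*src + src*dst = 0.5 + 0.5*dst */
float compress(float d) { return 0.5f + 0.5f * d; }

/* EXPAND_NORMAL, applied implicitly to Dot3 inputs: 2*x - 1 */
float expand_normal(float x) { return 2.0f * x - 1.0f; }
```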

Tom Nuydens
01-11-2002, 03:58 AM
Thanks for the feedback, guys! I just tried doing the dot product in the register combiners like Matt said, instead of using Dot3, and it works like a charm.

My only gripe with the end result now is that the dither patterns are very unstable. The lighting on the objects changes very slowly as they rotate, but that makes the dither patterns change as well, which is much more visible and can get rather annoying.

I'll see if I can resolve this, but in any case I'll put my demo up on my site tonight, along with an explanation of the algorithm.

-- Tom

pATChes11
01-11-2002, 01:52 PM
But that's exactly why you don't do a random dither in realtime graphics. :) The only way to do it right is with some kind of aglorithmic (or is it algorithmic?) dither... but I'm only 15 (getting outta Algebra C in 4 days) and I don't have a GF3, so I don't know at all how to do it.

dorbie
01-11-2002, 02:05 PM
I assume there's no random number generation here, so how do you set the threshold/offset for each pixel? This is what defines the pattern. You want to make sure that over small areas the binary pattern sums to the average shade; this is a matter of specifying a small repeating pattern, even without error diffusion.
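A tiny sketch of that idea (the 2x2 threshold values here are my own illustrative choices): each pixel is compared against a small repeating threshold pattern, and over each tile the black/white result averages out to the input shade.

```c
/* Ordered dither against a small repeating pattern: pixel (x, y) is
   white when its value exceeds the threshold at (x mod 2, y mod 2). */
static const unsigned char thresh2x2[2][2] = {
    {  32, 160 },
    { 224,  96 },
};

unsigned char ordered2x2(unsigned char v, int x, int y)
{
    return v > thresh2x2[y & 1][x & 1] ? 255 : 0;
}
```

For a flat mid-grey of 128, exactly two of the four thresholds (32 and 96) are exceeded, so each 2x2 tile comes out half white, which is the average shade dorbie is describing.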

Tom Nuydens
01-11-2002, 02:13 PM
Originally posted by pATChes11:
But that's exactly why you don't do a random dither in realtime graphics. http://www.opengl.org/discussion_boards/ubb/smile.gif The only way to do it right is with some kind of aglorithmic (or is it algorithmic?) dither...

A "proper" dither would have been nice, but that requires access to neighboring pixels, which is something pixel shaders don't offer.

In any case, I made a couple of tweaks to my demo setup and now it looks okay. See for yourself: http://www.delphi3d.net

-- Tom

dorbie
01-11-2002, 03:07 PM
An error-diffused dither requires access to adjacent pixels; a proper dither with a fixed pattern would still be possible. If the colors vary randomly, it may be because you have too many tones and too large a pattern, or because the pattern is just too irregular.

[This message has been edited by dorbie (edited 01-11-2002).]

dorbie
01-14-2002, 10:40 AM
Just looked at your effect again since it made the OpenGL front page.

Your mistake is seeding the destination alpha with random numbers. You need to seed with a regular dither pattern. There are all sorts of choices for a pattern but the usual checker style incremental pattern would be a good option.

This is why your image looks too broken up; a random dither pattern is very bad for most images.

You should put a screenshot of the result back on the explanation page for visitors.

Tom Nuydens
01-14-2002, 12:25 PM
Dorbie, I tried changing the demo so that it loads an 8x8 pattern from a TGA rather than using random values.
http://www.delphi3d.net/misc/1bit_pat.zip

The 8x8 pattern leads to some "posterization", but it does look a lot cleaner. I'm going to play with the pattern some more to see what works best, then update the demo and explanation.

Thanks,

-- Tom

dorbie
01-14-2002, 03:58 PM
8 * 8 = 64

You need 256 shades, so you need a 16x16 dither pattern; that way you'll eliminate the posterization.
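For reference, the standard recursive Bayer construction produces exactly such a pattern (a sketch; the function name is mine). A 16x16 matrix built this way contains every value 0..255 exactly once, i.e. one threshold step per representable shade.

```c
/* Build an n x n Bayer matrix (n a power of two) into m, stored row-major.
   Each doubling step tiles four copies of the previous block, scaled by 4
   and offset by 0, 2, 3, 1 in the classic checker-style order. */
void bayer(int n, int *m)
{
    m[0] = 0;
    for (int s = 1; s < n; s *= 2)
        for (int y = 0; y < s; ++y)
            for (int x = 0; x < s; ++x) {
                int v = 4 * m[y * n + x];
                m[y * n + x]             = v;      /* top-left     +0 */
                m[y * n + (x + s)]       = v + 2;  /* top-right    +2 */
                m[(y + s) * n + x]       = v + 3;  /* bottom-left  +3 */
                m[(y + s) * n + (x + s)] = v + 1;  /* bottom-right +1 */
            }
}
```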

dorbie
01-14-2002, 04:03 PM
You really should include your screenshot, especially on the explanation page. I know you made one and posted it above, but the page itself doesn't show the results of your work for all to see.

I don't always run a demo, and I am less inclined to if I don't see a screenshot of the 'reward' for downloading and running :-)

Eye candy is also good for a broader audience IMHO.