Software Anti-Aliasing

I am currently implementing scene anti-aliasing for users who don’t have a 3D card that supports FSAA. My question is: which approach is faster, and which will generate a higher-quality image?

  1. Using the scene jitter method and the accumulation buffer to generate an image.

  2. Rendering a larger version of the image, copying it to a buffer, and using gluScaleImage() to generate a box-filtered, anti-aliased image.
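
For reference, method 1 is usually structured roughly like this. It is only a minimal sketch, assuming a pixel format with an accumulation buffer; draw_scene() and set_projection() are hypothetical hooks for your own code, and the jitter table is just one possible 4-sample pattern.

[code]
#include <GL/gl.h>

void draw_scene(void);                    /* your scene */
void set_projection(float jx, float jy);  /* your projection, shifted by a sub-pixel offset */

void render_accum_aa(int width, int height)
{
    /* four sample positions inside one pixel */
    static const float jitter[4][2] = {
        { 0.375f, 0.25f }, { 0.125f, 0.75f },
        { 0.875f, 0.25f }, { 0.625f, 0.75f }
    };
    int i;

    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < 4; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* the jitter is expressed in pixels; set_projection() turns it
           into a tiny translation of the view frustum */
        set_projection(jitter[i][0] / width, jitter[i][1] / height);
        draw_scene();
        glAccum(GL_ACCUM, 1.0f / 4.0f);   /* add this pass, weighted */
    }
    glAccum(GL_RETURN, 1.0f);             /* write the averaged image back */
}
[/code]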

Thanks

Another method would be to render to the background with some jittering and additive blending, with the scene dimmed by some factor.
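
Something like the following, perhaps. This is only a sketch of the idea as described, not tested code: draw_scene_dimmed() and set_projection() are placeholder hooks, the pass count and offsets are arbitrary, and (as discussed further down the thread) blending while the geometry is drawn can let fragments that should be hidden leak into the result.

[code]
#include <GL/gl.h>

void draw_scene_dimmed(float dim);        /* your scene with its colors scaled by 'dim' */
void set_projection(float jx, float jy);  /* your projection, shifted by a sub-pixel offset */

void render_additive_aa(int width, int height, int passes)
{
    int i;

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);          /* pure additive blending */

    for (i = 0; i < passes; ++i) {
        float j = (i + 0.5f) / passes;    /* placeholder sub-pixel offsets */
        glClear(GL_DEPTH_BUFFER_BIT);     /* fresh depth buffer for each pass */
        set_projection(j / width, j / height);
        draw_scene_dimmed(1.0f / passes); /* scene dimmed by the pass count */
    }
    glDisable(GL_BLEND);
}
[/code]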

The accum buffer is usually implemented in software on your basic PC card. You will lose some color precision at 32 bpp, but it should be fine as long as you don’t do too many jitter passes…

V-man

Accum buffer is software on consumer cards.

gluScaleImage() is software.

What you might want to do to stay in hardware (on modern cards, at least) is render into a pbuffer, then CopyTexSubImage it and draw that image as a quad with LINEAR filtering to soften the image. Make sure your texels are sampled between each group of 4 pixels so you get the best use of the filter.
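
A rough sketch of the copy-and-filter step, assuming the scene has just been rendered at twice the window size into the current drawable (pbuffer creation and the context switching are platform-specific and omitted), and that tex is an already-allocated power-of-two texture large enough to hold it:

[code]
#include <GL/gl.h>

void downsample_to_window(GLuint tex, int tex_w, int tex_h, int win_w, int win_h)
{
    float s1 = (float)(2 * win_w) / tex_w;   /* the image fills the lower-left 2x region */
    float t1 = (float)(2 * win_h) / tex_h;

    /* grab the 2x-size rendering from the current read buffer */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 2 * win_w, 2 * win_h);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glOrtho(0, win_w, 0, win_h, -1, 1);
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

    /* Because the image is exactly twice the window size, each window pixel
       center lands exactly between a 2x2 block of texels, so GL_LINEAR
       averages those 4 texels -- the "sampled between each 4 pixels" rule. */
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2i(0,     0);
    glTexCoord2f(s1,   0.0f); glVertex2i(win_w, 0);
    glTexCoord2f(s1,   t1);   glVertex2i(win_w, win_h);
    glTexCoord2f(0.0f, t1);   glVertex2i(0,     win_h);
    glEnd();

    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    /* projection/modelview are left in the 2D state; restore them as needed */
}
[/code]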

V-man, I don’t think that would work with a depth-buffered scene.

There are two other approaches you might consider.

You could outline everything with antialiased lines. It’s not perfect, but it’s quite effective in many situations, especially for relatively large objects where you just want to eliminate the jaggies rather than weight samples exactly.

The other approach is to use a depth-sorted saturate alpha blend with antialiased triangles. The only problem with this is that you need to depth-sort ALL the triangles in the scene; it does give perfect, high-quality anti-aliasing, though.
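
A rough sketch of that second approach, the classic saturate-alpha polygon antialiasing, assuming a framebuffer with destination alpha; draw_scene_front_to_back() is a placeholder for your own sorted traversal. The first approach is even simpler: render the scene normally, then redraw the silhouette edges as GL_LINE_SMOOTH lines with ordinary alpha blending.

[code]
#include <GL/gl.h>

void draw_scene_front_to_back(void);   /* every triangle, sorted near to far */

void render_polygon_smooth_aa(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);                    /* the sort replaces the z-buffer */
    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);  /* saturate blend: destination alpha tracks
                                                    coverage so edges don't over-brighten */
    draw_scene_front_to_back();

    glDisable(GL_POLYGON_SMOOTH);
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
}
[/code]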

>>>V-man, I don’t think that would work with a depth-buffered scene.<<<

I think it will look OK. You do have to clear the z-buffer between passes. Haven’t tried it yet.

Besides, I thought that AA polygons were done in software on most cards.

I tried the AA lines technique long ago. The lines were visible over textured polygons (maybe the texturing was not perspective correct); otherwise it’s nice and easy.

V-man

I still say it won’t work. The z-buffer doesn’t prevent distant fragments that should ultimately be occluded, but happen to be rendered first, from blending into the framebuffer. They ought to fail the depth test, but there is nothing there yet to test against and fail (or at least no guarantee without a perfect front-to-back sort).

I have done AA on textured, lit lines and it worked perfectly. Textured lines should be perspective correct. Maybe you needed to hint for nicest quality on your implementation, or there was a bug. AA lines are excessively slow on some implementations.

Yes, I guess that would not work. The idea would be to produce an image, and then accumulate. A separate buffer would be needed.

I will try the AA lines trick again when I get the chance.

V-man

Pedro V. Sander, Hugues Hoppe, John Snyder and Steven J. Gortler have a paper about the AA-lines technique for achieving anti-aliasing (and especially for removing crawling jaggies), named Discontinuity Edge Overdraw. They’ve made some very useful observations and also describe a solution for popping artefacts along appearing edges (when a new polygon comes into view).

Here’s a link:
http://www.google.com/search?q=discontinuity+edge+overdraw&ie=UTF8&oe=UTF8&hl=fi&lr=

Couldn’t you render to a pbuffer as large as the screen, with jittering? Use the pbuffer as a texture, and render to the screen with blending and intensity scaling.

Alternatively (if pbuffers/render to texture is not supported):

  1. clear screen

  2. render to screen with jitter

  3. glCopyTexImage2D to a texture

  4. clear screen

  5. render to screen with jitter

  6. glCopyTexImage2D to another texture

etc.

N) clear screen
N+1) render all the textures to the screen with blending and scaling
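
A sketch of that loop, assuming the textures are pre-allocated at (power-of-two) window size; draw_scene_jittered() and draw_fullscreen_quad() are placeholder routines, and the composite relies on the default GL_MODULATE texture environment so glColor can dim each pass.

[code]
#include <GL/gl.h>

#define PASSES 4

void draw_scene_jittered(int pass);    /* your scene with a per-pass sub-pixel offset */
void draw_fullscreen_quad(void);       /* textured quad covering the whole window */

void render_texture_accum_aa(GLuint tex[PASSES], int win_w, int win_h)
{
    int i;

    /* render each jittered pass and copy it into its own texture */
    for (i = 0; i < PASSES; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene_jittered(i);
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, win_w, win_h);
    }

    /* composite: each pass contributes 1/PASSES of the final color */
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glColor3f(1.0f / PASSES, 1.0f / PASSES, 1.0f / PASSES);
    for (i = 0; i < PASSES; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        draw_fullscreen_quad();
    }
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
}
[/code]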

This would require a lot of video memory, though, and is probably not practical.

Come to think of it… both methods would probably only work on cards that already support FSAA, since they require large textures and a lot of video RAM.

The approach I use for rendering high resolution images is rendering the scene as tiles <= the size of the viewport. You can render a very large image like this using no extensions. Jittering works well but is very slow. I have been very happy rendering at 4 times the size I need and resizing the image in Photoshop.
As for which method is better, jittering or supersampling, it depends on the scene. I prefer the second in most cases, since I can render 12000x9000 in a few seconds while jittering takes a minute or two.
Here is an article that I found on tile rendering: http://www.mesa3d.org/brianp/TR.html
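
For illustration, a rough sketch of the tiling idea (not my exact code): each tile gets its own sub-frustum of the full view, is rendered at no more than viewport size, and is read back into one large image. The downsample with gluScaleImage() or an external tool happens afterwards.

[code]
#include <GL/gl.h>
#include <stdlib.h>

void draw_scene(void);   /* your scene */

/* render an img_w x img_h RGB image in tiles no larger than tile_w x tile_h,
   where (l, r, b, t, znear, zfar) describe the full-image frustum */
unsigned char *render_tiled(int img_w, int img_h, int tile_w, int tile_h,
                            double l, double r, double b, double t,
                            double znear, double zfar)
{
    unsigned char *image = malloc((size_t)img_w * img_h * 3);
    int tx, ty;

    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    for (ty = 0; ty < img_h; ty += tile_h) {
        for (tx = 0; tx < img_w; tx += tile_w) {
            int w = (tx + tile_w > img_w) ? img_w - tx : tile_w;
            int h = (ty + tile_h > img_h) ? img_h - ty : tile_h;

            /* sub-frustum covering just this tile of the full image */
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glFrustum(l + (r - l) * tx / img_w, l + (r - l) * (tx + w) / img_w,
                      b + (t - b) * ty / img_h, b + (t - b) * (ty + h) / img_h,
                      znear, zfar);
            glMatrixMode(GL_MODELVIEW);

            glViewport(0, 0, w, h);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            draw_scene();

            /* write this tile into the right place in the big image */
            glPixelStorei(GL_PACK_ROW_LENGTH, img_w);
            glPixelStorei(GL_PACK_SKIP_PIXELS, tx);
            glPixelStorei(GL_PACK_SKIP_ROWS, ty);
            glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, image);
        }
    }
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    return image;
}
[/code]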

James

Yes, this Discontinuity Edge Overdraw “research” paper is a damned disgrace. I remember being amazed when I first saw it. Many people, including myself, had been doing this for years in vis-sim; it is a very well-known technique among hundreds of engineers in that industry. SGI used to instruct customers on this technique years ago when the VGXT came out, but even 10 years ago it wasn’t original work.

Here’s just one post online (the whole thread is relevant) that predates the paper and indicates how prevalent this was (long after the software mentioned was first completed). Note that this was in response to another poster asking how to implement this; even the person asking the question already knew the algorithm and was only asking for implementation details.
http://oss.sgi.com/projects/performer/mail/info-performer/perf-99-01/0196.html

Originally posted by JWeaver:
[b]The approach I use for rendering high resolution images is rendering the scene as tiles <= the size of the viewport. You can render a very large image like this using no extensions. Jittering works well but is very slow. I have been very happy rendering at 4 times the size I need and resizing the image in Photoshop.
As for which method is better, jittering or supersampling, it depends on the scene. I prefer the second in most cases, since I can render 12000x9000 in a few seconds while jittering takes a minute or two.[/b]

I have come to a similar conclusion. I wanted to support anti-aliasing for users who don’t have FSAA and most likely would not have a pbuffer either. I use a tiling-style method and use gluScaleImage(), which applies a box filter, to reduce the image back to the original size.

Originally posted by JWeaver:
[b]The approach I use for rendering high resolution images is rendering the scene as tiles <= the size of the viewport. You can render a very large image like this using no extensions. Jittering works well but is very slow. I have been very happy rendering at 4 times the size I need and resizing the image in Photoshop.
As for which method is better, jittering or supersampling, it depends on the scene. I prefer the second in most cases, since I can render 12000x9000 in a few seconds while jittering takes a minute or two.
Here is an article that I found on tile rendering: http://www.mesa3d.org/brianp/TR.html

James[/b]

I don’t understand why the jitter takes so long for you. I have just implemented jitter FSAA and saw virtually no slowdown. In fact, NVIDIA’s FSAA slows my app down a lot more and has less effect.

The way I do it is to capture the final image in the back buffer (no need to use the accumulation buffer) using glCopyTexSubImage2D, then redraw it 9 times with offsets. (9 is overkill; 4 is probably enough.)
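
Roughly what I mean, as a sketch: tex is assumed to be a pre-allocated power-of-two texture at least as large as the window, the offset pattern is just a placeholder, and the dimming via glColor relies on the default GL_MODULATE texture environment.

[code]
#include <GL/gl.h>

void jitter_blur(GLuint tex, int tex_w, int tex_h, int win_w, int win_h, int taps)
{
    float s1 = (float)win_w / tex_w;
    float t1 = (float)win_h / tex_h;
    int i;

    /* capture the finished frame from the back buffer */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, win_w, win_h);

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glOrtho(0, win_w, 0, win_h, -1, 1);
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

    /* redraw the frame 'taps' times with small offsets, each pass dimmed */
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0f / taps, 1.0f / taps, 1.0f / taps);
    for (i = 0; i < taps; ++i) {
        float ox = (float)(i % 2), oy = (float)((i / 2) % 2);  /* placeholder offsets */
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(ox,         oy);
        glTexCoord2f(s1,   0.0f); glVertex2f(ox + win_w, oy);
        glTexCoord2f(s1,   t1);   glVertex2f(ox + win_w, oy + win_h);
        glTexCoord2f(0.0f, t1);   glVertex2f(ox,         oy + win_h);
        glEnd();
    }
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
}
[/code]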

Here’s an example:
No FSAA http://planet3d.demonews.com/GravityNOFSAA.JPG
4xFSAA http://planet3d.demonews.com/Gravity4XFSAA.JPG
I have also implemented motion blur which you can see in the first image.

The images were taken from my colliding galaxies demo.

I guess the biggest problem with jitter is banding from the layering and loss of colour precision. Hurry up with those 64-bit colour cards, NVIDIA and ATI.

I realise now that I have repeated V-man’s idea. I’m not sure I understand why it won’t work for a depth-buffered scene; maybe I will try it out…

Rendering to a texture may be worth investigating for antialiasing a window. It would not suit my needs, as I need to render offscreen at a high resolution for print. That adds the need for tiling, and the shifting-texture method would produce artifacts along the tile edges. The accumulation buffer doesn’t have this problem.

James

I prefer the sharp image, if I’m allowed to say that…

AA should not make an image look blurry, but more detailed. That’s just an image blur, and the image doesn’t look nearly as accurate as the sharp one. I prefer the sharp one; it looks more 3D to me…

That’s just my opinion.

Originally posted by davepermen:
I prefer the sharp image, if I’m allowed to say that…

Yes, you are. In fact, I agree with you; I was just messing around to see if I could do a jitter blur fast.

JWeaver,

If you’re reading the data back, you might as well just render at 2x the resolution and filter in software after you slurp the image back. Doing 4x2 -> 2x1 in MMX is really easy, and doesn’t add much time to the readback case. If the API were not synchronous, you could even overlap it to get it “for free”, but, alas, OpenGL is too UNIX-centric in its initial design, and UNIX is all about being synchronous.

So, to render an 800x600 image with anti-aliasing, you render four 800x600 images, each covering a 400x300 quarter of the output image, and you box filter (average each 2x2 block of pixels) while assembling the final output image. This method doesn’t have any seaming/tiling problems if you implement it correctly, although getting the projection offset for each quarter right is crucial.
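
In plain C, the box filter is just a 2x2 average per output pixel (a sketch; an MMX version would do the same arithmetic on packed pixels):

[code]
#include <stddef.h>

/* src is a (2*w) x (2*h) RGB8 image, dst is the w x h RGB8 result */
void box_filter_2x(const unsigned char *src, unsigned char *dst, int w, int h)
{
    size_t src_stride = (size_t)2 * w * 3;
    int x, y, c;

    for (y = 0; y < h; ++y) {
        const unsigned char *row0 = src + (size_t)(2 * y)     * src_stride;
        const unsigned char *row1 = src + (size_t)(2 * y + 1) * src_stride;
        for (x = 0; x < w; ++x) {
            for (c = 0; c < 3; ++c) {
                int sum = row0[(2 * x) * 3 + c] + row0[(2 * x + 1) * 3 + c]
                        + row1[(2 * x) * 3 + c] + row1[(2 * x + 1) * 3 + c];
                dst[((size_t)y * w + x) * 3 + c] = (unsigned char)((sum + 2) / 4);
            }
        }
    }
}
[/code]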

>>>I realise now that I have repeated V-man’s idea. I’m not sure I understand why it won’t work for a depth-buffered scene; maybe I will try it out…
<<<
I think the main problem is having to sort front to back. My idea is to do the blending as you draw the geometry; otherwise something that shouldn’t be visible will be blended in before a closer object. At the same time, there is the problem of transparency, which may require sorting from back to front, the complete opposite!

The other technique is that you render the whole frame, then take that picture and blend it in. The OpenGL accumulation buffer works like that.

The most obvious method is to render normally, then blend each pixel with its nearest neighbors.
http://planet3d.demonews.com/Gravity4XFSAA.JPG
looks like a total blur. That can’t be NVIDIA’s FSAA.

V-man

>>>
http://planet3d.demonews.com/Gravity4XFSAA.JPG
looks like a total blur. That can’t be NVIDIA’s FSAA.
<<<

No it isn’t, it’s my jitter blur. I shouldn’t have called it 4xFSAA. It’s a jitter blur with a 4-pixel offset; maybe call it JBAA4+ or perhaps just TB.