What's the difference between the ways of antialiasing?

We could write code using GL_POINT_SMOOTH, GL_POLYGON_SMOOTH and so on, and we can also enable antialiasing through the GPU driver settings (Nvidia/ATI).

So I don’t know what the difference between them is. Can anyone explain it to me?
Thanks!

Point smooth and polygon smooth are totally different from the full-scene antialiasing you enable in the driver settings.
The *_SMOOTH modes only work for non-textured primitives, but they are much higher quality than FSAA. FSAA needs at least 4 or 8 taps to look visually nice.
FSAA can be enabled programmatically through the multisample extension:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/multisample.txt
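For illustration, here is a minimal sketch of what "enabling FSAA programmatically" can look like, assuming GLUT is available and the driver exposes ARB_multisample (the window title is arbitrary and error checking is omitted):

```c
/* Sketch: request a multisampled framebuffer with GLUT and enable
 * multisample rasterization. Assumes GLUT and a driver that supports
 * ARB_multisample. */
#include <GL/glut.h>

/* glut.h may not define this token; it comes from glext.h. */
#ifndef GL_MULTISAMPLE_ARB
#define GL_MULTISAMPLE_ARB 0x809D
#endif

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* Ask for a framebuffer that includes multisample buffers. */
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("FSAA via multisample");

    /* Turn on multisample rasterization (it is usually enabled by
       default when a multisample buffer exists). */
    glEnable(GL_MULTISAMPLE_ARB);

    /* ... register display/reshape callbacks and call glutMainLoop() ... */
    return 0;
}
```

The number of samples you actually get depends on the pixel format the driver picks; you can query it with glGetIntegerv(GL_SAMPLES_ARB, ...).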

Thanks!

I will study that.
However, another question:
You said "FSAA can be enabled programmatically through the multisample extension" — does that mean “FSAA = multisample”?

Not exactly, multisampling is just one method to do full scene antialiasing. There are others, like supersampling or jittered accumulation buffer rendering.
Multisampling has the performance benefit that it calculates the color only once for the multisample grid and only fills covered samples with that color. That has some accuracy drawbacks.
Supersampling increases the rendered resolution.
Jittered accumulation buffer rendering needs to draw the whole scene multiple times.
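As a sketch of the jittered accumulation-buffer approach described above: render the scene several times with sub-pixel camera offsets and average the passes. The draw_scene() function and the jitter offsets are placeholders, and the context must have been created with an accumulation buffer:

```c
/* Sketch: jittered accumulation-buffer antialiasing.
 * Renders the scene 4 times, shifting the projection by a sub-pixel
 * amount each pass, and averages the results with glAccum. */
#include <GL/gl.h>

void draw_scene(void);  /* hypothetical: your scene rendering */

/* 4-sample jitter offsets, in pixels (a simple regular pattern;
   real implementations often use rotated-grid or Poisson patterns). */
static const float jitter[4][2] = {
    {0.25f, 0.25f}, {0.75f, 0.25f}, {0.25f, 0.75f}, {0.75f, 0.75f}
};

void render_antialiased(int width, int height)
{
    GLfloat proj[16];
    int i;

    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < 4; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Pre-multiply a sub-pixel translation (in NDC units) onto
           the projection matrix. */
        glMatrixMode(GL_PROJECTION);
        glGetFloatv(GL_PROJECTION_MATRIX, proj);
        glPushMatrix();
        glLoadIdentity();
        glTranslatef(jitter[i][0] * 2.0f / width,
                     jitter[i][1] * 2.0f / height, 0.0f);
        glMultMatrixf(proj);

        draw_scene();

        glPopMatrix();
        /* Accumulate 1/4 of this pass. */
        glAccum(GL_ACCUM, 0.25f);
    }
    glAccum(GL_RETURN, 1.0f);  /* write the average to the color buffer */
}
```

This is why the method is slow: the whole scene really is drawn once per jitter sample.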

Thank you.
I’m a newbie. I learned a lot about antialiasing from your message. Now I’m wondering which method the GPU (such as the Nvidia 6800) uses for FSAA.

Multisample will antialias without z-buffer transparency problems. It’s as if you had extra samples within each pixel, and the final result is their average. With blended antialiasing using polygon smooth you don’t have multiple samples; you have a blended accumulation process that does not play well with 3D objects and the depth buffer.

Many of the alternative methods suggested, like supersampling or accumulation with jitter, are also problem-free with z-buffered 3D scenes. They all tend to be slower than multisample, though. The built-in OpenGL GL_POLYGON_SMOOTH method you mentioned, which generates fragment alpha based on edge coverage, is uniquely high quality but uniquely bad for 3D z-buffered scenes. It’s useful for lines in a 3D scene if you draw them last, or for geometry drawn in reverse painter’s order with saturate alpha blending, but multisample is generally accepted as the preferred general-purpose efficient approach.
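To make the GL_POLYGON_SMOOTH caveat concrete, the usual setup looks like the fragment below. The front-to-back sorting is the application's job, and the depth test is disabled precisely because it fights the coverage blend — which is why this works poorly for general z-buffered scenes:

```c
/* Sketch: coverage-based edge antialiasing with GL_POLYGON_SMOOTH.
 * Requires drawing polygons sorted front to back with saturate
 * alpha blending; depth testing is disabled for the blend to work. */
#include <GL/gl.h>

void draw_polygons_sorted_front_to_back(void);  /* hypothetical */

void draw_smoothed(void)
{
    glDisable(GL_DEPTH_TEST);  /* depth testing fights the coverage blend */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
    glEnable(GL_POLYGON_SMOOTH);

    draw_polygons_sorted_front_to_back();

    glDisable(GL_POLYGON_SMOOTH);
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
}
```

The GL_SRC_ALPHA_SATURATE source factor is what keeps edge pixels from being over- or under-covered as the sorted polygons accumulate.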

Thank you all.

Now I’ve learned that I should read some more papers on this subject to understand it fully.