Antialiasing

How do you do antialiasing? I use GL_POLYGON_SMOOTH but it doesn't work.
Any ideas?

thanks

you can't just choose this in opengl, it's something that the voodoos/geforces support in the driver…

on the gf3 you can set it up with an extension
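
(for reference, that extension is most likely ARB_multisample. a minimal sketch of the GL side, assuming you've already created a context with a multisample pixel format / visual, which is the platform-specific fiddly part; the function name is just mine:)

#include <GL/gl.h>

/* Token from GL_ARB_multisample; normally it comes from glext.h. */
#ifndef GL_MULTISAMPLE_ARB
#define GL_MULTISAMPLE_ARB 0x809D
#endif

void enable_fsaa(void)
{
    /* Has an effect only if the current context was created with
       sample buffers (a multisample pixel format / visual). */
    glEnable(GL_MULTISAMPLE_ARB);
}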

you can theoretically do it with the accumulation buffer, but it's terribly slow because it usually ends up being done in software
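
(if you do want to try it anyway, the accumulation buffer version looks roughly like this; render_scene() is a placeholder for whatever draws your scene with a sub-pixel shifted projection, see the jitter sketch further down the thread, and your pixel format needs accumulation bits:)

#include <GL/gl.h>

void render_scene(float dx, float dy);   /* placeholder: draws the scene nudged by (dx, dy) pixels */

void accum_antialias(void)
{
    /* four sub-pixel offsets, a simple 2x2 pattern (in pixel units) */
    static const float jitter[4][2] = {
        { -0.25f, -0.25f }, {  0.25f, -0.25f },
        { -0.25f,  0.25f }, {  0.25f,  0.25f }
    };
    int i;

    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < 4; i++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene(jitter[i][0], jitter[i][1]);
        glAccum(GL_ACCUM, 0.25f);        /* add this pass, weighted by 1/4 */
    }
    glAccum(GL_RETURN, 1.0f);            /* copy the averaged image back to the colour buffer */
}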

polygon smooth quality is only a hint to the opengl implementation (GL_POLYGON_SMOOTH_HINT), and the opengl spec says hints can be ignored, so you can't rely on it.

er, to do “manual” antialiasing you’ll need to… <voice trails off>

okay, here's a brief story. The frame buffer is an integer array of finite sampling elements called pixels. despite popular belief, pixels are NOT little squares… they're point samples of a continuous 2D function of the light striking the retina. (yes, yes, we're talking about opengl and it's all discrete anyway, but this is the theory).

suppose, for argument's sake, that the frame buffer is just a single row of pixels, and we are only concerned with digitising a function of one variable. (Actually, this is really what an audio file is all about). Further suppose that the intensity striking the camera can be represented by the function f(x)=x^2, and that the frame buffer is the set of integer pixels x in [-5, 5]

in the un-antialiased world of things, each pixel stores the intensity at a SINGLE point sample. so, for example, the leftmost pixel will have the value 25, the next one to the right will have 16 (then 9, 4, 1, 0, and so on back up the other side)

what this is really doing is approximating a whole block of values with a single value. ie. as far as this representation is concerned, the value at 4.1 is the same as at 4.0 (which is the same as at 4.2, as far as the pixel's sample range is concerned). that is to say, then, that our representation of f(x)=x^2 is EQUIVALENT to the representation of any other function that happens to take the same values at the sample points; f(x)=x^2+sin(2*pi*x), say, would produce exactly the same pixels, because sin(2*pi*x) vanishes at every integer. that ambiguity is what aliasing is.

the trick of antialiasing is to store multiple samples per pixel. in the same way that voting polls aren’t based on the political observations of a single individual, pixels need to sample the continuous function at more than one point.
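
(to make that concrete with the f(x)=x^2 toy example above, here's a tiny plain-C sketch, nothing opengl about it, comparing one sample per pixel against averaging a handful of samples across each pixel's footprint:)

#include <stdio.h>

static double f(double x) { return x * x; }   /* the "scene": intensity as a function of position */

int main(void)
{
    const int samples = 16;                   /* sub-samples per pixel */
    int px;

    for (px = -5; px <= 5; px++) {
        double point = f(px);                 /* un-antialiased: one sample at the pixel centre */
        double sum = 0.0;
        int s;

        /* antialiased: average f over the pixel's footprint [px-0.5, px+0.5] */
        for (s = 0; s < samples; s++)
            sum += f(px - 0.5 + (s + 0.5) / samples);

        printf("pixel %+d: point sample = %6.2f, averaged = %6.2f\n",
               px, point, sum / samples);
    }
    return 0;
}

(the averaged column is what an antialiased pixel would store.)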

so, then. how's this done in practice? well, er, i've lost the will to write an impromptu tutorial on antialiasing. one trick, tho, is to jitter the camera by a fraction of a pixel and re-render the scene several times. averaging those renders gives each pixel the average of several nearby samples, which smooths out the jagged edges. but, you should do a search and read up on that.
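
but here's at least the gist of what "jitter the camera" means in opengl terms: shift the view frustum by a fraction of a pixel before each pass (this is roughly the accPerspective trick from the red book; the function name and parameters are just a sketch):

#include <GL/gl.h>
#include <math.h>

/* Set up a perspective projection whose image is shifted by (dx, dy)
   PIXELS in window space; call once per jittered pass with a different
   sub-pixel offset each time. */
void jitter_perspective(double fovy_deg, double aspect,
                        double znear, double zfar,
                        double dx, double dy,
                        int win_w, int win_h)
{
    double top   = znear * tan(fovy_deg * 3.14159265358979 / 360.0);
    double right = top * aspect;
    double pix_w = 2.0 * right / win_w;   /* frustum width of one pixel at the near plane */
    double pix_h = 2.0 * top   / win_h;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right + dx * pix_w, right + dx * pix_w,
              -top   + dy * pix_h, top   + dy * pix_h,
              znear, zfar);
    glMatrixMode(GL_MODELVIEW);
}

each pass uses a different (dx, dy) inside the pixel (e.g. the four offsets in the accumulation sketch above), and averaging the passes is what smooths the edges.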

cheers,
John

The way GL_…_SMOOTH works is to calculate how much of a pixel is covered by the primitive being drawn, then scale the fragment's alpha by that coverage (e.g. the source alpha is multiplied by 0.5 for a pixel that is 50% covered), so blending has to be enabled for it to have any visible effect.
This can make things look messy if your primitives aren't depth sorted, which is why the supersampling method described by John is preferable.
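
For completeness, the setup GL_POLYGON_SMOOTH expects looks roughly like this (a sketch of the usual red book recipe; the function name is mine, the quality is still up to the implementation, and you have to sort the polygons front to back yourself):

#include <GL/gl.h>

void setup_polygon_smoothing(void)
{
    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);   /* quality is only a hint */

    /* Coverage is multiplied into the fragment's alpha, so blending must
       be enabled for it to have any visible effect.  The classic recipe
       uses saturate blending with the depth test off and the polygons
       sorted front to back. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
    glDisable(GL_DEPTH_TEST);
}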

Thanks