Render to anti-aliased texture?

Is there a way to get wglChoosePixelFormatARB to accept both WGL_BIND_TO_TEXTURE_RGBA_ARB and WGL_SAMPLE_BUFFERS_EXT on a Geforce4 Ti at the same time? If so, would someone be so kind as to post their attributes array that works? I can’t seem to enable both at the same time…
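For concreteness, the attribute combination in question would look something like the sketch below (attribute tokens come from wglext.h; handle and size values are placeholders, and — as the reply below explains — this combination is rejected on a GeForce4 Ti):

```c
/* Sketch of the attribute list in question (tokens from wglext.h).
 * This is an illustration of the request, not a working combination:
 * the driver reports zero matching formats when render-to-texture and
 * multisampling are requested together on a GeForce4 Ti. */
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB,      GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,       GL_TRUE,
    WGL_PIXEL_TYPE_ARB,           WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,           32,
    WGL_DEPTH_BITS_ARB,           24,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, GL_TRUE,  /* render-to-texture...   */
    WGL_SAMPLE_BUFFERS_ARB,       GL_TRUE,  /* ...plus multisampling  */
    WGL_SAMPLES_ARB,              4,
    0                                       /* terminator */
};

int  format;
UINT numFormats;
/* hDC is a placeholder for the window's device context. */
wglChoosePixelFormatARB(hDC, attribs, NULL, 1, &format, &numFormats);
/* numFormats comes back 0 when both features are requested at once. */
```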

Thanks,
Zeno

Nope, this is an intentional limitation on our part.

It seems to me that you can get very similar, or even better, results by rendering one mipmap level larger and generating mipmaps. Generated mipmaps, after all, will effectively contain antialiased images. If your image was 256x256, your 128x128 level will have 4x AA, your 64x64 level will have 16x AA, and so on all the way to “65536x AA” at 1x1.

  • Matt
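The averaging behind this suggestion is just a repeated 2x2 box filter; a minimal sketch of one downsampling step (plain C, grayscale for brevity — not the driver's actual mipmap code):

```c
#include <stddef.h>

/* One mipmap reduction step: average each 2x2 block of a (2n x 2n)
 * grayscale image into one pixel of the (n x n) result. Applying this
 * repeatedly is what yields the 4x, 16x, ... "AA" at each mip level. */
void downsample2x2(const unsigned char *src, unsigned char *dst, size_t n)
{
    for (size_t y = 0; y < n; y++)
        for (size_t x = 0; x < n; x++) {
            unsigned sum = src[(2*y)   * 2*n + 2*x]
                         + src[(2*y)   * 2*n + 2*x + 1]
                         + src[(2*y+1) * 2*n + 2*x]
                         + src[(2*y+1) * 2*n + 2*x + 1];
            dst[y * n + x] = (unsigned char)(sum / 4);
        }
}
```

So a 2x2 block containing two black and two white texels averages to mid-gray in the next level down, exactly like a 4x supersample of that pixel.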

Thanks for the fast reply.

Unfortunately, though, the mipmap option isn’t really a good one for me. I need to capture the entire screen (maybe 1280x1024), and rendering to a “rounded up to a power of two, quadruple-sized” buffer at that resolution may be a tad slow.

Here’s another quick question: Suppose I rendered a screen sized non-anti-aliased image to a pbuffer set up as a texture, mapped it onto a quad, and rendered this quad into a multisampling backbuffer. What, if any, would the difference in image quality between this method and rendering the original scene to an anti-aliased backbuffer? Does it matter if you draw things one at a time with anti-aliasing as opposed to doing the whole scene at once?

– Zeno

I don’t entirely understand your question. Rendering the scene without antialiasing and using that as a texture can never give you the quality you would have gotten from rendering the scene with antialiasing…

  • Matt

I’ll try to clarify. Would there be any difference in output between these two scenarios, given the way NVIDIA implements anti-aliasing:

Situation 1:

  1. Enable anti-aliasing
  2. Render Scene

Situation 2:

  1. Disable anti-aliasing
  2. Render scene to backbuffer
  3. Copy backbuffer to texture
  4. Enable anti-aliasing
  5. Render single quad with texture from step 3
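The steps of situation 2, sketched in GL calls (assumes an existing screen-sized texture object `tex` and a multisampled framebuffer; `drawScene` is a placeholder, and this is an illustration of the question, not working code from the thread):

```c
/* Situation 2, as a sketch. */
glDisable(GL_MULTISAMPLE_ARB);
drawScene();                                   /* placeholder scene render */

glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,    /* copy backbuffer to tex  */
                    0, 0, width, height);

glEnable(GL_MULTISAMPLE_ARB);
glBegin(GL_QUADS);                             /* full-screen textured quad */
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
```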

I think that the output images would be the same on a Ti4600 card, but different on a card like a Matrox P10 (I think) that only anti-aliases edges of geometry…is this correct?

– Zeno

Sounds like “situation 2” would result in absolutely no antialiasing.

  • Matt

To be a bit more precise than Matt, I’ll explain. Note: this is a simplified explanation of aliasing and anti-aliasing.

Aliasing is the result of converting an analog signal (the mathematical description of a polygon, for example) into a digital one (the scan-converted polygon). 3D rendering takes an analog world and attempts to represent it digitally. Because a digital image cannot represent analog data with 100% accuracy, aliasing is the result.

Antialiasing is an attempt to convert aliasing (which human beings notice very easily, since we don’t see aliasing in the real world) into noise or blur (which we don’t mind or notice nearly as much).

Regular, non-antialiased rendering takes only one scan-converted sample per pixel. The common supersampling technique takes multiple scan-converted samples of the 3D world for each pixel and combines them together to produce a pixel. So, if one triangle is at the edge of a pixel, it will only contribute a small portion to the overall color value of that pixel.
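As a toy illustration of that last point (plain C, one grayscale pixel with four sub-samples; not any hardware's actual resolve):

```c
/* Toy supersampling resolve for a single pixel: average the color of
 * each sub-sample. A triangle covering only 1 of the 4 samples
 * contributes just 1/4 of its color to the final pixel. */
float resolve_pixel(const float *samples, int count)
{
    float sum = 0.0f;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    return sum / count;
}
```

A white triangle (1.0) covering one of four samples over a black background resolves to 0.25 — a quarter-bright edge pixel.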

Notice that this requires sampling the analog world. Once you’ve rendered the world, it is forever digital. If you simply rendered an image un-antialiased, you are losing information to aliasing. You can never get that information back, no matter what you do. At best, you can try to reconstruct that information, but most antialiasing techniques are designed to work during the conversion from analog to digital. They would be useless in trying to “de-alias” a digital signal.

Can’t you just render into an antialiased pbuffer, and then copy the results into a texture using glCopyTexSubImage2D()? Go to page 18 of the following document:
http://cvs1.nvidia.com/OpenGL/doc/presentations/DynamicTexturing.ppt
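That approach looks roughly like the sketch below (the handle names are placeholders; the pbuffer must have been created with a multisampled pixel format, and the multisample resolve has already happened by the time you copy):

```c
/* Render into the antialiased pbuffer, then copy the (already
 * resolved) result into a texture. Handle names are placeholders. */
wglMakeCurrent(hPBufferDC, hPBufferRC);    /* target the pbuffer        */
drawScene();                               /* placeholder scene render  */

glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    0, 0, width, height);  /* pbuffer -> texture        */

wglMakeCurrent(hWindowDC, hWindowRC);      /* back to the window        */
```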

tcobbs -

Yes, that is most likely what I will end up doing. I was just exploring the situation a bit.

Matt and Korval -

If anti-aliasing is done on a per-pixel level (by sampling several pixels of a virtual larger image, for example) how can it not have an effect on the texturemap of a screen-oriented quad?

– Zeno

>>If anti-aliasing is done on a per-pixel level (by sampling several pixels of a virtual larger image, for example) how can it not have an effect on the texturemap of a screen-oriented quad?<<

it does - but it will only antialias the edges of your screen sized quad - which you can’t see anyway…

[This message has been edited by vshader (edited 09-19-2002).]

>>it does - but it will only antialias the edges of your screen sized quad - which you can’t see anyway<<
On Parhelia maybe, others filter all pixels.

The essence of “situation 2” is that you have a non-antialiased texture which is drawn into a multisample buffer with e.g. 4x size and then downsampled to the screen size. If you consider nearest filtering for the texture and a simple box filter for the AA downsample you’ll end up with the same image you rendered into the back buffer.
With filters using more samples you’ll see an AA effect.
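That equivalence is easy to check in isolation: nearest-filter magnify each texel into a 2x2 block (the 4x-size multisample buffer), then box-filter back down — the round trip reproduces the input exactly, which is why situation 2 with these filters adds no AA. A toy sketch in plain C (an assumption-level model, not any driver's code):

```c
#include <stddef.h>

/* Nearest-filter magnify an (n x n) grayscale image by 2 per axis. */
void magnify2x(const unsigned char *src, unsigned char *dst, size_t n)
{
    for (size_t y = 0; y < 2*n; y++)
        for (size_t x = 0; x < 2*n; x++)
            dst[y * 2*n + x] = src[(y/2) * n + (x/2)];
}

/* 2x2 box filter back down to (n x n). Every 2x2 block holds four
 * copies of one texel, so the average is that texel unchanged. */
void boxdown2x(const unsigned char *src, unsigned char *dst, size_t n)
{
    for (size_t y = 0; y < n; y++)
        for (size_t x = 0; x < n; x++) {
            unsigned sum = src[(2*y)  *2*n + 2*x] + src[(2*y)  *2*n + 2*x+1]
                         + src[(2*y+1)*2*n + 2*x] + src[(2*y+1)*2*n + 2*x+1];
            dst[y * n + x] = (unsigned char)(sum / 4);
        }
}
```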

There’s a difference between MULTISAMPLING AA and SUPERSAMPLING AA. Supersampling is where a larger image is sampled down to a smaller one - so if you mipmap a 4x quad down to the final image resolution, you will get an AA effect.

But when you choose a MULTISAMPLE AA pixel format, it just generates several samples (depth, and maybe others? not sure) for each pixel, and then weights those to get the final color for each pixel. So if you draw a screen-sized quad using MULTISAMPLE AA, it will just smooth the edges - all the centre pixels will have the same data for each sample. You lost the separate samples for each pixel when you converted into a texture.
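That point in numbers (a toy equal-weight resolve of one pixel's four color samples; real hardware may weight samples differently):

```c
/* Toy multisample resolve: average a pixel's color samples. For an
 * interior pixel of a textured quad every sample holds the same
 * texture color, so resolving changes nothing; only a pixel on the
 * quad's edge (some samples quad, some background) gets blended. */
float resolve(const float s[4])
{
    return (s[0] + s[1] + s[2] + s[3]) / 4.0f;
}
```

An interior pixel with all samples at 0.5 resolves to 0.5 (unchanged); an edge pixel with one sample of white quad over black background resolves to 0.25 — only the edge is smoothed.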