Full screen antialiasing in iPhone OpenGL

Hi,

I am trying to learn the different antialiasing techniques used in iPhone OpenGL, but I'm starting to think the iPhone's OpenGL engine doesn't support the antialiasing methods used in normal OpenGL. I use textures, and I tried mipmapping, which did reduce some aliasing, but the edges are still not smooth and the texture now has a blurry look. I read about full screen antialiasing (FSAA), but iPhone OpenGL doesn't seem to support it, so FSAA has to be mimicked manually by rendering to a larger target buffer and then scaling the image down. I googled for an example of how to do this but didn't find one. I am using OpenGL ES 1.1 and working in Xcode 3.1.4. Can anybody give me example code for FSAA, or post any useful links on the subject?

Thank you,

Jayanth

Technically, the iPhone's GPU (PowerVR MBX Lite) should support anti-aliasing. However, Apple's OpenGL ES drivers don't expose this capability.

There are a few useful suggestions here: http://discussions.apple.com/thread.jspa?threadID=1660044

Antialiased points (GL_POINT_SMOOTH) are supported.
Antialiased lines (GL_LINE_SMOOTH) are not supported.
Antialiased polygons (GL_POLYGON_SMOOTH) are not supported by ES.
Multisampling is not supported.
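
If antialiased points are enough for your case, the supported path is just the usual ES 1.1 state setup. A minimal sketch (blending is needed so the smoothed edges actually show; the point size is an arbitrary example):

// Antialiased points are the one smoothing feature exposed here.
glEnable(GL_POINT_SMOOTH);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);

// Smoothed point edges are drawn with alpha coverage, so blending must be on.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glPointSize(8.0f);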

You can manually supersample by setting the renderbuffer size larger than the EAGLLayer size. The layer will automatically downsample during presentRenderbuffer:.

Thanks, friends. I was also trying to do manual supersampling.

You can manually supersample by setting the renderbuffer size larger than the EAGLLayer size. The layer will automatically downsample during presentRenderbuffer:.

I tried to do the same, but it didn't work. Can you post some example code, or a link to an example?

Something like this:


- (void) resizeFromLayer: (CAEAGLLayer*) layer
{
    // The fractional layer scale to use.
    // Scales larger than 1.0 will effectively supersample, and use more fillrate.
    // Scales larger than 2.0 are not useful, because they will alias.
    // Scales smaller than 1.0 will under-sample, providing a low resolution.
    const float samplerate = 1.5;

    // Save off the real layer bounds
    CGRect bounds = layer.bounds;
    
    // Compute temporary scaled bounds
    CGRect scaledbounds = bounds;
    scaledbounds.size.width *= samplerate;
    scaledbounds.size.height *= samplerate;
    layer.bounds = scaledbounds;

    // Allocate GL color buffer backing, based on the current layer size
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    // Allocate GL depth buffer backing
    if (USE_DEPTH_BUFFER)
    {
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
        glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
    }
    
    // Restore original layer bounds
    // On presentRenderbuffer: the layer will stretch the renderbuffer backing to fit the bounds.
    layer.bounds = bounds;
}

During rendering, be sure to set up GL to use the larger backing size (e.g. glViewport, matrices, etc.).
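
For example, the per-frame setup might look roughly like this (a sketch only; defaultFramebuffer is assumed to be the framebuffer object from your view setup code, and backingWidth/backingHeight are the values queried from the renderbuffer in resizeFromLayer: above):

// Bind the framebuffer and cover the full supersampled backing store.
// backingWidth/backingHeight already include the samplerate factor,
// since they were read back after renderbufferStorage:fromDrawable:.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);

// The projection stays in the normal logical coordinates; only the
// viewport needs the scaled pixel dimensions.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, 320.0f, 0.0f, 480.0f, -1.0f, 1.0f);   // example logical bounds
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();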

Hi…
Thanks to everyone who gave valuable suggestions. I tried the idea of rendering to a larger buffer and then scaling the buffer down, and it almost worked. I still have small artifacts in the rendered scene. I tried increasing the sample rate, and a rate of 2.0 gave me the best result; I couldn't increase it further, I believe because of iPhone performance limits. Anyway, it did reduce the aliasing, and I have a smoother scene now. Thanks to all.

Hi,

I forgot to try the last thing you said. I tried increasing the glViewport size, but it doesn't seem to have any effect. Is there something I missed? This is what I did:

glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
glViewport(0, 0, width * backingWidth, height * backingHeight);

where width and height are two integer variables that took values from 0 to 4. Of course, I adjusted the starting point of the viewport according to the change in width and height (in the code above it is shown as 0, 0). Is this the way you suggested?

Sigh. I tried the upscale/downscale of the layer bounds suggested above,

but when it is used in conjunction with kEAGLDrawablePropertyRetainedBacking I get serious trouble. It seems the retained buffer doesn't handle the resizing properly. :(

(I should mention that it works fine in the simulator; on the device it does not.)
