BlitFramebuffer drops GL_LINEAR filtering?



Tzupy
03-14-2010, 11:53 AM
After extensive testing of my off-screen high quality antialiased rendering, I discovered that if the destination renderbuffer is 4x by 4x smaller than the source renderbuffer (for a 16x downsample), then BlitFramebuffer drops the GL_LINEAR filtering and probably uses GL_NEAREST.
I am sure about this because I checked the number of shades in the destination image, and it didn't match my expectations.
If the destination renderbuffer is only 2x by 2x smaller than the source renderbuffer, GL_LINEAR is working OK.
So to get the 4x by 4x downsample I want, I had to add another intermediary renderbuffer and perform the 2x by 2x downsampling twice.
This way I was able to achieve 2x * 2x * 2x * 2x downsampling on top of 8x MSAA = 128x total AA, which looks quite smooth. ;)
This happens on Radeon 4850, Vista_x64 HP, drivers 10.2. I read the GL 3.1 specification, but I couldn't find anything about dropping the filter setting.
Please comment on this: is the behaviour of BlitFramebuffer normal, or is it a driver bug?
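
For reference, a minimal sketch of that two-pass downsample in C (assuming an existing GL 3.x context; the FBO names fboMSAA, fboResolve, fboHalf and fboQuarter and the sizes are placeholders, not code from this thread):

/* Full-size image is w x h; fboMSAA holds the 8x MSAA rendering,
   fboResolve is w x h single-sample, fboHalf is w/2 x h/2,
   fboQuarter is w/4 x h/4. */

/* 1) MSAA resolve: source and destination rectangles must match in size. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboMSAA);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboResolve);
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* 2) First 2x by 2x downsample; at an exact 2:1 ratio each destination
      pixel centre falls between four source pixels, so GL_LINEAR acts
      as a plain 2x2 box average. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboResolve);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboHalf);
glBlitFramebuffer(0, 0, w, h, 0, 0, w / 2, h / 2, GL_COLOR_BUFFER_BIT, GL_LINEAR);

/* 3) Second 2x by 2x downsample. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboHalf);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboQuarter);
glBlitFramebuffer(0, 0, w / 2, h / 2, 0, 0, w / 4, h / 4, GL_COLOR_BUFFER_BIT, GL_LINEAR);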

Alfonse Reinheart
03-14-2010, 01:12 PM
I discovered that if the destination renderbuffer is 4x by 4x smaller than the source renderbuffer (for a 16x downsample), then BlitFramebuffer drops the GL_LINEAR filtering and probably uses GL_NEAREST.

What exactly are you expecting to see? Are you expecting the driver to average each 4x4 pixel group into a single output pixel? If so, that's not what GL_LINEAR means.

Linear filtering means that each destination pixel is a linear blend of the 4 pixels nearest it in the space of the source framebuffer - just as it is for textures. This is also why texture mipmaps decrease in size by half.
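
In rough CPU terms (an illustrative sketch, not the driver's actual code), the footprint of GL_LINEAR for one destination pixel of a 4x by 4x downscaling blit looks like this:

#include <stdio.h>

/* Which source pixels does GL_LINEAR read for destination pixel (dx, dy)
   when the source is 4x larger in each dimension?  Illustrative only. */
int main(void)
{
    const float scale = 4.0f;                /* srcSize / dstSize        */
    int dx = 0, dy = 0;                      /* some destination pixel   */
    float sx = (dx + 0.5f) * scale - 0.5f;   /* dst centre in src texels */
    float sy = (dy + 0.5f) * scale - 0.5f;
    int x0 = (int)sx, y0 = (int)sy;          /* only this 2x2 block is blended */
    printf("blends source pixels (%d..%d, %d..%d)\n", x0, x0 + 1, y0, y0 + 1);
    /* The other 12 pixels of the 4x4 cell are never read. */
    return 0;
}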

Tzupy
03-14-2010, 02:12 PM
Thank you, my mistake! I know what GL_LINEAR means, but didn't realise that when I try to downsample 4x by 4x I am actually undersampling, because only the inner 2 by 2 pixels of each 4 by 4 cell are used for the linear filtering. :o
So it's normal behaviour, not a bug; GL_LINEAR is NOT dropped. I still would have expected more shades, even with this undersampling.
Now I'm not sure whether the workaround - which does generate many shades - gives me 100% accurate pixel values.
Maybe I should try using textures for the downsampling and the automatic mipmap generation? This shouldn't take too long...
Or would it be 100% accurate ONLY if I did the downsampling with a fragment program?
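
For what it's worth, a fragment-shader box filter over the whole 4x4 cell might look something like the sketch below (the shader is given as a C string; the uniform names, the #version and the surrounding setup are assumptions, and srcTex is expected to be the resolved full-size image sampled with GL_NEAREST):

/* GLSL fragment shader that averages all 16 pixels of the 4x4 source cell. */
const char *boxFilterFS =
    "#version 140\n"
    "uniform sampler2D srcTex;    // full-size resolved image (GL_NEAREST)\n"
    "uniform vec2 srcTexelSize;   // 1.0 / source texture size\n"
    "in  vec2 texCoord;           // centre of the 4x4 cell, in [0..1]\n"
    "out vec4 fragColor;\n"
    "void main()\n"
    "{\n"
    "    vec4 sum = vec4(0.0);\n"
    "    // offsets of -1.5, -0.5, +0.5, +1.5 texels cover all 16 pixels\n"
    "    for (int j = 0; j < 4; ++j)\n"
    "        for (int i = 0; i < 4; ++i)\n"
    "            sum += texture(srcTex,\n"
    "                           texCoord + (vec2(i, j) - 1.5) * srcTexelSize);\n"
    "    fragColor = sum / 16.0;\n"
    "}\n";

At an exact integer ratio this is a plain box average, so the result is exact up to the precision of the render target format.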