I’m just curious if anyone has heard about using the mipmaps of a texture to speed up doing an (approximate) gaussian blur on GPUs.
It’s a known result that repeated box blurs converge to a gaussian blur (the central limit theorem at work). One pass is a box, two passes give a Bartlett/triangle filter kernel, and three give a piecewise-quadratic kernel that already approximates a gaussian pretty well.
The mip levels of a texture are pretty much the same as doing 2x2, 4x4, 8x8, 16x16… box blurs (plus downsampling).
So say you wanted a gaussian with a standard deviation of 64 pixels — couldn’t you approximate it with a weighted sum of mip levels, something like:
a0 * (1x1 level) + a1 * (2x2 level) + a2 * (4x4 level) + a3 * (8x8 level) + a4 * (16x16 level)
and then maybe run one more small-kernel blur on top of that to smooth it out. Here a0, …, a4 are just some constants.
Instead of doing 64 texture samples per pixel, you might get away with only 9 or 10 samples and still end up with something that approximates a gaussian pretty well.
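A rough 1D CPU sketch of the weighted-mip-sum idea, to make it concrete. Each “mip level” here is a 2x box-downsample of the previous one, fetched back at full resolution with nearest-neighbour lookup. The weights a0..a4 are made-up placeholders, not tuned values — blurring an impulse makes the output the effective kernel.

```python
def downsample(sig):
    """2x box downsample: average adjacent pairs (the mip reduction step)."""
    return [(sig[i] + sig[i + 1]) / 2.0 for i in range(0, len(sig) - 1, 2)]

def sample(level, k, x):
    """Nearest-neighbour fetch from mip level k at full-res coordinate x."""
    return level[min(x >> k, len(level) - 1)]

signal = [0.0] * 32
signal[16] = 1.0                       # impulse: the output IS the kernel

levels = [signal]
for _ in range(4):                     # mips 1..4 (2-, 4-, 8-, 16-wide boxes)
    levels.append(downsample(levels[-1]))

weights = [0.1, 0.15, 0.2, 0.25, 0.3]  # placeholder a0..a4, chosen to sum to 1

blurred = [sum(w * sample(levels[k], k, x) for k, w in enumerate(weights))
           for x in range(32)]

print(blurred)  # step-like wide bell; the final small-kernel pass mentioned
                # above would smooth out the blockiness
```

On a GPU this would just be one shader pass doing a handful of textureLod-style fetches, one per level, instead of the dozens of taps a direct wide gaussian needs.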
Or perhaps you could take a higher mip level and repeatedly box blur it until it softens into a gaussian.
I’ve already got quite a few ideas for how to do this; just wondering if anyone has encountered something like this before, to save me from working out the math…