Fast texture convolution (e.g. Gaussian blur).

Hello there…

I was wondering what the best way is to do texture convolution (filtering) in OpenGL. I render a scene to a 256x256-pixel texture and then apply that texture to a window-sized (also 256x256-pixel) textured quad, effectively displaying the scene I rendered to the texture on the screen. So the image I want to convolve is available both as an off-screen texture (residing in a pbuffer) and as an on-screen image (residing in the framebuffer).
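For reference, the display pass is currently nothing fancier than this (simplified; sceneTex stands for my render-to-texture result, and the matrices are identity so the quad fills the viewport):

glBindTexture(GL_TEXTURE_2D, sceneTex);   /* texture containing the rendered scene */
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                        /* full-screen quad, 1:1 with the 256x256 window */
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();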

Now I want to convolve it with, let's say, a Gaussian kernel (separated into two 1D kernels for efficiency). What would be the best approach?

- Should I use glConvolutionFilter*(), or is it better to copy the data into some structure (e.g. a 2D array) and do the convolution on the CPU?
- If glConvolutionFilter*() is a viable/efficient option, can I apply it directly to my texture, or only to the framebuffer?
- Is glConvolutionFilter*() hardware accelerated? I have heard stories about it not being.
- If the CPU route is better, should I use glReadPixels() on the framebuffer, or is there also some (fast) way to copy the texture data into a structure?

As you can see, I have quite a lot of questions on this subject ;-)… Some help would be greatly appreciated. Thanks in advance.
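To make the first question a bit more concrete, the glConvolutionFilter*() route I have in mind would look roughly like this (completely untested, and assuming the imaging subset is actually exposed by the driver; sceneTex is my 256x256 scene texture from above):

GLfloat row[5 * 4], col[5 * 4];   /* per-tap RGBA weights of the two 1D Gaussians */
/* ... fill row[] and col[], e.g. 1-4-6-4-1 / 16 replicated into all four channels ... */

glSeparableFilter2D(GL_SEPARABLE_2D, GL_RGBA, 5, 5, GL_RGBA, GL_FLOAT, row, col);
glConvolutionParameteri(GL_SEPARABLE_2D, GL_CONVOLUTION_BORDER_MODE,
                        GL_REPLICATE_BORDER);   /* the default GL_REDUCE shrinks the output */
glEnable(GL_SEPARABLE_2D);

/* The filter is applied during pixel transfers (glCopyPixels, glCopyTexSubImage2D, ...),
 * not to an existing texture in place, so re-grab the framebuffer into the texture
 * and let the blur happen on the way through. */
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);

glDisable(GL_SEPARABLE_2D);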

GRTZ,

(Sn)Ik.

P.S.

I was also thinking about implementing the convolution as a fragment program, but I heard that it is not possible to (directly) inspect the color of neighbouring pixels in current fragment program profiles / versions. Does anyone know if this is indeed true?


How about implementing it as a multitexture operation? By setting up slightly offset texture coordinates for the different stages, but binding the same texture to each of them, you can pull adjacent texels into the pixel pipeline and combine them however you want.
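For example, a 2-tap version of that idea might look roughly like this (untested sketch, assuming GL_ARB_multitexture and GL_ARB_texture_env_combine; sceneTex stands for your 256x256 scene texture). Stage 1 binds the same texture but gets coordinates shifted by one texel, and the combiner averages the two samples:

const GLfloat half[4] = { 0.5f, 0.5f, 0.5f, 0.5f };  /* blend factor for a 50/50 average */
const GLfloat texel   = 1.0f / 256.0f;               /* one texel step in a 256x256 texture */

/* Stage 0: sample the texel at the quad's own coordinates. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

/* Stage 1: same texture again, sampled one texel to the right,
 * averaged with the previous stage by the combiner. */
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_INTERPOLATE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB,  GL_CONSTANT_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, half);

/* One vertex of the full-screen quad; the other three follow the same pattern. */
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, texel, 0.0f);   /* shifted by one texel */
glVertex2f(-1.0f, -1.0f);

With more texture units (or extra blended passes) you can add more taps with proper Gaussian weights, and the vertical pass works the same way with the offset applied to the t coordinate.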

IIRC both nVidia and ATI have demos/sample code/whitepapers showing how to implement various kinds of filters using mostly inventive texture sampling.

Actually, check out this page:

http://www.ati.com/developer/sdk/RadeonSDK/Html/Samples/OpenGL/HW_Image_Processing.html

The concepts should scale to whatever hardware you’re intending to run on.

Thanks a lot, I’ll take a look at it.