Fast depth-of-field in OpenGL - How to do it?

Hello,

I was wondering what the best way would be to implement a decent-quality, real-time depth-of-field effect in OpenGL without requiring pixel-shader support (ARB_fragment_program).

At first I thought about using the accumulation buffer to render the scene in multiple passes from a number of jittered camera eye positions, all aimed at a common target (the point of focus), but the accumulation buffer is rarely hardware-accelerated on consumer cards and seems too slow for this task.

I am now considering multiple render-to-texture passes (pbuffers) blended together, to see whether that achieves decent quality at real-time rates, but there may very well be another approach that I’m not aware of.
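One way to combine the pbuffer passes without an accumulation buffer — assuming glBlendColor (GL 1.4 / EXT_blend_color) is available — is incremental averaging: composite pass i with glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA) and a constant alpha of 1/i, which keeps a running average of all passes so far. A sketch of the underlying arithmetic (`running_average` is a hypothetical helper, shown per-channel for a single value):

```c
/* Simulates blending pass i with constant alpha 1/i over the
   previously blended result:
     result = (1/i) * pass_i + (1 - 1/i) * result
   After n passes, `result` is the plain average of all n passes,
   which is exactly what the accumulation-buffer approach computes. */
float running_average(const float *passes, int n)
{
    float result = 0.0f;
    for (int i = 1; i <= n; ++i) {
        float a = 1.0f / (float)i;  /* constant alpha for pass i */
        result = a * passes[i - 1] + (1.0f - a) * result;
    }
    return result;
}
```

The nice property is that each pass is blended as soon as it is rendered, so only one extra buffer is needed regardless of the number of jittered views.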

Does anyone know of a way to achieve a decent quality, real-time depth-of-field effect in OpenGL without using pixel shaders? Any help would be greatly appreciated. Thanks in advance.

Regards,

Danny Holten.

Take a look at the following presentations:

http://ati.com/developer/gdc/Scheuermann_DepthOfField.pdf
http://www2.ati.com/developer/ScenePostProcessing.pps
http://www.daionet.gr.jp/~masa/archives/GDC2003_DSTEAL.ppt
<edit>
oh, and this one too: http://ati.com/developer/shaderx/ShaderX2_Real-TimeDepthOfFieldSimulation.pdf
</edit>
http://ati.com/developer/gdc/D3DTutorial07_AlexEvans_Final.pdf

Regards,

Hello,

Thanks for all the links, I’ll get right to it ;-).

Regards,

Danny.