Deferred Rendering: Depth-of-Field Blur

Hello all.

I’m writing up a DoF blur implementation, and I need your help. This involves multi-pass / deferred rendering. I would like this thread to be useful beyond what I take from it; since there are some difficult topics involved, it can serve as a useful, extensive reference for others who would like to learn more about deferred rendering, FBOs, and of course DoF. I would like to have a solid body of reference code here by the time we’re done. I expect to learn a lot about these topics in the process.

As I understand things at present, I’m going to need to run a few passes, pushing the results into an FBO…

[ul]
[li]Initial pass outputs the scene at ordinary, crisp quality (GL_COLOR_ATTACHMENT0), along with the depth buffer (GL_DEPTH_ATTACHMENT), into the FBO.[/li]
[li]Second (and possibly third) pass applies vertical and horizontal blur to get a maximally-blurred image (GL_COLOR_ATTACHMENT1). Since a Gaussian blur is separable, this works out to two passes, one per axis. This stage is fragment shader only.[/li]
[li]We’re now storing both the original crisp image and the max-blurred one in the same FBO.[/li]
[li]The final pass is once again fragment shader only: we look at the depth buffer to get the blur strength, then use gl_FragColor = mix(unblurredTex, blurredTex, blurStrength), and voilà, we have composited our final image. Simplistic, but it’s a start (see the GLSL sketch after this list).[/li][/ul]
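To make those last two bullets concrete, here is a minimal GLSL sketch of the separable blur pass and the composite pass. All uniform and varying names are my own placeholders; the blur weights are the standard 5-tap Gaussian ones, and mapping raw (non-linear) depth straight to a blur strength is deliberately naive (linearising first is better, as comes up later in this thread).

```glsl
// --- blur.frag: run twice, once with direction = vec2(1.0/width, 0.0),
// --- then with direction = vec2(0.0, 1.0/height), to get the max-blurred image.
uniform sampler2D srcTex;
uniform vec2 direction;   // one texel step along the blur axis
varying vec2 uv;          // texcoords from the fullscreen-quad vertex shader

void main()
{
    // 5-tap separable Gaussian; the weights sum to 1.
    vec4 c = texture2D(srcTex, uv) * 0.2270270;
    c += (texture2D(srcTex, uv + direction * 1.3846154)
        + texture2D(srcTex, uv - direction * 1.3846154)) * 0.3162162;
    c += (texture2D(srcTex, uv + direction * 3.2307692)
        + texture2D(srcTex, uv - direction * 3.2307692)) * 0.0702702;
    gl_FragColor = c;
}

// --- composite.frag: the final mix between the crisp and blurred images.
uniform sampler2D crispTex;    // GL_COLOR_ATTACHMENT0
uniform sampler2D blurredTex;  // GL_COLOR_ATTACHMENT1
uniform sampler2D depthTex;    // GL_DEPTH_ATTACHMENT
uniform float focalDepth;      // depth of the in-focus plane (same space as depthTex)
uniform float focalRange;      // how quickly blur ramps up away from focus
varying vec2 uv;

void main()
{
    float depth = texture2D(depthTex, uv).r;
    float blurStrength = clamp(abs(depth - focalDepth) / focalRange, 0.0, 1.0);
    gl_FragColor = mix(texture2D(crispTex, uv),
                       texture2D(blurredTex, uv),
                       blurStrength);
}
```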

I would credit the approach but I cannot for the life of me find the link where I read it. The benefit here is that we maintain crisp boundaries between, e.g., highly blurred background objects and sharp foreground ones (that is, crispness follows the scene’s z-buffer contrasts).

Here’s what I have working so far:

[ul]
[li]I’ve just started using FBOs. What I have at the moment is a preliminary shader pass where I render my usual scene into an FBO for colour (and later, once I’ve got that working, depth). When I wrap my draw calls in the FBO bind and glDrawBuffers(…), the viewport remains blank, which I take to mean the rendering is going into the FBO rather than the default framebuffer. I have yet to check whether what is in GL_COLOR_ATTACHMENT0 is valid, but I’ve no reason to believe it’s not, as my code executes OK. I use a few glDrawArrays(…) calls to do the actual rendering. (A sketch of this setup follows below.)[/li][/ul]
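For anyone following along, the setup stage looks roughly like this. This is a generic sketch rather than my exact code (it assumes a current GL 3.x context, and fbo / colorTex / width / height are placeholders), and it includes the glCheckFramebufferStatus call I haven’t actually done yet:

```c
/* Sketch: one colour texture on GL_COLOR_ATTACHMENT0 for the crisp scene.
 * A depth attachment gets added later. */
GLuint fbo, colorTex;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

/* Route fragment output to the first colour attachment. */
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, bufs);

/* Cheap sanity check that the attachment really is valid. */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete\n");

/* ... the usual glDrawArrays(...) calls render the scene here ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the default framebuffer */
```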

Here’s what I’d like to get working next:

[ul]
[li]Get the contents of the FBO’s GL_COLOR_ATTACHMENT0 into the default framebuffer to check that it’s OK (literally just a passthrough fragment shader). I’ve no idea how one is meant to do this second pass in client-side code, particularly as I don’t think any more vertex data needs to be processed after the first pass (so maybe glDrawArrays is no longer appropriate here…). Assistance would be of great value, as I’ve had little luck googling this part.[/li][/ul]

Thanks in advance for your helpful contributions. :angel:

P.S. I shall post code as soon as a little progress is made.

Get the contents of the FBO’s GL_COLOR_ATTACHMENT0 into the default framebuffer

Ugh. So simple. (1) Place a fullscreen quad, (2) draw the prepass textures onto it with no projection transformations, (3) make use of glBindFragDataLocation(). Thanks go to Chris for providing a straightforward, well-documented example.
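For the archives, here is roughly what that boils down to. This is my own sketch, not Chris’s code; it assumes a compatibility-profile context where client-side vertex arrays are legal, attribute 0 bound to the quad position, and placeholder names (passthroughProgram, sceneTex, colorTex):

```c
/* A fullscreen quad in normalised device coordinates: no projection
 * (or any other) matrix needed. The vertex shader is just
 *     gl_Position = vec4(pos, 0.0, 1.0);  uv = pos * 0.5 + 0.5;
 * and the fragment shader just samples sceneTex at uv. */
static const GLfloat quad[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};

glBindFramebuffer(GL_FRAMEBUFFER, 0);    /* target the default framebuffer */
glDisable(GL_DEPTH_TEST);                /* the quad is all we're drawing */

glUseProgram(passthroughProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);  /* the FBO's COLOR_ATTACHMENT0 texture */
glUniform1i(glGetUniformLocation(passthroughProgram, "sceneTex"), 0);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, quad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   /* glDrawArrays is still fine here */
glDisableVertexAttribArray(0);
```

So no new scene geometry is processed at all: the “vertex data” for pass two is just those four corners.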

Will update this thread as I move ahead.

Good luck. And don’t feel alone :slight_smile:

PS: Is DoF used only to provide a cinematic look and feel right now? I see games blurring things close to the near plane (probably to hide the macro pixels), and I wonder how that is done. A mag filter? A mipmap? DoF? Any ideas?

Maybe there is a future for DoF in video games if it ever becomes practical to track the player’s eyeballs somehow. I wonder whether stereo rendering automatically mimics human focus or not?

@michagl Good to know someone cares :surprise: Up till now I thought no one gave a rat’s ass.

Re the near plane, I think it just works that way in real life (the extra blurring on nearer geometry/textures is incidental). This diagram explains it pretty clearly. The circle formed at each slice along the view depth past f, the focal point, is known as the Circle of Confusion (CoC). Essentially, that circle is what causes the blurring as well as bokeh (the size of a bokeh artifact is directly proportional to the size of the CoC at a given depth into the field of view / frustum; see the thin-lens sketch below). Just yell if I’m telling you stuff you already know :slight_smile:
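For reference, the thin-lens model gives that CoC diameter in closed form. Here is a GLSL transcription of it; aperture, focalLength, and focalDist are hypothetical uniforms of mine, all in the same world-space units, and you would normalise/clamp the result into a [0,1] blur strength before feeding it to mix():

```glsl
uniform float aperture;     // lens aperture diameter
uniform float focalLength;  // lens focal length
uniform float focalDist;    // distance to the focal point f

// Thin-lens Circle of Confusion diameter for an object at eye-space
// distance 'dist'. It is zero at the focal point and grows on either
// side of it, which is exactly the blur/bokeh behaviour described above.
float cocDiameter(float dist)
{
    return aperture * (abs(dist - focalDist) / dist)
                    * (focalLength / (focalDist - focalLength));
}
```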

Re human eye focus: I’m writing this for a game, and plan to do focusing as follows. In some games, we basically raycast through the player’s targeting reticle / crosshair so that the focal point depends on the nearest intersected geometry. That actually makes it pretty sensible as an interactive effect. If you search YT for videos of Sonic Aether’s MineCraft shaders, you can see this in action, I think. It’s pretty cool, though I suppose in later years we may look back at our obsession with DoF blurring in games and say, “That is so 2010s, man”. Anyway, you asked how DoF is done: see the four-point process I outlined in my original post above. That’s one way. (A sketch of the crosshair-focus idea follows below.)
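A cheaper variant of that raycast, sketched under assumptions (C, a current GL context, placeholder names): just read the depth buffer under the crosshair and linearise it with the near/far planes. In practice you would likely smooth the result over a few frames so the “eye” refocuses gradually rather than snapping:

```c
/* Read the depth under the screen-centre crosshair and convert it back
 * to an eye-space distance, for use as the focal distance next frame. */
float pick_focal_depth(int width, int height, float zNear, float zFar)
{
    float d;
    glReadPixels(width / 2, height / 2, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &d);

    /* Undo the hyperbolic depth mapping: d in [0,1] -> NDC z in [-1,1],
     * then invert the projection to get a linear distance. */
    float zNdc = 2.0f * d - 1.0f;
    return (2.0f * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));
}
```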

Re my progress: I feel less alone now, as I have my FBO using a single texture both for OpenGL’s own depth calculations and for my own. This is more efficient than using a renderbuffer object as the FBO’s depth attachment, because I can pass this depth directly to my second (and possibly third) shader passes as a uniform sampler2D. (You can’t do that with an RBO without doing a glReadPixels on the CPU, then reuploading the data to the GPU.) So yeah: I have the scene colour being rendered into my framebuffer, then I’m rendering that onto a fullscreen quad successfully, and mixing it with the depth texture just to see that the depth uniform is in fact being passed into the shader and not optimised out (my pet GLSL hate). Progress. (Setup sketch below.)
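In case it helps anyone else, the depth-texture-as-attachment part looks something like this. Again a sketch: GL_DEPTH_COMPONENT24 is one reasonable choice of internal format, and fbo / depthTex / width / height are the same placeholders as before:

```c
/* One depth texture doing double duty: the first pass's depth test
 * writes into it, and later passes sample it as a uniform sampler2D.
 * No glReadPixels round trip, unlike a renderbuffer. */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
```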

I’m right now looking at the depth buffer, but it’s currently all white; that’s probably because the differences in depth within my scene are minuscule compared to the actual distance between the near and far planes… ah, I think the answer is right here.

Will update again soon, when I’ve started figuring out the above depth problem… now that I’ve received a response from at least one person. :slight_smile:

EDIT: Aha, it is resolved. The link above did indeed have the answer: raising each component of the vec4 depth sample from a working depth texture to a power will let you see things in black and white (snippet below). DoF blur… here we come!
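Concretely, the debug fragment shader ended up as something like this (a sketch; the exponent is arbitrary, and linearising with the near/far planes as in the earlier snippet works just as well):

```glsl
uniform sampler2D depthTex;
varying vec2 uv;

void main()
{
    // Raw depth crowds towards 1.0 because of the hyperbolic mapping,
    // so the whole scene reads as white. Raising it to a power spreads
    // the values out enough to eyeball.
    float d = texture2D(depthTex, uv).r;
    gl_FragColor = vec4(vec3(pow(d, 64.0)), 1.0);
}
```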

DoF seems like an effect that is intrinsically hard to do in real-time. Please correct me if I am wrong. I am really interested in anything that aids first-person immersion, but I am wary of effects that mimic camera artifacts and do things in screen space that are already being done in “head” space…

I worry about deciding the DoF for the viewer (without scanning their retina or something) unless it is to create a cinematic moment, because if the player is seeing a focus artifact that disagrees with what they want to see it will break immersion and possibly tax the player.

PS: One experience I had a while back was playing this game, a new entry in the Armored Core series (V), which puts a giant reticle in the middle of the screen. It was really, really obnoxious, like having a big thick bullseye painted on your windshield, until I got the idea of sitting right in front of the screen like a little kid, so that the effective screen real-estate is cut down to about a third… and the bullseye and everything else became peripheral vision. It was very convincing in terms of full peripheral-vision immersion, but so much for casual play. I have to admit it’s an idea that never occurred to me, and I don’t know whether that’s how the creators expect people to play. Personally I would have liked the reticle to be configurable, but configuration seems to be less and less of an option as the years go by in the commercial gaming world.

DoF seems like an effect that is intrinsically hard to do in real-time

Quite. Then again, I’m learning deferred shading / FBOs / g-buffers / depth buffer quirks / mipmaps and more at the same time. So to me, right now, it would seem hard. :wink:

I agree with you re difficulty: there are a number of approaches in existence, and none of them, aside from raytracing, produces a “physically correct” output in real time using a single logical model. Sounds familiar, doesn’t it? That’s a description of the field of real-time rendering in general!

So, because no single extant approach covers the whole effect, you have to write hybrid code if you want it to be performant:

-Some methods work excellently beyond the focal point, but produce artifacts when the same mechanism is used up close.
-Some methods work excellently closer than the focal point, but don’t work further away.
-If you want bokeh as well, you will need up to 4-5(!) passes to produce accurate blur with bokeh impostors, since, again, calculating the real bokeh of an arbitrary lens shape is pretty much out of our reach right now at any sensible resolution (only offline raytracing will get you that).

…So you have to mix and match amongst the various approaches. I’ve adapted my plan since my initial post: I’m going for an approach that works superbly (and cheaply) for distant objects; if I want close-up blur, I’ll add some other logic to treat fragments that lie closer than the focal point (sketched below).
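A sketch of how that split might look in GLSL; the uniforms are my own placeholders, and the two ramps can then drive entirely different blur logic for the near and far fields:

```glsl
uniform float focalDepth;  // eye-space distance to the focal point
uniform float farRange;    // how fast blur ramps up behind the focal point
uniform float nearRange;   // how fast blur ramps up in front of it

// Zero at or before the focal point, ramping to 1 behind it.
float farBlur(float eyeDepth)
{
    return clamp((eyeDepth - focalDepth) / farRange, 0.0, 1.0);
}

// Zero at or beyond the focal point, ramping to 1 towards the camera.
float nearBlur(float eyeDepth)
{
    return clamp((focalDepth - eyeDepth) / nearRange, 0.0, 1.0);
}
```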

Also re difficulty: you won’t get away with fewer than 3 shader passes unless you’re willing to seriously skimp on realism (i.e. skip the near field, skip bokeh, or skimp on your blur kernel). If anyone knows of any 2-pass approaches (I know of only one), please post here.

I worry about deciding the DoF for the viewer (without scanning their retina or something) unless it is to create a cinematic moment, because if the player is seeing a focus artifact that disagrees with what they want to see it will break immersion and possibly tax the player.

-Not every game is fast action;
-The degree of blur need not be as extreme as in macro photography (which I think you see a lot of in games that do use DoF at present);
-As an effect, DoF definitely doesn’t fall into the same category as an annoying, obtrusive HUD (I’m a fan of HUDlessness). I’m using shaders to produce something with more of an artistic feel, so yes, it should be apparent, and it should evoke some kind of fuzzy/positive feeling in the viewer;
-View distance plays a huge role: in my engine, such as it is, one can see several kilometres, which makes DoF far more sensible than in a scene where everything is already within ten feet of the camera.

I think it will work out well, but I’ll reserve judgement till then :wink: