Depth Of Field

I’m interested in implementing the depth of field effect very broadly described in a tutorial on this page:- http://realtimecg.gamedev-pt.net/tutorials.html
There’s not a lot of implementation detail in there, so I’ve a few questions.
(1) Do I use the depth_texture extension to extract the depth buffer into the texture?
(2) How do I get these depth values into the alpha of the ‘sharp’ texture?
(3) Would it be wise to use the sgi automatic mipmapping extension to generate the ‘blurred’ texture?

Thanks for any light you can shed.

(1) Yes.
(2) You don’t. Just bind the depth texture to an unused unit and set up a GL_INTERPOLATE combine mode to output LERP(sharp, blurry, depth) (see the sketch below).
(3) That’s certainly the easiest way to do it. You should be able to get higher-quality blurs by doing them yourself, though that’s probably also more expensive.
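
A minimal sketch of the combine set-up for (2), assuming ARB_texture_env_combine plus ARB_texture_env_crossbar so that the last unit can read the other units’ textures (NV register combiners would work just as well). Unit 0 = sharp, unit 1 = blurry, unit 2 = the depth texture with GL_DEPTH_TEXTURE_MODE_ARB set to GL_ALPHA, all three bound and enabled:

// GL_INTERPOLATE computes Arg0*Arg2 + Arg1*(1 - Arg2), so with
// Arg0 = blurry, Arg1 = sharp and Arg2 = depth (read as alpha)
// the last unit outputs LERP(sharp, blurry, depth).
glActiveTextureARB(GL_TEXTURE2_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_INTERPOLATE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE1_ARB);  // blurry colour
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_TEXTURE0_ARB);  // sharp colour
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB,  GL_TEXTURE);       // this unit's depth texture...
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);     // ...read as alpha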

– Tom

Thanks for your reply, Tom!

So I’ll need 3 texture units in total?! 0 = sharp texture, 1 = blurred texture, 2 = depth texture?

Just to clear something up, here’s an extract from the tutorial:-

Tutorial:-
The algorithm is simple.
Render your scene into a texture.
When you do this, you must compute the depth distance of your pixel to a focus plane, and save it in the alpha channel of the texture.
Make a copy of that texture. Let’s call it the in-focus texture.
Blur the original texture. Let’s call it the out-of-focus texture.
Now render a quad the size of your screen, with the 2 textures applied.
I think you’ll have worked out the trick by now!

(Bear in mind I’m going to be rendering into the framebuffer, not into a texture; I think the tutorial is based around D3D.)
Is there no way I can get around this use of the alpha channel and computing the focal-plane distance per vertex, etc.?
I want the freedom to do fancy shader stuff, and to have the depth of field as a post-render process without relying on stuff accumulated during the render…


The effect is completely independent of your normal rendering. The only thing you have to “accumulate” during rendering is the Z buffer.

You can do all the fancy shading you want, then when your frame is finished, you read back the color and depth buffers into textures. Then you create the blurred version of the color buffer, so you have a total of three textures. You bind all three of them, draw a single fullscreen quad, and you’re done.

What you seem to be missing is that the depth values are only converted to alpha values during the rendering of this fullscreen quad. This is achieved by setting the GL_DEPTH_TEXTURE_MODE_ARB texture parameter to GL_ALPHA. None of this requires any modifications to your normal rendering operations.
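
A minimal sketch of that readback step, assuming ARB_depth_texture (the texture id and sizes are placeholders):

// copy the depth buffer into the (already-allocated) depth texture
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
// return the texture's depth values as alpha when it is sampled
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_ALPHA);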

– Tom

But how do you calculate the distance from the focus plane? Surely this is done per vertex and then interpolated across the triangle?

I’ve had a stab at trying to do this without having to compute the distance at scene-render time… please tell me what you think:-

void initialisation()
{
// bind sharp tex id
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, viewportwidth, viewportheight, 0);

// somehow compute a blurred version

// bind depth tex id
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, viewportwidth, viewportheight, 0);
}

void render()
{
// render scene

// bind sharp tex id
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, viewportwidth, viewportheight);

// somehow compute a blurred version

// bind depth tex id
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, viewportwidth, viewportheight);

// clear colour/depth buffer

// bind sharp texture to unit 0
// bind blurry texture to unit 1
// bind depth texture to unit 2

// enable register combiner program (shown below)
// set const0 of register combiner to some value between 0 and 1

// render screen size quad

// swap buffers
}

//////////////////////////////////////////////////
// nvparse Register Combiner

// tex0 = sharp texture
// tex1 = blur texture
// tex2 = depth texture
// const0 = depth of focal plane

// subtract focal plane depth from depth texture depth to
// give distance from focal plane
!!RC1.0

{
    rgb
    {
        discard = tex2;
        discard = negate(const0);
        spare0 = sum();
    }
}

out.rgb = lerp(tex0, tex1, unsigned(spare0));
out.a = unsigned_invert(zero);

By the way, it’s also occurred to me that one could add a heat-haze effect by introducing a 4th texture (scrolled/scaled/rotated using the texture matrix) to perturb the focal-plane constant per pixel! I think that might look very smart!

Could someone please tell me if my code is bollocks, or whether I’m barking up the right tree?
I’m not at a graphics-capable PC at the moment, so I can’t test it myself.
I’m not that sure how I can get the blurred texture (without having to do it on the CPU). If I let OpenGL generate a set of mipmaps automatically, how do I specify that I want mipmap level 0 to bind to unit 0, and mipmap level 3 (say) to unit 1? Is there any way I can select the mipmap in the register combiners or something? (If so, then I’d only need 2 texture units)…

You can’t select the mipmap level in the register combiners. Register combiners run after the texture accesses.

If you want to sample different mipmap levels, the EXT_texture_lod_bias extension can do it. The spec is at: http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_lod_bias.txt

You don’t really need 3 texture units. You can use a single texture object: the RGB components store the RGB values of the framebuffer (which the mipmapping will blur) and the ALPHA component stores the DEPTH value.

Ah! I knew I was missing something…
texture_lod_bias!
Right, so I can CopyTexSubImage the colour buffer into a texture with automatic mipmap generation on, then bind that single texture object to 2 texture units but set the lod_bias differently for the 2 units, giving me a sharp and a blurry texture!
Good.
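
A sketch of that plan, assuming SGIS_generate_mipmap and EXT_texture_lod_bias (the texture id and viewport sizes are placeholders, and the power-of-two texture-size restriction is glossed over):

// once, at init time: allocate the texture and turn on automatic mipmap generation
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, viewportwidth, viewportheight, 0);

// every frame, after rendering the scene: grab the colour buffer
// (the mipmap chain is regenerated automatically)
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, viewportwidth, viewportheight);

// unit 0: the same texture, sampled with no bias (sharp)
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 0.0f);

// unit 1: the same texture, biased a few mipmap levels down (blurry)
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 3.0f);

The bias is per texture unit (it lives in the texture environment), which is what lets one texture object look sharp on unit 0 and blurred on unit 1.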

vincoof, when you say it can be done with a single texture, I’m confused. I put to you the same question I put to Tom - you’re not taking into account the ‘distance from focal plane’ thing…in other words, we’re not really interested in the depth fragments directly, just in their depth in relation to a reference depth value (the depth of the focal plane)… How do you solve this in your own apps? Where do you calculate the distance from focal plane?

Because the framebuffer already contains the sharp piece, you don’t need to put that in a texture. It would seem that “all” you need to do is to overlay the framebuffer (using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blending) with the blurred texture, assuming distance-from-focus-plane is in ALPHA.
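
As a sketch of that overlay (the blurred texture id is a placeholder, and its alpha is assumed to already hold the distance-from-focus-plane factor):

glBindTexture(GL_TEXTURE_2D, blurTex);    // blurred RGB, blur-factor alpha
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the fullscreen quad over the sharp image already in the framebuffer ...
glDisable(GL_BLEND);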

I don’t think packing color and alpha into the same texture is going to be all that efficient, because it requires re-shuffling and splicing of data that doesn’t come pre-spliced. Also, you probably want blur on the texture when you access it for color, but not when you access it for depth (== alpha).

With the right set-up, you might be able to get depth into alpha using a fragment program or register combiner set-up in the frame buffer, although I don’t know if this will actually help you much.

You could get a good approximation of depth-in-alpha just by calculating the right depth for each vertex, in software or in a vertex program, and storing it in the alpha component of the frame buffer. This will give subtly different results, though, because the depth buffer stores perspective-warped values: two adjacent depth values far from the camera span more eye-space distance than two adjacent values close to it, just as distant pixels are farther apart in space than close ones. Interpolating alpha from the vertices is essentially linear from the near plane to the far plane, whereas the depth-buffer values change rapidly for pixels close to the camera and taper off with distance.
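
A small sketch of that per-vertex approach (the function and parameter names are made up for illustration):

#include <math.h>

// Returns a [0,1] blur factor for one vertex: the eye-space distance from the
// focal plane, scaled by 1/focusRange and clamped.  Interpolating this across
// a triangle is linear in eye space, unlike the depth-buffer values (for a
// standard perspective projection, window depth = f/(f-n) * (1 - n/zEye)).
static float blur_alpha(const float modelview[16], const float v[3],
                        float focusDist, float focusRange)
{
    // eye-space distance of the vertex in front of the camera (GL looks down -z)
    float zEye = -(modelview[2] * v[0] + modelview[6] * v[1] +
                   modelview[10] * v[2] + modelview[14]);
    float blur = fabsf(zEye - focusDist) / focusRange;
    return blur > 1.0f ? 1.0f : blur;
}

// ...then render each vertex with glColor4f(r, g, b, blur_alpha(...)) and a
// blend/colour-mask set-up that writes this alpha into destination alpha.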

Thanks for your input, guys. I really want to settle on a method now - as time is running out.

Originally posted by jwatte:
Because the framebuffer already contains the sharp piece, you don’t need to put that in a texture.

But I need to grab the sharp image into a texture in order to blur it (ie. autogenerate mipmaps from it).

Coriolis, I don’t want to impose any requirements on the actual rendering of the scene - I’ve got stuff like multipass per-pixel lighting and reflection going on, plus stencil shadows… and god knows what else I may eventually want to implement… therefore I don’t want to make things even more complex by storing depth in destination alpha while rendering the scene.

Has anyone implemented a depth of field effect in their applications?
Anyone any idea how a game like Wreckless (on the XBox) does its DOF effects?

Just found this interview with the guys who programmed Wreckless - it’s a real gem of tips, so I’m sure you’ll all be interested:- http://spin.s2c.ne.jp/dsteal/wreckless.html

I’m afraid you really have to perform another pass in order to get the depth values. It’s not complicated, since you don’t need fancy per-pixel lighting, shadows or any other effects - you just need to render your geometry without lighting of any kind.

About getting the distance value, I recommend 1D texturing with automatic texture coordinate generation in eye space. It’s really flexible and has been available since OpenGL 1.0.
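
A sketch of one way to set that up (the texture id, ramp contents and far distance are placeholders): a 1D texture whose alpha encodes |distance - focus|, addressed by GL_EYE_LINEAR texgen so that the s coordinate is proportional to eye-space depth.

// Set this up while the modelview matrix is identity, so the plane below is
// really specified in eye space (the eye plane is transformed by the inverse
// of the modelview matrix that is current when glTexGen is called).
GLfloat farDist = 100.0f;                                     // distance mapped to s = 1
GLfloat eyePlane[4] = { 0.0f, 0.0f, -1.0f / farDist, 0.0f };  // s = -zEye / farDist

glBindTexture(GL_TEXTURE_1D, rampTex);            // alpha ramp: |distance - focus|, clamped
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, eyePlane);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_1D);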

Has anyone looked at the code I wrote (above)?
Why would I need a 1d texture when I can do the subtraction in the register combiners?
Why would I need a second scene render pass?
I’m even more confused - I don’t think we’re on the same page in the book here!

The above register combiner setup will give you signed distance. I don’t get how signed distance can help you until you get the absolute value of it or until you square it.

You must have missed the line:-

out.rgb = lerp(tex0, tex1, unsigned(spare0));

The ‘unsigned’ keyword is the same as abs, or so I thought.

afaik, the ‘unsigned’ keyword means ‘clamp to 0’ in register combiners terminology.
Taken from the spec :

[ Table NV_register_combiners.4 ]

   Mapping Name              Mapping Function
   -----------------------   -------------------------------------
   UNSIGNED_IDENTITY_NV      max(0.0, e)

Right, ok, I remember now - it’s just a clamp.
So, I square it - no problem there, I’ve done it before in register combiners - and I just need to adjust the texture LOD for each unit.
One parting shot from this “topic that went nowhere” :-
I’m pretty surprised nobody seems to have much of a clue about implementing depth of field. From the bits I’ve read of this forum, I assumed you lot were up to date, but obviously not. You prefer to talk about the semantics of the term “bump mapping” rather than solve real problems. Do you have trouble breathing with your heads so far up your own arses?
Over and most definitely out.

KuriousOrange,

I’ve looked at depth of field, but in the context of ARB_fragment_program, where it’s fairly obvious how to calculate the blur factor, so I stayed away from the register-combiner-specific part of your question. If you’re looking for nVIDIA-specific extension set-ups, realize that the pool of people who want to, or are able to, answer accurately is much smaller.
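
For what it’s worth, a sketch of how the blur factor might be computed with ARB_fragment_program (this is not jwatte’s code; the program text, parameter packing and texture-unit assignments are all assumptions): units 0/1/2 hold the sharp, blurred and depth textures, the depth texture’s DEPTH_TEXTURE_MODE is LUMINANCE so depth arrives in the .x component, and program.local[0] holds (focal-plane depth, 1/focal-range, 0, 0).

static const char dofFragProg[] =
    "!!ARBfp1.0\n"
    "TEMP sharpCol, blurCol, d, f;\n"
    "TEX sharpCol, fragment.texcoord[0], texture[0], 2D;\n"
    "TEX blurCol,  fragment.texcoord[0], texture[1], 2D;\n"
    "TEX d,        fragment.texcoord[0], texture[2], 2D;\n"
    "SUB f, d.x, program.local[0].x;\n"    /* signed distance from focal plane */
    "ABS f, f;\n"                          /* absolute distance                */
    "MUL_SAT f, f, program.local[0].y;\n"  /* scale by 1/range, clamp to [0,1] */
    "LRP result.color, f.x, blurCol, sharpCol;\n"
    "END\n";

// loading it (error checking omitted; needs <string.h> for strlen, and
// focusDepth/focusRange are placeholder application values)
GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(dofFragProg), dofFragProg);
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0,
                             focusDepth, 1.0f / focusRange, 0.0f, 0.0f);
glEnable(GL_FRAGMENT_PROGRAM_ARB);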

Also, you wrote:

But I need to grab the sharp image into a texture in order to blur it (ie. autogenerate mipmaps from it).

The implication of my original suggestion was that you’d render into the frame buffer, then CopyTexSubImage() into the texture; i.e., the frame buffer itself doesn’t need to go into a texture target, and needn’t be part of the input to the second pass, except as part of the blend.