Depth of Field

I’ve found an article by Alex Evans of Lionhead Studios about blurring scenes to make a depth of field effect.
I found it really interesting and want to try an implementation.
If I understood correctly, he uses the depth buffer and a combination of textures of different sizes (smaller = farther = more filtering = more blur).
He renders far objects into a small texture, so when it is drawn back at the original size it comes out “blurred” by the filtering.
The method is: read back the depth buffer, render far objects to a texture (the farther, the smaller), then render a billboard quad at the normal size, so far objects get hardware “blurring” and near objects get little blurring.

It looks like a good method, but how do you implement it in OpenGL?

I thought of two ways of doing it:

1) Z-sort all the objects and polys, render them to a TextureSize/Z texture, and billboard them in distance order.

2) Read back the depth buffer in some way and do the same, but without sorting “by hand” (rough sketch below).
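
Whichever way the far objects get picked, the core render-small-then-magnify trick might look something like this in plain OpenGL 1.x. This is only a sketch, not Evans’ actual pipeline; blurTex, drawFarObjects() and drawNearObjects() are made-up placeholders, and it assumes a current GL context with a winW x winH window:

```c
/* 1. Render only the far objects into a small viewport. */
glViewport(0, 0, 128, 128);             /* smaller = more blur */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawFarObjects();                       /* hypothetical */

/* 2. Copy the result into a texture (power-of-two sizes only on
      today's hardware) with bilinear filtering enabled. */
glBindTexture(GL_TEXTURE_2D, blurTex);  /* blurTex: pre-created object */
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 128, 128, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* 3. Back to the full viewport: draw the texture on a screen-sized
      quad, letting GL_LINEAR magnification do the "blurring". */
glViewport(0, 0, winW, winH);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

/* 4. Render the near (sharp) objects on top with the normal
      scene projection. */
drawNearObjects();                      /* hypothetical */
```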

any ideas ??

The article is at http://www.gamasutra.com/features/20010209/evans_01.htm

bye

rIO http://www.spinningkids.org/umine

As far as I understand this, he’s using the zbuffer to stamp out one layer, which is rendered to a texture. This way, you’ll create as many layer textures as you want and blend them together somehow.

Yes, but using an alpha mask, and rendering only separated objects to textures.
The result is really good, but are the modifications to the pipeline worth the effect?
On the other side, the result looks more realistic than a straight render.

rIO http://www.spinningkids.org/umine

I would basically think he uses front and back culling planes to stamp the layer out of the scene, rendering the complete scene into a texture (where only that layer will show up). Then he does that for every layer, so unless you do really good visibility tests like quadtrees, where you can skip blocks that don’t lie in the layer, you’ll be blocking the bus. Plus I don’t think the effect is worth it, because the eye will produce it for free if the objects move fast enough.
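
That kind of layer stamping can be done with user clip planes, so the projection itself stays untouched. Just a sketch (layerNear, layerFar, setCameraTransform() and drawScene() are placeholders; the planes are specified with an identity modelview so they land in eye space):

```c
/* Keep only fragments whose eye-space distance lies in
   [layerNear, layerFar]. The camera looks down -z, so a point at
   distance d has z = -d. Sketch only. */
GLdouble keepFar[4]  = { 0.0, 0.0, -1.0, -layerNear }; /* z <= -layerNear */
GLdouble keepNear[4] = { 0.0, 0.0,  1.0,  layerFar  }; /* z >= -layerFar  */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                       /* planes given in eye space */
glClipPlane(GL_CLIP_PLANE0, keepFar);
glClipPlane(GL_CLIP_PLANE1, keepNear);
glEnable(GL_CLIP_PLANE0);
glEnable(GL_CLIP_PLANE1);

setCameraTransform();                   /* hypothetical: load the real view */
drawScene();                            /* only this depth slab survives */

glDisable(GL_CLIP_PLANE0);
glDisable(GL_CLIP_PLANE1);
```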

I just hope Black & White has an option to disable this depth of field trick. I’m afraid it’ll bring my poor old TNT to its knees.


For Michael:
It’s not true that the eye will produce the same effect if objects are moving fast; even if there are no moving objects, you MUST do depth of field blurring for a truly realistic image.
Even a panorama shot has depth of field blurring.
I’m not talking about motion blur.
Anyway, I’m sure they thought about disabling that GREAT feature.

rIO
http://www.spinningkids.org/umine

Well, I was referring to motion blur, you’re right. But the eye sharpens whatever you concentrate on, so it will never be realistic. You would have to add an option where the user specifies the layer he is concentrating on. Or you give most detail to the layers where something really interesting is happening.
So my thought is that there is no depth blurring, there is only non-concentration blurring.

What’s the sense of this effect? It’s completely unrealistic on second thought. It might be a cool feature, but it doesn’t exist in biology. After all, we’re trying to imitate the human, not the camera. The world always looks sharp; the eye makes the blur. I doubt it would look realistic if everything in the background were blurred, since the user could focus on the background, and it would still look blurred. This effect takes away image information which could be important when trying to spot and recognize far-away enemies.
If you wanted that effect, you would have to use a web camera to track where the eyes are looking, and it would still cause problems. The only real possibility is those true 3D glasses.

Not sure what effect B&W uses depth blurring for, but the typical use of blurring is not to make things realistic, but to focus your attention. Movies do it all the time.

Imagine you have something subtle but important going on in a REALLY complex environment. If the whole image were sharp, you might be looking at other things in the scene and miss the important thing that happens. By blurring everything else in the background, you effectively say “HEY, forget about that stuff for a moment… something really important is about to happen, so look !!!HERE!!!”

I agree with Michael that it is an effect which is not good when you try to focus on far-away objects. And I thought it would look unrealistic. But when I first saw it in Outcast, I loved it. It looks absolutely great! But they have a voxel engine, so it’s much easier to do it there…

Yeah, I agree; in cut scenes one could have a curve-of-interest, which basically indicates the interesting things at each moment. But taking distance as the measure of importance is unrealistic as well; even objects at the same distance may have different blur applied to them, as seen by the eyes. If you imitate a camera, this effect might be suitable. So you’d need different 3D pipelines for 1st and 3rd person, which I don’t even want to think about.

Blurred vision doesn’t exist in biology? The world always looks sharp, and it’s just the eye that “makes” it blurred? And the eye has nothing to do with a camera, and vice versa? Huh?

Of course the eye has the same problem as cameras do, and objects the eye is not focusing on will appear blurred. The reason cameras and human vision produce blur is that they are not pin-hole camera systems. OpenGL is modelled on a pin-hole camera, and so everything appears sharp.

(Yadda yadda, mutters about points radiating light and the need to focus by converging light to a point on the retina, yadda yadda, limited accommodation range, which is why people have bad eyesight, yadda yadda.) At the end of the day, cameras and eyes have the same problem, and it’s not the eye just “making” something blurry. It IS blurred.
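
To put a number on it, here’s the standard thin-lens circle-of-confusion formula (textbook optics, not something from the article): as the aperture shrinks toward a pinhole, the blur disk shrinks to nothing, which is exactly why OpenGL’s pinhole model is sharp everywhere.

```c
#include <math.h>

/* Diameter of the blur disk for a thin lens. aperture = aperture
   diameter, focal = focal length, focusDist = distance the lens is
   focused at, objectDist = where the point actually sits; all in the
   same units. A pinhole (aperture -> 0) gives zero blur everywhere. */
float circleOfConfusion(float aperture, float focal,
                        float focusDist, float objectDist)
{
    return aperture * focal * fabsf(focusDist - objectDist)
         / (objectDist * (focusDist - focal));
}
```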

Incidentally, the eye uses the accommodation level as part of its depth perception. Oh, and we looked at blur in a virtual reality system with a wall, head-tracker and eye-tracker, so the user can focus on different parts of the scene and have things blur appropriately.

cheers
John

I always thought blur comes from the lower resolution at the borders of the retina, from the two partial images not lying exactly on each other (I don’t know how to express that), and from the lens being wrong.
I know that both cameras and eyes have the same problem, since their construction is quite similar.

My point is about how crude it is to simulate a blur effect that depends on the user. Additionally, blur effects look quite nice but cost performance and visible information. It’s the same as with those sun-flare things that sweep across the whole view in one line and only disturb the user, even while he is in 1st person mode (and this effect doesn’t exist in human eyes, it’s camera-only; well, we don’t see anything of it).

If you have, however, these said 3D glasses which can track the eyes of the user (which only few people have…), you can simulate the blur effect quite well. But I think that is one of those effects that look good but cost information. Shadows are realistic, help players while hiding, etc., i.e. they produce game fun. They are also good for showing the three-dimensional relationships between objects. But does the game gain new features with blur?

In the end, I take back what I said about the layer. The complete layer should be pre-blurred, and the eye will add the concentration blur. So you only need to know which object the eyes are looking at and take its distance as the concentration point.
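
For instance, with purely made-up thresholds, the pre-blur per layer could just be a function of how far the layer sits from that concentration distance, reusing the small-texture trick from the start of the thread:

```c
#include <math.h>

/* Hypothetical mapping: the farther a layer is from the distance the
   user is concentrating on, the smaller its render texture, and so
   the stronger the filtering blur. All numbers are invented. */
int layerTextureSize(float layerDist, float focusDist)
{
    float defocus = fabsf(layerDist - focusDist) / focusDist;
    if (defocus < 0.25f) return 512;    /* in focus: sharp  */
    if (defocus < 0.5f)  return 256;    /* mild blur        */
    if (defocus < 1.0f)  return 128;    /* strong blur      */
    return 64;                          /* way out of focus */
}
```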

Stereo would definitely be a better way of doing real depth of field. Just as we don’t have to “simulate” blur at the edge of the field of view because people’s eyes already do that for us, once you have stereo, depth of field should be automatic.

Unfortunately, “good” stereo HW costs way too much.

  • Matt

Also, in addition to eye tracking, proper simulated depth of field needs to know the diameter of the pupils. Not entirely practical…

  • Matt

The lower resolution at the border of the retina is a matter of acuity more than depth-of-field blur. (Just ask someone with bad eyesight whether they can look straight at something and see a focused image. =)

You’re right about knowing the pupil size, but we used an 8 mm pupil diameter, and since it’s in a darkened room, the size doesn’t change much.

Depth of field is not automatic with a stereo system, since all you have is two pin-hole cameras instead of one. You still need to blur the parts of the scene the user is not focusing on. The entire image, after all, is being projected on a screen a metre and a half from the user’s head, so you can’t get “automagic” DOF when everything is still the same physical distance from the user =)

You’re right about blur not producing MUCH in terms of game fun… but we were developing an application for… well, hmm =) a company that wanted this for prototyping stuff. DOF isn’t feasible for games that do not know where the user is looking, but DOF IS one of the depth perception cues. (There are quite a few, though, and it’s not a particularly strong one.)

cheers
John

Wow, what a storm!

You seem to be split into two “teams”.
Those who say it’s the target of our eye that makes everything else blurred are definitely right.
But we (me) are talking about games, and of course if I’m a game maker I can be expected to know which is the “hot point” in the scene.
So why not use the same technique used in film to focus the attention of the player on a particular zone of the scene?
Imagine how ugly a movie would look without the depth-of-field blurring “trick”!

And mainly, I wanted to discuss the technical side of it !

rIO http://www.spinningkids.org/umine

Sorry for asking, but is it difficult to project the image so close that the user doesn’t see the edges, with 3D glasses (not shutter blinders, I mean real colour LCD or whatever glasses)? Anyway, the user won’t add depth blurring, since all objects are projected on the same plane, so he won’t need to refocus his eyes for objects farther away. That resolution blur should come for free if the image were realistically large (i.e. larger than the eye can see).

I don’t understand what you mean about the edges… the system I was talking about is a stereo wall. We have a 3.0 x 2.4 metre rear-projected screen with CrystalEyes stereo glasses. Two images are rendered, and each is alternately projected on each refresh. That is, we run the display at 120 Hz, so each eye’s image is drawn 60 times a second. The stereo glasses have LCD shutters in sync with the display, so that when the left-eye image is on the screen, the right eye is blanked out, and vice versa.

The frustums are configured from the user’s eye position (determined by an Ascension Flock of Birds head tracker) and the known geometry of the wall. We have to project onto a plane not orthogonal to the optical axis, incidentally.
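
For the curious, the projection part works out to an asymmetric glFrustum built from the tracked eye position. This is a rough sketch of the simple case only, where the wall is the rectangle [left, right] x [bottom, top] in the tracker frame’s z = 0 plane and the eye sits at (ex, ey, ez) with ez > 0; the function name is made up, and the non-orthogonal plane mentioned above needs an extra rotation on top of this:

```c
#include <GL/gl.h>

/* Off-axis frustum for a head-tracked wall (sketch). The screen edges,
   measured relative to the eye, are scaled down to the near plane by
   similar triangles, then the world is shifted so the eye is at the
   origin looking down -z. */
void setHeadTrackedFrustum(float ex, float ey, float ez,
                           float left, float right,
                           float bottom, float top,
                           float zNear, float zFar)
{
    float s = zNear / ez;   /* wall edges -> near-plane edges */

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum((left   - ex) * s, (right - ex) * s,
              (bottom - ey) * s, (top   - ey) * s,
              zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-ex, -ey, -ez);   /* put the tracked eye at the origin */
}
```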

So, yes, we get acuity errors for free, because the human eye is physically limited to a small high-resolution region. We still need to model depth-of-field blur, though, because the entire image is projected onto a physical plane a known distance from the observer.

DOF is a limitation of the human eye and of cameras because they cannot accommodate everywhere at once. It’s not something the brain/eye adds to a sharp image to pick out attention; it isn’t a “trick” employed by cinematographers to direct the audience’s attention (although, obviously, it can be and is used for that effect); it’s a “feature” you get from using a lens-based camera system. Yes, it’s entirely possible to take a real picture with a real camera where everything is in focus… it’s just a pinhole camera. You make a tiny TINY hole and wait for ages for the photons to trickle through. (By ages, I mean hours, not the time it takes to go make coffee.) The reason we have the lens systems we do is that we can’t wait that long; we want to let in more light, but in doing so we lose the ability to accommodate across the entire scene.

cheers
John

Hmmm, I think I was braindead in thinking that stereo somehow solves depth of field.

  • Matt