PDA

View Full Version : Volumetric fog

mstr
12-20-2002, 03:39 AM
What is the fastest way to do convex volumetric fog brushes in quake style engines?

How does q3 do its fogging? It looks to me like billboarded polygons drawn on top of each other, but I don't really understand how it works. I've done loads of web searches, but haven't found a method that would be fast and usable.

dawn
12-20-2002, 03:59 AM
Indeed it is doing "billboards". Check the content flag of a brush to see if it contains fog, and then use the planes of the brush to define GL user clipping planes. I did this and it worked like a charm.

(you have a texture associated with the brush and that texture has a content flag -- CONTENTS_FOG is 64)
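To make the clip-plane idea concrete, here is a small sketch (the data layout is an assumption, not Quake's actual brush format): each face of a convex fog brush yields one user clip plane, and since OpenGL keeps fragments where a*x + b*y + c*z + d >= 0, the outward face normals have to be flipped so they point into the brush.

```python
def brush_clip_planes(faces):
    """faces: list of (outward_normal, point_on_plane) tuples.
    Returns (a, b, c, d) coefficients suitable for glClipPlane,
    with normals flipped so the brush interior is the positive side."""
    planes = []
    for (nx, ny, nz), (px, py, pz) in faces:
        a, b, c = -nx, -ny, -nz                  # flip outward -> inward
        d = -(a * px + b * py + c * pz)          # plane passes through the face
        planes.append((a, b, c, d))
    return planes

def inside(planes, x, y, z):
    """True if the point is on the positive side of every clip plane."""
    return all(a * x + b * y + c * z + d >= 0 for a, b, c, d in planes)

# unit cube centred at the origin, described by its six outward face normals
cube = [((1, 0, 0), (0.5, 0, 0)), ((-1, 0, 0), (-0.5, 0, 0)),
        ((0, 1, 0), (0, 0.5, 0)), ((0, -1, 0), (0, -0.5, 0)),
        ((0, 0, 1), (0, 0, 0.5)), ((0, 0, -1), (0, 0, -0.5))]
```

Each tuple would be passed as the double[4] argument of glClipPlane while rendering the fog geometry.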

mstr
12-20-2002, 03:13 PM
Ok, so here is my plan:
-I set the fog volume's planes as clipping planes.
-I calculate the midpoint of the fog volume (let's say point A), and the distance n from A to the most distant vertex in the volume.
-I move a point B along the vector from the camera to A in steps of m units, and while the distance from A to B is at most n, I draw a billboarded quad. I keep doing this until the distance from A to B is once again larger than n. (We draw all quads which can intersect the fog volume, no more.) Quad size is 2n x 2n to make sure it covers the fog volume in screen space. The quad's center point is on the vector from the camera to A.
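The stepping scheme above can be sketched like this (a minimal version of the plan as stated; the function and parameter names are illustrative):

```python
import math

def billboard_centers(cam, A, n, m):
    """Centers of the 2n x 2n billboard quads: step along the camera->A
    ray in increments of m and keep every point within distance n of the
    volume's midpoint A, i.e. only quads that can intersect the volume."""
    dx, dy, dz = (A[i] - cam[i] for i in range(3))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    ux, uy, uz = dx / length, dy / length, dz / length
    centers = []
    t = 0.0
    while t <= length + n:                       # stop past the far side of the volume
        B = (cam[0] + ux * t, cam[1] + uy * t, cam[2] + uz * t)
        if math.dist(B, A) <= n:                 # quad can intercept the fog volume
            centers.append(B)
        t += m
    return centers
```

Each returned point would become the center of a camera-facing quad clipped by the fog volume's planes.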

Problems(?)
-Can I just repeat the above for every visible fog volume in my scene? If not, why not?
-Only 6 clip planes are available, so a fog volume can only be a box. Split volumes during map compile, or..?
-The fog planes (billboards) are drawn after everything else, so everything should work except transparent objects in front of / inside the fog volume. How do I fix this, or is it a big problem? Sorting all transparent objects and using clipped fog layers don't seem to mix well.

Am I heading in the right direction? Any ideas / comments are welcome.

pATChes11
12-20-2002, 10:08 PM
I would just draw an alpha-blended poly that has the color of the fog on it, and change the alpha based on fog density and distance from the viewer. Shouldn't be too hard. A game that I like to play a lot, Descent 3, uses this technique (although its triangulation leaves a little to be desired; you may consider using a fog "texture"). The only catch with doing this in a game is the polys that do not define the fog volume. For those, you could set a clipping plane, but you'd be limited to your 6, and clipping planes are usually done at least partially in software. Your best bet would then be to clip your fog polygons to your fog volume, I suppose, but I don't know how you would go about doing the math. That's why I haven't done it myself yet :P
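The post doesn't specify the falloff curve, but one common way to map "fog density and distance from the viewer" to a blend alpha is exponential fog (an assumption, not Descent 3's actual formula):

```python
import math

def fog_alpha(density, distance, max_alpha=1.0):
    """Exponential fog opacity: 1 - exp(-density * distance), clamped.
    The result would be used as the alpha of the fog-colored poly."""
    return min(max_alpha, 1.0 - math.exp(-density * distance))
```

Alpha approaches 1 smoothly as the viewer-to-poly distance grows, which avoids a visible hard edge where the fog starts.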

Humus
12-20-2002, 10:50 PM
You may want to check out my VolumetricFogging demo. http://esprit.campus.luth.se/~humus/

zed
12-20-2002, 10:54 PM
another (original?) method is based on projected textures

Serba
12-21-2002, 04:42 AM
Try using the GL_EXT_fog_coord extension for simple volumetric fog.

mstr
12-21-2002, 05:04 AM
Originally posted by Serba:
Try using the GL_EXT_fog_coord extension for simple volumetric fog.

Great, how?

vincoof
12-22-2002, 11:39 PM
You can also find info in the OpenGL 1.4 specification.

mstr
12-23-2002, 02:30 AM

See how I explained my method up there? If you can't do the same with your method, just don't post.

Name dropping like "read the OpenGL 1.4 specs", "ARB_BOGUS_FOOBAR", "do it with a texture" and "install DirectX 9" is not useful! Thank you.

vincoof
12-23-2002, 04:27 AM
Billboarding is really fillrate-intensive. Moreover, fog billboarding does not look very good (you can often see the seams) unless you're in a very special case.

As for the 6 clip planes, note that you can add a clipping volume if you use an alpha texture. With multitexturing, you can add even more clipping volumes.

GL_EXT_fog_coord is currently the most interesting solution, since this extension is designed almost exclusively for volumetric fogging. I can send you (by email) a little program with source code that uses this extension.

zed
12-23-2002, 08:22 AM
>>The GL_EXT_fog_coord is by now the most interesting solution since this extension is almost exclusively designed for volumetric fogging.<<

Visually, fog_coord gives the worst results of any method; in fact I can't see the point of the extension myself.
E.g. for volumetric fog you normally want the fog to be thicker away from the camera; unlike normal GL fog, fog_coord doesn't take the distance into account!

vincoof
12-23-2002, 10:03 AM
>>for vol fog you normally want the fog to be thicker away from the camera; unlike normal GL fog, fog_coord doesn't take the distance into account!<<
But that's exactly why the extension was defined: you take full control over the fog contribution. Then you can imagine any shape for your fog.

The problem is that you need an extra computation for every vertex, which can be pretty intensive. Fortunately, vertex programs can help with that.
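A typical example of that per-vertex computation, for a horizontal fog layer, is sketched below (the setup is illustrative; EXT_fog_coord itself only lets you supply the coordinate, e.g. via glFogCoordfEXT, and leaves this math to you). Note it does take distance into account: the coordinate is the length of the eye-to-vertex segment lying below the fog plane.

```python
import math

def fog_coord_layered(vertex, eye, fog_top_y):
    """Per-vertex fog coordinate for a fog layer bounded above by the
    plane y = fog_top_y: the length of the eye->vertex segment that
    lies inside the fog."""
    ey, vy = eye[1], vertex[1]
    seg = math.dist(eye, vertex)
    if ey <= fog_top_y and vy <= fog_top_y:      # whole segment in fog
        return seg
    if ey > fog_top_y and vy > fog_top_y:        # whole segment above fog
        return 0.0
    # segment crosses the fog plane: keep the fraction below it
    frac = (fog_top_y - min(ey, vy)) / abs(ey - vy)
    return seg * frac
```

With GL_FOG_COORDINATE_SOURCE_EXT set to GL_FOG_COORDINATE_EXT, the fixed-function fog equation is then evaluated with this value instead of the eye depth.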

Ninja
12-28-2002, 10:00 AM
Is it possible to use the normal OpenGL fog function when you are in the fog and GL_EXT_fog_coord when you are above it?
I have some problems with this myself.

//Ninja

dorbie
12-28-2002, 05:00 PM
Great demos Humus, I'd been unable to run them before because they're Radeon-only.

It took me a while to find them, others might not persist.

Coriolis
12-29-2002, 02:57 PM
Quake3 does not do billboarding for fog. Quake3 restricts fog volumes so that only one face of the fog brush is allowed to be visible, and that face has to be axial (i.e., its normal has to be parallel to an axis). It then does per-vertex fog calculations for all surfaces in the fog. q3map can automatically tessellate geometry inside fog volumes when generating draw surfaces for the BSP, so this works adequately and fairly rapidly in practice. Lastly, Q3 can draw a fog shader on the visible face to give it a bit of texture.

If possible, Q3 will apply volumetric fog with no extra passes, but to handle the general case it can just draw the triangles another time with the appropriate vertex colors and blend function.
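The geometry behind a per-vertex fog amount like the one described can be sketched as a ray/box clip: measure how far the eye-to-vertex segment travels through an axis-aligned fog brush. This is a hedged illustration of the idea, not Quake3's actual code.

```python
import math

def ray_box_fog_distance(eye, vertex, box_min, box_max):
    """Length of the eye->vertex segment inside an axis-aligned fog
    brush, via slab clipping of the parametric segment t in [0, 1]."""
    t0, t1 = 0.0, 1.0
    for i in range(3):
        d = vertex[i] - eye[i]
        if abs(d) < 1e-12:                       # segment parallel to this slab
            if not (box_min[i] <= eye[i] <= box_max[i]):
                return 0.0                       # and entirely outside it
            continue
        ta = (box_min[i] - eye[i]) / d
        tb = (box_max[i] - eye[i]) / d
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 >= t1:
            return 0.0                           # segment misses the box
    return (t1 - t0) * math.dist(eye, vertex)
```

The resulting distance would be turned into a vertex alpha (or a fog-texture coordinate) before the extra blended pass.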

vincoof
12-29-2002, 10:59 PM
Is it possible to use the normal opengl fog function when you are in the fog and GL_EXT_fog_coord when you are over it?
If you mean:
- viewpoint in the fog -> use standard fog (based on depth)
- viewpoint outside the fog -> use GL_EXT_fog_coord
then yes, you can do it. You still have to switch between those modes "manually", though. Moreover, it may not look very good if the fog density is not equal at the point where the transition occurs.

Ninja
01-02-2003, 07:18 AM

Regards, Ninja

Ninja
01-02-2003, 08:28 AM
Nevermind!

JustHanging
01-03-2003, 07:38 AM
Zed, how would you do this with projected textures?

-Ilkka

Dodger
01-03-2003, 07:54 AM
Unreal used a CPU-computed texture: they would cast a ray from the camera position through the fog volume(s) in the view frustum, write the resulting densities into a texture, then project this texture onto geometry from the current viewpoint.
Maybe there is a way to do this today with render-to-texture and a fragment program that computes the fog density? It should be awfully fast.

JustHanging
01-03-2003, 08:06 AM
Thanks, but...

do they really project the texture from the camera onto the geometry, or did I get something wrong? Because wouldn't that correspond to just drawing a fullscreen quad with the texture?

Also, the texture has to be quite low-res, since it is calculated on the CPU and updated frequently. And that would cause some serious "fog bleeding".

If I misunderstood something, could you please clear this up for me?

-Ilkka

Dodger
01-03-2003, 02:44 PM
From what I understood (I read this in an article written by one of their devs), the fog volume textures are quite low-res (I imagine 64x64 to 128x128 should be sufficient); due to the 'soft' nature of the gradients that result from the fog tracing, there don't seem to be many major artifacts.
As far as projecting the texture using the current viewpoint goes: no, it's not the same as rendering a fullscreen quad. Think about it: you're projecting outwards onto geometry which has varying distances to the view point (which naturally results in perspective distortions), unlike a fullscreen quad.
I'm not completely familiar with the details of their implementation, so this is pretty much all I know about it. I hope this gives you a helpful hint or two.

V-man
01-03-2003, 08:32 PM
Originally posted by Coriolis:
Quake3 does not do billboarding for fog. Quake3 restricts fog volumes so that only one face of the fog brush is allowed to be visible, and that face has to be axial (i.e., its normal has to be parallel to an axis).

I think it does have the ability to do billboarded fog, since in Alice, it looks that way. I might add that it looks quite ugly.

The strange thing is, some scenes in Alice have fog everywhere, yet they haven't used glFog, although that would have resulted in better graphics.

zed
01-03-2003, 09:24 PM
I used projected textures here http://uk.geocities.com/sloppyturds/volfog3.jpg
(see the example of projecting textures with GLUT, and also the NVIDIA developer site). The only thing you need to watch is keeping the fog in an area; the methods are similar to those for projected shadows (how to avoid the backwards projection).

Here is another method http://uk.geocities.com/sloppyturds/underwater.jpg
Basically, fog polygons are blended over the already existing scene.

Coriolis
01-03-2003, 11:03 PM
V-man:

I know incontrovertibly that Quake3 does volumetric fog exactly as I described. There is nothing you nor anybody else can say to convince me otherwise.

Now, Quake3 also has the ability to do billboarded sprites -- for example, look at the rocket trails. This may be what you saw in Alice. If that is the case, I can say with certainty that it does not use any of Quake3's volumetric fog code.

Also, it is unsafe to claim how any engine does something based on one of the games made with that engine. Most games modify the engine extensively, so you could be seeing something they added or even something they broke.

[This message has been edited by Coriolis (edited 01-04-2003).]

01-04-2003, 12:31 AM
I've always wished for blending functions based on incoming and destination depth values. You could draw the back faces and then the front faces of fog volumes, getting at every pixel an opacity proportional to the view depth to the back of the volume or to intersecting geometry. It would be no more problematic than typical alpha blending.
I've never understood why this functionality hasn't been available; is converting depth values to a linear form so problematic?
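Per pixel, the wished-for depth blend would amount to something like the following (the exponential falloff is an assumption; the post only asks for opacity proportional to the depth interval):

```python
import math

def depth_blend_alpha(z_front, z_back, density):
    """Fog opacity from the linear view depths of a fog volume's front
    face and of its back face (or of intersecting geometry, whichever is
    nearer): 1 - exp(-density * thickness)."""
    thickness = max(0.0, z_back - z_front)       # fog traversed at this pixel
    return 1.0 - math.exp(-density * thickness)
```

A pixel where geometry sits inside the volume simply gets a smaller `z_back`, so the fog fades out smoothly against intersecting surfaces, which is exactly the hard case for the billboard approaches above.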

JustHanging
01-04-2003, 05:39 AM
Madoc, you can map fragment depth values to alpha with texgen and a 1D texture. Linear or perspective, whatever suits you best.

With Zed's links I get a "page not currently available" message.

And Dodger, I am pretty sure that projecting from the viewpoint gives a result identical to a simple quad. Pixel-perfect. I once made an app that relies on this fact, and it worked perfectly. The perspective distortions only appear if you are not using real projective texturing.

You are right about the low-frequency nature of the fog map. But imagine a character standing in front of the fog: that would cause a sharp hole in the fog density map, and the edges of that hole would get seriously undersampled. Well, they probably don't raytrace the fog through the characters, but the same goes for the level geometry as well. Perhaps the artifacts just aren't as bad as I think.

Anyway, if the artifacts are tolerable, the use of a low-res texture as fog is interesting. You can render very nice volumetric effects to a low-res texture, since fillrate is no longer an issue. You wouldn't have a link to that article, would you?

-Ilkka

JustHanging
01-04-2003, 06:43 AM
Ok, Zed, I see your pictures now, but I still don't have the faintest idea how the projected textures technique works. The images are nice though.

I can do projected textures; I'd be more interested in what is in those textures and where you project them. Do you draw with normal fog enabled and use the projected textures to somehow limit the area where it's drawn? In that case, why projected? And that wouldn't work in the general case: looking through heavy fog to an area outside the fog volume would leave that area fogless. What are the limitations of this technique?

-Ilkka

01-05-2003, 12:07 AM
I realise that, but it's expensive and cumbersome; my point was that I wished it were a simple blending function. It wasn't intended as more than a rant -- I wanted such functionality introduced years ago (destination alpha was hard to come by then).
My "question" was why hardware support for such blending was never introduced, and whether converting values from the depth buffer might have been the obstacle.

Humus
01-06-2003, 09:49 AM
Originally posted by dorbie:
Great demos Humus, I'd been unable to run them before because they're Radeon-only.