Light plumes

Are there any demos out there that demonstrate light plumes (EDIT: actually, light blooms) using shaders?

[This message has been edited by Plato (edited 09-08-2002).]

What do you mean by “light plumes”?
Lens flares?

Not a lens flare, more like a corona, except it can be applied to surfaces as well as points. For example, a light plume would be the glow effect around a light, regardless of the light’s shape.

The only way to simulate light plumes before was to render a flare bitmap at your light source. The downside to this is that it only really works for point light sources. Now with pixel shaders, I’m quite certain you can have arbitrary glowing shapes; all you’d have to do is convolve your lit surface (which could be a light, or a surface reflecting light) with some sort of point-spread function (like a Gaussian, or even just a simple round pillbox).
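
For example, a minimal CPU-side sketch of that convolution idea, assuming the overbright values have already been pulled out into a small single-channel float buffer (the function name, buffer layout, and 5-tap kernel are only illustrative):

    // Separable 5-tap Gaussian blur over a small single-channel "overbright"
    // buffer; the blurred result is what gets added back over the scene as glow.
    #include <algorithm>
    #include <vector>

    void blurBright(std::vector<float>& bright, int w, int h)
    {
        const float k[5] = { 1.f/16, 4.f/16, 6.f/16, 4.f/16, 1.f/16 };
        std::vector<float> tmp(bright.size());

        // Horizontal pass.
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float sum = 0.f;
                for (int t = -2; t <= 2; ++t) {
                    int sx = std::min(std::max(x + t, 0), w - 1);   // clamp at edges
                    sum += k[t + 2] * bright[y * w + sx];
                }
                tmp[y * w + x] = sum;
            }

        // Vertical pass back into the original buffer.
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float sum = 0.f;
                for (int t = -2; t <= 2; ++t) {
                    int sy = std::min(std::max(y + t, 0), h - 1);
                    sum += k[t + 2] * tmp[sy * w + x];
                }
                bright[y * w + x] = sum;
            }
    }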

The Halo 2 demo seems to have light plumes in it, as does the Radeon 9700 demo that I saw in the PR video. I was wondering if there are any other demos out there.

Halo 2 and that Radeon 9700 demo are using HDR (High Dynamic Range). A quote from the HDR presentation by NVIDIA:

“Enables realistic optical effects – glows around bright light sources, more accurate motion blurs”

I advise you to check that presentation out. HDR sure did make the lights in Halo 2 look totally awesome.

-SirKnight

Is this what you’re referring to?
http://opengl.nutty.org/effects/index.html

2nd image. Click for demo with source. Doesn’t use any shaders tho…

It looks a lot better moving.

Nutty

Ah! Light BLOOMS.

That’s the term the Bungie guys use when talking about the effect in the Halo 2 movie.

Wow! Thx Nutty! That’s awesome.

I’ll have a look as soon as I get home.

Yeah, I saw that discussion from Chris at Bungie on the GD Algorithms list. He says they’re using a destination alpha buffer to store the overbright light value, which is then convolved and applied to the scene, similar to the demo I did.

I didn’t use a dest alpha buffer tho; I assume using dest alpha limits your light blooms to a single pre-defined color?
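
Something like this is what I imagine the dest-alpha version looks like at the GL level (only a guess at the approach, not Chris’s actual code; drawOverbrightPass() and bloomTex are placeholders):

    // Hedged sketch: route the overbright amount into destination alpha during
    // lighting, then grab that alpha channel into a small texture for blurring.
    // A pixel format with destination alpha (e.g. RGBA8) is assumed.
    #include <GL/gl.h>

    void drawOverbrightPass();   // hypothetical: writes overbright as fragment alpha

    void grabOverbrightToTexture(GLuint bloomTex)
    {
        // Let the lighting pass touch only the alpha channel.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
        drawOverbrightPass();
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // Copy a region of the framebuffer's alpha into a single-channel texture.
        // Only one channel survives, so the bloom tint would come from whatever
        // colour the blurred texture is composited back with.
        glBindTexture(GL_TEXTURE_2D, bloomTex);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, 0, 0, 64, 64, 0);
    }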

Nutty

How do you determine which areas are “overbrightened”?
Do you compare against pixels that are equal to (1,1,1)? Or do you render an additional picture at half intensity, and then look for the areas brighter than (0.5, 0.5, 0.5) in it?

Hmm, that’s a pretty interesting technique Halo 2 is using there. Well, that may not be HDR exactly (but of course it couldn’t be, since the Xbox doesn’t have an NV30 or R300), but it sure looks very similar to it. It looks damn good.

-SirKnight

Basically, in the register combiners, when the final light value is calculated it’s normally clamped to 0 to 1. So subtract 1 before that clamp, and this gives you the fragment values that are over 1, which can then be stored. If the value was less than 1.0 (not overbright), it’ll just result in 0 and won’t contribute to the light bloom.
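
Written out as plain code rather than combiner setup (just a restatement of the arithmetic, not the actual combiner configuration), that step is basically:

    #include <algorithm>

    // The per-fragment arithmetic described above, written as plain C++ for
    // clarity; the real work happens in the register combiners, where the exact
    // scale/bias details are hardware-specific.
    float overbright(float light)   // 'light' is the value before the 0..1 clamp
    {
        // Anything at or below 1.0 becomes 0 and adds nothing to the bloom;
        // only the portion above 1.0 survives to be stored and blurred.
        return std::max(light - 1.0f, 0.0f);
    }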

Thanks must go to Davepermen for describing this to me.

Nutty

You can read about some of the implementation details of the blooms in our High Dynamic Range demo in my notes and slides from the SIGGRAPH 2002 Hardware Shading Course.

-JasonM @ ATI

I also tried to add some sort of fake dynamic range thing to Tenebrae, since that Halo 2 video really looked too cool.
Here are some shots: http://users.pandora.be/hollemeersch/blackrose/tenebrae/beta/quake75.jpg http://users.pandora.be/hollemeersch/blackrose/tenebrae/beta/quake79.jpg
(The links may sometimes not work since a lot of people are on my poor site.)
This runs on generic hardware, so no special stuff is needed, but it is slow.
It works as follows (there’s a rough code sketch after the list):

  1. Render the scene in a small viewport (64*64).
  2. Get the pixels (glReadPixels).
  3. All pixels with one of their components greater than a certain threshold are multiplied by some value (1.2 currently).
    Other pixels are made black.
  4. Blur the buffer (Gamasutra fast blurring stuff).
  5. Make a texture from it and render a screen-sized quad.

It costs a lot of framerate but looks very cool. The main cost is still rendering the small viewport. But speeding it up doesn’t seem easy, since I already use all general combiners and destination alpha for doing bumpmapping.
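
Here’s a rough sketch of those five steps in code; the helper names and the exact threshold are placeholders, not the real Tenebrae source:

    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    void drawScene();                        // hypothetical: renders the world
    void drawFullscreenQuad(GLuint tex);     // hypothetical: screen-sized textured quad

    void drawBloom(GLuint bloomTex, int screenW, int screenH)
    {
        const int S = 64;               // small viewport size
        const float threshold = 0.9f;   // placeholder threshold
        const float boost = 1.2f;

        // 1. Render the scene into a small corner of the framebuffer.
        glViewport(0, 0, S, S);
        drawScene();

        // 2. Read the pixels back.
        std::vector<unsigned char> buf(S * S * 3);
        glReadPixels(0, 0, S, S, GL_RGB, GL_UNSIGNED_BYTE, &buf[0]);

        // 3. Boost pixels with any component above the threshold, black out the rest.
        for (size_t i = 0; i < buf.size(); i += 3) {
            unsigned char m = std::max(buf[i], std::max(buf[i + 1], buf[i + 2]));
            if (m > (unsigned char)(threshold * 255)) {
                for (int c = 0; c < 3; ++c)
                    buf[i + c] = (unsigned char)std::min(255.0f, buf[i + c] * boost);
            } else {
                buf[i] = buf[i + 1] = buf[i + 2] = 0;
            }
        }

        // 4. Blur the buffer on the CPU (blur routine omitted here).
        // blurRGB(buf, S, S);

        // 5. Upload as a texture and add it over the whole screen.
        glBindTexture(GL_TEXTURE_2D, bloomTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, S, S, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, &buf[0]);

        glViewport(0, 0, screenW, screenH);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);         // additive: glow on top of the scene
        drawFullscreenQuad(bloomTex);
        glDisable(GL_BLEND);
    }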

Charles

[This message has been edited by Pentagram (edited 09-08-2002).]

Very similar to my method, except I do my blurring on the GPU, with lots of render-then-copy-to-texture passes at successively smaller texture sizes.
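
Roughly along these lines (a simplified sketch of the idea, not the demo’s actual source; drawTexturedQuad() is just a placeholder):

    #include <GL/gl.h>

    void drawTexturedQuad(GLuint tex, int w, int h);   // hypothetical: draws tex as a w*h quad

    // GPU-side blurring by repeated downsampling: draw the bright-pass texture
    // at half size, copy the framebuffer back into the texture, and repeat,
    // letting bilinear filtering do the smoothing. No CPU readback needed.
    void downsampleBlur(GLuint tex, int startSize, int steps)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        int size = startSize;
        for (int i = 0; i < steps; ++i) {
            int half = size / 2;
            glViewport(0, 0, half, half);
            drawTexturedQuad(tex, half, half);   // render the texture at half size
            // Copy the shrunken result straight back into the texture.
            glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, half, half, 0);
            size = half;
        }
    }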

Try that new NV extension which allows you to do asynchronous read pixels. It may help you to speed it up. Dunno if it’s out yet, or if it’s only for the NV30.

Nutty

hm, now quake1 looks better than doom3… scary, not? next will be the soft shadows implemented by the other dude, and, oh well… we could sell quake1 again, HEHE

yeah, try to do the blurring and all in hw; it doesn’t need any non-standard GL features… only glCopyTexSubImage2D…

can’t wait for an r300 tenebrae, hehe

NVIDIA’s Cg SDK has a shader demo (with source) that does what you are looking for.

These effects are awesome - and so cheap to do it seems. Nutty, your demo runs very quickly on my GF 1 DDR.

The only downside of the mipmap-blurring method is that at certain mipmap levels the blur is blocky. I turn the mipmap level down in the demo and it’s extremely convincing.

BTW, are you able to make “glowing” objects this way as well? i.e. have overbright values for a 3d object, so that you could render, say, a bright light tube with a light bloom/HDR effect?

Yep, you could easily do that. You could have a special render shader that automatically guarantees some over-brightness when rendering. Ideal for rendering lights and stuff. Then they’ll always contribute to the glow.

The best bit is that even in views through a mirror, etc., they’ll still glow, without any clever determination. It’ll just happen!

I’m thinking of doing another one, but based on the Halo 2 approach, i.e. using destination alpha. You don’t need to render the scene twice using this method either. I’ll also try to get the filtering a bit better to remove the blocky edges.
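
As a purely hypothetical illustration, the pass for glowing geometry could just force a fixed glow amount into alpha when it’s drawn:

    #include <GL/gl.h>

    void drawLightTube();   // hypothetical emissive geometry

    // Sketch only: give glowing geometry a guaranteed glow amount in destination
    // alpha, so the same blur pass picks it up wherever it's visible (mirrors
    // included) with no extra bookkeeping.
    void drawGlowingObject()
    {
        glDisable(GL_TEXTURE_2D);               // keep the sketch simple: untextured
        glColor4f(1.0f, 0.9f, 0.7f, 0.75f);     // RGB = tube colour, A = glow amount
        drawLightTube();
    }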

Nutty

I should get a card with pixel shaders… I could do so much more. However, I’m amazed that a lot of these effects work on my GeForce 1 DDR. Even Tenebrae runs better than I expected: I don’t usually fall below 15 fps at 640x480x32 with all detail on, and I’m usually in the 20s.

Now if only there were a vertex program to smooth out those animation keyframes, Quake 1 could be modernized.

Is anyone here running an R300? If so, what’s the driver quality like for coding the more advanced features of OpenGL? Would I be better off waiting for an NV30?

Driver quality should be pretty good. As for features, I think they’re waiting for DX9 to be released before they expose the functionality of the card through GL extensions and ARB_fragment_program, etc.

Me personally, I’m waiting for the NV30, as I like the fact that NV_fragment_program and NV_vertex_program2 are very similar to NV_vertex_program, which I’m very familiar with. So it should be a breeze to code for.

Nutty