Heat distortion with respect to the depth buffer

Hi,

I was thinking about how to implement heat distortion as
seen in many first-person shooters, for example on the
flames above burning barrels.

I already have an idea for the distortion shader:

  • Always render the scene into an FBO x that has the
    same dimensions as the application window.

  • When rendering a flame effect that comes with heat
    distortion, copy FBO x to FBO y. Keep FBO x active and
    bind FBO y's color attachment as a texture sampler in a
    distortion shader. Render a quad at the flame position
    while the distortion shader is active.
    In this shader, read a distortion normal from a normal
    map (using a uniform timer to scroll the normal-map
    lookup over time) and use the returned normal to offset
    the lookup into FBO y, so the scene behind the quad is
    sampled in a distorted manner.
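The distortion pass described above might look roughly like this in GLSL. This is only a sketch of the idea, not the original poster's code; the uniform names (`uScene`, `uNormalMap`, `uTime`, `uStrength`) and the scroll/strength constants are assumptions:

```glsl
#version 330 core

uniform sampler2D uScene;      // color attachment of FBO y (copy of the scene)
uniform sampler2D uNormalMap;  // tiling distortion normal map
uniform float     uTime;       // uniform timer in seconds
uniform float     uStrength;   // distortion magnitude in UV units, e.g. 0.02

in  vec2 vUv;        // the flame quad's own UVs, used for the normal-map lookup
out vec4 fragColor;

void main()
{
    // Scroll through the normal map over time so the distortion animates.
    vec2 n = texture(uNormalMap, vUv + vec2(0.0, uTime * 0.25)).rg;
    n = n * 2.0 - 1.0;   // unpack [0,1] -> [-1,1]

    // Convert the fragment position to screen-space UVs and offset the
    // lookup into the scene copy by the perturbed normal.
    vec2 screenUv = gl_FragCoord.xy / vec2(textureSize(uScene, 0));
    fragColor = texture(uScene, screenUv + n * uStrength);
}
```

Since the quad is rendered into the still-active FBO x while sampling from the copy in FBO y, the shader never reads from the same attachment it writes to.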

What is still not clear to me is how to integrate this into
my scene graph. Currently it works like this: I always render
solid objects first (sorted front to back to exploit the
depth buffer), and then all transparent objects and blending
effects like flames (sorted back to front).

I guess a simple post-processing step is not possible, since
the distortion must not affect any objects (solid or
transparent) that lie between the eye and the flame.

So how does it work? :stuck_out_tongue:

Help is really appreciated!
Thanks

One way of doing it would be to draw the heat effect as a cube in world space. Remember that, even if you’re doing post-processing from a texture, you still have depth buffering and testing. Rendering that cube and testing against the scene’s depth buffer would probably yield good results, though it will interact badly with your alpha surfaces. In that case you might want to sort the effect along with your alpha surfaces and render it at the appropriate time.

Hmm ok so why a cube instead of a billboard (quad)? :slight_smile:

And it is not really clear to me how the depth buffer could help.
I mean, when I render all solid objects first, the pixel
information required for the distortion effect is lost in some
areas, because those pixels are overwritten by closer objects.

Maybe I need to render the scene at least two times to solve
the problem?

You do lose information, but it’s probably not noticeable unless you’re looking for it (and very closely). I know a lot of games do screen-space refraction this way.

And I said cube to emphasize rendering the effect with a correct 3D position, so you get depth testing, etc.
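If the lost information does become visible, one common refinement of the screen-space approach is to also sample the scene's depth buffer at the distorted coordinate and fall back to the undistorted pixel when the fetched sample comes from geometry in front of the effect. This is a sketch, not something from the thread; `uSceneDepth` (the depth attachment of FBO x bound as a texture) and the variable names are assumptions:

```glsl
// Inside the distortion fragment shader, after computing screenUv and n:
uniform sampler2D uSceneDepth;   // depth attachment of FBO x, bound as texture

vec2 distortedUv = screenUv + n * uStrength;

// If the distorted lookup lands on a pixel that is closer to the eye than
// the flame quad itself, that pixel belongs to an occluder, not to the
// background behind the flame: keep the undistorted sample there instead
// of smearing the occluder into the effect.
float sceneDepth = texture(uSceneDepth, distortedUv).r;
if (sceneDepth < gl_FragCoord.z)
    distortedUv = screenUv;

fragColor = texture(uScene, distortedUv);
```

Combined with the fixed-function depth test against the cube/quad itself, this keeps the distortion from pulling in objects that sit between the eye and the flame.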

Ok I’ll give it a try and then see how it looks.

Thanks!