Layered 2D Refraction Shader Help

Hello to the OpenGL community!

I’m looking for some tips on 2D multi-layered refraction.

Currently, I’m able to produce the refraction effect perfectly for all refracted items that do not overlap. I’m also working with orthographic projection.

Here’s the rendering breakdown:

  1. Render scene to texture.
  2. Render refraction displacement texture(s) (normal maps).
  3. Pass the textures created in steps 1 and 2 to a fragment shader to produce the (fake) refraction results (see the sketch below).
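
For concreteness, here is a minimal sketch of the kind of fragment shader step 3 describes. The uniform names and the u_strength scale factor are my own assumptions, not code from this thread:

uniform sampler2D u_scene;     // step 1: the scene rendered to a texture
uniform sampler2D u_normalMap; // step 2: the displacement / normal map
uniform float u_strength;      // how far a normal displaces the lookup

varying vec2 v_texCoord;

void main()
{
    // Decode the stored normal from [0,1] back to [-1,1].
    vec3 n = texture2D(u_normalMap, v_texCoord).rgb * 2.0 - 1.0;

    // Use its x/y components as a screen-space offset and sample the
    // scene there: the classic fake-refraction trick.
    gl_FragColor = texture2D(u_scene, v_texCoord + n.xy * u_strength);
}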

This method works quite well until the normal maps intersect and overlap. Here is a screen shot to illustrate the problem:

[Screenshot: the normal maps intersecting]

[Screenshot: the shaded result]

As you can see, the result does not emulate refraction of the background correctly because the normal maps intersect.

The only solution I could work out was to sort the normal maps and render each one to its own texture. Here’s my current thinking:

for each refraction object:

  1. Render the current scene (including the results of previous iterations) to texture.
  2. Render the object’s normal map to texture.
  3. Pass the textures created in steps 1 and 2 to a shader for refraction.

The results are as desired. However, this seems like a terribly slow approach.

Could anyone please offer a better, more optimized, method of producing correctly layered 2D refraction?

Thanks in advance for getting to the end of this exhaustive post!

What about something like this:

  1. Render the current scene to texture, only once.
  2. Render each refraction object offscreen, to its own normal+depth map.
  3. For each refraction object:
    3.1) Render to the scene, using all the normal+depth maps to do a sort of raytrace: find the nearest z, skew the ray, find the second-nearest depth, etc.

This approach supposes that step 1 is costly and step 2 is not.
The number of offscreen textures should also be optimized, i.e. only overlapping refraction objects should go to different layers. I guess that 3 layers should be enough for the eye.

I am very grateful for your insight, ZbuffeR. I wonder, though, is there a point to a depth buffer when using 2D ortho projection?

I will work on this problem and post my results for review.

Thanks again.

In fact, the depth is only needed for very realistic refraction (even with an ortho projection).

But it is probably not needed for a good-enough-for-the-naked-eye effect.

Hmm. I’m still not sure how to sort and render these so that the effect works as desired. What if there are 10 layers of refraction objects all refracting through each other? It seems that the only good way to make this work would be to sample the screen several times.

I’ve looked all around the internet. There’s plenty of info on refraction, but no information on how to properly handle several refraction objects refracting through each other.

Stumped.

It seems that the only good way to make this work would be to sample the screen several times.

Yeah, that’s pretty much the only way to do it.

No.

Currently, how does your algorithm work for a single refraction?
I assume it is:

  • get an offset vector from the current normal
  • sample the rendered scene with the offset vector to get the refracted color

Generalised with my proposition, without depth info, it would become (sketched in code below):

  • for each refraction layer, ordered from nearest to farthest:
    – get an offset vector from the current layer’s normal
    – general offset += offset vector
    – end for
  • sample the rendered scene with the general offset vector to get the refracted color
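
A minimal GLSL sketch of that recipe, assuming 3 pre-sorted layers and my own uniform names (u_normal0 is the nearest layer, u_strength scales the displacement):

uniform sampler2D u_scene;   // the rendered scene
uniform sampler2D u_normal0; // nearest refraction layer
uniform sampler2D u_normal1;
uniform sampler2D u_normal2; // farthest refraction layer
uniform float u_strength;

varying vec2 v_texCoord;

// Decode a stored normal and turn its x/y into a screen-space offset.
vec2 layerOffset(sampler2D normalMap, vec2 uv)
{
    vec3 n = texture2D(normalMap, uv).rgb * 2.0 - 1.0;
    return n.xy * u_strength;
}

void main()
{
    // general offset += offset vector, ordered nearest to farthest.
    vec2 generalOffset = vec2(0.0);
    generalOffset += layerOffset(u_normal0, v_texCoord);
    generalOffset += layerOffset(u_normal1, v_texCoord);
    generalOffset += layerOffset(u_normal2, v_texCoord);

    // A single sample of the rendered scene with the accumulated offset.
    gl_FragColor = texture2D(u_scene, v_texCoord + generalOffset);
}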

Thanks, ZbuffeR.

Yes, you’re correct about how I currently compute a single layer of refraction. However, there would also be a diffuse layer for colored textures, to mimic effects like stained glass windows.

As you suggested, for a maximum of 3 refraction layers, each layer would require 2 textures.

for each refraction layer:
  • 1 texture for the normal map + alpha (the alpha map limits refraction to where it’s needed)
  • 1 texture for the diffuse (like a stained-glass color texture)

I’m assuming the refraction would be computed within a single fragment program. If I’m understanding you correctly, 6 textures would need to be passed to the fragment program: 3 layers * 2 textures each = 6 textures.

Then, sample all the normal maps and skew the scene lookup for a refraction-like effect, according to the sorted order.

Is this what you mean? Is there a better way to handle more than 3 layers? 7 or 10 layers would require passing 14 or 20 textures to the shader, would it not?

Thank you for taking the time to reply to my posts.

That is it, but I would not bother much with testing alpha for each layer to determine whether it has to be added (just clear with blue, i.e. the neutral normal (0.5, 0.5, 1.0), instead of black, so an empty layer decodes to a zero offset). The offset calculation is probably way faster than the texture sample, so brute-force adding an empty vector sounds reasonable, but of course it is better to benchmark your actual case.
In fact it would be more logical to have the alpha on the diffuse texture only.

In your 3-layer case, you will need 7 textures: do not forget the rendered scene, which has to be in a texture to be sampled too.

7 textures is not much for a shader, especially when they all share the same texcoords and dimensions.
If the hardware supports it, the recent texture arrays may be interesting.
But maybe you can get away without the extra diffuse layers if you render the refractors’ diffuse parts directly into the rendered scene first. This will look less correct, but depending on the stained-glass details, it can be passable.
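
For illustration, here is a hedged sketch of the full 7-texture version using texture arrays (EXT_texture_array). The layer ordering, the u_strength factor, and the simple mix-based tinting are my assumptions, not something settled in this thread:

#extension GL_EXT_texture_array : require

uniform sampler2D u_scene;        // the rendered scene (the 7th texture)
uniform sampler2DArray u_normals; // 3 normal maps, layer 0 = nearest
uniform sampler2DArray u_diffuse; // 3 diffuse maps; the alpha lives here
uniform float u_strength;

varying vec2 v_texCoord;

void main()
{
    // Accumulate one offset per layer, nearest first.
    vec2 offset = vec2(0.0);
    for (int i = 0; i < 3; ++i)
    {
        vec3 n = texture2DArray(u_normals, vec3(v_texCoord, float(i))).rgb * 2.0 - 1.0;
        offset += n.xy * u_strength;
    }

    // The background, refracted through all layers at once.
    vec4 color = texture2D(u_scene, v_texCoord + offset);

    // Tint with each layer's diffuse, farthest to nearest, using the
    // alpha stored on the diffuse texture only.
    for (int i = 2; i >= 0; --i)
    {
        vec4 d = texture2DArray(u_diffuse, vec3(v_texCoord, float(i)));
        color.rgb = mix(color.rgb, d.rgb, d.a);
    }

    gl_FragColor = color;
}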