I’m looking for some tips on 2D multi-layered refraction.
Currently, I’m able to produce the refraction effect perfectly for all refracted items that do not overlap. I’m also working with orthographic projection.
1) Render each refraction object offscreen, to its own normal+depth map.
2) For each refraction object, render it to the scene, using all the normal+depth maps to approximate a ray trace: find the nearest z, skew the ray, find the second-nearest depth, and so on.
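A minimal CPU-side sketch of that per-pixel traversal, assuming per-layer depth and normal samplers (all names here are illustrative, not from the original post):

```python
def trace_pixel(layers, uv):
    """Screen-space 'ray trace' sketch: repeatedly pick the nearest
    remaining layer at the current position, skew the lookup by that
    layer's normal, and continue until every layer has been crossed."""
    remaining = list(layers)  # each entry: (depth_at, normal_at, strength)
    while remaining:
        nearest = min(remaining, key=lambda layer: layer[0](uv))
        _depth_at, normal_at, strength = nearest
        nx, ny, _nz = normal_at(uv)
        uv = (uv[0] + nx * strength, uv[1] + ny * strength)  # skew the ray
        remaining.remove(nearest)
    return uv  # final coordinate to sample the rendered scene with
```

In a real shader the samplers would be texture fetches and the "nearest depth" test a comparison across the layer maps; this only shows the control flow.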
This approach assumes that step 1) is costly and step 2) is not.
The number of offscreen textures should also be optimized, i.e. only overlapping refraction objects need to go to separate layers. I would guess that 3 layers are enough for the eye.
Hmm. I’m still not sure how to sort and render these so the effect works as desired. What if there are 10 layers of refraction objects, all refracting through each other? It seems the only good way to make this work would be to sample the screen several times.
I’ve looked all around the internet. There’s plenty of info on refraction, but no information on how to properly handle several refraction objects refracting through each other.
How does your algorithm currently work for a single refraction? I assume it is:
– get an offset vector from the current normal
– sample the rendered scene with the offset vector to get the refracted color
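In Python-style pseudocode, that single-layer pass might look like the following (the sampler callables and `REFRACTION_STRENGTH` constant are assumed names, not from the original posts):

```python
REFRACTION_STRENGTH = 0.05  # assumed tuning constant

def refract_single(scene_sample, normal_at, uv):
    """Single-layer refraction: derive a screen-space offset from the
    normal's xy components and re-sample the rendered scene with it."""
    nx, ny, _nz = normal_at(uv)
    offset = (nx * REFRACTION_STRENGTH, ny * REFRACTION_STRENGTH)
    return scene_sample((uv[0] + offset[0], uv[1] + offset[1]))
```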
Generalised with my proposal, without depth info, it would become:
for each refraction layer, ordered from nearest to farthest:
– get an offset vector from the current layer's normal
– general offset += offset vector
end for
sample the rendered scene with the general offset vector to get the refracted color
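The loop above can be sketched like this (again with assumed sampler names; each layer carries its own strength so the offsets simply accumulate):

```python
def refract_layers(scene_sample, layers, uv):
    """Sum each layer's offset (ordered nearest to farthest), then
    sample the rendered scene once with the combined offset."""
    total_x, total_y = 0.0, 0.0
    for normal_at, strength in layers:  # nearest to farthest
        nx, ny, _nz = normal_at(uv)
        total_x += nx * strength
        total_y += ny * strength
    return scene_sample((uv[0] + total_x, uv[1] + total_y))
```

Note the single scene sample at the end: the whole point of accumulating is to avoid re-sampling the screen per layer.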
Yes, you’re correct about how I currently compute a single layer of refraction. However, there would also be a diffuse layer for colored textures, to mimic effects like stained glass windows.
As you suggested, for a maximum of 3 refraction layers deep, each layer would require 2 textures.
for each refraction layer:
1 Texture for normal map + alpha, (alpha map for only refracting what’s necessary)
1 Texture for diffuse (like a stained glass color texture)
I’m assuming the refraction would be computed within a single fragment program. If I’m understanding you correctly, 6 textures would need to be passed to the fragment program: 3 layers * 2 textures each = 6 textures.
Then, sample all the normal maps and skew the texture lookups in sorted order to produce the refraction-like effect.
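To illustrate how that combined pass might fit together, here is a hedged CPU-side sketch; the sampler callables stand in for the textures, and the back-to-front compositing of the diffuse layers is one possible choice, not necessarily what was intended:

```python
def shade_fragment(scene_sample, layers, uv):
    """Per-fragment sketch for N layers, each with a normal sampler and
    an RGBA diffuse sampler: accumulate the refraction offset front to
    back, sample the scene once, then blend the diffuse colors over
    the result back to front (farthest layer first)."""
    off_x, off_y = 0.0, 0.0
    for normal_at, _diffuse_at, strength in layers:  # nearest first
        nx, ny, _nz = normal_at(uv)
        off_x += nx * strength
        off_y += ny * strength
    color = scene_sample((uv[0] + off_x, uv[1] + off_y))
    for _normal_at, diffuse_at, _strength in reversed(layers):
        r, g, b, a = diffuse_at(uv)  # alpha-blend the stained-glass tint
        color = tuple(c * (1.0 - a) + d * a
                      for c, d in zip(color, (r, g, b)))
    return color
```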
Is this what you mean? Is there a better way to handle more than 3 layers? 7 or 10 layers would require passing 14 or 20 textures to the shader, would it not?
Thank you for taking the time to reply to my posts.
That’s it, but I would not bother testing alpha for each layer to determine whether it has to be added (just clear with blue instead of black). The offset calculation is probably much faster than the texture sample, so brute-force adding an empty offset vector sounds reasonable, but of course it is better to benchmark your actual case.
In fact it would be more logical to have alpha on the diffuse texture only.
In your 3-layer case, you will need 7 textures: do not forget the rendered scene, which has to be in a texture so it can be sampled too.
7 textures is not much for a shader, especially when they all share the same texcoords and dimensions.
If the hardware supports it, the relatively recent texture arrays may be interesting here.
But maybe you can get away without the extra diffuse layers if you render the refractors’ diffuse parts directly into the scene first. This will look less correct, but depending on the stained-glass details, it can be passable.