Refraction

Hello! I’d like to implement refraction in my program, but I haven’t found any good tutorial on it. I know that nvidia has a paper about refraction, but as far as I know it is done with a vertex program. I’d like to implement it without vertex programs. So if anybody knows a good tutorial, please let me know. Thanks.

Depends on what you want to refract, through what, and how accurate you want it to be. There are quite radically different approaches depending on those factors.

Generally you do something similar to env mapping. I generate my own texcoords on the CPU or with a VP, but you should be able to do it with some texgen mode too.
I take it you’ve tried Google, found nothing?
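
If you go the texgen route, it can be as little as something like this (a rough sketch only; sphere-map texgen is just one possible mode and only approximates the effect, it isn’t a real refraction calculation):

/* Fixed-function texgen as a cheap stand-in for refraction-style
 * texcoords; sphere mapping is only a rough approximation. */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);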

I’ve searched Google, but I didn’t find any tutorials or papers about refraction without a vertex program.

I’d like to render a transparent mesh that refracts the scene behind it. I don’t know vertex programming so I can’t use those examples I found.

As far as I know, you have to render the scene to a texture and then generate those texture coordinates. So mainly I’d like to know how to generate them.

Is this by any chance to do with watching the Half Life 2 movies?
I believe they’re doing something along the lines of:

  • render the scene as normal.
  • copy the entire framebuffer into a texture (the refractions are high-res in the movie)

then…
if doing per-vertex:

  • render polygons with eye-linear texture coordinates, perturbing these texture coordinates depending on some kind of wave vertex map (roughly as in the sketch below).

if doing per-pixel:

  • use texture shaders to perturb the texel to fetch

Not very specific, I know - but I’m playing about with this myself (in my mind, anyway).
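
A minimal sketch of the copy-and-perturb idea (C with GL/GLU; refr_tex, width, height, time, vx/vy/vz and the wobble constants are all made-up names for illustration, and the texture must already exist at viewport size or larger):

#include <GL/gl.h>
#include <GL/glu.h>
#include <math.h>

/* 1. After rendering the scene, copy the framebuffer into a texture. */
glBindTexture(GL_TEXTURE_2D, refr_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

/* 2. Per-vertex route: fetch the matrices once, then for each vertex
 *    of the refractive mesh (inside glBegin/glEnd) project to window
 *    space and perturb the texcoord, here with a sine wobble standing
 *    in for a real wave vertex map. */
GLdouble mv[16], pr[16], wx, wy, wz;
GLint vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, pr);
glGetIntegerv(GL_VIEWPORT, vp);

gluProject(vx, vy, vz, mv, pr, vp, &wx, &wy, &wz);
glTexCoord2f((float)(wx / width)  + 0.02f * sinf(time + 8.0f * vy),
             (float)(wy / height) + 0.02f * cosf(time + 8.0f * vx));
glVertex3f(vx, vy, vz);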

I didn’t see the HL2 movie but I was also thinking about copying the frame buffer because this way you don’t have to render the scene twice.

I’ve found an nvidia paper about refraction, here it is: http://developer.nvidia.com/view.asp?IO=Refraction_paper . It is a D3D implementation but it might be helpful (I haven’t looked at the source code yet).

There was an nvidia demo called Gothic Chapel which had a refracting glass statue. Does anyone know if the source code of that demo is available?

I don’t think there’s any need for proper refraction calculations, as that more or less forces you to render the surrounding environment into a cube map, which isn’t a very efficient solution for such a subtle effect as refraction, IMHO.
Nah, just play about with the texture coordinates, that should give a good enough effect.
Now reflection is a different matter, then you’ll need your cube maps.

I think the angel in the Gothic Chapel demo used a (static) cube map. You won’t be able to refract much by just copying the frame buffer, and you’ll get bad artifacts if the object is too near the edges of your viewport. You also can’t copy the frame buffer after rendering anything that might occlude your refractive object.

What I do is render the scene through a frustum that covers a rectangle around the refractive object with an increased field of view and then find texture coordinates by projecting the vertices and perturbing by vertex normals. This is for near-planar objects only though. I guess you could extend it to other objects but it wouldn’t be very accurate (not that it really is for planar objects either).
You need to make your perspective matrix for the refraction have the same “origin” as your viewport one and preferably clip geometry in front of the object.
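
Very roughly, setting up such a frustum might look like this (a sketch only; l, r, b, t are the object’s bounds on the near plane of the main frustum, widen is a fudge factor, say 1.2, and all of these names are made up for illustration):

#include <GL/gl.h>

/* Widened frustum covering the object’s screen rectangle. Because
 * glFrustum keeps the eye at the origin, this has the same “origin”
 * as the main viewport frustum, as described above. */
float cx = 0.5f * (l + r), cy = 0.5f * (b + t);
float hw = 0.5f * (r - l) * widen, hh = 0.5f * (t - b) * widen;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(cx - hw, cx + hw, cy - hh, cy + hh, znear, zfar);
/* Render the scene to the refraction texture with this matrix, then
 * find texcoords by projecting the object’s vertices through it and
 * perturbing by the vertex normals. */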

You’re probably better off going with a static cube map approach but if you’re interested in the one I use I could provide some more help.

You are right about the frame buffer copying. And your method seems interesting, maybe I’ll give it a try.

In my other project I only need water refraction. The view is isometric, so I think in this case there are much cheaper methods than rendering a cube map.

I originally implemented this sort of refraction for water surfaces but it works pretty well for curved glass and such too.

If you’re doing this sort of stuff you should be more interested in my modified projection matrix that places the near clip plane wherever you like. I’ve been trying to advertise it for some time but nobody seems to have been able to get it to work properly. I can’t be bothered to write a demo.

Anyway, you can do it by simply modifying your projection matrix like this:

p[0][2] = plane.x
p[1][2] = plane.y
p[2][2] = plane.z + 1
p[3][2] = -plane.w

The far clipping plane is at infinity so it will work with z-fail stencil shadows. Z-precision suffers, especially at grazing angles, but I’ve never seen z-fighting even in pathological cases.
The NDC depth range is used most efficiently at grazing angles; when viewing along the plane normal, only half the NDC range is utilised. I have some maths to maximise the use of the NDC range but it’s kind of ugly. A bit in contrast with the simplicity of this method.
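
In case it helps, here’s how that modification might look in C (a sketch, assuming a column-major float[16] matrix as OpenGL uses, so p[column][row] above maps to m[4 * column + row], and assuming plane is the eye-space clip plane with the sign convention of the assignments above):

/* Replace the third row of the projection matrix so the near plane
 * becomes the given eye-space plane and the far plane goes to
 * infinity, following the assignments above. */
void oblique_near_plane(float m[16], const float plane[4])
{
    m[2]  = plane[0];        /* p[0][2] */
    m[6]  = plane[1];        /* p[1][2] */
    m[10] = plane[2] + 1.0f; /* p[2][2] */
    m[14] = -plane[3];       /* p[3][2] */
}

You’d use it as something like glGetFloatv(GL_PROJECTION_MATRIX, m); oblique_near_plane(m, plane); glLoadMatrixf(m);.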

Hi Madoc,

Can you please share with us the part of extra math that allows better depth precision?

I’ve been looking into this technique (quite passively, I must admit) for some time now, and I feel it is the best solution for emulating user clip planes. It’s just that it seems it could really use improvement in the area of depth resolution.

Thanks,
Alen

It’s been a while since I’ve played around with this stuff; I can’t find an implementation of it anymore and I don’t really remember all the logic behind it.
As I said, this doesn’t help much at grazing angles because that’s when the NDC depth range is best utilised.
This should scale the plane equation for a best fit of infinity in NDC space:

p *= 2 / ((p.x * cos(hfov / 2)) + (p.y * cos(vfov / 2)) + p.z)

Given the vertical fov vfov, the horizontal fov hfov is:

hfov = 2 * arctan(aspect * tan(vfov/2))

My maths is not very good so this might not be the best way to do it.
If this doesn’t “just work” I can have a better look into it. I did test it at the time and it worked fine.
I’m not sure why I’m not using it but I don’t have any problems with depth precision at all.
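
For what it’s worth, the scaling in C might look like this (a sketch; fov angles in radians, aspect = width / height, and the plane layout follows the earlier sketch):

#include <math.h>

/* Scale the eye-space clip plane for a better fit of the NDC depth
 * range, per the formula above. vfov is the vertical field of view
 * in radians. */
void scale_clip_plane(float plane[4], float vfov, float aspect)
{
    float hfov = 2.0f * atanf(aspect * tanf(0.5f * vfov));
    float s = 2.0f / (plane[0] * cosf(0.5f * hfov) +
                      plane[1] * cosf(0.5f * vfov) +
                      plane[2]);
    int i;
    for (i = 0; i < 4; ++i)
        plane[i] *= s;
}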

I see this comes from the algorithms list; this has come up many times now. Seeing as I have a little time for once, I’ll see if Cass and I can get back on the case and get it properly cleaned up once and for all.

Thanks for the formula. I won’t have time to test it right now, but I will let you know if I come up with something useful when I do implement it.

Alen