How to render EMBM refraction

Hi

What I really mean is: with GeForce3-level hardware, how do you render environment-mapped bump-mapped refraction?

I know that with texture shaders we can easily do bump-mapped reflection (GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV), but how can we do refraction? It seems this needs per-fragment texture addressing, so I don't think it can be done through a vertex shader alone, right?
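(For reference, the reflection path I mean is roughly the four-stage texture-shader setup below. This is only a sketch; the texture objects, stage assignments, and extension loader are my own placeholder names.)

#include <GL/glew.h>

// Sketch only: the four-stage NV_texture_shader setup for bump-mapped
// reflection into a cube map. Texture objects and stage assignments are
// assumptions, not anything taken from the demo.
void setupBumpReflectCubeMap(GLuint normalMapTex, GLuint envCubeTex)
{
    // Stage 0: fetch the tangent-space normal from the normal map.
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, normalMapTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    // Stages 1-2: dot products of this stage's (s,t,r) with the stage-0 normal.
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glActiveTextureARB(GL_TEXTURE2_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    // Stage 3: third dot product, reflect the eye vector (packed into the q
    // coordinates of stages 1-3) about the perturbed normal, look up the cube map.
    glActiveTextureARB(GL_TEXTURE3_ARB);
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, envCubeTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
              GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glEnable(GL_TEXTURE_SHADER_NV);
}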

For GeForce3-class hardware, it's a hack on reflect_cube_map. You can see how this is set up in:
http://cvs1.nvidia.com/DEMOS/OpenGL/src/bumpy_shiny_patch/

and http://cvs1.nvidia.com/MEDIA/programs/bumpy_shiny_patch/

See the “refract” vertex program…

Thanks -
Cass

Can't you simply use an offset-texture operation in NV_texture_shader?
NV_texture_shader2 is also supported by GeForce3.
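Something like this sketch, I mean (texture objects and the offset scale are placeholders, not anything from the demo):

#include <GL/glew.h>

// Sketch of the simple offset-texture ("EMBM-style") path: a signed ds/dt
// bump map bound at stage 0 perturbs the 2D lookup of stage 1.
void setupOffsetTexture2D(GLuint dsdtBumpTex, GLuint refractionTex2D)
{
    // Stage 0: signed ds/dt bump texture (e.g. internal format GL_DSDT_NV).
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, dsdtBumpTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    // Stage 1: offset this stage's (s,t) by the 2x2 matrix times (ds,dt),
    // then sample the 2D texture bound here.
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, refractionTex2D);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_OFFSET_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);
    const GLfloat offsetMat[4] = { 0.05f, 0.0f,    // perturbation strength,
                                   0.0f,  0.05f }; // tweak to taste
    glTexEnvfv(GL_TEXTURE_SHADER_NV, GL_OFFSET_TEXTURE_MATRIX_NV, offsetMat);

    glEnable(GL_TEXTURE_SHADER_NV);
}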

Inasmuch as they are both hacks, yes.

But bump-reflection into a cube map is a much more powerful mechanism than simple 2D texture coordinate bias.

It’s easier to get plausible results from cube map bump-reflection.

Thanks -
Cass

Thanks, cass! The refract shader helps a lot.

-----original shader:

We need the "texel matrix" to be (C)(R^t)(N)(R)(MV)(S)(B)(F),

But can you explain in more detail what R^t * N * R does, and how and why it works?
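As I understand it, the three texture coordinate sets the vertex program writes are just the rows of that texel matrix, so the dot-product cube-map stages end up computing M * n_tangent as the lookup vector. Something like this sketch (immediate mode only to show the data flow; names are mine):

#include <GL/glew.h>

// Sketch: feed the rows of a 3x3 texel matrix M as the (s,t,r) coordinates of
// the three dot-product stages; the cube-map lookup vector is then M times the
// tangent-space normal fetched in stage 0. The demo writes these from a vertex
// program instead of immediate mode.
void emitTexelMatrixRows(const float M[3][3])
{
    glMultiTexCoord3fvARB(GL_TEXTURE1_ARB, M[0]);  // row 0 -> stage 1 (s,t,r)
    glMultiTexCoord3fvARB(GL_TEXTURE2_ARB, M[1]);  // row 1 -> stage 2 (s,t,r)
    glMultiTexCoord3fvARB(GL_TEXTURE3_ARB, M[2]);  // row 2 -> stage 3 (s,t,r)
}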

Thanks

So I figure those matrices should be the only difference from bump reflection, right? Just a set of matrix multiplies so that normals are transformed into refraction vectors. I'm just curious to understand how it works.

Thanks

Hi tomb4,

It’s all coming back to me now…

I used “dot product cubemap”, not “dot product reflect cubemap”.

That whole mess of matrix concatenation is to take a per-fragment tangent-space normal, transform it into object space, then eye space, then “radial eye space” (where the eye vector is a standard basis vector), apply a non-uniform scale in the plane orthogonal to the eye vector, transform back into eye space, then on to cubemap space.
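A rough sketch of just the R^t * N * R piece, assuming column vectors and using the index-of-refraction ratio eta as the scale factor (the helper names and the exact scale are illustrative, not the demo's actual code):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/l, v.y/l, v.z/l };
}
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// 3x3 matrix, row-major: m[row][col]
struct Mat3 { float m[3][3]; };

static Mat3 mul(const Mat3 &a, const Mat3 &b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}
static Mat3 transpose(const Mat3 &a) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a.m[j][i];
    return r;
}

// R: rotation taking the eye vector onto the +z axis ("radial eye space").
// Its rows are an orthonormal basis whose third row is the eye vector.
static Mat3 radialEyeRotation(Vec3 eye) {
    Vec3 w = normalize(eye);
    // pick any vector not parallel to w to complete the basis
    Vec3 up = std::fabs(w.z) < 0.99f ? Vec3{0, 0, 1} : Vec3{0, 1, 0};
    Vec3 u = normalize(cross(up, w));
    Vec3 v = cross(w, u);
    return {{{ u.x, u.y, u.z },
             { v.x, v.y, v.z },
             { w.x, w.y, w.z }}};
}

// R^t * N * R: non-uniform scale in the plane orthogonal to the eye vector,
// expressed back in eye space.
static Mat3 refractionScale(Vec3 eye, float eta) {
    Mat3 R = radialEyeRotation(eye);
    Mat3 N = {{{ eta, 0, 0 },
               { 0, eta, 0 },
               { 0,   0, 1 }}};
    return mul(transpose(R), mul(N, R));
}

The full texel matrix then concatenates this block with the cubemap, modelview, and tangent-space basis matrices, per the (C)(R^t)(N)(R)(MV)(S)(B)(F) comment quoted earlier.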

Now I know why I didn’t write a whitepaper on it.

Does that make any sense?

Thanks -
Cass

Thanks for the reply, cass. But the main point is that I just don't see how the step you describe as
"then 'radial eye space' (where the eye vector is a standard basis vector), apply a non-uniform scale in the plane orthogonal to the eye vector, transform back into eye space",
which is exactly R^t * N * R, transforms the eye-space normal into the refraction vector (which still has to be transformed into cubemap space and looked up in the cube map).

I think there should be some math behind that, right? Is there an explanation?
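For reference, the exact per-fragment refraction vector from Snell's law is the usual refract() form below (just a sketch of what I would compare the R^t * N * R result against):

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Exact refraction of the incident direction I (pointing toward the surface)
// about the unit normal N, with index-of-refraction ratio eta.
// Returns the zero vector on total internal reflection.
static Vec3 refractExact(Vec3 I, Vec3 N, float eta)
{
    float cosi = dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f)
        return { 0.0f, 0.0f, 0.0f };
    float s = eta * cosi + std::sqrt(k);
    return { eta * I.x - s * N.x,
             eta * I.y - s * N.y,
             eta * I.z - s * N.z };
}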

Thanks


Hi
I finally came back to this question. Please bear with me; I just want to find out:

Yes, the way cass introduced it IS a hack, and in two respects it doesn't make sense to me:

  1. The radial eye space is calculated using the vector [0,0,1] rather than the actual eye vector, as it should have been?
  2. The three texture coordinates forming the texel matrix are written in the vertex shader and hence linearly interpolated across the polygon.

Can anyone suggest a more common and/or robust way to do this?

Thanks
