
View Full Version : how to render EMBM refraction



tomb4
10-07-2003, 01:17 AM
Hi

What I really mean is: with GeForce3-level hardware, how do you render environment-mapped bump-mapped (EMBM) refraction?

I know that with the texture shaders we can easily do bump-mapped reflection (GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV), but how can we do refraction? This seems to be fragment-level texture addressing, so I don't think it can be done with a vertex shader alone, right?
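For reference, here is a minimal sketch of how the GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV path is usually wired up with NV_texture_shader. The texture-unit assignments and the function and texture names are illustrative assumptions, not code from the thread, and the extension entry points are assumed to be loaded already. The (s,t,r) coordinates of units 1-3 must carry the rows of the tangent-space-to-cube-map-space matrix, and their q coordinates the eye vector, normally emitted by a vertex program.

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Bump-mapped reflection into a cube map on GeForce3-class hardware. */
void setup_bump_reflect(GLuint normal_map, GLuint cube_map)
{
    /* stage 0: fetch the tangent-space normal from the normal map */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, normal_map);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    /* stages 1-2: dot the fetched normal with the first two matrix rows */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glActiveTextureARB(GL_TEXTURE2_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    /* stage 3: third dot product, reflect the eye vector (taken from the
       q coords) about the rotated normal, then look up the cube map */
    glActiveTextureARB(GL_TEXTURE3_ARB);
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cube_map);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
              GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glEnable(GL_TEXTURE_SHADER_NV);
}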

cass
10-07-2003, 10:33 AM
For GeForce3-class hardware, it's a hack on reflect_cube_map. You can see how this is set up in:
http://cvs1.nvidia.com/DEMOS/OpenGL/src/bumpy_shiny_patch/

and http://cvs1.nvidia.com/MEDIA/programs/bumpy_shiny_patch/

See the "refract" vertex program...

Thanks -
Cass

tellaman
10-07-2003, 11:36 PM
Can't you simply use one of the texture-offset operations in NV_texture_shader?
NV_texture_shader2 is also supported by the GeForce3.
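For comparison, a minimal sketch of the offset-texture route with NV_texture_shader, assuming a signed (ds,dt) perturbation map and a flat 2D environment/refraction texture; the texture names and the 0.05 offset scale are made-up illustrative values, not taken from the thread.

/* 2D "EMBM-style" refraction: stage 1 offsets its own texcoords by a
   2x2 matrix times the (ds,dt) value fetched in stage 0, then samples
   a 2D environment texture at the perturbed location. */
void setup_offset_refraction(GLuint dsdt_map, GLuint env_2d)
{
    const GLfloat offset_mat[4] = { 0.05f, 0.0f,
                                    0.0f,  0.05f };

    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, dsdt_map);   /* e.g. GL_DSDT8_NV internal format */
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, env_2d);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_OFFSET_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);
    glTexEnvfv(GL_TEXTURE_SHADER_NV, GL_OFFSET_TEXTURE_MATRIX_NV, offset_mat);

    glEnable(GL_TEXTURE_SHADER_NV);
}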

cass
10-08-2003, 06:24 AM
Inasmuch as they are both hacks, yes. :)

But bump-reflection into a cube map is a much more powerful mechanism than simple 2D texture coordinate bias.

It's easier to get plausible results from cube map bump-reflection.

Thanks -
Cass

tomb4
10-10-2003, 06:20 AM
Thanks cass! The refract shader helps a lot

-----original shader:
# We need the "texel matrix" to
# be (C)(R^t)(N)(R)(MV)(S)(B)(F),

But can you explain in more detail what R^t * N * R is, and how and why it works?

Thanks

tomb4
10-10-2003, 06:27 AM
So I figure those matrices should be the only difference from bump reflection, right? Just a set of matrix multiplies so that the normals are transformed into refraction vectors. I'm just curious to know how it works. :)

Thanks

cass
10-10-2003, 09:03 AM
Hi tomb4,

It's all coming back to me now... :)

I used "dot product cubemap", not "dot product reflect cubemap".

That whole mess of matrix concatenation is to take a per-fragment tangent-space normal, transform it into object space, then eye space, then "radial eye space" (where the eye vector is a standard basis vector), apply a non-uniform scale in the plane orthogonal to the eye vector, transform back into eye space, then on to cubemap space.
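Written out, the chain cass describes, reading the texel-matrix comment in the same order, is roughly the following. The grouping of S, B and F as the tangent-to-object factors is inferred from his description rather than taken from the demo source, and the lookup presumably uses GL_DOT_PRODUCT_TEXTURE_CUBE_MAP_NV, the plain dot-product cube-map operation.

\[
  M_{\text{texel}}
    = \underbrace{C}_{\text{eye}\rightarrow\text{cubemap}}
      \;\underbrace{R^{\mathsf T} N R}_{\text{scale} \perp \text{eye, in eye space}}
      \;\underbrace{MV}_{\text{object}\rightarrow\text{eye}}
      \;\underbrace{S\,B\,F}_{\text{tangent}\rightarrow\text{object}},
  \qquad
  d = M_{\text{texel}}\, n_{\text{tangent}},
\]

where d is the direction used for the cube-map lookup. The three rows of M_texel are what the vertex program writes into the (s,t,r) coordinates of the three dot-product texture stages, so the per-fragment work reduces to three dot products against the normal-map value.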

Now I know why I didn't write a whitepaper on it. ;)

Does that make any sense?

Thanks -
Cass

tomb4
10-10-2003, 05:57 PM
Thanks for the reply, cass. The main point is that I just don't see how the step you describe as
"then 'radial eye space' (where the eye vector is a standard basis vector), apply a non-uniform scale in the plane orthogonal to the eye vector, transform back into eye space",
which is exactly R^t * N * R, turns the eye-space normal into a refraction vector (which then still has to be transformed into cubemap space and looked up in the cube map).
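For what it's worth, the R^t * N * R piece has a closed form. This is a general identity, not something from the demo, and it assumes the non-uniform scale has the form N = diag(a, a, 1): if v is the eye-space unit view vector that R rotates onto the z axis, then

\[
  R^{\mathsf T} N R \;=\; a\,I + (1-a)\,v\,v^{\mathsf T},
  \qquad\text{so}\qquad
  n' \;=\; a\,n_e + (1-a)\,(v\cdot n_e)\,v
\]

for an eye-space normal n_e. In other words, the component of the normal along the view direction is kept and the tangential part is scaled by a, with no need to build R explicitly. One possible reading of why this passes for refraction: with a < 1 the bump perturbation swings the cube-map lookup direction less than a true reflection would, which is roughly how a transmitted ray behaves when the index ratio is near 1. That is an interpretation, not something stated in the demo.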

I think there should be some math behind that, right? Is there an explanation?

Thanks

[This message has been edited by tomb4 (edited 10-11-2003).]

tomb4
10-17-2003, 12:41 AM
Hi
I finally came back to this question, but please forgive me, I just want to find out:

Yes, the approach cass described IS a hack, and in two respects it doesn't make sense to me:
1. the radial eye space is calculated using the vector [0,0,1] rather than the vector it should have been?
2. the three texture coords forming the texel matrix are computed in the vertex shader and hence only _linearly_ interpolated across the polygon (the exact refraction they approximate is noted below for reference).
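For reference, the exact per-fragment computation that these approximations stand in for is Snell-law refraction of the view vector:

\[
  t \;=\; \eta\, i + \left(\eta\,c_1 - c_2\right) n,
  \qquad
  c_1 = -\,n\cdot i,
  \qquad
  c_2 = \sqrt{\,1 - \eta^2\left(1 - c_1^2\right)\,},
\]

where i is the unit incident (view) direction, n is the unit surface normal on the incident side, and eta = n1/n2 is the ratio of refractive indices. Evaluating this per fragment with the bumped normal and looking the result up in the cube map is roughly what later fragment-program hardware lets you do directly; on GeForce3 it has to be approximated with the interpolated-matrix tricks above.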

Can anyone suggest a more common and/or robust way to do this?

Thanks

[This message has been edited by tomb4 (edited 10-17-2003).]