Once again: water…

Sorry if this might be a boring topic by now, but I have some questions…

My program renders water (or at least, something that should look like water). My questions are:

  • At the moment, the “waves” come from bump mapping; the water itself is a flat surface. Is this a reasonable approach at all, or is it b**sht? Does there have to be a height mesh of some kind?

  • How is the perturbed reflection done? At the moment, the scene is reflected flat, which makes it look like a plane of ice rather than water. What is the technique for doing this?

I am using a GF4, so I would be happy if this were possible using only ARB_texture_env_combine and the like rather than ARB_fragment_program, but if you convince me that it is unavoidable, I will be willing to buy a GeForce FX or a Radeon.

Jan

Environment-mapped bump mapping cannot be done using GL_ARB_texture_env_combine. On a GF4, you need to use GL_NV_texture_shader to do the dependent texture fetch. I believe there is a demo on Nvidia’s website. You can also find a little blurb about the technique in The OpenGL Extensions Guide, Section 6.3:

http://www.terathon.com/books/extensions.html

As for wave simulation, vertex perturbation looks a lot better because the geometry actually moves. The calculations are pretty fast – take a look at Chapter 13 of Mathematics for 3D Game Programming and Computer Graphics (Chapter 12 in the first edition):

http://www.terathon.com/books/mathgames2.html

The vertex fluid simulation can be combined with bump mapping to produce a great effect.
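For concreteness, here is a minimal C sketch of the kind of damped finite-difference wave-equation update such a vertex fluid simulation typically uses. This is my own illustration, not code from the book: the grid size, coefficient derivation, and all names are assumptions.

```c
/* Minimal sketch of a damped 2D wave-equation update over a grid of
   vertex heights (my own illustration, not code from the book). Two
   height buffers are ping-ponged: heightA holds time t, heightB holds
   time t-1 and is overwritten in place with time t+1. */

#define GRID 64

static float heightA[GRID][GRID];   /* heights at time t            */
static float heightB[GRID][GRID];   /* heights at time t-1, then t+1 */

/* c = wave speed, h = grid spacing, dt = time step (keep it small for
   stability), mu = viscous damping coefficient. */
void step_water(float c, float h, float dt, float mu)
{
    float f     = c * c * dt * dt / (h * h);
    float denom = mu * dt + 2.0f;
    float k1    = (4.0f - 8.0f * f) / denom;
    float k2    = (mu * dt - 2.0f) / denom;
    float k3    = 2.0f * f / denom;

    /* Update interior vertices; the boundary stays fixed. */
    for (int j = 1; j < GRID - 1; j++)
        for (int i = 1; i < GRID - 1; i++)
            heightB[j][i] = k1 * heightA[j][i]
                          + k2 * heightB[j][i]          /* old t-1 value */
                          + k3 * (heightA[j][i + 1] + heightA[j][i - 1]
                                + heightA[j + 1][i] + heightA[j - 1][i]);

    /* Swap so heightA again holds the current heights. */
    for (int j = 0; j < GRID; j++)
        for (int i = 0; i < GRID; i++) {
            float tmp     = heightA[j][i];
            heightA[j][i] = heightB[j][i];
            heightB[j][i] = tmp;
        }
}
```

Each frame you would copy the current heights into the mesh’s vertex heights and recompute the normals, which is where the combination with bump mapping comes in.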

[This message has been edited by Eric Lengyel (edited 01-02-2004).]

Eric, would you like to send me a signed copy of your book as a present? I would appreciate it, as I don’t get enough pocket money to buy one myself…

*:P don’t take this one seriously :P*

> As for wave simulation, vertex perturbation looks a lot better because the geometry actually moves. The calculations are pretty fast

But T&L’ing all of that geometry isn’t.

It all depends on your needs. If you’re down near the water frequently, then you may want to consider heavy tessellation and perturbing the actual vertices. If you’re not too close to it frequently, then you can get away with a purely bump-mapped effect.

Thanks… so I guess the actual size of the water plane is also a factor in whether to use only bump mapping or “real” geometry.

I will read the NV_texture_shader specs if that’s the way to go. I’ve heard about using environment mapping with a cube map to do it; any thoughts?

These books seem interesting, but I’m out of money at the moment (kind of a thread deadlock: I need the books to make the water, but I need the water to earn money to buy the books).

Is there an ARB-only way to do the perturbed reflection? I’d like my program to be able to run on ATI hardware as well.

Edit, a little later: I just read the NV_texture_shader spec. Well, it’s quite complicated. Do you have any more hints on HOW to do it with texture shaders? Which setup should I use?

Thanks,
Jan

[This message has been edited by JanHH (edited 01-03-2004).]

Here’s an excerpt from The OpenGL Extensions Guide.

Environment-mapped bump mapping is performed by calculating the reflection of the direction to the camera at each fragment and using the resulting vector to fetch a sample from a cube texture map containing an image of the environment. This environment cube map is typically stored in world space so that it can be generated without any dependency on a particular model’s local coordinate system.

The reflection vector calculation depends on the normal vector fetched from the bump map at each fragment. Since samples from the bump map are stored in tangent space, we must transform them into world space before the texture shader can determine the appropriate reflection vector. This is accomplished by calculating a 3x3 transformation matrix at each vertex that maps tangent space to world space and storing its rows in three sets of texture coordinates. Three dot products are evaluated in the texture shader stages to perform the matrix-vector multiply needed to transform the normal vector into world space for each fragment.

Let M be the 3x3 matrix that transforms 3D vectors from model space to world space. Typically, the normal vector N, the tangent vector T, and the bitangent vector B (where B = ±N x T) are available in model space at each vertex. The matrix W that transforms 3D vectors from tangent space to world space is given by

$$
W = M \begin{bmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{bmatrix}.
$$

The rows of the matrix W are stored in the s, t, and r texture coordinates for texture units 1, 2, and 3 for each vertex. As shown in Table 6.3.8, dot products are performed in these texture shader stages in order to transform the normal vector fetched in stage 0 into world space.
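As a concrete illustration, W could be computed per vertex as below; the row-major float[3][3] layout, the Vec3 type, and the function name are my assumptions, not code from the book.

```c
typedef struct { float x, y, z; } Vec3;

/* Build W = M [T B N] for one vertex. M is the model-to-world rotation
   (row-major); T, B, N are the model-space tangent, bitangent, normal. */
void build_w(const float M[3][3], Vec3 T, Vec3 B, Vec3 N, float W[3][3])
{
    /* The tangent-space basis written as columns, as in the formula above. */
    const float TBN[3][3] = {
        { T.x, B.x, N.x },
        { T.y, B.y, N.y },
        { T.z, B.z, N.z },
    };

    /* Standard 3x3 matrix product W = M * TBN. */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            W[i][j] = M[i][0] * TBN[0][j]
                    + M[i][1] * TBN[1][j]
                    + M[i][2] * TBN[2][j];
}
```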

When reflecting the direction to camera E (the eye vector) across the world-space normal vector at each fragment, we may choose to use a direction to camera calculated at each vertex or a constant direction to camera. The GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV texture shader operation assembles the x, y, and z coordinates of the 3D eye vector from the q texture coordinates corresponding to texture units 1, 2, and 3, respectively. These components of the eye vector must be calculated in world space at each vertex and are obtained by evaluating the equation

  E = M (C - V),

where C is the model-space camera position and V is the model-space vertex position. The vector E does not need to be normalized since its reflection is used to sample a cube texture map, for which vector length is irrelevant. As an alternative to calculating the eye vector at each vertex, a constant eye vector may be used by specifying the texture shader operation GL_DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP_NV for stage 3. In this case, the eye vector E is given by the value of GL_CONST_EYE_NV for texture unit 3.
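Putting the rows of W and the eye vector together, per-vertex submission might look like the following sketch (immediate mode for clarity; u, v, C, V, M, and the build_w helper above are placeholders of mine, not the book’s):

```c
float W[3][3];
build_w(M, T, B, N, W);                         /* tangent-to-world matrix */

Vec3 d = { C.x - V.x, C.y - V.y, C.z - V.z };   /* model-space C - V */
Vec3 E = {                                      /* E = M (C - V), world space */
    M[0][0] * d.x + M[0][1] * d.y + M[0][2] * d.z,
    M[1][0] * d.x + M[1][1] * d.y + M[1][2] * d.z,
    M[2][0] * d.x + M[2][1] * d.y + M[2][2] * d.z,
};

glMultiTexCoord2fARB(GL_TEXTURE0_ARB, u, v);    /* bump-map UVs */
glMultiTexCoord4fARB(GL_TEXTURE1_ARB, W[0][0], W[0][1], W[0][2], E.x);
glMultiTexCoord4fARB(GL_TEXTURE2_ARB, W[1][0], W[1][1], W[1][2], E.y);
glMultiTexCoord4fARB(GL_TEXTURE3_ARB, W[2][0], W[2][1], W[2][2], E.z);
glVertex3f(V.x, V.y, V.z);
```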

The world-space normal vector calculated in stages 1, 2, and 3 may optionally be used to fetch a sample from a diffuse cube texture map in stage 2. If the texture shader operation for stage 2 is GL_DOT_PRODUCT_DIFFUSE_CUBE_MAP_NV, then the transformed normal vector is used to access the cube map bound to texture unit 2. The result of the cube map fetches in stages 2 and 3 can be added in the texture environment or in register combiners. If no diffuse cube map fetch is desired, then the texture shader operation for stage 2 should be set to GL_DOT_PRODUCT_NV.

Table 6.3.8 Texture maps, texture shader operations, previous texture inputs, and texture coordinate values used to perform environment-mapped bump mapping. Stage 2 differs between two cases: (a) a diffuse lighting cube map is used, or (b) only the environment cube map is used. Stage 3 differs between two cases: (1) the eye vector is calculated at each vertex, or (2) the eye vector is constant. The matrix W is the 3x3 matrix that transforms normal vectors from tangent space to world space, and the vector E is the world-space direction to the camera.

| Stage | Texture map | Shader operation | Previous texture input | Texture coordinates |
|---|---|---|---|---|
| 0 | Bump map | GL_TEXTURE_2D | (none) | Ordinary 2D texture coordinates |
| 1 | None | GL_DOT_PRODUCT_NV | GL_TEXTURE0_ARB | (s,t,r) = row 1 of W; q = Ex |
| 2 | (a) Diffuse lighting cube map, (b) None | (a) GL_DOT_PRODUCT_DIFFUSE_CUBE_MAP_NV, (b) GL_DOT_PRODUCT_NV | GL_TEXTURE0_ARB | (s,t,r) = row 2 of W; q = Ey |
| 3 | Environment cube map | (1) GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV, (2) GL_DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP_NV | GL_TEXTURE0_ARB | (s,t,r) = row 3 of W; q = Ez |
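Translated into GL calls, the setup for variant (b)/(1), i.e. no diffuse cube map and a per-vertex eye vector, might look like the following sketch. The texture object names bumpMap and envCubeMap are placeholders; the enums come from GL_NV_texture_shader and GL_ARB_texture_cube_map.

```c
/* Stage 0: fetch the tangent-space normal from the bump map. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, bumpMap);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

/* Stage 1: dot product with row 1 of W (from this unit's texcoords). */
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

/* Stage 2: dot product with row 2 of W; variant (b), no diffuse cube map. */
glActiveTextureARB(GL_TEXTURE2_ARB);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

/* Stage 3: dot product with row 3 of W, reflect the per-vertex eye
   vector, and sample the environment cube map with the result. */
glActiveTextureARB(GL_TEXTURE3_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, envCubeMap);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
          GL_DOT_PRODUCT_REFLECT_CUBE_MAP_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

glEnable(GL_TEXTURE_SHADER_NV);
```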

[This message has been edited by Eric Lengyel (edited 01-03-2004).]

> Is there an ARB-only way to do the perturbed reflection? I’d like my program to be able to run on ATI hardware as well.

The only way to do it using ARB extensions is with GL_ARB_fragment_program, which requires GFFX+ or Radeon 9500+. On ATI hardware, GL_ATI_fragment_shader can also do the correct calculations, and it’s supported on Radeon 8x00 class hardware.
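For reference, here is a sketch of the same perturbed reflection written as a GL_ARB_fragment_program, reusing the texcoord layout from Eric’s excerpt (rows of W in texcoord sets 1 to 3, with E packed into their q/w components). The packing and the texture bindings are my assumptions, not a canonical implementation:

```
!!ARBfp1.0
TEMP N, Nw, E, R, d;
TEX N, fragment.texcoord[0], texture[0], 2D;   # tangent-space normal map
MAD N, N, 2.0, -1.0;                           # expand [0,1] to [-1,1]
DP3 Nw.x, fragment.texcoord[1], N;             # rotate normal into world space
DP3 Nw.y, fragment.texcoord[2], N;
DP3 Nw.z, fragment.texcoord[3], N;
MOV E.x, fragment.texcoord[1].w;               # unpack world-space eye vector
MOV E.y, fragment.texcoord[2].w;
MOV E.z, fragment.texcoord[3].w;
DP3 d, Nw, E;                                  # reflection R = 2 (N.E) N - E
MUL R, Nw, d;
MAD R, R, 2.0, -E;
TEX result.color, R, texture[3], CUBE;         # sample environment cube map
END
```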

Thanks, although it will take me some time to read, understand, and implement all of it. Maybe I really should buy your book…

Jan