water question

Hello,

I am planning to do water (ocean-sized, not a pond) simply as a dark blue bump-mapped surface (with specular shininess, of course), using the wave surface shape as the bump map (no reflections, environment mapping, etc.). This sounds quite easy, so I wonder what you guys (and girls) think about it. Is there a chance of getting a good result?

Thanks
Jan

Unfortunately, I think your water will come out looking like plastic if you just make it a bump-mapped surface with specular.

IMHO, the single most important optical phenomenon in water is the Fresnel reflection. You have to have at least that to get it to look right. If doing the actual reflection is too much for some reason, I bet it would still look good just using the Fresnel term to interpolate between two colors (an approximate sky color and an upwelling water color).
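
For illustration, a minimal sketch of that two-color Fresnel blend using Schlick's approximation (assuming an air-to-water F0 of about 0.02; the function names and color arrays here are placeholders, not anything from the thread):

```cpp
// Schlick's approximation of the Fresnel reflectance at an air/water
// interface. cosTheta is the dot product between the surface normal
// and the normalized view vector (both unit length).
float fresnelSchlick(float cosTheta)
{
    const float F0 = 0.02f;  // ((1 - 1.33) / (1 + 1.33))^2 for water
    float f = 1.0f - cosTheta;
    return F0 + (1.0f - F0) * f * f * f * f * f;
}

// Blend an approximate sky color and an upwelling water color per vertex.
void waterVertexColor(const float n[3], const float v[3],
                      const float skyColor[3], const float waterColor[3],
                      float out[3])
{
    float cosTheta = n[0] * v[0] + n[1] * v[1] + n[2] * v[2];
    if (cosTheta < 0.0f) cosTheta = 0.0f;
    float F = fresnelSchlick(cosTheta);
    for (int i = 0; i < 3; ++i)
        out[i] = waterColor[i] + F * (skyColor[i] - waterColor[i]);
}
```

Grazing angles (cosTheta near zero) then pick up mostly sky color, while looking straight down shows mostly the water color.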

I already thought about “might it look too much like plastic?”, but somehow I have the impression that “ocean” water does in fact look a bit like plastic (for example http://www.cs.utah.edu/~michael/water/deepWater1.jpg), and at least it would be a starting point. I do not know anything about water rendering, have never done bump mapping (and of course don’t know what the Fresnel equation is *g*). Isn’t it quite hard to do the “actual reflection”? How would that be done anyway, environment mapping? What is the way to do cool water at the moment?

Sorry for my lack of knowledge and thanks,
Jan

I don’t think the water in this screenshot looks like plastic…
However, if you need more info about water rendering, try this:
http://www.gamasutra.com/gdce/2001/jensen/jensen_pfv.htm
(an article called “Deep-Water Animation and Rendering”)

This guy has a really cool ocean demo based on that Gamasutra article and a few others. Here’s the link: http://meshuggah.4fo.de/ He uses vertex programs and register combiners for all of it.

-SirKnight

Thanks :slight_smile: In the Gamasutra article, the author also talks about rendering the wave surface to a bump map. So I guess I could at least start with this? Or is this a totally useless effort that I will have to rewrite all of later?

Jan

For reference, you can see here what I made with Fresnel, bump mapping, cube-map reflection, and cube-map diffuse lighting: http://dev.succubus.fr/Shot1.JPG

All this runs in a heavy pair of vertex/fragment programs.

SeskaPeel.

You probably want to incorporate the Fresnel effect in some way, even if it’s only done at the vertex level.

And to continue on the theme of this thread, this is what my current water implementation looks like at the moment:
http://claes.galaxen.net/ex/images/demo_october2.jpg

There is a dx9-demo (nv20+) as well if you look around, but it might be a little weird atm as it is a work in progress…

I’m using a Fresnel texture lookup and a cubemap, and all calculations are done in linear colour space. The geometry is based on a Perlin-noise derivative and is generated on the CPU with a projected-mesh concept that I’m developing.

[This message has been edited by vember (edited 10-23-2003).]

Well, geez, if everyone else is going to post screenshots, I better get mine in too:

http://www.sciencemeetsart.com/wade/Projects/WaterFX2/

Thanks for all your replies. Zeno, your screenshots show about what I’m trying to achieve. I’m still naive enough to believe that simple bump mapping (with specular, of course), with the bump map constructed from the water surface geometry, should look at least a little bit like that. Am I so wrong?

Jan

Only one way to find out, right?

I don’t think it’ll look horrible, it just won’t look like a liquid. Look at the subtle coloration of the water in this shot: http://www.sciencemeetsart.com/wade/Projects/WaterFX2/WaterFX2_03.jpg . Notice how you see more of the water color nearby and more of the reflected sky color at a distance? If you use just a diffusely lit bump-mapped surface, your shading will look the same everywhere. This issue will become more apparent when the surface is animated.

You have to decide what is right for your project depending on how good you want the end result to be and what hardware you’re running on.

The problem is that the most advanced OpenGL feature I have used until now is multitexturing. I am totally new to bump mapping, and good water seems to be a lot more complex than that, so I have to start with something… and I guess this is a good place to start.

Simple specular bump is a fine place to start.

Btw: nice cresting effect in that WaterFX image (#3). Cresting is notoriously hard.

Another simple question: assuming that the vertex normals of my water plane and the light vector are in the same coordinate space (and they are, world space, with no transformations or rotations), and I create a normal map which is also in that coordinate system (x pointing to the right, y pointing up, z pointing backwards along the plane), I do not necessarily need the tangent-space stuff!?

thanks
Jan
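
For reference: if the normal map, the plane and the light vector all share the same world space, no per-vertex tangent basis is needed. A minimal sketch of baking such a world-space normal map (packNormal is a purely illustrative name):

```cpp
// Pack a unit-length world-space normal into an RGB texel with the usual
// [-1,1] -> [0,1] bias; the combiner (or shader) later undoes the bias
// with n = rgb * 2 - 1 before forming the dot product with the light vector.
void packNormal(float nx, float ny, float nz, unsigned char rgb[3])
{
    rgb[0] = (unsigned char)((nx * 0.5f + 0.5f) * 255.0f);
    rgb[1] = (unsigned char)((ny * 0.5f + 0.5f) * 255.0f);
    rgb[2] = (unsigned char)((nz * 0.5f + 0.5f) * 255.0f);
}
```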

And one more question (sorry that this is in the advanced forum, I guess it is rather beginner level): if I got it right, I load the first texture unit with the normalization cube map (with the light vector as texture coordinates) and the second one with the normal map (with “standard” texture coordinates), and set the texture environment mode for the first texture unit to GL_COMBINE and the combine operation for the second one to GL_DOT3_RGB_ARB. This will do a dot product of the two “colors” resulting from the textures. And then I would like to load the material texture (the “image” texture, not a normal map) into the third texture unit and modulate it with the dot product resulting from the preceding operation. But how is this done?

thanks a lot,
Jan
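
One possible way to wire that up, as a hedged sketch only (assuming OpenGL 1.3 core tokens; on older headers the equivalent _ARB/_EXT suffixed names from ARB_texture_env_combine and ARB_texture_env_dot3 apply, and the three texture ids are placeholders):

```cpp
#include <GL/gl.h>
#include <GL/glext.h>  // combine/dot3 tokens on pre-1.3 headers

// Unit 0: normalization cube map, light vector supplied as texcoords.
//         GL_REPLACE simply passes the normalized vector to the next unit.
// Unit 1: normal map, dotted against the previous unit's result (DOT3).
// Unit 2: material ("image") texture, modulated by that N.L value.
void setupDot3Bump(GLuint normalizationCubeMap, GLuint normalMap, GLuint materialTex)
{
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, normalizationCubeMap);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, normalMap);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

    glActiveTexture(GL_TEXTURE2);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, materialTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);  // the N.L grey
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
}
```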

I am coming more and more to think that all bump mapping efforts are useless if I do not fully understand how the texture combiners work (GL_COMBINE_EXT, not register combiners, that is still too advanced for me *g*). But I cannot find any real documentation… please, someone help, I am getting desperate.

Another thing that puzzles me: when computing a dot product between two vectors, I expect to get a single number as the result. But what happens in OpenGL? Do I get a color value? If so, is it a shade of grey?

thanks
Jan

Originally posted by JanHH:
Another thing that puzzles me: when computing a dot product between two vectors, I expect to get a single number as the result. But what happens in OpenGL? Do I get a color value? If so, is it a shade of grey?

thanks
Jan

The result of the dot product is put in the rgb(a) components of the output. This means all components have the same value, which is a shade of grey.
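
In other words (per the definition of the DOT3 combine operation; the struct and function names below are just illustrative):

```cpp
struct RGB { float r, g, b; };

// What GL_DOT3_RGB computes per fragment from the two biased texel colors
// L (normalization cube map) and N (normal map), both stored in [0,1].
// The clamped result is written to r, g and b alike, hence the grey shade.
float dot3Combine(RGB L, RGB N)
{
    float d = 4.0f * ((L.r - 0.5f) * (N.r - 0.5f)
                    + (L.g - 0.5f) * (N.g - 0.5f)
                    + (L.b - 0.5f) * (N.b - 0.5f));
    if (d < 0.0f) d = 0.0f;
    if (d > 1.0f) d = 1.0f;
    return d;  // brightness is proportional to the geometric N.L
}
```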

Originally posted by vember:
You probably want to incorporate the Fresnel effect in some way, even if it’s only done at the vertex level.

And to continue on the theme of this thread, this is what my current water implementation looks like at the moment:
http://claes.galaxen.net/ex/images/demo_october2.jpg

There is a dx9-demo (nv20+) as well if you look around, but it might be a little weird atm as it is a work in progress…

I’m using a Fresnel texture lookup and a cubemap, and all calculations are done in linear colour space. The geometry is based on a Perlin-noise derivative and is generated on the CPU with a projected-mesh concept that I’m developing.

[This message has been edited by vember (edited 10-23-2003).]

vember, I’ve had a look at your projected-grid stuff - demo and papers. I have to say it looks very impressive - makes every other height-map LOD scheme redundant as far as I can make out. It’s a similar approach to one I’ve been considering for a while - sort of rasterizing a height map as if it were a polygon in screen space with perspective-correct texture mapping concepts thrown in for good measure.
But having read your papers, I’m still confused as to what your concept actually is exactly. Could you try to explain in a few sentences exactly what you’re doing?

Originally posted by knackered:
vember, I’ve had a look at your projected-grid stuff - demo and papers. I have to say it looks very impressive - makes every other height-map LOD scheme redundant as far as I can make out. It’s a similar approach to one I’ve been considering for a while - sort of rasterizing a height map as if it were a polygon in screen space with perspective-correct texture mapping concepts thrown in for good measure.
But having read your papers, I’m still confused as to what your concept actually is exactly. Could you try to explain in a few sentences exactly what you’re doing?

Sure, I’ll try to be clearer (and will update the papers at a later point). When you’ve had your head around something for a while, it seems much more obvious than it actually is…

The basic idea is really simple. I wanted a grid that looks just like a uniform grid on screen, but I wanted it in world space so I could displace it. This gives something like continuous LOD by retaining the perspective properties of the camera transform.

To do this I project a uniform grid with the inverse view-projection matrix of the camera to get its vertices’ coordinates in world space. Seen from the outside, it looks exactly like a regular projector, but casting geometry instead of light. In fact, it was designed so that all of it could be done by a rasterizer and stored in a vertex buffer.

The only problem is that this works as it should only if no pixel of the camera points in a direction facing away from the plane and no displacement is used. (If aimed away from the plane, the vertices will backfire through the camera and the geometry will break up.)

This is where all the real work comes in. I solve it by creating a projector that I can aim (by setting its direction and the x/y range of the originating grid in screen space) and move, so that it covers the intersection of the camera’s frustum and the displaced surface volume. It is supposed to do this in a way that avoids discontinuities and distributes the geometry sensibly. I’m still working on this, but the current implementation works fairly well (especially if you set the levitation parameter to around 3, which keeps the projector from coming too close to the plane).

A note about the implementation: the wave generation and projection are done in software at the moment (the hardware path is broken). It was designed to be 100% hardware though, and the thing holding it back is that current APIs cannot use a vertex buffer as a render target, which forces AGP read-backs and ruins performance completely because it stalls the pipeline… I wouldn’t mind if überbuffers were around by now :stuck_out_tongue:

Don’t hesitate to ask if you’re wondering about something…

-claes

[This message has been edited by vember (edited 10-24-2003).]

Oh, and by the way, I forgot to mention that I do the projection onto the plane by setting up the equation of a line that goes through a point on the view plane (as in ray tracing) and using the plane equation to see where it intersects in world space.
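
To make that concrete, a rough sketch of the unproject-and-intersect step described above, under the assumptions of a y = 0 water plane and a column-major 4x4 inverse view-projection matrix (the function and variable names are illustrative, not taken from the demo):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Unproject a point given in normalized device coordinates (sx, sy, z all
// in [-1, 1]) through a column-major inverse view-projection matrix.
Vec3 unproject(const float invViewProj[16], float sx, float sy, float z)
{
    float in[4]  = { sx, sy, z, 1.0f };
    float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += invViewProj[col * 4 + row] * in[col];
    return Vec3{ out[0] / out[3], out[1] / out[3], out[2] / out[3] };
}

// For one point of the screen-space grid: build the world-space ray through
// the near and far plane and intersect it with the y = 0 base plane.
// Returns false when the ray points away from the plane (the "backfiring" case).
bool gridPointOnPlane(const float invViewProj[16], float sx, float sy, Vec3& hit)
{
    Vec3 a = unproject(invViewProj, sx, sy, -1.0f);  // on the near plane
    Vec3 b = unproject(invViewProj, sx, sy,  1.0f);  // on the far plane
    float dy = b.y - a.y;
    if (std::fabs(dy) < 1e-6f) return false;         // ray parallel to the plane
    float t = -a.y / dy;
    if (t < 0.0f) return false;                      // intersection behind the camera
    hit = Vec3{ a.x + t * (b.x - a.x), 0.0f, a.z + t * (b.z - a.z) };
    return true;
}
```

The resulting world-space points are what then get displaced by the wave function; the aimable projector described above is what keeps all the rays hitting the plane in front of the camera.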