better bump mapping



mogumbo
01-11-2004, 10:59 PM
Has anyone ever tried this? It's probably already been invented, but I've never seen it before so I thought I'd program it. It's a little better than plain bump mapping, but not as accurate as displacement mapping.

It only works well with low-frequency bumps. Sharp angles cause some annoying artifacts.

screenshot (http://www.reallyslick.com/pictures/offsetmapping.jpg)
Linux demo (http://www.reallyslick.com/src/offsetmap.tar.gz) (requires GLUT, ARB_vertex_program, and ARB_fragment_program.)

[This message has been edited by mogumbo (edited 01-12-2004).]

JustHanging
01-11-2004, 11:49 PM
The shot looks very nice, perhaps you could do something about those silhouettes to get them right too?

So how do you do it?

-Ilkka

ZbuffeR
01-12-2004, 01:38 AM
Really nice!

Does it work only with cylinders or similarly simple geometry?

Jan
01-12-2004, 02:07 AM
IMPRESSIVE !!!

Could you tell us how you did that?
And could you post a screenshot of the non-bump-mapped image? I'd like to see the real difference.

Jan.

Asshen
01-12-2004, 02:26 AM
The source seems to be included; why not check it yourself?

K.

Relic
01-12-2004, 03:54 AM
Looks nice and (I didn't look at the source) reminds me of this: http://research.microsoft.com/~cohen/bs.pdf

Check out the 2003 SIGGRAPH proceedings for a correct silhouette: "View-Dependent Displacement Mapping" by Lifeng Wang.
Found a link on http://www.cs.brown.edu/~tor/sig2003.html

mogumbo
01-12-2004, 07:54 AM
That view-dependent displacement mapping is really cool, and way more complicated than what I'm doing. Maybe I'll tackle that later on.

I guess I could have been more descriptive when I posted this. I just figured the demo said more than I could. The meat of it is in these three lines of the fragment program:

TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

All I do is get the height of the surface from a 1-channel texture. Then I scale it down. Then I multiply it by a tangent space vector pointing to the eyepoint and add that to the main texture coordinate. The new texture coordinate is used to index the regular color texture and the normal map for bump mapping.

It should work on any geometry. In the demo I applied it to a flat surface and a curved surface just to see how it looks.

Ysaneya
01-12-2004, 08:43 AM
I have to agree it looks very cool, and also sounds quite simple to implement, and fast.

What kind of artifacts do you get with sharp angles? Can you post a screenshot?

No Linux machine with ARB_fragment_program at my disposal right now; i might try to port it to Win32 if i have time tonight.

Does it look correct when the camera is moving too?

Y.

mogumbo
01-12-2004, 09:24 AM
The sharp angle artifacts look like this (http://www.reallyslick.com/pictures/artifact.jpg) . As you can see in the lower-left corner of this image, though, low-frequency bumps still look pretty good. You need something like VDM to do this accurately. This is more like poor-man's displacement mapping :)

It works alright at any camera angle because it is computed with a vector pointing to the eye. I should have mentioned in my previous post that the tangent-space eye vector is calculated in a vertex program, similar to the way you compute a tangent-space light vector for bump mapping.
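
For the curious, here is a minimal sketch of what such a vertex program could look like (my own illustration, not the demo's actual code; the texcoord slots carrying the tangent and binormal, and the local parameter holding the object-space eye position, are assumptions):

!!ARBvp1.0
# assumed inputs: tangent in texcoord 1, binormal in texcoord 2,
# object-space eye position in program.local[0]
PARAM mvp[4] = { state.matrix.mvp };
PARAM eyepos = program.local[0];
TEMP toeye;
# vector from the vertex towards the eye, in object space
SUB toeye, eyepos, vertex.position;
# rotate it into tangent space with three dot products
DP3 result.texcoord[1].x, toeye, vertex.texcoord[1];
DP3 result.texcoord[1].y, toeye, vertex.texcoord[2];
DP3 result.texcoord[1].z, toeye, vertex.normal;
# pass the base texcoord through and transform the position
MOV result.texcoord[0], vertex.texcoord[0];
DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;
END

The interpolated vector in texcoord[1] would then be normalized in the fragment program before being used as "eyevects".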

ml
01-12-2004, 09:31 AM
I compiled it for windows, let me know if you want it removed from my webspace.
http://www.bostream.nu/tunah/offsetmap.zip

It looks sweet!

Ysaneya
01-12-2004, 10:41 AM
Wow, just wow. Indeed i can't find any good reason to *not* use this technique at a bigger scale (read: for all the materials in a game, for example).

It makes a world of difference when zooming close to the surface. Ex.: http://www.fl-tw.com/opengl/screen1.jpg (normal bump mapping) http://www.fl-tw.com/opengl/screen2.jpg (offset bump mapping)

The sharp-angle artifacts aren't too bad compared to the small price you pay for the increase in quality..

Y.

mogumbo
01-12-2004, 10:52 AM
I don't know. Those sharp angle artifacts still bug me, but it works well for certain types of textures. Organic things with lots of curves seem to look fine with this technique. Now I want to try to implement the VDM method that Relic posted, but that will take some time and be way more computationally expensive.

Thanks for the Windows version, ml.

Gorg
01-12-2004, 11:38 AM
I do not have a complete understanding of tangent space, so I might be talking complete crap, but I assume that we can compute the tangent-space vertex value (the vertex coordinate in tangent space).

If you interpolate that for the fragment program, then you could add the height and calculate the tangent-space eye vector from the tip of the bump instead of the polygon surface, and maybe that would help with the artifacts.

That's of course assuming it's possible to get a vertex position in tangent space.

mogumbo
01-12-2004, 11:53 AM
I believe, by the definition of tangent space, the z value of any vertex in tangent space is zero. However, it might be possible to modify the eye vector in the fragment program before computing the texture offset. I think you would have to pass information to the fragment program to tell it the scale of the texture or something like that. Rrrg. That makes my brain hurt :P

JustHanging
01-12-2004, 12:09 PM
Is the parallax effect correct, in the sense that if you mix your bumps with real geometry, it won't look funny?

-Ilkka

mogumbo
01-12-2004, 01:10 PM
It's not perfect, but you could adjust the scale and bias to make it look right for different surfaces. After all, computer graphics is just a bunch of approximations.

SeskaPeel
01-12-2004, 01:30 PM
mogumbo :
very nice effect, and so easy to implement. Indeed a very clever idea to express the eye vector in tangent space.

Could you explain how you got those 0.04 and -0.02 for the scale/bias?

Maybe it would be possible to derive them mathematically for each pixel, which would reduce the artifacts you get on near-vertical displacements?

I'm going to add this to my island scene to see if it looks better. If it does, I'll post screenshots.

Thanks,
SeskaPeel.

davepermen
01-12-2004, 01:45 PM
It's really nice looking. I just had to get the correct glut32.dll first.

I think this is just about the perfect algo for 2d games in a 3d world. I'm thinking of Duke Nukem: Manhattan Project style games. It would add a lot of depth, i guess.

mogumbo
01-12-2004, 01:51 PM
The scale and bias are just estimates, and if you used this fragment program for a wide variety of textures you would probably want to pass the scale and bias as a local parameter.

The correct value for the scale would be the texture's physical height divided by width. For example, a texture of sand on a beach might cover 5x5 meters while the dips and ridges in the sand rise .2 meters. The correct scale would be .2 / 5 = .04.

Setting the bias to 'scale' * -0.5 puts the reference plane right at the median height of the texture.
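
As a concrete sketch of the local-parameter idea (my own, not the demo's code; the parameter slot is an assumption), the hard-coded constants become:

# program.local[0] = (scale, bias, 0, 0); for the beach sand above
# that would be (0.04, -0.02), i.e. bias = scale * -0.5
PARAM scalebias = program.local[0];
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, scalebias.x, scalebias.y;
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

The app would then set the parameter per material with glProgramLocalParameter4fARB.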

sqrt[-1]
01-12-2004, 03:33 PM
Real nice effect. I notice in the fragment program you do

# remove scale and bias from the normal map
SUB normal, normal, 0.5;
MUL normal, normal, 2.0;

Any reason you cannot do this instead?
MAD normal, normal, 2.0, -1.0;

mogumbo
01-12-2004, 04:06 PM
Good call. I must have been coding too fast again :)

ZbuffeR
01-12-2004, 04:37 PM
This is the single best real-time effect I have ever seen!
(on my list there were realtime generalized shadows, dot3 bump mapping, and RTT-based tricks)

I am really disappointed, though, not to be able to run it, as I only have a GF3. Any chance of translating this to register_combiners_2 (am i naive...)? Or is there a software emulation for fragment programs?

SirKnight
01-12-2004, 05:30 PM
For software you can see if Mesa has ARB_fp support yet and use that. I think this could also be done using register combiners on an NV2x-level chip. It would probably take a few passes, but really the only thing different I see here that wasn't already done on NV2x-level hardware is those three instructions mentioned earlier in this thread that compute the "newtexcoord" variable.

Who knows, maybe I'll just make a geforce 3/4 version. ;)


-SirKnight

Zeno
01-12-2004, 08:18 PM
Wow, huge image quality difference when close-up. Nice work and thanks a bunch for sharing :)

davepermen
01-13-2004, 01:05 AM
Originally posted by SirKnight:
Who knows, maybe I'll just make a geforce 3/4 version.

not possible, as far as i remember the capabilities of texshaders..

radeon8500+ should be possible (fragmentshader), though..

marcus256
01-13-2004, 01:47 AM
Extremely impressive!

I especially liked how the rock wall (second texture) looked with offset mapping only (no bump mapping). I think people get all too excited and put exaggerated specular bump mapping all over the place, including on materials such as wood etc, just to give it more depth, without realizing that it looks very unrealistic.

Your offset mapping really solves this problem, making it possible to use only moderate bump mapping, or none at all, and get very, VERY convincing results.


[This message has been edited by marcus256 (edited 01-13-2004).]

Pentagram
01-13-2004, 03:38 AM
Anyone got a working glut32.dll? The one I have (3.7 from SGI) gives an error that it can't find "glutInitWithExit".
The screenshots certainly look promising!
The screenshots certainly look promising!

Charles

marcus256
01-13-2004, 04:21 AM
I tried the one at Nate's GLUT for Win32 page, and it worked:
http://www.xmission.com/~nate/glut/glut-3.7.6-bin.zip

Humus
01-13-2004, 07:13 AM
Looks awesome :)
Gotta try this out.

davepermen
01-13-2004, 07:48 AM
yeah, i already talked with a friend about the possibility of adding this to your shadowed bumpmaps, humus. together, that could look reeeeeally nice. if the math is done correctly, it could get verrrrrrryy neat :D

krychek
01-13-2004, 08:49 AM
Looks real nice! This method requires one additional texture (the height map) when compared to ordinary normal mapping, right?

This method should be great for non-specular surfaces, because the depth can be perceived from viewpoint changes alone, unlike traditional normal mapping where movement of the light source is required for a good enough perception of the depth.

Zeno
01-13-2004, 09:20 AM
Originally posted by krychek:
Looks real nice! This method requires one additional texture (the height map) when compared to ordinary normal mapping, right?

Technically, yes, but you can put the height in the alpha channel of the normal map and get the value out with the same lookup. I think RGB textures are stored internally as RGBA anyway, so if that's true you wouldn't be wasting any memory at all either.

mogumbo
01-13-2004, 09:37 AM
That actually won't work quite right. The height map and normal map are indexed by different texture coordinates, so they require separate lookups.

knackered
01-13-2004, 10:06 AM
I must break my silence just to say... jesus christ, that is a fantastic improvement over simple modulating bump-mapping! It's truly stunning, but I'm just waiting for dorbie to point to a 1987 paper describing the technique.

Won
01-13-2004, 10:38 AM
Wow, Mo. I love graphics: something can be beautiful both in result and implementation.

Your bias/offset values seem to be a local linearization of the surface. I wonder if you could extend this technique to take into account local curvature. Instead of using scale/bias and a per-texel height, store the coefficients of a per-texel polynomial that represents the local shape of the surface. In fact, you could probably derive the normal map from this as well.

-Won

mogumbo
01-13-2004, 10:51 AM
knackered: Yeah, I'm just waiting for that too ;)

Won: I'm not completely sure what you're getting at, but it sounds a lot like the VDM paper that Relic posted near the top of this thread.

Won
01-13-2004, 11:10 AM
Scanning that paper in my copy of the proceedings...you're right! It appears to be the same idea as VDM. Which might mean that this approach is a subset of VDM where curvature is uniformly 0 and without the silhouette mapping stuff.

Still, an awesome effect.

-Won

Edit:

On further inspection, I like Mogumbo's technique a lot more. VDM requires a stupid amount of texture memory.

[This message has been edited by Won (edited 01-13-2004).]

Humus
01-13-2004, 01:09 PM
Originally posted by davepermen:
yeah, i already talked with a friend about the possibility of adding this to your shadowed bumpmaps, humus. together, that could look reeeeeally nice. if the math is done correctly, it could get verrrrrrryy neat :D

Well, I had the exact same thought. I've tried this technique out now, and it works very well. As pointed out, though, it does require fairly smooth bumpmaps to look good. I will make a demo which uses both this technique and self-shadowing bumpmapping, or alternatively upgrade the self shadow demo to use this technique too.

mogumbo
01-13-2004, 01:25 PM
You can probably get away with using any old bump map. I think it's the height map that needs to be smooth for it to look good. Can't wait to see what you can do with it, though.

davepermen
01-13-2004, 01:37 PM
Originally posted by Humus:
Well, I had the exact same thought. I've tried this technique out now, and it works very well. As pointed out though, it does require fairly smooth bumpmaps to look good. I will make a demo though which uses both this technique and self shadowing bumpmapping, alternatively upgrade the self shadow demo to use this technique too.

i just looked through your demos .. and dreamed of the addon..

and i stopped at one..

the water demo. please, oh please, add it to the water demo! it looks so brilliant, it is so smooth, it would simply make it look perfect.

Pentagram
01-13-2004, 02:12 PM
Thanks for the working glut marcus256.

Yeah it looks really cool for such a simple trick. It looks quite nice without bumpmapping even.
As soon as I get heightmaps from the artists I will try to add it to my engine :)

Charles

Pentagram
01-13-2004, 02:14 PM
Originally posted by Pentagram:
Thanks for the working glut marcus256.

Yeah it looks really cool for such a simple trick. It looks quite nice without bumpmapping even.
As soon as I get heightmaps from the artists I will try to add it to my engine :)

I was wondering, maybe you could use this together with z-correct bumpmapping (the hack nvidia presented a few years ago) to get an even better displacement mapping approximation. (It just shifts the depth of the pixel, so it still doesn't have a silhouette, but it works for intersecting geometry...)

Charles


Eh, I wanted to edit my message... but well, wrong button :D


[This message has been edited by Pentagram (edited 01-13-2004).]

davepermen
01-13-2004, 02:16 PM
yeah, if you displace only "inwards", this should work like a charm together.. or so

davepermen
01-13-2004, 08:26 PM
hm. If you implement a real 3d renderer (with the 3d glasses), and your math is correct, all those bumps would look real 3d.

Fun thought, really fun.

SirKnight
01-13-2004, 10:24 PM
Hmmmm. You know dave, you just gave me an idea. :D I do have some 3d glasses that I got when I bought my geforce 4 when it came out, so maybe I could make a "3d glasses" version too. :) Sounds like fun!


-SirKnight

[This message has been edited by SirKnight (edited 01-13-2004).]

Zengar
01-14-2004, 01:20 AM
It could probably be implemented with the help of DSDT textures in texture shaders. I've never done anything with them anyway.

davepermen
01-14-2004, 01:52 AM
hm right. hm.. no.. you can not displace along a real 3d vector, at least.. the full shader itself doesn't fit into textureshaders.. but you can of course try to find as good a fit as possible..

SeskaPeel
01-14-2004, 03:28 AM
You can still pack the height map in the alpha channel of the normal map and fetch it two times, with two different texcoords. I tried fetching either the normal or the diffuse map with the non-offset texcoords, and it gives a bad visual result. So you really need to offset the texcoords for both diffuse and bump.
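
A minimal sketch of that two-fetch arrangement (my own illustration, reusing the demo's register names and scale/bias, and assuming the normal map with height packed in alpha is bound to texture[1]):

# first fetch: only the alpha channel (height) is used
TEX packed, fragment.texcoord[0], texture[1], 2D;
MAD height, packed.a, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];
# second fetch of the same texture at the offset coordinate: rgb = normal
TEX normal, newtexcoord, texture[1], 2D;
MAD normal, normal, 2.0, -1.0; # unpack to [-1,1]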

Still, I'm confused by the scale and bias. It doesn't sound mathematically correct to me. It must come from the fact that I can't picture it correctly, but still, if someone wishes to explain this stuff analytically, ...

SeskaPeel.

mikiex
01-14-2004, 03:36 AM
reminds me of
http://www.cs.sunysb.edu/~oliveira/pubs/RTM_low_res.pdf

Zengar
01-14-2004, 04:06 AM
Originally posted by davepermen:
you can not at least displace along a real 3d vector..

Oh... I forgot about that MAD instruction... :rolleyes:
I guess I haven't got the slightest idea why and how that algorithm works.

Won
01-14-2004, 05:01 AM
I think I understand it. Maybe I can help myself (and others) understand it better if I try to explain it.

I assume you know how the normal mapping works. The question is: how is the offset computed?

1) for each vertex, transform the eye vector into tangent space so that 'z' represents the normal component

now for each fragment:

2) normalize the tangent-space eye vector. call it ~v

3) fetch the height from the texture, which represents the displacement of the surface. Scale and bias are used to map the fetched values of 0-1 to some range, such as -.02 to .02. You wouldn't need to do this with float textures, but you probably don't need the precision. Call the final height h.

4) to compute the offset, you have to find the intersection of a ray with direction ~v and height h with a plane at height 0 (representing the texture surface). Because the x and y components of ~v represent the tangent plane coordinates, and 2D texture fetches ignore the z component, all you need to do is scale ~v by h.
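
In symbols (my own summary of the steps above), with $s$ and $b$ the scale and bias, $t$ the fetched height texel, and $\hat v$ the normalized tangent-space eye vector:

$h = s\,t + b, \qquad (u', v') = (u, v) + h\,(\hat v_x, \hat v_y)$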

I think that's it? Any corrections/clarifications?

-Won

Won
01-14-2004, 05:31 AM
You can probably get away with making the height map resolution <= the diffuse map's <= the normal map's. The first two might even be good candidates for texture compression.

It seems that nowadays the apparent shapes of graphics objects are encoded in more and more sophisticated ways. The VDM paper alludes to this by referring to macro/meso/micro-scale features, each of which occur at different frequencies.

Macro stuff is per-vertex or larger features, such as traditional geometry. Meso stuff is per-fragment, such as height, displacement or normal maps. Micro stuff is sub-fragment.

You don't actually ever see these micro features, but their effects show up through surface lighting models. Examples would be anisotropic lighting, specular highlights, BRDFs, etc. Typically, these aren't even considered geometric features, but maybe one day you'll be able to zoom in on some Minnaert-shaded velvet object close enough to see the individual fibers.

Boy, crossing these macro/meso/micro boundaries would sure make LOD a pain. I'm hoping a future generation of video cards would make the macro/meso transition relatively straightforward, at least.

-Won

duhroach
01-14-2004, 08:01 AM
Hum... perhaps you should name your technique before someone else does....

~Main

mogumbo
01-14-2004, 08:06 AM
It looks like Won has got it.

For the rockwall texture in the demo, the normal map and height map are actually half the resolution of the color map. It makes sense for the height map, since high-frequency features will cause ugly artifacts.

However, I could have spent more time making a full-resolution normal map with a bit more high-frequency noise. Using low-frequency normal maps like this tends to make things look too much like plastic. Unfortunately, once you zoom down to the "macro-scale" level on a normal map, linear texture filtering always makes it look too smooth, like plastic. Maybe we need detail normal maps :P

Hey duhroach. I think I already named it "offset mapping." Unless someone has a better idea. But I still can't believe someone hasn't already written a paper on this and named it years ago.

[This message has been edited by mogumbo (edited 01-14-2004).]

Ysaneya
01-14-2004, 08:20 AM
"Displacement bump mapping" ? http://www.opengl.org/discussion_boards/ubb/smile.gif

Y.

duhroach
01-14-2004, 08:28 AM
OT: Perhaps a technique like this was overlooked simply because everyone "took" bump mapping as standard and didn't question the ability to do more.

"Texture Offset Bump Mapping" would seem more appropriate IMHO..

~Main

SeskaPeel
01-14-2004, 08:37 AM
Could someone explain crystal clear point 1) and 4) of Won explanations ?

Thanks,
SeskaPeel.

mogumbo
01-14-2004, 08:40 AM
Actually, after reading the paper that mikiex posted above, it looks like this might be the same thing as relief texture mapping. What do you guys think? I sort of stopped reading after the first page of equations. This method certainly doesn't need 10 pages to be described.

jwatte
01-14-2004, 09:31 AM
Relief texture mapping goes much further, doing things like allowing items hidden behind crevices at some angles, and pre-warping texture data for rendering. However, this simplified method has the benefit that it'll slide into most modern production pipelines and engines quite easily.

Personally, I like the idea of using grayscale images for bump mapping a lot (rather than normal maps) because it allows you to do Z offset, and detail normal maps, without much trouble. In addition, it seems that if you used a grayscale bump map (i.e. a height field) then applying texture coordinate warping based on that data would be very simple.

As has been said, couple this with self-shadowing bump maps (either for real, or using my simple-hack version at http://www.mindcontrol.org/~hplus/graphics/vdbump.html) and you'll probably get really quite amazing shading results!

Tom Nuydens
01-14-2004, 01:44 PM
I coded up a version with a horizon map: http://www.delphi3d.net/misc/images/uber_bumps.jpg . I'll try to add depth-correction as well; I'll put the demo online if it works out.

BTW, I also tried to find some prior art and googled for "bump mapping parallax". The first hit was... this thread! The second hit was the relief texture mapping paper :)

-- Tom

passalis
01-14-2004, 01:59 PM
About the naming of the method, I am not sure about 'Offset Mapping', since in Nvidia's Cg Browser there is an effect called Offset Bump Mapping. I think it tries to do something similar, but it does not look that good.

As for relief texture mapping, I think the guys who wrote that paper won an Nvidia Cg demo competition last year. If you search Nvidia's site maybe you'll find the link; it has a couple of videos. A friend who tried that source code on his FX 5200 told me he got below 1 fps ;-)

SirKnight
01-14-2004, 02:15 PM
That looks really good Tom. I'm looking forward to the release of it. I'm still working on my own version that hopefully will be finished today if nothing else gets in my way (probably will). Of course don't expect anything you haven't seen before. ;) I still think there will be ppl who will like it though.


-SirKnight

deadsoul
01-14-2004, 02:45 PM
Actually, using the alpha channel of the normal map works fine (i've tried it using this codebase) after some hacking :)

Saves some memory and a texture stage.

Deadsoul

Jurjen Katsman
01-14-2004, 03:02 PM
Has anyone thought of a way to do something similar on a gf3 using nv_texture_shader? Perhaps just the offset mapping without the bump? I've been trying but it seems close to impossible...

Tom Nuydens
01-14-2004, 04:01 PM
Hmm... http://www.delphi3d.net/misc/images/uber_bumps_depth_replace.jpg

Looks okay in that particular shot, but fails miserably under all other camera angles :o

What I'm trying to do is bias the Z values based on the height (sampled from the texture) and the angle between the view vector and the surface. It seems to look right or wrong depending on distance. I get the feeling the non-linearity of the Z-buffer has something to do with it, but it's getting a bit too late to sort it out today. To be continued tomorrow :)

-- Tom

Humus
01-14-2004, 05:03 PM
Originally posted by davepermen:
i just looked through your demos .. and dreamed of the addon..

and i stopped at one..

the water demo. please, oh please, add it to the water demo! it looks so brilliant, it is so smooth, it would simply make it look perfect.

Well, I'm not sure it would improve much on that demo. There's no base map to displace. I could displace the normal map, but the water contains lots of high-frequency stuff, so I suspect it wouldn't work out all that well.

Zengar
01-14-2004, 09:26 PM
If you still need a good name I'll offer "real bump mapping"

Humus
01-14-2004, 09:59 PM
Well, my version is online too. Guess it's kind of blasphemy to say that on this forum given that it's a D3D app, but as an excuse, it's because I upgraded an old app. ;)

Screenshot: http://esprit.campus.luth.se/~humus/3D/selfshadowbumpmapping_large.jpg
http://esprit.campus.luth.se/~humus/

mogumbo
01-14-2004, 10:40 PM
Nice demo, Humus. The shadows do a nice job of hiding some of the ugly artifacts from the offset mapping :)

I noticed a couple posts complaining about things looking wrong at a distance, which is a good observation. Keep in mind that the offset I calculate in the fragment program is a small-angle approximation. This code

# calculate offset
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

should really look like this:

# calculate offset
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
RCP temp, eyevects.z;
MUL height, height, temp;
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

All I did there was add a couple instructions to divide the offset by the eye vector's z component. This is more mathematically correct in one sense because it increases the steep angle offsets to their proper lengths.

However, all these offsets assume that the texcoord they're pointing to is the same height as the texcoord they're starting at, which is almost never true. So making the offsets the correct length causes extra swimming in the textures, and actually makes things look worse in my opinion. In a nutshell, I'm using a small-angle approximation to tone down artifacts that result from another, completely different approximation.
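
Written out (my notation, not mogumbo's), with $\hat v$ the normalized tangent-space eye vector and $\theta$ the angle between it and the normal:

$\text{offset}_{exact} = \frac{h}{\hat v_z}(\hat v_x, \hat v_y)$ (magnitude $h\tan\theta$), $\qquad \text{offset}_{approx} = h\,(\hat v_x, \hat v_y)$

The approximate form stays bounded as $\hat v_z \rightarrow 0$, which is why it swims less at grazing angles.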

Anyway, I hope that explanation makes some sense. It's really hard to describe this stuff without drawing pictures :P

[This message has been edited by mogumbo (edited 01-14-2004).]

Ysaneya
01-15-2004, 12:10 AM
Humus, on my Radeon 9700 with catalyst 3.10, the horizon maps for the walls and the ground were swapped. It actually looked better without self shadows :)

Y.

Tom Nuydens
01-15-2004, 12:51 AM
Originally posted by mogumbo:
I noticed a couple posts complaining about things looking wrong at a distance, which is a good observation.

If you're referring to my post (which is the only one in the thread to contain the word "distance" :) ), I was talking about my experiments with depth correction, not about your basic technique.

BTW, to those trying to come up with a GF3 version... You have the offset_texture_2d texture shader operation. Wouldn't it be possible to render the required offsets into a DSDT texture rectangle and then use that in a second pass to actually offset the texcoords? I'm not sure if this will fit on the 4 available texture units, but it's worth investigating...

-- Tom

Jurjen Katsman
01-15-2004, 01:44 AM
BTW, to those trying to come up with a GF3 version... You have the offset_texture_2d texture shader operation. Wouldn't it be possible to render the required offsets into a DSDT texture rectangle and then use that in a second pass to actually offset the texcoords? I'm not sure if this will fit on the 4 available texture units, but it's worth investigating...


Using some intermediate texture, that would sort of work, but there's one big problem.. The texture read done by the offset_texture_2d operation is a non-projected 2d texture fetch, without division by q... In other words, you effectively won't get perspective-correct texturing, which will look incredibly bad... Unless I'm missing something...

apathy
01-15-2004, 01:53 AM
Excellent technique! I'll be trying this out in our rendering engine once I get a spare hour or so.

This is just the sort of thing I've been looking for, since viewing normal-mapped geometry up close looks rubbish due to the lack of perceived depth variation over the surface.

NocturnDragon
01-15-2004, 03:46 AM
Originally posted by Ysaneya:
Humus, on my Radeon 9700 with catalyst 3.10, the horizon maps for the walls and the ground were swapped.
The same here with a 9600 and 3.10

Won
01-15-2004, 04:46 AM
Mogumbo --

Yeah. When I first looked through the code, I missed the small-angle approximation. However, when I was trying to modify it, I noticed something was not quite right. :) Still, your approximation is obviously justified, and apparently looks better. I'll make the change and see for myself.

Actually, I was trying to do two things:
1) Use TXB instead of TEX to reduce sampling artifacts. It isn't enough to just use a lower-rez texture, because MIP-mapping will pick the highest rez it can every time.
2) "Bias" the offset by using a "per-fragment tangent space", adding another normalized normal map fetch before the offset computation. This might also have to be TXB'd.

However, if what you say is true (that the approximation has beneficial side effects) then the per-fragment tangent space thing probably wouldn't look any better, or it might add too many artifacts.

-Won

PS How about this for a name: Mobumps. That's what I've been calling it. :)

SebastianSylvan
01-15-2004, 05:57 AM
Hi. First post for me. Just got the username for this.

Anyway I have an idea to reduce the sharp-angle artifacts.

Basically the problem exists because the current algorithm assumes that the height at the point of intersection between the eye ray and the "virtual surface" is at the same height above the actual surface as the original position. This is not always the case.

Consider a ridge in the heightmap. Imagine rendering a point lying exactly under that ridge and having the eye look from a pretty steep angle. Now the offset calculation will intersect the eye ray with a plane lying at this height, but the intersection point is outside the ridge, so the height is wrong. So what SHOULD happen here? The height should be lower (making the offset smaller).

So the idea is that you take the new texture coordinate and find the height at that point. Then you average this with the first height to get an offset that's closer to what it should be (if the height at the intersection point is lower, then the offset will be shortened; if it is higher, the offset will be longer - exactly like you'd expect). Unless the heightmap is really high frequency, this amounts to a binary search converging on the exact intersection point of the eye ray and the virtual surface very rapidly.

I'm guessing you'd need one or maybe two of these "lookup-heightmap-and-average-then-repeat" cycles to get rid of most artifacts.

Yeah it adds instructions and texture fetches but it's still pretty cheap compared to some other effects.

I couldn't get it to build so I can't test it, but some of you have it working so maybe you can try it. Maybe you could scale by 1/z without artifacts doing this as well.
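
For what it's worth, one unrolled cycle of that idea might look like this in mogumbo's fragment program (a sketch of my own, reusing his register names and scale/bias; untested, as per the above):

# initial guess, as in the original program
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02;
MAD newtexcoord, height, eyevects, fragment.texcoord[0];
# refinement: sample the height at the guessed position and average
TEX height2, newtexcoord, texture[2], 2D;
MAD height2, height2, 0.04, -0.02;
ADD height, height, height2;
MUL height, height, 0.5;
MAD newtexcoord, height, eyevects, fragment.texcoord[0];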

Won
01-15-2004, 06:19 AM
Interesting. So you can do a Newton's method kind of thing. You also get the "slope" from the normal map, which could also make the offset computation more accurate.

But the real problem is a sampling problem. In the plane-intersection approach, the offset is unbounded, and that can cause the approximation not to converge. The small-angle approximation (which is pretty darn easy to compute) actually helps here because it "clamps" the maximum offset in a smooth way. The hard cases here are silhouette boundaries.

Also, the tangent-space information is only guaranteed to be good for the current fragment (or the triangle), so the value for "height" is somewhat suspect if the offset goes far enough. Perhaps if you used object-space normal maps...hmmm...

-Won

SebastianSylvan
01-15-2004, 06:26 AM
Yes you need to clamp the offset somehow. Otherwise it'll go to infinity at grazing angles.

Either just clamp it or do

scale = 1 - exp (-scale);

or something to make sure it smoothly clamps to 1.0...

It would be interesting to see this in action. You could actually do a loop that averages new height values until the difference between the two values is close to 0. But that could potentially lead to infinite loops in some cases...

EDIT:

Hmm.. just realised that that clamping scheme won't work. So maybe keep the small-angle approximation...

Anyway, the "pretty much flat geometry" assumption is better than the "no sharp edges in heightmap" assumption. =)

[This message has been edited by SebastianSylvan (edited 01-15-2004).]

Tom Nuydens
01-15-2004, 06:33 AM
Originally posted by Jurjen Katsman:
Using some intermediate texture, that would sort of work, but there's one big problem.. The texture read done by the offset_texture_2d operation is a non-projected 2d texture fetch, without division by q...

Ah, true. GeForce4 can do a projective dependent texture read, though. See GL_NV_texture_shader3.

-- Tom

SebastianSylvan
01-15-2004, 06:55 AM
I'm just gonna throw this one out there.

(just thought of it).

How about if I multiply the height by the dot product of the original normal and the normal (clamped to zero?) at the point of sampling?

I'm assuming object-space bumpmaps here, btw.

That way, if you have a curved surface, the height will go towards 0 if you offset too far, which will push the offset back towards the origin (it will divide the offset by 2 for the next step).

This _should_ (again, haven't thought it through, I'm late for a dinner with friends =)) make the offset clamp smoothly when necessary (when the geometry isn't flat).

vember
01-15-2004, 07:16 AM
wow.. this looks way better than I expected. :)

I thought of a similar technique last year, with the intention of faking a raytrace through a heightfield (for underwater refractions), but I never got round to implementing it. I'm really curious about it now.. :)

Won
01-15-2004, 07:33 AM
Linked from GPGPU:
http://personal.telefonica.terra.es/web/atejadalacaci/raycast/index.html

MZ
01-15-2004, 09:51 AM
Originally posted by Zengar:
If you still need a good name I'll offer "real bump mapping"

If you still need a good name I'll offer "displaced bump mapping"

Zengar
01-15-2004, 10:07 AM
Mobumps sounds nice :) :)

mogumbo
01-15-2004, 10:19 AM
Yep, it still needs a good name. Old-fashioned bump mapping seems to be referred to as "offset bump mapping" most of the time, so the word "offset" is kind of taken. "Displacement" is already used to refer to a fully 3D effect, and this is 2D. "Bump" is traditionally used to refer to lighting effects.

What about "per-pixel offset mapping" or "per-pixel parallax mapping"?

"Mobumps" and "momapping" both have character, but they put my vanity meter off the scale http://www.opengl.org/discussion_boards/ubb/wink.gif

Mikkel Gjoel
01-15-2004, 10:55 AM
In the line of "people trying to get it right", I have created an image to illustrate the technique.
http://www.userwebs.dk/gjoel/offset_bumpmapping.jpg
(hrmpf, no [img]-tag)

- is that about right?


\\hornet

[This message has been edited by hornet (edited 01-15-2004).]

mogumbo
01-15-2004, 11:22 AM
That's pretty close, but I don't understand the black line at the bottom. Try this: the original tex coord is where the eye vector hits the red line. Then find the point on the blue line directly above or below the original tex coord. Draw a vector horizontally from that point to the eye vector. That horizontal vector would be the offset. Now add the offset to the original tex coord and you have your new tex coord.

Doh! That's incomplete, because I didn't account for the small-angle approximation. I guess it's better to just draw it like the math in the fragment program. Draw the eye vector with length 1.0. Then scale the eye vector by the difference in height between the blue line and the red line. Then the horizontal component of the scaled eye vector is your offset.

[This message has been edited by mogumbo (edited 01-15-2004).]

Tom Nuydens
01-15-2004, 11:36 AM
I like "parallax mapping". "Parallax" was the first keyword that came to my mind when trying to find similar approaches on the web.

Meanwhile, I'm beginning to understand why z-correct bumpmapping never took off in a big way. Latest attempt:
http://www.delphi3d.net/misc/images/uber_bumps_depth_replace2.jpg http://www.delphi3d.net/misc/images/uber_bumps_depth_replace3.jpg

It looks correct-ish now, but still starts to suffer from artefacts when you move a bit further away. The intersection lines also tend to change shape somewhat as you move around, which is quite distracting. All in all an interesting experiment, but hardly a robust solution. The same goes for the horizon mapping, btw, so the best solution for real-world application is probably to forget about the extra bells and whistles and use your technique in its pure form.

-- Tom

mogumbo
01-15-2004, 11:41 AM
Either way, those are some sweet screenshots. Are you going to post that demo when you're done?

Humus
01-15-2004, 11:47 AM
Originally posted by Ysaneya:
Humus, on my Radeon 9700 with catalyst 3.10, the horizon maps for the walls and the ground were swapped. It actually looked better without self shadows :)


Originally posted by NocturnDragon:
The same here with a 9600 and 3.10

Make sure you're not just overwriting the previous version of the demo. Otherwise it will just use the stored horizon maps from the old demo. Remove the old demo before extracting it.

Jared
01-15-2004, 12:27 PM
guess i'll have to register and post, too. even if it's just the result of adding 4 lines and changing the bias.
http://festini.device-zero.de/uglyhack.jpg

of course picking those edges that actually look good and don't show artifacts *cough*

Mikkel Gjoel
01-15-2004, 12:56 PM
jared - do you actually get silhouette changes? How is that possible?

mogumbo - I updated the image, but it's quite difficult to illustrate the method with the small-angle approximation. Gets sort of confusing :)

<edit> http://www.userwebs.dk/gjoel/offset_bumpmapping2.jpg
</edit>

\\hornet

[This message has been edited by hornet (edited 01-15-2004).]

davepermen
01-15-2004, 01:05 PM
i guess if carmack sees this thread, doom3 gets another delay
especially the silhouette, which possibly finally gets more correct, too..

but there will be one thing. if this technique gets trendy, shadow volumes will essentially have to die. with depth-correct mapping and silhouette-correct techniques, shadow maps can throw bumpy shadows onto bumpy surfaces..

mogumbo
01-15-2004, 01:11 PM
I've still got my money on soft shadows. If they can be computed fast enough they can be cast on any surface, bumpy or not, and they'll always look decent.

Pentagram
01-15-2004, 01:42 PM
Ok, it's an official feature of Tenebrae now :)
No decent screens yet; waiting for the artists to render heightmaps for me...
If you actually walk around in game there seems to be a little bit of weird shifting around, but maybe my implementation is ****ed ;)

Considering the architecture of doom3 it would be quite easy to add this technique, but they would need to make a lot of extra art :D

Charles

Tom Nuydens
01-15-2004, 02:11 PM
I've put my demo up at http://www.delphi3d.net/download/uberbump.zip for anyone who wants to have a go. I'll do a proper news post about it on my site tomorrow, so you have until then to offer some destructive criticism :)

-- Tom

Adrian
01-15-2004, 02:49 PM
Originally posted by Tom Nuydens:
I've put my demo up at http://www.delphi3d.net/download/uberbump.zip for anyone who wants to have a go. I'll do a proper news post about it on my site tomorrow, so you have until then to offer some destructive criticism :)

-- Tom

It's really nice, but the ground texture does 'roll' very noticeably if you get fairly close to the ground, look at the ground and move sideways.

dorbie
01-15-2004, 03:00 PM
I suppose second order displacement would improve the effect even more under some circumstances. This is all excellent work!

I'm interested in how a silhouette can change. I haven't fully grokked that one. I suppose the basic requirement is to displace in, not out (i.e. away from the viewer); however, with planar facets how do you know you've missed, or can you track the derivative in curved tangent space for a -ve normal? I suppose if the tangent space z component is +ve w.r.t. the normal then you've missed, although you might still risk hitting a bump. Doesn't it take multiple dependent lookups to determine this? What I might call second and third order effects, reading new displacements at each point while (optionally) interpolating changes in the tangent space interpolants? I'd love to hear a description from an implementor of why the silhouette displacement works, since my initial impression of the algorithm suggests it doesn't without further improvements and chained lookups.

[This message has been edited by dorbie (edited 01-15-2004).]

Tom Nuydens
01-15-2004, 03:09 PM
Originally posted by Adrian:
It's really nice, but the ground texture does 'roll' very noticeably if you get fairly close to the ground, look at the ground and move sideways.

I used twice the scale and bias values Mogumbo used in his shader. Using smaller values removes this artefact but makes the bumps less pronounced as well.

-- Tom

dorbie
01-15-2004, 03:20 PM
P.S. It's clear that most people aren't getting silhouette displacement, and maybe even Jared's shot doesn't for the parts he's talking about.

It may still be possible though, using an iterative displacement, testing against the curving normal product, or perhaps a fixed iteration count in conjunction with a horizon map test for the final exit ray.


[This message has been edited by dorbie (edited 01-15-2004).]

mogumbo
01-15-2004, 03:27 PM
Well, the original method I posted is a quick and dirty way to get a parallax effect. Silhouettes look better when you displace inward (use a more negative bias), but that tends to damage the steep bumps that are facing toward you. Displacing outward does the opposite. So for most textures, the best bias is in between those two extremes (-0.5 times the scale).

I've played around with second order offsets, but haven't been able to get anything that looks good yet. The textures start to swim around too much.

LarsMiddendorf
01-15-2004, 04:09 PM
Great work!
I modified the demo from Delphi3D.net so that the edge of the cube is also displaced, by killing pixels with texcoord.x < 0. If you do this for all edges, bumpmapping with a correct silhouette should be possible.
Image: http://larsmiddendorf.tripod.com/betterbump.JPG

dorbie
01-15-2004, 04:19 PM
That only works if you have a coord you can threshold at the edge of a polygon, and that polygon is the edge of your model. It's interesting but very constrained. What's needed is a general solution for curved faceted surfaces, and that requires iteration in tangent space while interpolating changes in the tangent space vectors. (IMHO)

That's all gravy though, this effect is awesome. Very cool that so many coders went and implemented it in a day and posted the results.

jwatte
01-15-2004, 06:48 PM
If you build a horizon map, perhaps you can also build a map that tells you how far to move within the base color map based on where you sample.

The basic problem, when viewed from the side, is that a protrusion should appear to protrude from a point on the surface closer to the viewer than the point actually being sampled. This kind of bump mapping fakes that, but can't fake it when the place you're sampling is level with the main surface -- hence the "high frequency causes artifacts" problem.

Now, if you have a polynomial of some sort which tells you how far to move the sampling in each direction, based on view direction, you can actually start approximating this -- I think that might work better than the "Newton" method that others suggested, because if the sampled point is flat, then the Newton method doesn't know whether to move or not unless it probes a number of points out towards the viewer, keeping the "best" one using something like error minimization.

SirKnight
01-15-2004, 08:48 PM
I must say this is one of the most interesting threads on here in a long time. We need more like this. :D


-SirKnight

Jared
01-15-2004, 10:28 PM
Originally posted by dorbie:
I'm interested in how a silhouette can change. I haven't fully grokked that one. I suppose the basic requirement is to displace in, not out (i.e. away from the viewer); however, with planar facets how do you know you've missed, or can you track the derivative in curved tangent space for a -ve normal?

that's the reason why i called it uglyhack. it makes things look a bit better at some edges, but most of the time fails miserably. just like lars, i changed the bias to displace inwards and kill fragments with texcoords beyond (0,0)-(3,1).

if there were a way to find the correct intersection between the view vector and the heightmap, it would work just fine. for the pillar we could even get rid of the geometry, just place a billboard to fire up the fragment program and do the work there according to curvature and heightmap. but that somehow sounds like all those fragment-program-raytracer-too-much-for-realtime demos.

funny how i end up with the same problems over and over again: a decent and fast method to find a ray-heightfield intersection without stepping through the map or needing trees (something like multi-resolution heightmaps). without loops and branching, all i can see is making x hardcoded steps and hoping for a good approximation.

time to have a closer look at how they did it in their vdm paper.

JustHanging
01-15-2004, 10:34 PM
Silhouettes are quite easy to do. Everybody knows how to do edge antialiasing, right? Detect silhouettes and draw antialiased lines on them.

Instead of lines you can draw quads extruded out from the surface with alpha ranging from 1 (on the surface) to 0 (away from the surface). If you subtract 1-height from this alpha and use alpha test GREATER, 0, the quad will come out shaped like the silhouette sampled from the height map on that edge.

Using a naive silhouette tracking algo (edges between front- and backfacing faces) will result in some popping and bad quality, but if you instead calculate a dot product between each vertex normal and the eye->vertex vector and extrude the quad along the line where this dot product is 0, the silhouette will change smoothly. In practice, detect edges where the dot is <0 at one end and >0 at the other. There can only be 2 or 0 per triangle. If there are 2, (reverse) interpolate to find the points on the edges where dot=0 and draw a quad between them.
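
Concretely (my notation): if the dot products at the two endpoints $p_0$ and $p_1$ of an edge are $d_0 > 0$ and $d_1 < 0$, the zero crossing lies at

$t = \frac{d_0}{d_0 - d_1}, \qquad p = p_0 + t\,(p_1 - p_0)$

which always falls inside the edge because the signs differ.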

-Ilkka

vember
01-15-2004, 11:53 PM
Originally posted by Pentagram:
Considering the architecture of doom3 it would be quite easy to add this technique, but they would need to make a lot of extra art :D

Charles

But since they're making a lot of their art as high-poly models converted to low-poly with normalmaps, they could generate them automatically. The same goes for hand-drawn heightmaps; I reckon they don't draw normalmaps manually in photoshop ;) so they must have the heightmaps stored somewhere..

But they probably have to do quite a bit of work to avoid artefacts.. Wonder how long that would delay the game.. :P

Jurjen Katsman
01-16-2004, 12:13 AM
Ah, true. GeForce4 can do a projective dependent texture read, though. See GL_NV_texture_shader3.
-- Tom


Which unfortunately isn't available on the Xbox =) Oh well :)

OldMan
01-16-2004, 01:01 AM
Originally posted by vember:
But since they're making a lot of their art as high-poly models converted to low-poly with normalmaps, they could generate them automatically. The same goes for hand-drawn heightmaps; I reckon they don't draw normalmaps manually in photoshop ;) so they must have the heightmaps stored somewhere..

But they probably have to do quite a bit of work to avoid artefacts.. Wonder how long that would delay the game.. :P


hehe, if we see an announcement of a new delay to D3 we know why :)

I am just mad to be away from home and any capable equipment to try it myself.

I think someone who got it working should (when having time, of course) prepare a full description paper or tutorial. And I agree that this is the best thread here for a very long time (we don't see many threads this size without any ATI vs NV war)

mogumbo
01-16-2004, 01:39 AM
I agree. I'm working on the paper right now. And I think I'm going to call the technique "parallax mapping." ...unless someone can give me a good reason to call it something else. "offset mapping" is just too vague.

krychek
01-16-2004, 02:48 AM
For getting bumped silhouettes, how about calculating barycentric coordinates to determine if the new texcoord is outside the texture triangle? Will that be too costly?
Or, if the mesh triangles are mapped to unique portions of the texture, then say in the alpha of the color texture we store a value that is different for adjacent texture triangles, and we kill the fragment if the value at the newly sampled position is different from the original (or from a value passed to the fragment shader).

Edit: ..And this killing should only be done for tris at the silhouette :p

[This message has been edited by krychek (edited 01-16-2004).]

davepermen
01-16-2004, 03:40 AM
yeah. i think for polypump technologies, they should be capable of precalculating, estimating, compressing and storing information on how to displace depending on the view, and how to act at the edge..

but this analysis of how to store that data effectively (as it would theoretically be a 4d texture), how to generate the actual info, and how to extract and use it in as little fragment program code as possible - i think THAT would be worth a big siggraph topic.

hehe.

but yeah, it's great to see how this small "uglyhack" inspired everyone. i thought about this idea long ago, but thought, it's too simple, it simply can not look good.

if i only tried it...

SebastianSylvan
01-16-2004, 04:41 AM
If you have the normal map you should be able to derive a heightmap which is "close enough".
The heightmaps should be pretty low-res anyway to avoid too sharp edges (and the artifacts that come with them).

Do an Euler integration of the surface basically.

So there's no need to wait for the artist to render heightmaps!

dorbie
01-16-2004, 05:25 AM
JustHanging, we're talking about fragment silhouettes; I'm not sure it's as easy as you seem to think. As far as I can tell, basically you're talking about polygon subdivision with true displacement maps only at the silhouette quads.

That's cheating :-). It'll work, but subdivision displacement is generally known to work. I also think that in some cases the hull silhouette polygons may not be the only polygons that contribute fragments to the displaced silhouette. Nothing's perfect though; even iterating through curving tangent space could take many samples and have its own problems.

[This message has been edited by dorbie (edited 01-16-2004).]

dorbie
01-16-2004, 06:31 AM
Sebastian, Doom3 has several useful things that would be available to generate the displacement maps. The most important is the original high-poly model from which the normal map was produced. This is probably ideal for displacement-map surface offset calculations from the simpler mesh. The second is the detailed bump map for info not in the geometry; this could be added to the displacement map after a scale operation. One of the problems that might arise is that a solution may need to adjust the simplified hull geometry or support signed displacement (above and below the surface, to and from the viewer). It's not clear what would be the best choice of hull position for displacement, and what would be the best approach: +ve, -ve, or signed displacement.

I'm coming to realize that silhouettes are very badly behaved with fragment displacement. For correct tangent-space displacement the offset calculation is tan(incident_angle)*height, so at all silhouette edges this displacement tends towards infinity as V.N approaches 0. So this is obviously a problem.

maxuser
01-16-2004, 07:57 AM
First off, all great looking screenshots. So next we want fragment-level silhouettes, right?

My first thought was the same as JustHanging's: alpha-blended, surface-orthogonal (or view-orthogonal) quads extruded from silhouette edges. But even after computing the quads' silhouette textures, this'll break down for low-poly models, degenerating into an n-view sprite.

So let's go back to basics. To determine if a triangle is front-facing, you can simply dot the view direction and the face normal; if it's positive, the triangle is facing the viewer (assuming the view direction is *toward* the view, not *from* the view). Otherwise, it's culled (if back-face culling is enabled).

With fragments, it's not much different. Instead, we dot the normal from the normal map with the view direction, presumably all in tangent space. If the dot product is positive, the fragment is facing the viewer. Otherwise, kill the fragment, since it's facing away. The catch is that this has to be done in the context of the displaced texcoords. This probably goes without saying, but any fragment-level silhouette algorithm requires that the physical geometry (i.e. vertices and triangles) bounds the virtual geometry (i.e. the bumpy surface we're trying to approximate), since fragments are produced only *within* polygons. I haven't read the RTM paper yet ( http://www.cs.sunysb.edu/~oliveira/pubs/RTM_low_res.pdf ), but look at the dormers in the before and after pictures on the first page; notice that the dormers don't really jut out, it's just that the rest of the roof is pushed in, and the appropriate portion of that polygon is carved away.

So let's assume that all height offsets are negative, i.e. inward from the bounding physical geometry. Applying the moBump offset has the effect of pulling texcoords from the back side of the surface, around the silhouette and into view. A diagram would really help here...hornet, if I may...so look at http://www.userwebs.dk/gjoel/offset_bumpmapping2.jpg and imagine what would happen if the height was negative; the texcoord offset would go to the left instead of the right, pulling a sample that's away from the viewer, which near the silhouette would mean from the back side of the surface, even for front-facing triangles.

Now, sample the normal map from that offset texcoord, and when dotted with the view direction, kill fragments with negative dot products. I haven't thought through all the details, but I imagine this could work for even self-occluding surfaces with "internal silhouettes." But the fear would be holes in the surface where the wrong fragments are killed due to error in the small-angle approximation, which gets worse near the silhouette. (So-called "innocent-bystander" fragments. http://www.opengl.org/discussion_boards/ubb/smile.gif ) I'll have to look at the math more closely to see if that can be accounted for.
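
A rough Cg sketch of that kill test (names hypothetical; the parallax step here is the limited offset described earlier in the thread):

float4 main(float2 uv     : TEXCOORD0,
            float3 ts_eye : TEXCOORD1,   // tangent-space view vector
            uniform sampler2D height_map,
            uniform sampler2D normal_map,
            uniform sampler2D rgb_map) : COLOR
{
    float3 E = normalize(ts_eye);
    float  h = tex2D(height_map, uv).x * 0.04 - 0.02;
    float2 uvNew = uv + h * E.xy;

    // fetch the normal at the displaced coordinate and expand
    // it from [0,1] to [-1,1]
    float3 N = tex2D(normal_map, uvNew).xyz * 2.0 - 1.0;

    // negative dot product: the virtual surface faces away
    // from the viewer, so kill the fragment
    clip(dot(N, E));

    return tex2D(rgb_map, uvNew);
}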

P.S. I was so inspired when I started reading this thread a couple of days ago (thanks Won, for pointing me this way) that I immediately ordered a Radeon 9800 Pro ME, which just arrived. My aging GeForce3, which has served me well, had been forcing me to the sidelines of this fantastic discussion. But no longer.

Won
01-16-2004, 08:36 AM
You'll probably also want to make it depth correct. Then, you run into the annoying practical problem -- that depth-writing fragment programs suffer from invariance, precision and performance problems. Also, it would be hard to use "faux geometry" to generate shadow volumes.

Is this technique similar to ray-casting a height field in a bounded volume?

-Won

arekkusu
01-16-2004, 11:05 AM
I ported the demo to Mac OS X, here: http://homepage.mac.com/arekkusu/SW/other.html

The TGA loader was tossed due to its hate for big-endian architectures. Otherwise mostly the same.

Thanks mogumbo, this is a very good trick.

jwatte
01-16-2004, 01:57 PM
without loops and branching all i can see is making x hardcoded steps and hoping for a good approximation.


I'm pretty sure you can bake into a texture some expression/polynomial for how far to step. For example, if the two input parameters are angle-to-normal and phi-around-normal, then you could use an RGBA texture to give you a first-order polynomial in angle, and a third-order polynomial around phi, to tell you how far to step. Or, for "polynomial" perhaps "spherical harmonic" might work better. Also, more than 4 coefficients for expression evaluation might work better; so use two displacement distance textures.

This ends up costing one or two extra texture accesses per fragment, and a little bit more math. And a lot more pre-processing :-)

Another alternative is to have this expression encode not the sampling displacement, but the height displacement, based on incoming angle/direction. Actually, that seems smarter overall; no extra dependent read needed. Just a height map pre-baking step, and some math to turn the eye vector/normal into the right coefficient multipliers.
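
A speculative Cg sketch of the first suggestion above (everything here is hypothetical: the coefficient texture, its encoding, and the polynomial split) - a coefficient texture indexed by the angle around the normal supplies a cubic in the glancing angle:

float bakedStepDistance(float theta, float phi,
                        uniform sampler2D coeff_tex)
{
    // one texel row; theta in [0, 2*PI) selects the coefficients
    float4 c = tex2D(coeff_tex, float2(theta / 6.2831853, 0.5));

    // Horner evaluation of c.x + c.y*phi + c.z*phi^2 + c.w*phi^3
    return ((c.w * phi + c.z) * phi + c.y) * phi + c.x;
}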

Coop
01-16-2004, 02:14 PM
I made a test demo using this great idea. You can find it here: www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip (http://www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip)
The interesting thing is that it works on gf3! Yes, it's possible http://www.opengl.org/discussion_boards/ubb/biggrin.gif. For details look at readme.txt.
The bad thing is that it's a DX app, sorry...

dorbie
01-16-2004, 02:23 PM
I don't think you can avoid the dependent read, jwatte. There are two elements here: the tan of the incident angle multiplied by the locally dependent displacement. That locality requires surface displacement data. It does use height displacement to determine lateral displacement based on angle, and it depends on the height fetch to do it.

[This message has been edited by dorbie (edited 01-16-2004).]

SirKnight
01-16-2004, 04:14 PM
Well it looks like making a GeForce 3/4 version is turning out more difficult than I thought. lol. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

I was writing the shaders in Cg and the FP20 profile doesn't like this snippet at all:




// calc offset
float4 offset = hmap * 0.04 - 0.02;
offset = offset * ts_eye + map_coords;

float4 nmap = expand( tex2D( normal_map, offset ) );


(125) : fatal error C9999: Dependent texture operations don't meet restrictions of texture shaders

hehe, oops. http://www.opengl.org/discussion_boards/ubb/smile.gif

Must be too complicated a dependent read for nv20 hardware. I didn't do many dependent reads when I had my GeForce 4 so I wasn't aware of this at first. Now I know. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Maybe this algorithm is too complicated for NV2X hardware but I'm going to keep working at it. Surely I can "trick" the hardware into doing it somehow. There were some posts here about texture shader stuff so maybe I'll go back to those and read them more closely. http://www.opengl.org/discussion_boards/ubb/wink.gif For some reason it's just interesting to me to try to get this technique to work on the older hardware. *shrug*


BTW, thanks mods for keeping this thread clean. I noticed some posts were recently deleted which were trying to take this thread off topic. GO MODS GO! http://www.opengl.org/discussion_boards/ubb/biggrin.gif



-SirKnight

SirKnight
01-16-2004, 04:16 PM
Originally posted by coop:
I made a test demo using this great idea. You can find it here: www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip (http://www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip)
The interesting thing is that it works on gf3! Yes, it's possible http://www.opengl.org/discussion_boards/ubb/biggrin.gif. For details look at readme.txt.
The bad thing is that it's a DX app, sorry...


Hmm, interesting. I'll go check this out now. http://www.opengl.org/discussion_boards/ubb/biggrin.gif


-SirKnight

mogumbo
01-16-2004, 04:16 PM
Okay, I finally got this white paper close enough to being finished. I'll update it later after I do a better literature search and after I get some sleep, but it describes parallax mapping well enough.

If anyone feels like reading it, let me know if you see any glaring flaws.
http://www.infiscape.com/doc/parallax_mapping.pdf

SirKnight
01-16-2004, 04:30 PM
That's cool coop!

Now if I go find out what the opengl (NV) equivalent is for those pixel shader instructions you use there I may just have it!

-SirKnight

SirKnight
01-16-2004, 04:43 PM
Neat. In Cg, the function that generates the "texm3x2pad & texm3x2tex" instructions for the ps_1_x DX profile is "tex2D_dp3x2(...". The OpenGL FP20 Cg profile has the exact same function doing the exact same thing, which generates the dot_product_2d or dot_product_rectangle combinations in NV_texture_shader. Looks like I'll have an OpenGL GeForce 3 version now. http://www.opengl.org/discussion_boards/ubb/biggrin.gif yay!

Thanks coop, you saved me a bit of work figuring out how to get the gf3 level hardware to run this algorithm. Although the fun of figuring it out on my own is gone, at least I'll get it working now, which is the important thing. http://www.opengl.org/discussion_boards/ubb/smile.gif


-SirKnight

SirKnight
01-16-2004, 04:55 PM
Good paper mogumbo. I did notice one thing while I skimmed it (I'll do a more in-depth read later). On the second page, the s in OpenGL's and the following A in ARB_vertex_program are run together.

-SirKnight

Lars
01-16-2004, 05:40 PM
It is a bit confusing that figure 1 comes after figure 2. Also, you should use the same notation in the figures and the text: for example always T_o, or always T(original).

I think you also should add an image of the heightmap used in the comparisons at the end.

A damn fine technique...
Thanks

Lars

mogumbo
01-16-2004, 10:08 PM
Thanks for the tips guys. Those swapped figures were annoying. I guess I have no future in desktop publishing.

jwatte
01-16-2004, 10:40 PM
Dorbie,

Yes, I agree that one dependent read is needed.

What I meant was: The naive implementation of my first suggestion for how to improve profile fitting would use TWO dependent reads, so the "extra" dependent read you get rid of by baking a vector-dependent displacement as a texture function brings us back to the original one.

Coop
01-17-2004, 12:23 AM
I just thought I'd put some info here about how to achieve the effect on gf3/4.
I won't be able to keep my demo on my server forever, and someone may find this information useful.

The basic step is to notice that:

u' = (height*scale + bias)*Ex + u
= height*scale*Ex + bias*Ex + u
= (height, 1, 1) dot (scale*Ex, bias*Ex, u)

v' = (height*scale + bias)*Ey + v
= height*scale*Ey + bias*Ey + v
= (height, 1, 1) dot (scale*Ey, bias*Ey, v)

All these calculations can be done using two dot products followed by a texture sampling. This is exactly what the GL_DOT_PRODUCT_NV and GL_DOT_PRODUCT_TEXTURE_2D_NV modes (under NV_texture_shader) or the texm3x2pad and texm3x2tex instructions (under DX) do. You need a texture with the height map in the red channel and with the green and blue channels set to one. This texture is sampled in tmu 0. The next two tmus perform the dot products; the second operands of the dot products are prepared in the vertex program and interpolated as texcoords. Tmu 3 holds the final texture to sample.
As can be seen, it takes 3 tmus to achieve the effect, leaving one tmu free for something else (for example a normal map).
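
For reference, a hedged Cg vertex-shader sketch of that setup (names hypothetical): it packs the second dot-product operands into texcoord sets 1 and 2, matching the derivation above.

void main(float4 pos      : POSITION,
          float2 uv       : TEXCOORD0,
          float3 tangent  : TEXCOORD1,
          float3 binormal : TEXCOORD2,

          out float4 oPos : POSITION,
          out float2 oUV  : TEXCOORD0,  // samples the (h, 1, 1) map
          out float3 oU   : TEXCOORD1,  // (scale*Ex, bias*Ex, u)
          out float3 oV   : TEXCOORD2,  // (scale*Ey, bias*Ey, v)

          uniform float4x4 mvp,
          uniform float3   eyePosObj,   // eye position in object space
          uniform float    scale,
          uniform float    bias)
{
    oPos = mul(mvp, pos);

    // tangent-space eye vector components
    float3 e  = normalize(eyePosObj - pos.xyz);
    float  Ex = dot(e, tangent);
    float  Ey = dot(e, binormal);

    oUV = uv;
    oU  = float3(scale * Ex, bias * Ex, uv.x);
    oV  = float3(scale * Ey, bias * Ey, uv.y);
}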

[This message has been edited by coop (edited 01-17-2004).]

satan
01-17-2004, 01:06 AM
Originally posted by SirKnight:
Looks like I'll have a opengl geforce 3 verison now. http://www.opengl.org/discussion_boards/ubb/biggrin.gif yay!
-SirKnight

Hi SirKnight,

is there any chance that you put up your version somewhere, so that we poor non FX/Radeon 9500 up owners can see this nice effect on our inferior hardware?
Would be really, really nice.
Of course with source and as a Linux Version it would be nicest http://www.opengl.org/discussion_boards/ubb/smile.gif.

Many thanks to mogumbo for this great technique and to coop and SirKnight for making it work on lower spec hardware.

All I need now is some time to test it in my little engine.

mogumbo
01-17-2004, 09:46 AM
Ha! I found it. A 2001 paper that talks about the same technique. They even gave it the same name, parallax mapping. So why hasn't anyone here heard of this before? It looks like the only thing I've added to this is the limiting of the texcoord offsets. ...I can't believe I spent all that time on the whitepaper (which needs to be severely updated now).
http://vrsj.t.u-tokyo.ac.jp/ic-at/papers/01205.pdf

sk
01-17-2004, 10:52 AM
Mogumbo:

Some other prior art has been mentioned in a recent gdalgorithms thread "Bump + Offset Mapping". Tom Forsyth recollects "Parallax Texturing", which he vaguely attributes to Corrinne Yu '97, but I can't find anything on the 'net about that except Tom's description in a later talk:
http://www.muckyfoot.com/downloads/files/Tom_Forsyth_Meltdown2001.zip

Cem Cebenoyan also cites this demo: http://developer.nvidia.com/dev_content/cg/cg_examples/pages/grass_rendering.htm

The mailing list archive is available at the following address, but the thread hasn't surfaced yet:
http://sourceforge.net/mailarchive/forum.php?forum_id=6188

Edit: Fixed up link properly

I should also say that you shouldn't feel negative about all of these examples surfacing, even if you did spend a lot of time on the whitepaper. Unlike the other authors you've brought the idea -- only really attractive now with dynamic lighting + normal mapping -- to the masses.


[This message has been edited by sk (edited 01-17-2004).]

SirKnight
01-17-2004, 11:49 AM
Originally posted by satan:
Hi SirKnight,

is there any chance that you put up your version somewhere, so that we poor non FX/Radeon 9500 up owners can see this nice effect on our inferior hardware?
Would be really, really nice.
Of course with source and as a Linux Version it would be nicest http://www.opengl.org/discussion_boards/ubb/smile.gif.

Many thanks to mogumbo for this great technique and to coop and SirKnight for making it work on lower spec hardware.

All I need now is some time to test it in my little engine.


Yeah, once I get it complete I'm going to post the program (compiled for win32) and source. What I'm doing is keeping my version looking exactly like the original program using this technique, just changing the shaders to Cg (for easier reading) and stepping everything back to work on older hardware. I'm making sure the code I write will work on any platform, and so far I'm pretty sure it should, just as long as you have Cg installed on Linux or whatever platform you use. I have Linux but I don't have Cg set up or my video drivers installed, so I can't test its cross-platformness myself.

Also, when I include the Cg headers I'm assuming they are in a directory called Cg, as I do this: #include <Cg/cg.h> etc... So you'll have to make sure the Cg headers are in a Cg folder like that.

Speaking of which, I need to see if my webpage is still up or has been deleted. I haven't touched that thing in a long time.

Oh, also keep in mind that my version will only work on the GeForce 3 or 4 Ti. So Radeon 8500 users will have to write their own fragment shaders, as mine are for the FP20 (NV) profile. And of course those kinds of shaders can't be done in Cg, as Cg does not have a Radeon 8500 profile. http://www.opengl.org/discussion_boards/ubb/biggrin.gif


-SirKnight

alex_lionhead
01-17-2004, 04:10 PM
hello all - my first post <blush> http://www.opengl.org/discussion_boards/ubb/smile.gif
I've implemented a working bumpy silhouette solution for arbitrary meshes, if anyone is interested. more on that in a sec.

first, great thread! mogumbo, thanks for posting in the first place - I had read the VDM paper about a week before I saw your post, and had thought long and hard about it. I thought there might be a scaled back, more approximate version that used less memory, but decided it wouldn't look good. but your post inspired me to look again! offset mapping is very cool.

you can see a pic of my implementation with bumpy silhouettes here: http://www.lionhead.com/personal/aevans/silmap.gif
on the left you see the fake, normal-mapped, offset-mapped object with per-pixel silhouette calculation. it's an 80 poly geosphere. on the right, you can see the 7680 face original geometry that the normal map etc was calculated from, for reference.
the top row and the bottom row are the same - except for wireframe mode to reveal the lack of tessellation on the left!

there are still some artefacts and things I hope to clear up - but it looks good so far, I think!

sadly I can't post any code or exes since they are property of my employer. but I can roughly describe the algorithm, inspired as it is by VDM and this thread.

leaving offset mapping out for a second, and in fact ignoring shading within the body of the object... what I do for silhouette is: first up, my displacements are all "into" the object, for obvious reasons. then, for each fragment on screen, I rotate the eye ray into tangent space (as in offset mapping). I then express it in polar coordinates theta,phi. as suggested by the VDM paper, there is a cutoff angle of phi (phi=0 is looking straight down, phi=PI/2 is glancing across) where the ray never intersects the *real* surface - it just glances off. this cutoff angle changes depending on theta - ie where you're looking from. so, using exactly the same shader maths as you might use to encode shadowed-bumps (ie, either harmonics or volume texture or whatever) you encode a per-texel phi-cutoff as a 1d function of theta.

this is all you need for per fragment silhouettes! per fragment you compare your eye-phi against the phi-cutoff for the given eye-theta, and kill the pixel if the angle is too steep.
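
something like this rough Cg fragment sketch (the cutoff-map encoding here is just an assumption - a volume texture with the cutoff scaled into [0, PI/2]):

void silhouetteKill(float2 uv, float3 ts_eye,
                    uniform sampler3D cutoff_map)
{
    float3 E = normalize(ts_eye);

    float theta = atan2(E.y, E.x);       // [-PI, PI]
    float phi   = acos(saturate(E.z));   // 0 = head-on, PI/2 = glancing

    // remap theta to [0,1) to index the volume texture
    float s = theta / 6.2831853 + 0.5;

    float cutoff = tex3D(cutoff_map, float3(uv, s)).x * 1.5707963;

    // too glancing: the eye ray never hits the real surface here
    clip(cutoff - phi);
}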

inside the object, I'm doing vanilla offset mapping. I also tried, as some have alluded to in this thread, encoding the offset amount not just as a constant height per texel, but as a function of theta,phi viewing angle. I used 4th order spherical harmonics to encode this; however, given the cost (16 bytes per texel), in fact offset mapping looked much better, so for now I'm using offset mapping. this means that for example the shading won't exactly match the silhouette. but, never mind! if you were to throw in shadows as well (not shown in the pic), it would hide the artefacts even more.

on a final note, most of the offset mapping discussion so far has been about tiling bumpmaps. in fact, the vast majority of my work on this (about 90%!) was investigating ways to handle the seams you get with a normal mapped object - what do you do if the offsetted uv goes outside the seam etc? I solved this by extending the geometry along the seams using an extrapolation of curved PN triangles, and then raytraced these geometry fins back onto the original surface. in this way, I could extend "real geometry" into every pixel of my normal map, rather than just duplicating the colors around the borders, as is usually done. combined with some other tricks -- heheh -- it does a pretty good job of hiding the seams, IMHO. for example, there is an equator line seam all the way round the sphere in the pic that is quite hard to see.

anyway, cheers everyone for an inspiring discussion. I hope this pushes it a bit further - I've only just got this running today so there must be much further to go yet. - alex

satan
01-17-2004, 04:32 PM
Originally posted by SirKnight:

Yeah once I get it complete I'm going to post the program (compiled for win32) and source.
...
Just as long as you have Cg installed in linux or whatever platform you use.
...
Oh also keep in mind that my version will only work on the GeForce 3 or 4 Ti.
-SirKnight

First of all, that sounds great; take all the time you need, regardless of what other people post here.
As long as there is source I think I will get it working. Cg is available for Linux so I see no problems there. And to be honest I don't really care about the Radeon version; as long as it runs on my GF4 Ti I am happy.

SirKnight
01-17-2004, 07:05 PM
blah nevermind.

Damn 4 texture unit limit.

Stay tuned...

[This message has been edited by SirKnight (edited 01-17-2004).]

[This message has been edited by SirKnight (edited 01-17-2004).]

SirKnight
01-17-2004, 07:37 PM
Something is freaking screwed here. Perhaps this is a bug in the Cg compiler, but it's not generating the texture shader it's supposed to (one that matches coop's pixel shader).

When my Cg frag prog is like this:



void bump_and_offset_fp( in float4 inColor : COLOR0,
in float4 map_coords : TEXCOORD0,
in float4 offsetU : TEXCOORD1,
in float4 offsetV : TEXCOORD2,
in float4 ts_light : TEXCOORD3,

out float4 outColor : COLOR0,



uniform sampler2D height_map, // tex0
uniform sampler2D rgb_map )
{
float4 hmap = tex2D( height_map, map_coords.xy );
float2 newuv = float2( dot( offsetU.xyz, hmap.xyz ), dot( offsetV.xyz, hmap.xyz ) );

float4 offset_tex = tex2D( rgb_map, newuv.xy );

float4 normal = inColor;

blah blah


I get the following texture shader (with some reg combiner stuff I won't post because it's not important):



!!TS1.0
texture_2d();
dot_product_2d_1of2(tex0);
dot_product_2d_2of2(tex0);


Ok well what I need is this:



!!TS1.0
texture_2d();
dot_product_2d_1of2(tex0);
dot_product_2d_2of2(tex0);
texture_2d();


Ok so no problem, I just add the line "uniform sampler2D normal_map" after the rgb_map sampler line. Then I change the line where I declare "normal" to "float4 normal = expand( tex2D( normal_map, map_coords.xy ) );"

Alright, so the Cg compiler should see that extra sampler I added and the tex2D call I made, add the texture_2d() in the fourth row of the texture shader, and then let me use it like I need to. Doing this _should_ give me the equivalent of coop's DX PS 1.1 shader but in OpenGL form. Instead I get the error "bump_and_offset.cg(91) : error C6028: Texture unit 0 already bound to sampler 'normal_map'", which it actually throws at me 4 times for some odd reason.

Is this perhaps a bug in the compiler? I know I should be able to do this. Having a texture shader like the one I said I wanted above (with the texture_2d() in the fourth row) should be fine; if not, then the pixel shader that coop uses would not work. For reference, here it is:



ps_1_1

tex t0 // (h, 1, 1)
texm3x2pad t1, t0 // u' = (scale*Ex, bias*Ex, u) dot (h, 1, 1)
texm3x2tex t2, t0 // v' = (scale*Ey, bias*Ey, v) dot (h, 1, 1) - sample texture with (u', v')
tex t3 // normal

dp3_sat r0, v0_bx2, t3_bx2 // diffuse
mul r0, t2, r0


So like wtf?


-SirKnight

SirKnight
01-17-2004, 08:15 PM
Well, I figured it out. Silly me was using map_coords (in the frag shader) in two tex2D calls, which FP20 does not like at all. For some reason I thought I could use map_coords, which is bound to TEXCOORD0, in as many tex2D calls as I wanted; guess not. Then again it's late and I've already done some weird things, so ya never know. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

...

Now that I think about it, I must have been thinking of being able to use a set of texcoords in a vertex shader in as many calculations as I want, and just carried that thought over to fragment shaders, which was wrong. Again, I'm overestimating NV2x capabilities because I'm used to being able to do pretty much what I want on CineFX hardware. http://www.opengl.org/discussion_boards/ubb/biggrin.gif


-SirKnight

[This message has been edited by SirKnight (edited 01-17-2004).]

jwatte
01-17-2004, 08:45 PM
you encode a per-texel phi-cutoff as a 1d function of theta


I don't get it. The cut-off varies by slope in more than one dimension. You'd need two dimensions (and, ideally, a function of two dimensions).

I was thinking that if you had 4D textures, this would be easy to encode. It's not that outlandish -- hardware could probably implement it easily, if it were a priority. It should be fewer transistors than projective or cube mapping texture coordinates, as you just take X+Y*XStride+Z*YStride+W*ZStride, where the strides are all powers of two (i.e. shifts). Maybe filtering makes it harder.

Pretty much EVERYTHING is there already to do 4D texturing, including texture coordinates, except for the shifting of the w coordinate into the major bits of the texture address. Oh, well.

Actually, if you had 4D texturing, you could draw everything just as a bounding cube, and do holographic texture projection for what fragment you're supposed to see when looking at the cube face in position S,T from direction R,Q. Uses a bit of memory, but hey, unlike CPU frequencies, that hasn't stopped scaling with Moore's law yet. And the viewing angle is likely to be extremely well localized (== cache well).

/me goes googling for "holographic texturing"

dreddlox
01-17-2004, 09:23 PM
Around the edges of an object this algorithm wraps the texture to the other side of the object, which is undesirable... Has anyone tried using data from the bumpmapping stage (particularly eye dot texelnormal) to see if the texel at that pixel is facing the camera, and making it transparent if it isn't?
Also, I was wondering if it would be possible to render the color, normal and height to texture and then warp the whole scene at once in a second pass? It should be fairly easy to have an RGB tex for color and an RG(xy) texture for protrusion.

SirKnight
01-17-2004, 09:49 PM
"holographic texture projection"

Sounds like something from Star Trek. http://www.opengl.org/discussion_boards/ubb/wink.gif


-SirKnight

jwatte
01-17-2004, 10:03 PM
http://www.mpi-sb.mpg.de/~magnor/ibrrefs/camahort_phd.pdf

It references "Marc Levoy and Pat Hanrahan. Light field rendering. In Proceedings of SIGGRAPH 96," which I haven't found online.

What I was thinking of was similar to the PDF, except I wanted a bounding cube rather than bounding cylinder (and 6 different light field samples).

256x256x32x32 times 32 bpp is 256 MB, though. And that's for, like, a small tree. I'll shut up now.

Mikkel Gjoel
01-18-2004, 03:04 AM
alex/lionhead - very interesting indeed. It seems to work quite well almost everywhere.

I'm wondering about the "fins" though - I don't see them on the wireframe-render. Could you post a wire-image with those added?

alex_lionhead
01-18-2004, 04:15 AM
hornet: the fins aren't rendered at run time; they're only there when I am "rendering" the polygons into my texture - in other words, they are there to extend the validity of the normal map into regions of the texture that would normally be "unused". I simply place a large fin (20 texels) around every triangle, and evaluate the tangent space, uvs and so on using an extrapolation of the values inside the triangle. sorry if that didn't make sense.

jwatte: apart from your general point about 4d textures being handy, I'm not sure what you mean in this case! the function I am talking about is a function of u,v and theta - and nothing else. it is exactly the same as the cutoff map used in the VDM paper, except they have an extra dimension for curvature, making theirs 4D. but if you're doing normal mapping, you don't need the curvature because it is fixed at a given uv (since I don't tile over arbitrary surfaces). if you want a better description of the cutoff map, refer to the VDM paper. anyway, as described there, without c, it's just a 3d map - u,v,theta - and that encodes well into e.g. a volume texture, just as you would use for shadowed bumps. cheers - alex

jwatte
01-18-2004, 09:54 AM
@alex:

First, the problem I'm trying to solve is that the cut-off varies by theta AND phi -- if you have a large bump to the left of you, the horizon is different to the left, compared to the right.

Second, it's not true that curvature doesn't matter, because you have to measure GLOBAL curvature of the object in general to get the horizon shift based on curvature. You don't have that from a normal map. You could get it by multi-tapping a displacement map, but then we're right back at the "how far out to look for bumps" problem that we started with.

The solution I then wanted to do was to render a 4D light field on a flat quad. The 4D texture is generated based on u, v, and two directional parameters; these can either be theta,phi, or they can be "view dot tangent" and "view dot binormal" (which seems better, as sampling density is linear in screen space as the quad turns glancing).

The output would be of the form "what would I see, if I looked at this point, from this direction". We're totally in image-based-rendering and light-field-rendering territory now, though. The paper I cited uses a single q,r coordinate set by wrapping a sphere around the object. I'm thinking that the sphere causes displacement, and wrapping a cube, and 6 separate 4D textures, around the object, would be the most accurate.

Each side of the cube, and each 4D texture, would work a little like a hologram, except no derivation of interference patterns or reference waveform required :-)

alex_lionhead
01-18-2004, 01:11 PM
jwatte: ah, I see what you mean now! last things first. what you describe - parameterising the offset required according to u,v, theta and phi - yes, that is indeed 4d! I attempted a solution to that by encoding at each u,v a set of spherical harmonic coefficients, which you can think of as a representation of a cubemap - a different one at each uv. admittedly very low frequency, but whatever. it gives you the required "4d" texture lookup. another approach to 4d texture lookups is to pack a 2d array of small 2d images - imagine a texture of 32x32 tiles, each of which is 32x32. you use your first 2 coordinates to look up which tile, and the second 2 to look within the tile. this takes quite a few instructions but works well. overall I generally prefer the SH approach; especially when 2 of the coordinates are a direction theta/phi, it works really well. as you say, there are lots of applications of this kind of 4D texture. In the specific case of encoding "given this uv and this theta phi, what is my offset distance?" I found that the cost of the SH "4d texture" wasn't justified by the quality increase it gave over "standard" parallax/offset-mapping. but that may well have been my quickly hacked implementation.
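
a quick Cg sketch of that tile-packing trick (constants hypothetical; no attempt is made here to fix filtering across tile borders):

float4 lookup4D(float2 uv, float2 dir, uniform sampler2D atlas)
{
    const float TILES = 32.0;   // 32x32 tiles, each 32x32 texels

    // one pair of coordinates picks the tile...
    float2 tile = floor(frac(dir) * TILES) / TILES;

    // ...the other pair indexes within that tile
    return tex2D(atlas, tile + frac(uv) / TILES);
}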

returning to the "phi-cutoff" thing, I think we've been talking at cross purposes. I'm not trying to solve the distance-of-offset question mentioned above and which you described - I'm only trying to solve a binary question: given this incoming value of theta, which values of phi cause an intersection with the surface, and which don't? ie - which ones run off the edge? because then I can "fix" the silhouette. then you have to do "something else" to solve the hard, 4D "ok, so how much do I offset then?". it's exactly as described in the VDM paper, which I don't have to hand, but anyway - I'll give this one a rest since they explain it well in there. and in case it causes any confusion, I think I may have swapped theta and phi from the "usual" - eg as used in vdm http://www.opengl.org/discussion_boards/ubb/wink.gif sorry. my bad. does it make more sense if you reread my post with theta and phi swapped? to reiterate my usage: theta is the angle from 0 to 2PI of the eye projected onto the tangent plane; and phi is the up-down glancing-ness, from 0 = eye ray points perpendicular to the surface to PI/2 = glancing across. what my 3d (u,v,theta) map does is tell you, for a given direction (looking left, or right, as you call it), what *range of phis will give some kind of offset* and which ones should have no fragment at all - ie are near the silhouette.

sorry for the long posts everyone. I'll shut up now.

pocketmoon
01-18-2004, 02:55 PM
Originally posted by alex_lionhead:
sorry for the long posts everyone. I'll shut up now.

Don't do that - It's good to see the pros on here spreading a little wisdom http://www.opengl.org/discussion_boards/ubb/smile.gif

BTW. Loved 'Paper' http://www.opengl.org/discussion_boards/ubb/smile.gif

jwatte
01-18-2004, 04:51 PM
@alex: Yeah, I mumbled about someone maybe doing an SH per UV in one of the early posts. Great to hear someone's actually doing it!

When you're packing texture tiles, which is what I imagined doing to fit a 4D function into a 3D volume, the problem is that filtering (MIP mapping and/or anisotropic) will pick up the neighbor tiles, rather than wrap or filter correctly. You can get around this by coding the filters yourself, but for 4D, this turns out to be 16 texture fetches for basic trilinear filtering... And if you try to go 2D with 2D tiles so you can use built-in bilinear, you run out of resolution quickly because of the highest-resolution texture limit (2kx2k or whatever, turns into 64x64 tiles with 32x32 pixels each).

I see now that, because you only need a binary result of the look-up, you can put the fourth dimension in the result of the 3D look-up, so you're doing fine on horizon mapping. I was too deep into salivating about light field theory ;-)

maxuser
01-18-2004, 05:24 PM
Alex,

I'd considered the same approach, i.e. a 4D map of (s, t, theta, phi), similar to a BRDF, but was discouraged by the memory requirements. It hadn't occurred to me to use an (s, t, theta) map to look up the precomputed max phi for which the given (s, t, theta) is occluded by the surrounding surface (assuming phi is 0 at the tangent plane and pi/2 at the normal).

Isn't this just horizon mapping? Instead of determining line-of-sight to a light source for shadowing, you're determining line-of-sight to the viewer for silhouette detection.

mogumbo
01-18-2004, 05:29 PM
Cool images, Alex. I'm still lost on the curvature part, though. It seems to me that looking up a glancing angle cutoff from a texture would not be enough. You would need to bias that value with the curvature of the surface. Higher convexity would increase the number of pixels being killed. Concavity would probably have to prevent all pixel killing.

The only way I can think to encode surface curvature in a general way is by passing a u and v curvature as a per-vertex parameter. If your surfaces were more complex you might even need to encode curvature along more than two axes.

Am I making any sense? Did I miss something here? Better yet, is someone going to post a demo of this? http://www.opengl.org/discussion_boards/ubb/smile.gif

ir6666
01-18-2004, 05:32 PM
Originally posted by coop:
sampling. This is exactly what GL_DOT_PRODUCT_NV and GL_DOT_PRODUCT_TEXTURE_2D_NV modes (under NV_texture_shader) or texm3x2pad and texm3x2tex instructions (under DX) do.

And what about perspective correctness? Doesn't it go bananas when looking from steep angles? BTW I cannot run your demo on my PC, I get a crash (maybe also add 32 bit modes?). It crashes windowed and fullscreen, 16 and 24 bit, on my GFFX5900.




[This message has been edited by ir6666 (edited 01-18-2004).]

evanGLizr
01-18-2004, 05:45 PM
Originally posted by ir6666:
BTW i cannot run your demo on my pc, i get a crash. (maybe also add 32 bit modes?). It crashes windowed and fullscreen, 16 and 24 bit on my GFFX5900.

The same happened to me (ATI 9700 pro). I had to "nop out" a few MMX/SSE instructions to make it run on my Pentium IV ("Access violation" for the memory operand of the instruction). After nopping them out, you get the first frame rendered (looks nice) but you cannot move or anything.

Coop
01-19-2004, 12:22 AM
perspective correctness:
As far as I know all these operations are perspective corrected on gf3. I guess even offset bump mapping (OFFSET_TEXTURE_2D_NV) is perspective correct on gf3; you just cannot use projective texturing with it. From what I can see there are none of the visible distortions you normally get when texturing is not perspective correct. There's a small 'swimming' around high bumps from the heightmap, but I think that's a 'feature' of the technique.
about crashes:
This is strange. I wrote the demo at home (Athlon XP, GF3) and it worked fine of course. I just ran it on my computer at work (P4, GFFX5900U) and it also works fine. I'll try to test it on other machines...

Coop
01-19-2004, 01:02 AM
I think I found it. It turns out that __declspec(align(16)) doesn't work exactly as I thought http://www.opengl.org/discussion_boards/ubb/smile.gif. You can download rebuilt dlls and exe from www.skarb.telsat.wroc.pl/coop/data/bin.zip (http://www.skarb.telsat.wroc.pl/coop/data/bin.zip)

alex_lionhead
01-19-2004, 01:35 AM
jwatte: yes, implementing those 4d filters yourself is an impractical b***h http://www.opengl.org/discussion_boards/ubb/smile.gif but the SH technique works well in some cases if you don't mind a very low frequency approximation to your two "angular" axes...

mogumbo: it's true that the kill angle depends on curvature in a horrid way. if you were precomputing this for a texture without knowing in advance what geometry it was going on, you'd be stuffed - and would need to parameterise by curvature, as they do in the VDM paper. (incidentally it's only 1d, since you only need to measure curvature in the plane of the normal + eye ray.)
the reason I get away without it is that I'm doing normal mapping - so each texture coord corresponds exactly to a specific piece of mesh, with a specific curvature. in maths terms, in this case, c is not an independent variable - it is entirely a function of u,v. so in other words, during my calc phase I'm computing the killphi angle "pre-corrected" for the curvature at that given uv. convex points of the surface get different kill values from concave etc - my precomputed texturemaps are affected by the big scale geometry as well as the bumps!

and yes, this technique really is just horizon mapping, as I mentioned in my first post http://www.opengl.org/discussion_boards/ubb/wink.gif

cheers all.

Jared
01-19-2004, 01:51 AM
going over that I see a 3d lookup texture, storing a max angle for which the fragment is visible and the correct tex coords. so (memory usage aside) we would already get offset/parallax/whatever mapping and decent outlines?

the remaining problem is geometry. what if the new coords are on the next triangle and it has a different normal or even a different texture? I still see an unused alpha channel, so maybe that could be used to store some kind of texture index. curvature should be possible to precalculate and consider in the lookup texture.

if it wouldn't result in a huge texture I'd suggest reducing the geometry to a point in space, with s,t as evenly spaced polar coordinates and a height representing the length of the vector from the point along s,t (guess that's the point where your artists will hate you and the evenly spaced points will bite you). just rendering a billboard where the object should be and offsetting the texture coords according to object orientation relative to the viewer...

it's making my head hurt; I wonder if that's even making sense, and especially it wouldn't have much to do with rendering anymore, would it?

so assuming I didn't (again) miss something, so far it would be one 3d lookup texture and probably one real texture (guess it might make more sense to use one big texture for the whole object instead of multiple small ones... especially since I wouldn't know how to access different textures depending on an index from within the fragment program). sounds pretty limiting and like it wastes tons of memory.. hey, just like many siggraph papers presented in our seminar ,-)

ssmax
01-19-2004, 03:02 AM
I'm not sure if this has been mentioned (quite possibly though!) - I'm too lazy to read all the previous posts - but it seems to me you can combine parallax mapping with horizon map functionality.

Given you have

uvOffset = f(u, v, theta, phi)

then if you use the light vector in tangent space instead of the eye vector using the same functions you can determine self-shadowing - ie if uvOffset is (0,0) then the pixel is visible to the light else it's self shadowed.
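
As a speculative Cg sketch (uvOffsetFunc is hypothetical - whatever offset function you already evaluate for the view ray, reused here with the tangent-space light):

// forward declaration of the (hypothetical) 4D offset function
float2 uvOffsetFunc(float2 uv, float theta, float phi);

float selfShadow(float2 uv, float3 ts_light)
{
    float3 L = normalize(ts_light);
    float theta = atan2(L.y, L.x);
    float phi   = acos(saturate(L.z));

    // a nonzero offset means the light ray strikes the surface
    // before reaching this texel, i.e. the point is self-shadowed
    float2 o = uvOffsetFunc(uv, theta, phi);
    return (dot(o, o) < 1e-6) ? 1.0 : 0.0;
}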

That's pretty cool. And the demos so far are too http://www.opengl.org/discussion_boards/ubb/smile.gif

Has anyone thought of using PTMs (Polynomial Texture Maps, Dan Gelb, HP research) with this, I wonder (I see SH has already been mentioned as well). 4d textures in hardware would be awesome with some decent form of compression.

Factor
01-19-2004, 03:07 AM
Just a very simple idea for getting the silhouettes working. I don't know if it works at all, I couldn't try it out yet. So, what would happen if we had a thin border of transparent pixels around the diffuse texture and all the height values were negative? The biggest problem is that this would require GL_CLAMP wrapping, so we can forget about tiling. Maybe it's b*llsh*t and doesn't work at all... and do not laugh at my bad English http://www.opengl.org/discussion_boards/ubb/smile.gif

Iestyn
01-19-2004, 03:52 PM
Originally posted by mogumbo:
Ha! I found it. A 2001 paper that talks about the same technique. They even gave it the same name, parallax mapping. So why hasn't anyone here heard of this before? It looks like the only thing I've added to this is the limiting of the texcoord offsets. ...I can't believe I spent all that time on the whitepaper (which needs to be severely updated now).
http://vrsj.t.u-tokyo.ac.jp/ic-at/papers/01205.pdf

Did you notice that their screenshots show modified silhouettes, yet there's not a single mention of how that is achieved! http://www.opengl.org/discussion_boards/ubb/smile.gif

I imagine it's something akin to how Alex is doing it, but that's some lax paper-writing! (well, unless they were saving that idea so they could write a second paper http://www.opengl.org/discussion_boards/ubb/wink.gif )

mogumbo
01-19-2004, 04:02 PM
Yeah, I noticed that. By the looks of that silhouette screenshot I think they're just using a clamped texture and setting the border color to black or transparent. If that's all they're doing then it's certainly not a general solution.

Iestyn
01-19-2004, 04:05 PM
Originally posted by mogumbo:
Yeah, I noticed that. By the looks of that silhouette screenshot I think they're just using a clamped texture and setting the border color to black or transparent. If that's all they're doing then it's certainly not a general solution.

You know, that's exactly what I thought! But then I thought, they're Japanese... Japanese people aren't lazy, quite the opposite... surely they wouldn't! Bah, what a swizz, eh? http://www.opengl.org/discussion_boards/ubb/smile.gif

Is their math identical to yours? I like their diagram, very clear (and suggestive of how a better, second-order, two-step (or nth-order, n-step) approximation can be done, as several people suggested in this thread).

dorbie
01-19-2004, 04:36 PM
The math is the same; in the paper it's described as U' = U + tan(theta)_u * depth(U,V) and V' = V + tan(theta)_v * depth(U,V), using small u and v as subscripts. This is the same as the offset described in this thread as tan(viewing_incidence) * depth_offset; obviously this offset is just a magnitude along the subscripted u & v directions (i.e. the s & t of the view vector projected into tangent space, normalized first I think).

I don't think the undocumented silhouette is lazy, or a separate idea not described in their paper; I think their offset is due to texture clamping, as mentioned much earlier in the thread (texkill in the thread). It isn't general, but it pretty much happens trivially under the right constrained circumstances.

Unquestionably the same idea. It's amazing that they called it parallax mapping. I thought the name was a bit strange. Great minds obviously think alike.


[This message has been edited by dorbie (edited 01-19-2004).]

mogumbo
01-19-2004, 04:49 PM
Ha http://www.opengl.org/discussion_boards/ubb/smile.gif I laughed out loud when I saw that we gave it the same name. It's uncanny.

There's one small difference in the math (described in section 4.3 in my whitepaper). When I first implemented it (as described in the 2001 paper) I didn't like all the texture swimming that I saw, so I limited the offsets so that they can't fly off to infinity. It's a hack, but it looks cleaner and it cuts 2 instructions out of the fragment program.

dorbie
01-19-2004, 05:05 PM
I was going to mention your addition of the clamp but I never bothered; it's definitely an important enhancement for some applications. I think it matters even more with curved tangent space, but even steeply sloped facets suffer without it. They don't really talk about the tangent space basis on a per-vertex level; it seems to be treated per face in their paper. In general they seem to approach the problem from a surface reconstruction point of view rather than enhanced 'wrinkled surfaces' (to borrow from Blinn), but they do throw the brick example in there to leave everyone in no doubt.

Jaeger
01-19-2004, 09:00 PM
Very interesting stuff here.

However, it seems many graphics cards, mine included, don't support GL_ARB_fragment_program.

How can you emulate this technique without that feature?

( hoping for some implementation hints to help get it working on my ATI Radeon mobility 9000 )

Iestyn
01-19-2004, 09:21 PM
Originally posted by Jaeger:
Very interesting stuff here.

However, it seems many graphics cards, mine included, don't support GL_ARB_fragment_program.

How can you emulate this technique without that feature?

( hoping for some implementation hints to help get it working on my ATI Radeon mobility 9000 )


This paper http://vrsj.t.u-tokyo.ac.jp/ic-at/papers/01205.pdf that Mogumbo eventually found actually shows how to implement it using EMBM support! Very cunning, that.

I'm not sure you can get Alex Evans' silhouette modification without new hardware, though.

[This message has been edited by Iestyn (edited 01-19-2004).]

[This message has been edited by Iestyn (edited 01-20-2004).]

Jaeger
01-19-2004, 10:07 PM
I can't download this for some reason.
www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip (http://www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip)

If someone could mirror it or send it to me via any messenger program, I'd appreciate it.

OldMan
01-19-2004, 10:53 PM
mirrored: http://www.inf.ufsc.br/~stein/offsetbump.zip

ssmax
01-20-2004, 03:28 AM
Given the approximation used when computing the offset - that all heights are roughly the same, since they are small - would it improve matters much if, per texel, you had a height value that was the average of all the others except the current texel? In fact the average of all the possible occluding texel heights would be better...

You could then use this height as the approximate height used to calculate the offsets.

dreddlox
01-20-2004, 04:24 AM
ssmax: good timing, I was just about to mention using the mipmaps to increase the accuracy of the approximation before seeing your post rofl

basically if your tangentspace eye vector went from 5,5 at height 1 to 3,3 at height -1, your mipmap level should be 2 and your height query coord should be 4,4, if you catch my drift

in directx using ps_2_0 you can select the mipmap level yourself using the asm command 'texldb', which uses the w part of your texcoord vector as the mipmap bias; there should be an equivalent opengl command.

but to calculate the mipmap level you should use this:
p1 is the point where the eye vector would intersect a height 1 texel
p2 is the point where it'd intersect a height -1 texel
tex.w = log2((max(p1.x,p2.x) + max(p1.y,p2.y))/2)
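
A hedged Cg sketch of one reading of that formula (names hypothetical; the height scale is assumed to be one texture unit, and there's no guard for grazing angles where E.z approaches zero):

float4 biasedHeightFetch(float2 uv, float3 ts_eye, float2 texSize,
                         uniform sampler2D height_map)
{
    float3 E = normalize(ts_eye);

    // texel-space offsets of the eye ray at heights +1 and -1
    float2 p1 =  (E.xy / E.z) * texSize;
    float2 p2 = -p1;

    // dreddlox's suggested bias
    float bias = log2((max(p1.x, p2.x) + max(p1.y, p2.y)) * 0.5);

    // the w component of the coordinate carries the mip bias
    return tex2Dbias(height_map, float4(uv, 0.0, bias));
}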

on the topic of silhouettes, I don't think it's possible to draw them using parallax mapping:
parallax mapping gives an approximation whereas silhouette mapping requires precision.
there is one way I can see though, but it requires every triangle to be individually textured, then having the area of the texture that's outside of the triangle transparent, then recoding so that the height range is 0 - 1 instead of the -1 - 1 I've been seeing in all the assembly in this thread, and putting in your vertex shader that every vertex should be stretched outwards in screenspace by normal * height of texture / distance from eye.
after all that, you probably would want to draw the scene in completely back-to-front poly order.

what would be good would be if the pixel shader could have access to all 3 vertices' output data as well as the interpolated data, so that a normal derivative could be established and a parabolic arc of the eye vector (in tangent space) could be traced, but I'm rambling now and need sleep. nite all

Jared
01-20-2004, 08:10 AM
hm, making the parts outside transparent doesn't sound so good. as you said, it would force you to do everything back to front, and I don't think that's helping the depth buffer.
reminds me: killed fragments will NOT be written to the depth buffer, will they?

also, I somehow see parallax mapping and correct silhouettes going different ways. maybe they should rather be two separate things.

Iestyn
01-20-2004, 10:53 AM
Originally posted by alex_lionhead:
yes, this technique really is just horizon mapping, as I mentioned in my first post http://www.opengl.org/discussion_boards/ubb/wink.gif

If you're still there, I was wondering if you'd tried varying the resolution of the 3D horizon map - can you reduce the [u,v] resolution to 2x or 4x lower than that of the normal map and get away with it? And then, how many 'phi' divisions (for which to calc theta-cutoff) do you need to make it look good in the other dimension? Oh, and are 8-bit values fine for encoding theta-cutoff?

Just interested in the potential of this for real-world usability http://www.opengl.org/discussion_boards/ubb/smile.gif

[I guess I should be comparing bandwidth/memory/render-time for models that look the same using no tricks, just normal mapping, parallax mapping and then parallax + horizon maps... I'm not sure what the tradeoff would be]

[This message has been edited by Iestyn (edited 01-20-2004).]

alex_lionhead
01-20-2004, 03:17 PM
iestyn: still here http://www.opengl.org/discussion_boards/ubb/wink.gif I spent a further day developing the code and making it more robust. I haven't tried dropping the res by 2x, but it was on my todo list; I think it's a good idea. incidentally you need quite a few slices in your horizon map volume texture, sadly - otherwise the silhouette suffers.

the silhouette trick as I implemented it needed ps2.0 hardware - but I'm not an expert at squeezing this stuff out of gf4 and down. it might be possible...

my current view on the whole situation, from a doing-it-in-the-real-world standpoint, is: offset maps are so cheap and easy to try, they're worth evaluating on a case by case basis. the silhouette trick looks good on only a small class of examples - namely, those with just enough displacement to actually make a noticeably bumpy silhouette (if you think about it, that's not that common!) - folds of cloth is the best example I can think of. it fails in similar cases to offset mapping (sorry, doh, I mean parallax mapping heh) - hard edges and large displacements are bad. so, for now, I'm leaving the silhouette stuff on hold in most cases. normally you can spend the video memory more profitably elsewhere, eg on actual self shadowing, or just a higher res normal map in the first place.

doubtless someone will think of a neater, cheaper trick to do the silhouettes, and then the whole enjoyable r&d spin cycle can start over. hurrah! http://www.opengl.org/discussion_boards/ubb/wink.gif

tellaman
01-21-2004, 05:41 AM
http://media.pc.ign.com/media/482/482383/img_1799027.html

looks like the guys at crytek already knew about this technique; I also believe they're using some detail normal map, as someone suggested

dorbie
01-21-2004, 06:30 AM
Nothing in those screenshots indicates anything more than simple bumpmapping.

KuriousOrange
01-21-2004, 12:00 PM
Yes, far cry looks like it's using plain bumpmapping.
I feel a little disappointed - Far Cry was supposed to be stunning, but from those screenshots it looks a little... dull... just like any other PC game, more recently like Halo. The terrain looks choppy & low res, and there's a distinct lack of shadows cast by anything. It looks like they've spent all their time on the human model rendering rather than the environment, a shame as I don't think people are that arsed about what the avatars look like in a game, just the world and how it reacts to interaction.
Anyway, I digress from the thrust of this thread.

dorbie
01-21-2004, 02:00 PM
I wasn't criticizing the game, just correcting an inaccurate statement. I think there's a lot of impressive stuff in FarCry, those screenshots don't do it justice.

[This message has been edited by dorbie (edited 01-21-2004).]

duhroach
01-21-2004, 03:15 PM
doubtless someone will think of a neater, cheaper trick to do the silhouettes, and then the whole enjoyable r&d spin cycle can start over. hurrah!



Why not just forgo the "simulation" aspect of all this, and just go for true per-pixel displacement mapping? In essence we can calculate the virtual position of a pixel, given the physical position and a normal (with magnitude). With this method we could essentially replace bump mapping entirely.
I know this gentleman did it a few years ago; not sure where he is, or why he never did anything with it.
http://www.shaderstudio.com/images/Displace.jpg
thoughts?
~Main

[This message has been edited by duhroach (edited 01-21-2004).]

divide
01-22-2004, 09:27 AM
Why not just forgo the "simulation" aspect of all this, and just go for true per-pixel displacement mapping?

I'm currently writing such an engine (http://divide.3dvf.net/d2k4/index.htm); however, it cannot use OpenGL, since it does per-pixel computation of a non-polygonal rendering.

Csiki
01-22-2004, 12:42 PM
So will we have a demo for GeForce 3/4 with OpenGL?
I would like to see it in real time on my machine. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

DarkWIng
01-22-2004, 01:41 PM
Originally posted by Csiki:
So will we have a demo for GeForce 3/4 with OpenGL?
I would like to see it in real time on my machine. http://www.opengl.org/discussion_boards/ubb/biggrin.gif
There was a DX demo posted for GF3/4 (http://www.skarb.telsat.wroc.pl/coop/data/offsetbump.zip). It takes about 10 minutes to re-implement it in OpenGL. Works just fine, but gets totally screwed up with high-frequency heightmaps.


[This message has been edited by DarkWIng (edited 01-22-2004).]

SirKnight
01-22-2004, 06:19 PM
http://sirknighttg.tripod.com/ogl/parallax_nv2x.zip

Ok here it is, the OpenGL NV2X version! I didn't mean for it to take so long; it was done last Saturday night, but I didn't get a chance to get it all packaged up and uploaded till now. I got busy doing some other things and classes started back up, plus the last two days I was having uploading problems. Damn tripod.

Ok first you need to know how to get it working. Unpackaging the rar into a folder and trying to run the program will not work. http://www.opengl.org/discussion_boards/ubb/biggrin.gif You need to download mogumbo's original demo and extract all of his tga files over into the folder you created which has my files in it. Then it can be run. I did not include his tgas because it just made my rar kinda big (for my 56k almost anything is big) and it was taking way too long to upload so I had to do some trimming.

If you are using a Linux platform then you will have to compile the code yourself, as the executable I include is for win32 only. Also, the Makefile does not reference any of the Cg libraries, so using the Makefile as it is now won't work; some editing of the Makefile will have to be done before you can compile the demo. Also be sure you have Cg for Linux installed, obviously.

Ok now some observations:
While this parallax technique can be done on the NV2X level of hardware, it can't be done well. The problem is the NV2X hardware is rather limited, so normalization becomes an issue here. Since you can only access 4 textures at once on this level of hardware, I cannot use a normalization cubemap for the light vector like I would have liked. The first three texture units are taken up by the modified height map and then the two dot_product_2d_xxx texture shader instructions. This leaves me with only one texture unit left, and I have no choice but to put the normal map in there. This forces me to pass the tangent space light vector in some other input, like one of the color registers. This sucks because I cannot then normalize it on a per-pixel basis in the fragment shader; this hardware just can't really do that. There is one trick, but it's not a true normalization. The problem here is that when the light vector gets interpolated across the surface, it denormalizes, causing obvious problems. At a distance everything looks pretty good, but once you start to move closer to the surface, you can see the extruded bumps physically pushed back in, causing the surface to look flat like it would with conventional bumpmapping. I doubt I'd recommend the use of parallax bumpmapping on NV2X hardware because of this.
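
For reference, the usual NV2x-era approximation (a hedged sketch, and not necessarily the exact trick meant above): one Newton-Raphson step, cheap enough for register combiners, written here as Cg.

float3 approxNormalize(float3 v)
{
    // exact when |v| == 1, decent for mildly denormalized
    // interpolated vectors; still drifts for strong denormalization
    return v * (3.0 - dot(v, v)) * 0.5;
}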

One thing you may notice is the shading on the parallax bumped surfaces doesn't shade the same way as the regular bump mapped surface. I'm not exactly sure why but I was able to make it look better by multiplying the N.L part by the z component of the tangent space light (L), then multiplying that by light color and rgb map, finally saturating the result. I do not show this in the code I have uploaded though. http://www.opengl.org/discussion_boards/ubb/smile.gif

All in all, doing this on newer hardware that supports ARB_fragment_program looks far superior, and I would not hesitate to use it on that code path. As for an NV2X path, I'd probably not use it; I just don't like the denormalization of the light.

This seems kinda weird, finally releasing this like a week and a half later than I really could have, but oh well, I can't help it when things happen and I don't get a chance to do what I want. So, I hope someone likes this demo. If not I really don't give a rat's arse. http://www.opengl.org/discussion_boards/ubb/wink.gif It was fun for me doing it and it did teach me a few things in the process. Like never to forget to call cgLoadProgram(id) again. http://www.opengl.org/discussion_boards/ubb/wink.gif I can't believe I did that. I spent an ungodly amount of time trying to figure out what the hell was going on. I was really puzzled for a long time. Then once I realized I wasn't calling that function... well, you can imagine how much of a retard I felt like after that. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Ok...I think that is all I wanted to say about it. TTFN!

EDIT: Oh yeah, I almost forgot. http://www.opengl.org/discussion_boards/ubb/biggrin.gif As in the original demo, hitting 't' will cycle through the textures. The second texture looks the best, I think. Also, don't expect this demo to look exactly like the arb_fp version, as the arb_fp version looks a million times better. Even plain ol' regular bumpmapping with arb_fp (or nv) makes anything else (GeForce 3/4 and the like) look like poo.


-SirKnight



[This message has been edited by SirKnight (edited 01-23-2004).]

Csiki
01-23-2004, 12:03 AM
The rar file seems to be corrupt.
When I try to download I get file sizes of 289,923 and 289,925 bytes, but none of the downloaded files are good. http://www.opengl.org/discussion_boards/ubb/frown.gif

dorbie
01-23-2004, 02:16 AM
A corrupt file would be consistent with that. I can't download this either: it says it's a rar but it comes up as garbage HTML, and if I "save as" it tries to save with an html extension. There's something ugly about the way this link is being served.

Renaming the file allows me to open it in WinRAR and get a directory listing, but extracting indicates some corrupt files in the package.


[This message has been edited by dorbie (edited 01-23-2004).]

SirKnight
01-23-2004, 09:11 AM
Damn, sorry about that. I think I'll make it a zip instead, even though rar compresses better. And I'll try moving the file to a different folder on my site; maybe that will help. If all else fails I'll just email it to people who want it. http://www.opengl.org/discussion_boards/ubb/biggrin.gif


-SirKnight

ScottManDeath
01-23-2004, 10:36 AM
Hmm, but what about multiple light sources? I see a problem in deciding which perturbed base texture to use. Maybe blend between all of them? Or take the offsets from all light sources into account and use the combined offset as the lookup coordinate?

dorbie
01-23-2004, 10:41 AM
Eh? It's the view vector that causes the offset, not the light vector. The offset would be the same for all lights.
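
In sketch form (Cg-ish, with hypothetical names), the perturbed texcoord is computed once from the eye vector and then reused for every light:

// One height fetch and one offset, shared by all lights.
float height = tex2D(heightMap, uv).r * scale + bias; // scale/bias tune bump strength
float2 uvP = uv + height * eyeVecTS.xy;               // eye-dependent, light-independent
float3 N = tex2D(normalMap, uvP).rgb * 2.0 - 1.0;
float3 color = 0;
for (int i = 0; i < NUM_LIGHTS; i++)                  // unrolled by the compiler
    color += saturate(dot(N, normalize(lightVecTS[i]))) * lightColor[i].rgb;
color *= tex2D(colorMap, uvP).rgb;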

ScottManDeath
01-23-2004, 11:05 AM
Oops, I must have missed that somehow. http://www.opengl.org/discussion_boards/ubb/wink.gif Now nothing prevents me from implementing it too...

SirKnight
01-23-2004, 12:42 PM
Ok, I fixed it. I edited my message with the new link. Just right-click, "save target as", and all will be well. I tried it out myself, so if it doesn't work for you, then it's your fault. http://www.opengl.org/discussion_boards/ubb/wink.gif hehe.


-SirKnight

Csiki
01-23-2004, 01:55 PM
Originally posted by SirKnight:
Ok, I fixed it. I edited my message with the new link. Just right-click, "save target as", and all will be well. I tried it out myself, so if it doesn't work for you, then it's your fault. http://www.opengl.org/discussion_boards/ubb/wink.gif hehe.
-SirKnight

Thanks, very nice. http://www.opengl.org/discussion_boards/ubb/biggrin.gif
The simple bump map doesn't work (there's just an empty space where it should be); it must be too difficult for my card. http://www.opengl.org/discussion_boards/ubb/smile.gif

SirKnight
01-23-2004, 03:11 PM
Hmm, that's odd. Are you sure you have all the textures installed? I'm using the FP20 and ARBVP profiles, which target the nv2x level, so I don't see why it wouldn't work. Just because I developed it on a GeForce FX shouldn't matter, since it's the profiles that determine the code generated for the hardware I'm targeting.

Can you list all of the textures you have in the directory you created?

-SirKnight

SirKnight
01-23-2004, 03:16 PM
Actually I'll do even better. I'll tell you what textures should be there and you can see if they are all installed.

brick.tga
brick_h.tga
brick_normal.tga
rockwall.tga
rockwall_h.tga
rockwall_normal.tga
roots.tga
roots_h.tga
roots_normal.tga

The other tgas (the ones with _height) are only used in the ARB_fp mode. The "grate" tgas are not used.


-SirKnight

Csiki
01-24-2004, 12:58 AM
I can change the textures (and I have all the listed textures), but the second section (the bump map) doesn't appear.

HamsterofDeath
01-26-2004, 12:30 AM
Could this be used for 3D decals?

Humus
01-27-2004, 04:45 PM
In case anyone's interested, I have updated my self-shadowing offset mapping demo to also include shadow mapping combined with this technique. So now the global shadows properly follow the bumps on the surfaces too. http://www.opengl.org/discussion_boards/ubb/smile.gif
http://esprit.campus.luth.se/~humus/
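
The gist of the combination, as a rough Cg-style sketch (simplified, with made-up names; grab the demo source for the real thing):

// Do the shadow map test at the (approximately) displaced surface
// point so the shadow boundary follows the bumps instead of the
// flat geometry. The displaced-point estimate is an assumption
// for illustration, not the demo's exact math.
float height = tex2D(heightMap, uv).r * scale + bias;
float2 uvP = uv + height * eyeVecTS.xy;            // usual offset-mapped texcoord
float3 posW = worldPos + worldNormal * height;     // rough virtually-displaced point
float4 sc = mul(shadowMatrix, float4(posW, 1.0));
float shadow = tex2Dproj(shadowMap, sc).r;         // hardware depth compare assumed
// ...then do the normal/color lookups at uvP as usual and multiply
// the lighting by the shadow term.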

Iestyn
01-27-2004, 05:24 PM
Originally posted by Humus:
In case anyone's interested, I have updated my self-shadowing offset mapping demo to also include shadow mapping combined with this technique. So now the global shadows properly follow the bumps on the surfaces too. http://www.opengl.org/discussion_boards/ubb/smile.gif
http://esprit.campus.luth.se/~humus/

Hmm, there's a concentric ring artifact when the light gets close to the brick walls. Do you know what causes that? I'm sure that wasn't there in the earlier version... did you reduce the resolution of the horizon map?

Humus
01-28-2004, 03:40 AM
Hmm... I've been looking and looking, and I'm not seeing them. I know I had them before, but I tweaked things until they were gone, so I'm not sure why some people are still seeing them. In any case, the reason is that I'm using an 8-bit shadow map.
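
If the banding bothers anyone, one workaround (a sketch, not what the demo does) is to pack the depth into two 8-bit channels for roughly 16 bits of precision, at the cost of doing the depth compare yourself in the shader:

// Pack a [0,1) depth value into two 8-bit channels of an RGBA8
// target; unpack and compare manually in the lighting pass.
float2 packDepth(float z)
{
    float hi = floor(z * 255.0) / 255.0; // coarse 8-bit part
    float lo = frac(z * 255.0);          // fine remainder
    return float2(hi, lo);
}

float unpackDepth(float2 p)
{
    return p.x + p.y / 255.0;
}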

SirKnight
01-28-2004, 09:17 AM
200th post!

Cool demo, Humus. Too bad it won't run in Linux; I had to reboot into Windows to try it. Dang ol' Windows. http://www.opengl.org/discussion_boards/ubb/biggrin.gif


-SirKnight

Binaur
01-29-2004, 10:44 AM
Originally posted by duhroach:
Why not just forgo the "simulation" aspect of all this and go for true per-pixel displacement mapping? In essence, we can calculate the virtual position of a pixel given its physical position and a normal (with magnitude). With this method we could essentially replace bump mapping entirely.
I know this gentleman did it a few years ago; I'm not sure where he is or why he never did anything with it.
http://www.shaderstudio.com/images/Displace.jpg
thoughts?
~Main

[This message has been edited by duhroach (edited 01-21-2004).]

I've been following this thread in detail. I'm the creator of ShaderStudio, and my technique is a hybrid of offset mapping and relief mapping. The only difference is that I evaluate the warping equations per pixel, which includes the perspective projection. Of course I had all the same problems as offset mapping, i.e., visibility and occlusion, since the warping equations are a many-to-one mapping.
I like the ongoing ideas for solving this problem, as any form of image-based displacement will have to deal with overlap and visibility occlusion.

arekkusu
01-29-2004, 06:43 PM
I'm having problems integrating self-shadowing into this technique. I have it working with two self-shadowing light sources (and a detail texture map), but there is an ugly artifact due to the height map interpolation.

Maybe those of you who have this working (humus, Tom Nuydens) can take a look at this thread (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011444.html) ?

duhroach
01-30-2004, 06:19 AM
Of course I had all the same problems as offset mapping, i.e., visibility and occlusion, since the warping equations are a many-to-one mapping.
I like the ongoing ideas for solving this problem, as any form of image-based displacement will have to deal with overlap and visibility occlusion.


I came up with a similar solution for per-pixel offset mapping. I've been trying to eliminate the extra texture overhead of VDMs. My solution requires a three-step process:
1. A fragment shader calculates the offset position of each pixel based on its normal (i.e., we calculate where the virtual geometry would be in screen space).
2. We rebuild the map on the CPU (because fragment processors won't let us directly plot pixels).
3. Re-render the rebuilt map.

I've run into a few problems that have sent me back to the research board for a while. Most of them stem from stage 1, where the same pixel can generate multiple offsets (who gets the spot?). A limited solution is a two-pass approach for front and back faces, but that's very limited.

Another problem I ran into is having to rebuild the map on the CPU. I've been trying to devise a reversal algorithm for the virtual position (pixel + normal) that would let me eliminate the virtual position entirely, but I have yet to discover one.

Third, I'm still getting the "blocky" results that every other PPD method seems to have, which come from the non-interpolated texture coords. I haven't had time to hit this one hard yet.

Anyone else have some ideas?

~Main

mogumbo
01-30-2004, 07:55 AM
Hey duhroach, that sounds a lot like relief texture mapping. Check out the 9th post on the 2nd page of this thread; there's a paper on it there. ...or just do a Google search for it.

JustHanging
01-30-2004, 08:46 AM
An inverse version of relief textures (or so they call it) has been developed too; check http://www.cgshaders.org/shaders/show.php?id=44

None of these techniques (with the exception of this thread) has shown anything you couldn't do with polygons faster and easier, though http://www.opengl.org/discussion_boards/ubb/frown.gif Of course, that might be because they've only been used in tech demos, not in anything serious.

-Ilkka

Anybody
02-24-2004, 10:11 AM
This effect is the best I have ever seen.

But can someone explain to me whether it is really necessary to normalize the normal map?

Frame rate with normalization of the normal map: 34
Frame rate without it: 55

dleech
02-24-2004, 11:33 AM
Can someone send me the DirectX 8 version of this? The original and the mirror are gone. (OffsetBump.Zip)

My email is dan@outlawstudios.com, or I will also check back here to see if anyone puts up a new mirror of the code.


Thanks a bunch!
Dan Leech

SirKnight
02-24-2004, 01:50 PM
Originally posted by Anybody:
This effect is the best I have ever seen.

But can someone explain to me whether it is really necessary to normalize the normal map?

Frame rate with normalization of the normal map: 34
Frame rate without it: 55

Well, to achieve the highest rendering quality (and it's more correct to do so) you really should normalize the samples from the normal map. But if you're seeing that large a performance decrease and you don't notice any visual difference between normalizing and not normalizing, then perhaps you shouldn't bother with the renormalization. I'm surprised three instructions cause that big a performance drop.
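
For clarity, the renormalization being discussed is just this (Cg sketch; the normalize() is what costs the extra instructions, roughly a DP3/RSQ/MUL sequence):

// Expand the sample from [0,1] to [-1,1], then renormalize to
// undo the shortening from texture filtering.
float3 N = tex2D(normalMap, uv).rgb * 2.0 - 1.0;
N = normalize(N);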


-SirKnight

Anybody
02-24-2004, 02:05 PM
It is only the RSQ instruction. I am surprised about that too, but the reciprocal square root seems to be very expensive. (I have a GeForce FX 5600, so I don't think my card is too old...)

santyhamer
02-26-2004, 06:24 PM
Simply impressive.

Die, Carmack... MOGUMBO is our new guru!

davepermen
02-27-2004, 12:32 AM
Originally posted by Anybody:
It is only the RSQ instruction. I am surprised about that too but a square root seems to be very performance decreasing. (I have a GeForce FX 5600 so I don't think my card is too old...)

Try it with a normalization cubemap; that was in some NVIDIA paper, I think. On the GeForce FX you should always try both and take the faster one, because it depends on the whole shader.

Just try it.
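
Something like this sketch, where normCube is assumed to be a cubemap whose texels store the normalized direction to that texel, range-compressed to [0,1]:

// One cubemap fetch replaces the DP3/RSQ/MUL sequence. The lookup
// direction doesn't need to be normalized first, which is the point.
float3 N = texCUBE(normCube, v).rgb * 2.0 - 1.0;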

Anybody
02-27-2004, 05:44 AM
Originally posted by davepermen:
Try it with a normalization cubemap; that was in some NVIDIA paper, I think. On the GeForce FX you should always try both and take the faster one, because it depends on the whole shader.

Just try it.

I found the paper. I will now try to implement it and hope it will be faster. I really need this effect! I think I will use it with all my objects...

DJSnow
02-28-2004, 03:42 PM
>>have shown anything you couldn't do with
>>polygons faster and easier,
Yes, today's rasterizers are catching up quickly, so rendering/rasterization won't be such a big problem in the near future...