
View Full Version : Curved Relief Mapping with correct silhouettes



fpo
12-10-2004, 03:48 PM
Hi... just created a new relief mapping shader supporting correct silhouettes and would like to share some images of the first test.

I use the surface curvature information passed in at the vertices to perform a correct ray intersection in the fragment shader, which allows the relief to be visible at the object silhouette.

Compare the results from the previous tangent space version, which considered the surface always flat, to the new one that supports curvature information.

Old shader screenshot (http://www.paralelo.com.br/img/relief_curved_old.jpg)

New shader screenshot (http://www.paralelo.com.br/img/relief_curved_new.jpg)

MikeC
12-10-2004, 04:21 PM
Shun-SHENG duh gao-WAHN...

V-man
12-10-2004, 09:35 PM
Hmm, the second sphere looks smaller. Are you using discard in there to kill some fragments at the edges?

Can you make a demo with a single quad?

991060
12-10-2004, 10:42 PM
Oh, man, this is simply stunning, when can we have a demo?

Lurker_pas
12-11-2004, 12:16 AM
The picture is awesome!
I wonder what card you rendered it on / what cards it is possible on / what the performance is?

I have read once an interesting discussion about ray-tracing vs rasterization. Someone said that none of these will win, but they will be replaced with their combination. It's great to see this happen...

And I think I'll add yet another request for a demo (and/or avi for those (me) with old junk). I'm dying to see how these bumps change with view direction!
See you!

jwatte
12-11-2004, 07:23 AM
Impressive! I imagine you use a fairly high-resolution normal and displacement map?

Does the map tile? If so, how do you tell the cases where you want to kill pixels from the cases where you don't?

SirKnight
12-11-2004, 08:26 AM
Awesome! That does look so much better.

-SirKnight

fpo
12-11-2004, 08:56 AM
Ok, here we go...


Originally posted by MikeC:
Shun-SHENG duh gao-WAHN...
What does that mean?!?! English please...
Looking over the web I found a reference in Firefly (sci-fi series) and it translates to 'holy testicle tuesday'?!?!? Should I take it as a compliment???


Originally posted by V-man:
Hmm, the second sphere looks smaller. Are you using discard in there to kill some fragments at the edges?
Can you make a demo with a single quad?
Yes, I use discard to define the silhouette. Yes, it is possible to render planar objects too. Actually a planar surface is just a special case in the new, more generic curved surface shader (where the curvature is zero; equivalently, the radius of curvature is infinite).


Originally posted by 991060:
Oh, man, this is simply stunning, when can we have a demo?
Soon... but next week I will be out of Brazil for some work meetings. I will try to get it done before the holidays. I still have to organize the demo interface options now and do a few modifications to the curvature mesh offline preprocess (curvature is passed in the 4th component of the tangent space vectors, using a float4 for tangents).


Originally posted by Lurker_pas:
The picture is awesome!
I wonder what card you rendered it on / what cards it is possible on / what the performance is?

I have read once an interesting discussion about ray-tracing vs rasterization. Someone said that none of these will win, but they will be replaced with their combination. It's great to see this happen...

And I think I'll add yet another request for a demo (and/or avi for those (me) with old junk). I'm dying to see how these bumps change with view direction!
See you!
I use a GeForce 6800 GT (the only one that runs this shader I think... maybe the 3Dlabs one can also do it). It works with the Cg profile ARBFP1 and in D3D with pixel shader 2.0b.

Good point on the ray tracing thing... this is kind of mixed triangle rasterization and ray tracing. Cool!

I will post a video today, and will make the demo available asap once I get it somewhat complete with all the options I want it to include.


Originally posted by jwatte:
Impressive! I imagine you use a fairly high-resolution normal and displacement map?
Does the map tile? If so, how do you tell the cases where you want to kill pixels from the cases where you don't?
I'm using 512x512 normal and texture maps, but 256x256 would also look ok. And yes, the texture can tile as much as you want, and mipmapping works fine in large tile cases.

The killed pixels are the ones where the ray-depthmap intersection does not hit the displaced surface.

See you...
FPO

SirKnight
12-11-2004, 10:26 AM
Looking over the web I found a reference in Firefly (sci-fi series) and it translates to 'holy testicle tuesday'?!?!?
lol!



Should I take it as a compliment???
Yes. That basically means "OMG HOLY CRAP THAT LOOKS AWESOME!!!!" :D

-SirKnight

MikeC
12-11-2004, 10:27 AM
Originally posted by fpo:


Shun-SHENG duh gao-WAHN...
Looking over the web I found a reference in Firefly (sci-fi series) and it translates to 'holy testicle tuesday'?!?!? Should I take it as a compliment???
Very much so; in its original context, it's an exclamation of impressed awe. (Terribly offtopic, btw, but if anyone hasn't seen Firefly yet they're missing out. Partly because it has some interestingly novel and very effective "amateur" CGI aesthetics, with shaky camera and bad focus, but mostly because it's just flat-out the coolest TV ever made.)

Does this fragment-kill approach look good on sharp corner silhouettes, as well as curves? Assuming the relief textures match up on each side, I'd think it would.

I do wonder how much you'd need to tweak your geometry for this approach... it's going to be a while before anyone can _assume_ hardware good enough to run this. If you had, say, a relief-mapped sphere resting on a flat surface, and it didn't have any "bumps" on the underside, it'd appear to be floating in mid-air. I suppose this is just another example of "fake" geometry and physics not really mixing.

Still damned impressive, though.

fpo
12-11-2004, 10:57 AM
For the extra data needed on models, I need 4 extra floats per vertex for the new effect. Two for curvature at each tangent direction and two for texture mapping scale at each tangent direction. This can all be calculated offline once and stored with the model vertices in the geometry file.

MikeC
12-11-2004, 01:40 PM
Actually, I wasn't thinking about the extra data. I was thinking about the case where you have a big bunch of geometry art assets and want to switch between a bleeding-edge GL2.x renderer (with "fpo relief mapping") or a fallback renderer (with cruder or no bump mapping) at runtime depending on the hardware/driver available. Because the fragment-kill approach to silhouettes has the side-effect of "shrinking" geometry, a model that looks good in one renderer might not look good, or even correct, in the other.

Video looks amazing. There's a pale greenish fringe along the left-hand edge of the silhouette; it's very obvious in the last frame. Is this an artifact of the movie encoding? There was no sign of it in the JPG you posted.

Also, does your shadowing still work?

fpo
12-12-2004, 02:40 AM
Some other screenshots using the other maps (brickbump, brickwall, and rockwall).

new relief rock bump (http://www.paralelo.com.br/img/relief_curved_new_rockbump_sphere.jpg)
new relief rock wall (http://www.paralelo.com.br/img/relief_curved_new_rockwall_sphere.jpg)
new relief brick wall (http://www.paralelo.com.br/img/relief_curved_new_brickwall_sphere.jpg)

zeckensack
12-12-2004, 06:01 AM
This rocks!

Can you give a rough outline on the shader complexity? Instruction counts vertex/fragment? Number of dependency levels?

fpo
12-12-2004, 08:09 AM
Originally posted by zeckensack:
This rocks!
Can you give a rough outline on the shader complexity? Instruction counts vertex/fragment? Number of dependency levels?
The compiled fragment shader's assembly instruction count is at 248 now, but it can be optimized (the shader is too long, but this is just the first test, and it can be optimized, especially in the search loops). I'm sure pro_optimizer can make it much smaller. The vertex shader is only 22 instructions.

It uses the same approach as the previous shader (a linear search followed by a binary search). For the linear search 16 steps are fine (16 texture reads), and for the binary search about 6 dependent texture reads are enough. Then two reads for the color and normal textures.
(24 texture reads in total, 6 of them dependent)
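The search fpo describes maps naturally to a small CPU-side sketch. This is a hedged Python stand-in for the fragment shader logic, not his actual Cg code: `depth_map` is a toy replacement for the depth texture lookup, and the step counts match the 16 linear / 6 binary figures quoted above.

```python
def depth_map(u, v):
    # Toy depth map: a flat "floor" at depth 0.5 (0 = surface, 1 = deepest).
    return 0.5

def relief_intersect(start, direction, linear_steps=16, binary_steps=6):
    """March start + t*direction (texture-space (u, v, depth) tuples) and
    return the t where the ray first dips below the depth map."""
    def point(t):
        return tuple(s + t * d for s, d in zip(start, direction))

    # Linear search: coarse steps until the ray depth exceeds the stored depth.
    t, step = 0.0, 1.0 / linear_steps
    hit = False
    for _ in range(linear_steps):
        u, v, d = point(t + step)
        if d >= depth_map(u, v):
            hit = True
            break
        t += step
    if not hit:
        return None  # no intersection: this is the fragment that gets killed

    # Binary search: refine the hit inside the last interval [t, t + step].
    lo, hi = t, t + step
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        u, v, d = point(mid)
        if d >= depth_map(u, v):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A vertical ray through the toy map reports a hit near depth 0.5; the `None` branch is where the shader's `discard` would produce the silhouette.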

davepermen
12-12-2004, 01:08 PM
looks nice. too bad it won't run on my 9600pro..

knackered
12-13-2004, 12:35 AM
Brilliant.
How does it work with shadow volumes? :)

SirKnight
12-13-2004, 07:00 AM
Originally posted by knackered:
Brilliant.
How does it work with shadow volumes? :) "Shadow volume reconstruction from depth maps" maybe?

-SirKnight

SKoder
12-13-2004, 10:05 AM
Those who don't have a GeForce 6800 can download a relief
mapping shader (for RenderMonkey 1.5) from here:
Shader (http://evolution.times.lv/Relief2.rar)

It uses multipass rendering.

SeskaPeel
12-13-2004, 12:13 PM
Would there be a way to avoid all the heavy ray-tracing computation in the middle of the mesh and keep it only for the silhouettes? That way you could switch to a cheaper approach for the "inside".

In a more general case, is there a way to make it adaptive, say depending on a value stored per vertex, or even in a texture?

SeskaPeel.

knackered
12-13-2004, 10:26 PM
Originally posted by SirKnight:

Originally posted by knackered:
Brilliant.
How does it work with shadow volumes? :) "Shadow volume reconstruction from depth maps" maybe?

-SirKnight
I think rendering this shader from the point of view of multiple lights to get the depth maps for subsequent edge detection may perhaps start to negate the vertex throughput savings achieved.
Maybe in a couple of years...

Pragma
12-14-2004, 06:31 AM
That's very cool. I have been following your work rather intently lately since it fits quite closely with my own research.

In developing a very similar technique for displacement mapping based on ray tracing I ran into the same problem: silhouettes are completely flat.

And I arrived at a similar solution to the one you have just mentioned (and the one used for Wang et al.'s View-dependent Displacement Mapping): generate principal curvatures per vertex on the mesh, and use these to approximate a ray in texture space by a parabola instead of a straight line.

I found that the parabolic approximation worked very well on spheres (as you presented) and even on cylinders. But then I found that it failed on a torus. As far as I could tell this was not a bug in my code: it seems that an approximation to curvature based on local derivatives is actually quite poor, particularly in places such as saddle points. Have you had similar results with the torus? Could you perhaps post a picture? It would be great to see it on more complex models.

I believe that Wang et al. realized the drawbacks of the parabolic ray approach, which is probably what led them to abandon it for their EG04 paper.

Edit: I had a version of the GDM paper without pictures, so I missed that figures 8c and 8d show clearly that VDM did in fact have the same problem with silhouettes that I had:
http://research.microsoft.com/users/xtong/xtong.html
I guess this also shows how easy it is to hide crappy results in a paper. :rolleyes:

SirKnight
12-14-2004, 08:35 AM
Do shadows still work with this new addition?

-SirKnight

fpo
12-14-2004, 07:26 PM
Hi Pragma, looks like we had similar ideas, then. Do you also use a linear/binary search for the ray intersection?

And yes, I used a parabola for the ray intersection, as x*x is the simplest function to define the curved ray through texture space.
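That parabolic ray can be illustrated with a short Python sketch. This is not fpo's shader; the exact way the per-vertex curvature feeds into the quadratic term is an assumption here, chosen so the depth offset grows with the square of the tangential distance travelled:

```python
def curved_ray_point(start, direction, curvature, t):
    """Point at parameter t on a texture-space ray bent by surface
    curvature: the linear ray plus a parabolic depth offset (the x*x term)."""
    u = start[0] + t * direction[0]
    v = start[1] + t * direction[1]
    # Relative to a curved surface, the ray's depth grows quadratically
    # with the tangential distance travelled: add curvature * distance^2.
    tangential_sq = (t * direction[0]) ** 2 + (t * direction[1]) ** 2
    d = start[2] + t * direction[2] + curvature * tangential_sq
    return (u, v, d)
```

With curvature 0 this degenerates to the straight ray of the planar shader, matching fpo's earlier remark that the planar case is just a special case of the curved one.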

I made this demo the day before I had to fly out of Brazil to US for a few business meetings. So I did not have time to test it with other objects.

As soon as I get back home next week I will do some more work on general meshes and also the torus case. I thought of using negative curvatures for parts of the torus.

The bad thing about VDM and GDM is the 4D map that needs to be generated. This reduces quality, as sampling angles with low-res textures gives you artifacts when looking too close. It is much simpler for artists to use standard depth maps.

Maybe the GDM approach could use a ray intersect like the one in my shader (linear/binary search) instead of sampling the 4D map.

And SirKnight, my new shader can also do self-shadows and correct depth values, even more precisely than the previous version, as the depth range is better defined (in the previous one depth was an arbitrary factor; in the new one the depth factor is in scene units).

Pragma
12-15-2004, 07:29 AM
I am actually not using linear/binary search. I started by implementing linear search, but you run into the issue of aliasing pretty fast. Essentially with linear search there is always a chance you miss a detail completely. I am using a technique based on John Hart's "Sphere Tracing" ( http://graphics.cs.uiuc.edu/~jch/papers/ ) . It is sort of a compromise between 4d/5d vdm/gdm and just 2d depth maps, since you need to keep a 3d map.
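Pragma's alternative can be sketched the same way. This is a hedged, illustrative Python rendition of Hart-style sphere tracing, with an analytic distance function standing in for his 3D distance map (the toy field and all names are assumptions):

```python
import math

def distance_field(p):
    # Toy signed distance field: a sphere of radius 0.3 at the origin; in
    # the technique described above this would be a 3D distance-map lookup.
    return math.sqrt(sum(c * c for c in p)) - 0.3

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    """March along a unit-length direction, each time stepping exactly the
    distance to the nearest surface, so no step can skip over a feature."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = distance_field(p)
        if dist < eps:
            return t  # converged onto the surface
        t += dist  # safe step: guaranteed not to cross the surface
    return None  # no hit within the step budget
```

The safe-step property is exactly why it avoids the "miss a detail completely" aliasing of plain linear search, at the cost of storing distances instead of a plain 2D depth map.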

The choice of ray intersection technique is somewhat orthogonal to the way you define your ray, so I am sure that the GDM approach would work with linear search. This is basically what was done by Hirche et al. ( http://portal.acm.org/citation.cfm?id=1006077 ) as well as Peng et al. ( http://portal.acm.org/citation.cfm?id=1015773 ).

fpo
01-11-2005, 04:35 AM
Hi again Pragma... sorry for the long delay on getting back to this shader... was too busy at work recently.

But I found some time to generalize my curved relief mapping idea yesterday and was surprised by the excellent results I got!

I use a quadric to represent curvature (A*X^2 + B*Y^2 = Z) and only pass the coefficients A and B at each vertex. To calculate the quadric coefficients for each mesh vertex I use a simple least-squares matrix solver, just as shown in the paper garimella-2003-curvature.pdf (google for it).
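A hedged sketch of that offline step (plain Python standing in for whatever solver the preprocess actually uses): fit z = A*x^2 + B*y^2 to a vertex's neighbors, expressed in its tangent frame, via the 2x2 normal equations of the least-squares problem.

```python
def fit_quadric(neighbors):
    """neighbors: (x, y, z) offsets of nearby vertices in the tangent frame
    (x, y along tangent/bitangent, z along the normal). Returns the (A, B)
    minimizing sum((A*x^2 + B*y^2 - z)^2), via the normal equations."""
    sx4 = sum(x ** 4 for x, y, z in neighbors)
    sy4 = sum(y ** 4 for x, y, z in neighbors)
    sx2y2 = sum(x * x * y * y for x, y, z in neighbors)
    sx2z = sum(x * x * z for x, y, z in neighbors)
    sy2z = sum(y * y * z for x, y, z in neighbors)
    det = sx4 * sy4 - sx2y2 * sx2y2
    a = (sx2z * sy4 - sy2z * sx2y2) / det
    b = (sx4 * sy2z - sx2y2 * sx2z) / det
    return a, b

# Samples taken exactly from z = 2*x^2 - y^2 should recover A=2, B=-1:
# a saddle, i.e. positive curvature one way and negative the other.
samples = [(0.1 * i, 0.1 * j, 2 * (0.1 * i) ** 2 - (0.1 * j) ** 2)
           for i in range(-3, 4) for j in range(-3, 4)]
```

The saddle sample is exactly the torus-like case discussed earlier in the thread: one positive and one negative curvature direction at the same vertex.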

I already tested my new shader with a sphere (constant positive curvature in two directions), cylinder (constant positive curvature in one direction and 0 on the other), torus (constant positive in one direction and positive/planar/negative in the other) and a teapot (all different curvatures all around).

I will post a demo/paper asap. Screenshots by the end of the day.

SirKnight
01-11-2005, 04:44 PM
That's pretty nice looking. There were some issues I noticed in that screenshot with the wood texture and the pyramids. That one pyramid on the left is quite "stair steppy." Other than that, everything else looked pretty good. BTW, how many instructions and all that did the old version have compared to your new version?

-SirKnight

BlackBox
01-17-2005, 01:40 AM
The images look great! :)
When can we expect more information or a paper? :p

fpo
01-19-2005, 09:28 AM
Hi... just finished support for shadows with the relief mapped objects.

Shadows from the relief object into the world have the nice silhouettes, and shadows from the world onto the relief object project correctly over the per-fragment displaced surface.

NormalMap shadows (http://paralelo.com.br/img/relief_shadows_nodepth.jpg)
Curved ReliefMap shadows (http://paralelo.com.br/img/relief_shadows_fulldepth.jpg)

Also I made a new video showing the effect in action with camera/object movement. I also included the actual geometry being drawn as a red wireframe so you can see exactly what is going on.

Curved relief map video (http://paralelo.com.br/img/ReliefMappingCurved.wmv)

Enjoy! Paper and demo soon...

LogicalError
01-19-2005, 10:39 PM
It looks great, but i noticed an artifact:
when a stone goes into the distance it wobbles slightly (the artifact seems to be stronger near the triangle edges).
you'd probably notice it more when using a more regular texture.
the artifact is subtle however, and i doubt it would be noticeable in practice..

Volker
01-19-2005, 11:28 PM
looks very nice.

but what's the performance?
compared to parallax mapping and other techniques?

also compared to a displaced high geometry version of the object?

did you test it with more arbitrary meshes?

how about inwards looking meshes (eg cornell box), how well does it perform at the seams?

cheers
Volker

fpo
01-20-2005, 05:47 AM
Yes, there are some artifacts at the sphere top/bottom, as the mapping there compresses too much. Also the top/bottom vertex degenerates the mapping completely. I'm working on better support for such degenerate vertices in my tangent space computation routine. Also I'm testing ways to compensate for texture mapping expansion/compression in the shader.

The shader is much more complex than simple bump/parallax mapping. After all the optimizations I could think of, the shader compiles to 220 assembly instructions and uses 6 registers (0 to 5).

mbue
01-21-2005, 11:16 AM
Hello,

dramatic progress, amazing pictures.

How do you handle sharp edges? E.g. a cube or actual level geometry (at the corner of a brick wall, the ray that is cast on the texture has to change direction).

Would it be possible to have a look at your code?

Regards,

mbue

gaby
01-27-2005, 07:30 AM
Really impressive! Good job!

Gaby

knackered
03-21-2005, 04:03 AM
any more progress on a demo?

michagl
04-26-2005, 01:57 PM
my browser only shows one page for this thread, but "fpo's" user profile says he recently posted #105 here.... i'm seeing if posting will get me the other pages.

edit: is there something funny about this thread, or just the bbs database...

FPO's user profile says:

4 new Relief Mapping shader (better Parallax Mapping?!?) (post #000105) OpenGL coding: advanced 03-19-2005

there are not 105 posts here or anything close... was a big chunk of this thread deleted?

anyhow, i'm just trying to bump it up in sight again if nothing else.

if this act is not kosher, my apologies in advance.

Mikkel Gjoel
04-27-2005, 02:04 AM
There are simply two threads, this one, and the longer one you referred to

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012842
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012454

dorbie
04-27-2005, 08:38 PM
While fpo's earlier thread was interesting because it more accurately resolved the new surface sample point than parallax mapping (through dependent-read iteration to compute the ray-heightfield intersection), it did not handle silhouettes correctly.

His new algorithm improves on this and appears to treat silhouettes correctly so it's even more interesting.

I've locked the other thread because I want to avoid confusion, bumping the old thread is fine but people will get confused with mistaken cross references and misleading questions w.r.t. the techniques so I've deleted one new post in the old thread to keep the distinction clear.

Please check post dates before jumping to conclusions and have fun.

fpo, awesome work again, I can't wait for the demo.

fpo
04-28-2005, 11:18 AM
Hi... as it seems we have some people interested in the new demo, here it is. It is not perfect and does have problems, especially at silhouettes when the depth is too large (sampling problems). Also, I represent surface curvature by a simple quadric at each vertex in two directions only (the tangent space directions), so it is a rough approximation for general surfaces. It works well with simple geometry like a cylinder, but even on a sphere you will get bad artifacts at the top/bottom, where the mapping/triangles collapse to a single point.

I didn't post this demo before because it was running too slow. But recently I tried the nVidia shader performance tool (nvshaderperf) and I must say it helped me optimize the code. It was doing 10 Mpixels/sec before, but now it is doing 45 Mpixels/sec (much better) and runs at 85 fps with the relief object covering the full 800x600 screen.

I have also optimized the other, locally planar relief mapping implementation using tangent space (from my I3D 2005 paper) and it is now doing 85 Mpixels/sec (quite fast).

Optimized locally planar tangent space relief mapping demo and final I3D2005 paper at:

Locally planar relief mapping with depth factor (http://fabio.policarpo.nom.br/files/reliefmap2.zip)
(approximate self-shadows and depth correction, no silhouettes)

Locally planar relief mapping with depth in object space (http://fabio.policarpo.nom.br/files/reliefmap3.zip)
(exact self-shadows and depth correction, no silhouettes)

Optimized curved relief mapping demo at:

Curved relief mapping with depth in object space (http://fabio.policarpo.nom.br/files/reliefmap4.zip)
(exact self-shadows and depth correction, good silhouettes)

The first demo uses a depth aspect, so depth=0.1 means 10% of the tile width in scene space. Here, where the mapping expands/compresses over the object surface, the apparent depth will also change. This version does not require any extra vertex data (just the regular position and texcoord).

The other two demos use object space depth, so depth=5 means an apparent depth of 5 scene units. This requires extra data in the vertices. The planar one needs mapping scale information, stored as 2 extra floats per vertex giving the size of a texture tile at that vertex. The curved one needs 4 extra floats per vertex, as it also requires curvature information at each vertex (2 more floats).
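The difference between the two depth conventions can be written out as a tiny, illustrative calculation (the function names are mine, not from the demos):

```python
def apparent_depth_aspect(depth_factor, tile_width):
    # First demo's convention: depth is a fraction of the tile width, so
    # where the mapping compresses a tile, the apparent depth shrinks too.
    return depth_factor * tile_width

def depth_factor_object_space(depth_units, tile_width):
    # Other demos' convention: use the per-vertex tile size to convert a
    # fixed scene-unit depth into a factor, so the apparent depth stays
    # constant regardless of how the mapping stretches.
    return depth_units / tile_width
```

So depth=0.1 on a 10-unit tile is 1 scene unit deep (and would halve on a 5-unit tile), while an object-space depth of 5 units corresponds to a factor of 0.5 on the 10-unit tile and 1.0 on the 5-unit tile, keeping the apparent depth fixed.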

Enjoy, FPO

fpo
04-28-2005, 12:04 PM
Originally posted by Volker:

but whats the performance?
compared to parallax mapping and other techniques?
I used nvshaderperf to check the speed (pixel rate) of each shader. A parallax mapping implementation gave me 220 Mpixels/sec and the locally planar relief mapping 85 Mpixels/sec. The curved relief mapping (with better silhouettes) is doing 45 Mpixels/sec. I'm sure more optimizations are possible, but that requires more time trying variations and checking performance.


Originally posted by Volker:

also compared to a displaced high geometry version of the object?
Forget about displacing high-resolution geometry... I tested that, and it requires many millions of vertices and still looks much worse than the relief mapped version. The per-pixel displacement is much better, and texture filtering makes smooth transitions from depth to depth... and triangulation adds terrible artifacts to the displaced geometry. Trust me on that.


Originally posted by Volker:

did you test it with more arbitrary meshes?
how about inwards looking meshes (eg cornell box), how well does it perform at the seams?
Curvature is stored at each vertex with a quadric and interpolated across fragments. This is not perfect, and surfaces with complex varying curvature will not work well. But depth correction is perfect for interpenetrating surfaces, like the connection of two walls in a Cornell box, for example... no problem with that... you will see bricks intersecting correctly at the wall edges, as you would expect from real geometry.

plasmonster
04-28-2005, 12:13 PM
Fantastic work, FPO. The paper is a real pleasure to read. Thank you for sharing!

michagl
04-28-2005, 12:27 PM
i'm downloading... just because you didn't say so explicitly, do these demos demonstrate proper silhouettes?

if i had a card that could run the demos, the first thing i think i would do is try to approximate a perfect sphere using a low resolution base geometry.

i read somewhere that the new doom game uses a technique of building normal maps by projecting a high resolution geometry onto a low resolution geometry. someone in this forum said the doom team was talking about using a technique similar to relief mapping in another thread.

i would think to make the technique completely ubiquitous it would need to not only be able to do detail displacement, but also be able to be counted on to just fill in the gaps between polygons with per pixel smoothing in general... like i said, displaying a perfect sphere with an approximate base geometry, even though the base geometry rendered without displacement mapping would resemble an angular sphere.

even more exciting for me personally would be to be able to change the geometry while not touching the maps and have the maps continue to approximate the smooth surface as near perfectly as possible down to a few pixels between vertices. this way you could keep the displaced pixel depth to a minimum while using lod geometry to handle the displacement on a macroscopic scale. achieving this i imagine would mean passing some sort of per vertex linear slope data which the fragment shader could use to subtract the underlying geometry from its displacement. ie. a slope of 0 would be a perfect plane meaning total displacement. you would probably want to pass some 'intercept' per vertex data as well because you would probably want zero displacement on the vertices themselves as they are already fully displaced. does this make sense? everything of course would just be your usual per vertex linear approximation with the hope that distortion will remain minimal.

btw, should a 6600 card run these shaders just as well, minus some performance? i'm probably going to get a card so i can play with this stuff... but for me, i will need to pick up a whole new computer to match the card, so it's a big jump.

sincerely,

michael

PS: at least in your other demo you were doing all of your modelview transforms inside your fragment shader... wouldn't you get a much greater speedup if you handed that to the vertex shader and let it interpolate the results, rather than doing a full transform at every fragment? at least that way you know that the tangent axes are orthonormalized going into the fragment setup.

SirKnight
04-28-2005, 04:21 PM
I will post the other implementations here soon... I want to make a single demo now with all possible options so you can switch between all possible implementations (planar, curved, with or without distance map and using scene space depth) while in the same view position.
Also while you're at it, throw in steep parallax mapping. :)

Just in case you don't know what I mean, here is a link: http://graphics.cs.brown.edu/games/SteepParallax/

-SirKnight

fpo
04-28-2005, 06:38 PM
I've seen that already... but I do not know why a demo is not available yet. A friend of mine mailed them for a demo but got no reply yet. Has anyone here seen a demo of that technique? It looks good, but without a running demo it is difficult to see what kind of artifacts it generates. Linear search generates bad sawtooth-like artifacts, and distance maps a sort of warping around surface discontinuities. There is always a tradeoff: some maps look better with one technique and other maps might be better with another. If anyone has a working demo of steep parallax mapping, please post it.

michagl
04-30-2005, 08:08 PM
i've been studying the shader in your 3rd demo and i'm curious about a few things...


Originally posted by fpo:
For the extra data needed on models, I need 4 extra floats per vertex for the new effect. Two for curvature at each tangent direction and two for texture mapping scale at each tangent direction. This can all be calculated offline once and stored with the model vertices in the geometry file.
first off though i noticed this bit... i would appreciate it if you could be more explicit about the 'mapping scale' attributes. i don't see them in the 3rd demo inputs. how are they derived and used?

(also i'm a little confused by your cg file... it looks like your vert shader and frag shader are in the same file. assuming this is true, i figure the final semantics are defined by the cg runtime api -- i need to get my hands on some new cg docs)

i was surprised to see that your 2nd demo ran on my old hardware no problem with decent speeds at double precision. your 3rd demo will not put out anything but wireframes though. i can't see why it wouldn't work with nv30 except maybe it requires too many instructions. it seems my card can maybe handle more instructions than required. i think maybe the 'discard' operator is beyond nv30, but i figure for a demo for old tech you could replace it with an alpha based discard and it might all work.

i also have some questions that i will probably find answers for in some new cg docs once i get around to it... but if anyone feels like sharing: i'm curious how you are seemingly managing your preprocessor directives at run-time. i can't find anything about this in my cg docs... also you have multiple shaders in a single file... can't figure out how to control that at run-time either. guess i need a new Cg SDK (hope it's not too big)

oh yeah, also curious about the screenshot where you have a polished rock ball which appears to be shadowing what appears to be an external mesh and vice versa. can you explain exactly how you are achieving this please?

sincerely,

michael

fpo
05-01-2005, 05:16 AM
Good questions... here are the answers.

4 extra floats per vertex:
Two for the curvature in each tangent direction. For example, on a sphere, two constant positive values at all vertices. For a cylinder, one constant positive and one zero. For a teapot you have different curvature at each vertex (positive, zero and negative).
Two texture map scale factors (stored in the tangent vectors' w components). These give the size of 1 texture tile in object space at that vertex. This is used for defining depth in object space (if you just want to define depth as an aspect, a width-height/depth ratio, you do not need it, but then where the mapping compresses/expands the apparent depth will also compress/expand).

Yes, I put all the shaders in the same .cg file. When using the cgCreateProgram() call you can pass in the entry point function name, so you can pass the same file multiple times with different entry points for selecting the technique you want to use. Also, the last parameter of cgCreateProgram() is an array of string compiler options (you can add defines with -Dmydefine, for example).

In this new shader I use shadow maps (casting and receiving shadows through shadow maps). So when rendering the relief object I pass the shadow map to the shader and do the shadow comparison inside the shader to receive shadows. To cast shadows into the scene with a displaced silhouette I have a smaller shader (no lighting, for example... the last method in the .cg file). When rendering the shadow map I render the relief object using that shader, and it will discard fragments at the silhouette, generating the silhouettes in the shadow map.
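The receive side of that scheme can be sketched in a few lines of Python (the dict stands in for the shadow-map texture, and the bias value is an illustrative guess, not fpo's): project the displaced fragment into light space and compare depths.

```python
def in_shadow(light_space_pos, shadow_map, bias=0.005):
    """light_space_pos: (u, v, depth) of the displaced fragment after
    projection into the light's space. shadow_map: (u, v) texel -> nearest
    depth seen from the light (1.0 where nothing was rendered)."""
    u, v, depth = light_space_pos
    stored = shadow_map.get((round(u, 2), round(v, 2)), 1.0)
    # The small bias avoids "shadow acne" from depth quantization.
    return depth - bias > stored

# A blocker at depth 0.4 (e.g. a fragment the silhouette-aware caster shader
# kept) shadows a fragment behind it, but not one in front of it.
shadow_map = {(0.5, 0.5): 0.4}
```

Because the caster pass discards silhouette fragments, the blocker depths stored in the map already carry the displaced silhouette, which is what makes the cast shadows match the relief.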

Was that clear? Any more questions? It should not run fast on NV3X hardware (slow... under 20 fps)... only fast on NV4X hardware (over 100 fps).

Brolingstanz
05-01-2005, 02:53 PM
i believe the correct term is "bitangent" when referring to surfaces like this... binormal when dealing with curves and such. the term has been misused for so long now by so many (including myself) i wonder if there's a point in trying to rectify the situation...

regards,
bonehead

fpo
05-01-2005, 05:49 PM
Originally posted by bonehead:
i believe the correct term is "bitangent" when referring to surfaces like this... binormal when dealing with curves and such. the term has been misused for so long now by so many (including myself) i wonder if there's a point in trying to rectify the situation...
Agreed... I think bitangent looks better. I do not know why I used binormal... maybe I saw it somewhere else like that. But bitangent sounds more correct to me.

Brolingstanz
05-01-2005, 06:46 PM
fpo, from what i've seen of your demo, you're entitled to call it whatever you want. i'm actually quite used to binormal now...i think for me there's no going back ;-)

regards,
bonehead

michagl
05-03-2005, 08:16 AM
so i've been doing a little thinking this morning and thought you might be able to use this bit fabio.

basically my goal here is to be able to use a pre-displaced geometry to minimize sampling error between vertices.

if you think of your triangle as having three corners xyz, and you assign a vector to each corner as such (2,1,1) (1,2,1) (1,1,2), then the vertex shader should interpolate these values so that they will approximate the error between the linear and curved geometry at each fragment if scaled by one another. that is x*y*z.

you can substitute for 2 any number larger than 1 and still get the desired effect. after interpolation you can subtract 2 and then divide by what is left of the central fragment (in 2's case 1.375) to map the final value between 0 and 1.

so in the central fragment you will have a value of 1, at each vertex the value will be 0 and in the middle of each edge should be a value between 1 and 0 which i believe depends on what you substitute for 2.

for 3 you get 3 at the vertices, 4 between edges, and 8 in the center before remapping. after remapping you get 0,0.2,1 respectively.

the numbers you get for the fragment in the middle of the edges, for substitutions ranging from 2 to 5, are odd:

2: 0.181818181
3: 0.2
4: 0.193548387
5: 0.045454545

for instance, for 3: (2^2-3)/(2^3-3) = 0.2

maybe values between 1 and 2 would yield better results. i really haven't analyzed this clearly, but it is the only way i can think of to get the proper constraints (namely 0 at the vertices, 1 in the middle, and something in between on the edges)

this forms a sort of pitched canopy over the triangle which could possibly be modulated by the curvature of the fragment.

i haven't gone further than this, but hopefully you could use this value to fit a displacement map on top of an already displaced geometry, so that you can minimize error between vertices by ensuring that the surface is also sampled with high enough frequency that your tangent/curvature data will always remain sufficiently accurate with respect to the camera.
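in code, that interpolation could look something like this. a rough sketch (naming is mine), assuming plain barycentric interpolation of the three corner vectors followed by the component product and the remap:

```python
def canopy_weight(bary, s=2.0):
    """Interpolate the corner vectors (s,1,1), (1,s,1), (1,1,s)
    barycentrically, multiply the components together, then remap so
    the value is 0 at the vertices and 1 at the central fragment."""
    a, b, c = bary
    vx = a * s + b + c
    vy = a + b * s + c
    vz = a + b + c * s
    raw = vx * vy * vz
    at_vertex = s                       # product at any corner
    at_center = ((s + 2.0) / 3.0) ** 3  # product at the centroid
    return (raw - at_vertex) / (at_center - at_vertex)
```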

finally there is one caveat for present hardware. you can only do this for a mesh where all of the xyz indices of every triangle overlap. winding is not an issue because you only need to think xyz in terms of the corner vectors which need not correspond with actual xyz indices with respect to winding. as far as i can tell, this is generally only possible for regular fan meshes with an even number of slices. for instance tiled hexagons(strips), or tiled octagons and diamonds, or just diamonds(right-triangle). you would set x at the center of the fan, then alternate between y and z around the fan, then connect all your fans together such that the pattern continues between fans.

a more robust solution would be for the hardware to pass its barycentric coordinates to the frag shader... with that option there would be no need for per vertex data.

the per vertex data itself can be retrieved with a single scalar in a vertex shader, which could be used to pick from the array of the 3 xyz vectors.
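a sketch of the fan labeling rule described above (my own naming; assumes a simple triangle fan): one scalar per vertex picks a corner vector, and the even-slice constraint guarantees every triangle sees all three labels.

```python
# the three corner vectors the vertex shader would index into
CORNER_VECTORS = [(2.0, 1.0, 1.0), (1.0, 2.0, 1.0), (1.0, 1.0, 2.0)]

def label_fan(num_slices):
    """One scalar corner label (0, 1 or 2) per vertex of a triangle
    fan.  Every triangle must see all three labels, which forces an
    even slice count."""
    if num_slices % 2:
        raise ValueError("fan needs an even number of slices")
    labels = {0: 0}                      # center vertex -> 'x'
    for i in range(num_slices):
        labels[1 + i] = 1 + (i % 2)      # ring alternates 'y'/'z'
    return labels

def fan_triangles(num_slices):
    """Index triples of the fan: (center, ring i, ring i+1, wrapping)."""
    return [(0, 1 + i, 1 + (i + 1) % num_slices)
            for i in range(num_slices)]
```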

for now i will leave it up to fabio or whoever to best decide what to do with the extra parameter (which is basically the difference between the ideal smooth surface and its linear geometric approximation, along the z axis of tangent space)

sincerely,

michael

EDIT: lucky me, the octagon diamond continuous subdivision process i'm using appears to follow the said xyz fanning rule perfectly even when the subdivision is discontinuous across varying levels of detail. so even though my meshes can appear somewhat irregular, they still follow the rule. (i was sort of expecting this due to the regular construction) ... however if such a technique proves fruitful, hopefully someday it will be possible to receive the hardware's barycentric coordinates directly in the frag shader, which would mean the method could be generalized to all meshes. anyhow, if you don't want to pursue this fabio, you can bet i will. i figure at this point though that you are best positioned to see directly how this extra parameter could be put to use. -michael

fpo
05-03-2005, 05:40 PM
Yes me too ... those artifacts on the top/bottom of the sphere look terrible. I will try to fix them soon ... some problem with the mapping scale I compute for each vertex there.

dorbie
05-03-2005, 05:53 PM
This thread presents a seminal contribution to graphics that will be remembered and perhaps referred to for a long time. Like others I do not want the important discussion lost under the inane distractions one poster keeps provoking. It's not OK to post on issues of moderation here either.

To those who had several posts deleted on two occasions, trust me I did you a favor, your edited record makes you look a LOT smarter, even if you don't appreciate that. (I didn't delete the obviously missing post, knackered deleted his own on topic post saying the demo was awesome but complaining of aliasing artifacts at the poles).

Please stick to the topic.

michagl
05-03-2005, 06:45 PM
EDIT: someone deleted a post which was quoted here... without it this post is a little out of the blue, but still relevant i guess.
personally i feel like everyone is expecting way too much of this algorithm.

i too see its amazing potential, and i'm not suggesting a limited spectrum of possibilities.

but i don't see why we need to be able to pull an angel out of a single quad for instance. i mean that's fine if you can billboard the quad and all and get it to work...

same goes for extreme displacement map scales. maybe you will get them to work out in the end. but all i'm hoping for is a pixel perfect silhouette that doesn't require triangles with single pixel areas.

i'm thinking it will be amazing if i can get by with geometry with triangles spanning maybe 8 pixels... if i could use this algorithm to get a smooth silhouette across 8 pixels, i would be totally satisfied; everything else would just be icing on the cake.

personally i don't see a whole lot of potential in detail displacement mapping. detail mapping will always be limited by the linearity of the underlying geometry. if you look at fabio's screenshots (say the wooden ball with pyramids), even if you totally ignore all the defects in the displacement, you can still make out where the edges are in the mesh just by looking at the distortion in the texture/displacement even though the vertex count is considerably high. that is, the displacement is still against a linear geometry. you notice this because in that shot the map is not so noisy as to hide the base geometry. for me this isn't acceptable for the end game of computer graphics.

there needs to be some way of capturing the curvature implicitly. if it can't be stored in the map, then it needs to be accounted for within the fragment shader via the per vertex curvature attributes. room also needs to be made in the linear geometric hull to fit extra displacement due to curvature between vertices.

the normal scale of the displacement map also needs to take into account areas of high curvature (that is to say that there needs to be a per vertex depth scale attribute).

basically all i'm trying to say is there is still a long way to go... but i figure as long as we are rasterizing triangles this routine is what the future of photo realistic virtual reality will look like.

to me this represents an epoch in computer graphics. we've exited the triangle age and entered the age of the pixel*.

triangles have taken a back seat to finite sampling of a vast array of surface attributes; they no longer have center stage, it would seem.