Part of the Khronos Group
OpenGL.org


Thread: Relief mapping artifact

  1. #1
    Junior Member Newbie
    Join Date
    Jul 2012
    Posts
    4

    Relief mapping artifact

I'm trying to implement relief mapping based on this paper (http://www.inf.ufrgs.br/~oliveira/pu...M_I3D_2005.pdf) and the sample shader code in its appendix.

    What I have works fine except for this strange artifact I'm getting:

    http://i.imgur.com/wXaOZ.png

    It's as if the ray's slope is greater than it should be, and it's intersecting parts of the image that it shouldn't be able to reach. However, changing the slope doesn't seem to do much. This artifact occurs wherever the angle between two faces is greater than 180°.

    Is anyone here experienced with this technique?

    Here's my ray cast function:

    Code :
    vec2 castRay(in sampler2D rm, in vec2 tc, in vec2 delta) {
    	const int nLinearSteps = 50;
    	const int nBinarySteps = 15;
     
    	float rayDepth = 0.0;
     
    	float stepSize = 1.0/float(nLinearSteps);
    	float texelDepth = texture(rm, tc).a;
     
    	float intersect = 0.0;
     
    	//linear search: march in fixed steps until the ray depth
    	//reaches or passes the stored depth
    	for(int i = 0; i < nLinearSteps; i++) {
    		intersect = 1.0;
    		if(texelDepth > rayDepth) {
    			rayDepth += stepSize;
    			texelDepth = texture(rm, tc + (delta * rayDepth)).a;
    			intersect = 0.0;
    		}
    	}
     
    	//no intersection within the depth range
    	if(intersect < 0.9)
    		discard;
     
    	//rewind to the last point known to be in front of the surface
    	//(past the discard, an intersection is guaranteed)
    	rayDepth -= stepSize;
     
    	//binary search: bisect [rayDepth, rayDepth + stepSize]
    	for(int i = 0; i < nBinarySteps; i++) {
    		stepSize *= 0.5;
    		rayDepth += stepSize;
    		texelDepth = texture(rm, tc + (delta * rayDepth)).a;
    		if(texelDepth <= rayDepth) {
    			//overshot the surface: undo the half-step so the
    			//next, smaller step bisects the remaining interval
    			rayDepth -= stepSize;
    		}
    	}
     
    	return tc + (delta * rayDepth);
    }

    In the main function, I have

    Code :
    vec2 delta = fReliefScale * -normTanView.xy/normTanView.z;

    “normTanView” is the normalized tangent-space view vector, and “fReliefScale” is a scale factor that controls the slope of the ray.

    I'm using mikktspace to get the tangent and bitangent vectors.
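    For anyone who wants to poke at the search logic outside a shader, here is a rough CPU sketch of the same two-phase search, with the texture lookup replaced by a made-up analytic 1D depth profile. All names here (cast_ray, height_at, step_profile) are invented for illustration and are not from the paper:

```python
# CPU sketch of the two-phase search in castRay(), for sanity-checking
# the logic outside the shader. The texture's alpha channel is replaced
# by an analytic 1D depth profile; depths lie in [0, 1].

def cast_ray(height_at, tc, delta, n_linear=50, n_binary=15):
    """Return the texcoord where the ray hits the depth profile, or None
    if the linear search never finds an intersection (the shader discards)."""
    step = 1.0 / n_linear
    ray_depth = 0.0
    texel_depth = height_at(tc)
    hit = False

    # Linear search: march in fixed steps until ray depth >= stored depth.
    for _ in range(n_linear):
        if texel_depth > ray_depth:
            ray_depth += step
            texel_depth = height_at(tc + delta * ray_depth)
        else:
            hit = True
    if not hit:
        return None

    # Rewind to the last point known to be in front of the surface.
    ray_depth -= step

    # Binary search: bisect [ray_depth, ray_depth + step] toward the surface.
    for _ in range(n_binary):
        step *= 0.5
        ray_depth += step
        if height_at(tc + delta * ray_depth) <= ray_depth:
            ray_depth -= step  # overshot: undo the half-step

    return tc + delta * ray_depth

# Example: a depth step at x = 0.5 (depth 0.25 near, 0.75 far).
step_profile = lambda x: 0.75 if x > 0.5 else 0.25
```

    With this profile, marching from tc = 0.0 with delta = 2.0 converges on ray_depth = 0.75, i.e. texcoord 1.5; a head-on ray (delta = 0.0) returns its starting texcoord unchanged.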

  2. #2
    Junior Member Regular Contributor
    Join Date
    Mar 2004
    Location
    Austin, TX, USA
    Posts
    109
    Try disabling mipmapping for the texture(s) you are ray casting against. If the problem remains, that will at least rule out texcoord gradient calculation as a problem.

  3. #3
    Member Regular Contributor
    Join Date
    Jan 2005
    Location
    USA
    Posts
    411
    I remember when the author of that paper debuted the technique more or less in this forum. There was a long discussion where I probably said an embarrassing thing or two.

    Does anyone know if this has come along... been used effectively in any bigwig games? I was trying to recall just the other day whether the technique suffers at shear angles like standard bump mapping does. It looks pretty good in the paper, but the silhouettes still look flat; a few of the plates do seem to pop out, though. It's not super clear whether a discard is being done along the silhouette or not.

    I was working on some stuff at the time, and I thought this would be a great technique for filling in the spaces that were just a few pixels wide in a regularly tessellated (in screen space) procedural mesh.
    God have mercy on the soul that wanted hard decimal points and pure ctor conversion in GLSL.

  4. #4
    Junior Member Regular Contributor
    Join Date
    Mar 2004
    Location
    Austin, TX, USA
    Posts
    109
    It was used in Crysis, and probably a couple others. There have been some papers released with variations on the original technique that use acceleration structures to speed up the ray cast (distance fields, more or less) at the cost of preprocessing time and/or more texture memory, but the original technique is still pretty effective. Performance became pretty good starting with DX10 hardware.

    For shear angles... it's a good idea to clamp the maximum angle you'll ray cast at, for performance reasons, to avoid skipping around texture memory too much while searching. Silhouettes can be achieved by combining discard with either geometry fins that stick out from the edges (which lets the silhouette extend beyond the surface) or no fins (the silhouette can only be inset into the surface). I doubt many games actually implement this, though, because the discard hurts performance by disabling early depth testing. The typical game use case is making rocks and other bumpy things stick out from the ground, where you wouldn't notice the silhouette.
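    A minimal sketch of that slope clamp, done on the CPU for clarity. The function name and the max_slope constant are assumptions for illustration, not anything from the thread:

```python
import math

def clamped_delta(view_ts, relief_scale, max_slope=5.0):
    """view_ts: normalized tangent-space view vector (x, y, z), z > 0
    facing the viewer. Returns the per-unit-depth texcoord step, as in
    delta = relief_scale * -view.xy / view.z, but with its length clamped
    so grazing angles can't produce arbitrarily long marches."""
    vx, vy, vz = view_ts
    vz = max(vz, 1e-6)                      # guard against division by zero
    slope = math.hypot(vx, vy) / vz         # |delta| per unit relief_scale
    k = min(1.0, max_slope / max(slope, 1e-6))
    return (-vx / vz * relief_scale * k, -vy / vz * relief_scale * k)
```

    A head-on view keeps its exact delta; a near-grazing view gets its step length capped at relief_scale * max_slope.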

  5. #5
    Member Regular Contributor
    Join Date
    Jan 2005
    Location
    USA
    Posts
    411
    So, just out of curiosity: how does it stack up against just tessellating the surface with a geometry shader nowadays? I don't know much about geometry shaders. I just found out the other day that they apparently run after the vertex shader, which is not how I had imagined it. Anyway, is a geometry shader good for much in screen space? I guess the downside would be generating a bunch of self-defeating pixels along a silhouette?
    Last edited by michagl; 09-23-2012 at 07:12 PM.

  6. #6
    Junior Member Regular Contributor
    Join Date
    Mar 2004
    Location
    Austin, TX, USA
    Posts
    109
    The geometry shader isn't designed to produce enough output triangles quickly enough to be used for that kind of detail tessellation. It has an upper limit on how many output triangles per input triangle it can produce, and performance also degrades the more you output from a geometry shader. Tessellation shaders from GL 4.x (control and evaluation shaders) can do a good job of replacing relief mapping with actual geometry with the same or better image quality.

  7. #7
    Member Regular Contributor
    Join Date
    Jan 2005
    Location
    USA
    Posts
    411
    Thanks, I will have to look into the tessellation shaders. I kind of assumed that was the main thing people were using geometry shaders for, and that tessellation was for parametric surfaces (patches) only. Good to know.

    So what about relief mapping: is it still the win for per-pixel detail? I imagine that rendering pixel-sized triangles is never a good idea. Anyone?

  8. #8
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Quote: "I kind of assumed that was the main thing people were using geometry shaders for"
    Geometry shaders are primarily used for:

    1. Specialized point-to-quad conversion operations.

    2. Layered rendering. Writing different primitives to different layers of a layered framebuffer.

    3. Feeding transform feedback operations with specialized data, including multi-stream output. This probably won't be used as often now that we have Compute Shaders, but on pre-compute-shader hardware (GL 3.x) some of it can still be useful.
