Relief mapping artifact

I’m trying to implement relief mapping based on this paper (http://www.inf.ufrgs.br/~oliveira/pubs_files/Policarpo_Oliveira_Comba_RTRM_I3D_2005.pdf) and the sample shader code in its appendix.

What I have works fine except for this strange artifact I’m getting:

It’s as if the ray’s slope is greater than it should be, so that it intersects parts of the image it shouldn’t be able to reach. However, changing the slope doesn’t seem to do all that much. The artifact occurs wherever the angle between two faces is greater than 180°.

Is anyone here experienced with this technique?

Here’s my ray cast function:

vec2 castRay(in sampler2D rm, in vec2 tc, in vec2 delta) {
	const int nLinearSteps = 50;
	const int nBinarySteps = 15;

	float rayDepth = 0.0;

	float stepSize = 1.0 / float(nLinearSteps);
	float texelDepth = texture(rm, tc).a;

	float intersect = 0.0;

	//linear search: march the ray in fixed depth increments until it first
	//passes below the height field; once a hit is found, the loop stops
	//advancing and "intersect" stays latched at 1.0
	for(int i = 0; i < nLinearSteps; i++) {
		intersect = 1.0;
		if(texelDepth > rayDepth) {
			rayDepth += stepSize;
			texelDepth = texture(rm, tc + (delta * rayDepth)).a;
			intersect = 0.0;
		}
	}

	//no intersection within the depth range means the ray exits the volume,
	//so drop the fragment (this is what produces the silhouette)
	if(intersect < 0.9)
		discard;

	//"Rewind" to the last sample known to be above the surface, so the
	//binary search starts with the intersection bracketed
	rayDepth -= stepSize;

	//binary search: halve the step, probe the midpoint of the current
	//interval, and keep the half that still brackets the intersection
	for(int i = 0; i < nBinarySteps; i++) {
		stepSize *= 0.5;
		rayDepth += stepSize;
		texelDepth = texture(rm, tc + (delta * rayDepth)).a;
		if(texelDepth <= rayDepth) {
			//overshot: step back to the interval's lower bound; the next,
			//halved step then probes the lower half's midpoint
			rayDepth -= stepSize;
		}
	}

	return tc + (delta * rayDepth);
}

In the main function, I have

vec2 delta = fReliefScale * -normTanView.xy/normTanView.z;

“normTanView” is the normalized tangent-space view vector and “fReliefScale” is what modifies the slope of the ray.

I’m using mikktspace to get the tangent and bitangent vectors.
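
In case it helps, here’s roughly how the pieces fit together in my fragment shader (a sketch; everything other than castRay, fReliefScale and normTanView is illustrative naming, not my exact code):

// Sketch of the surrounding fragment shader; identifiers other than
// castRay, fReliefScale and normTanView are illustrative.
in vec3 fTanViewDir;    // tangent-space view direction from the vertex shader
in vec2 fTexCoord;

uniform sampler2D uReliefMap;   // relief map, depth stored in the alpha channel
uniform float fReliefScale;

out vec4 fragColor;

void main() {
	vec3 normTanView = normalize(fTanViewDir);

	// Project the view ray onto the texture plane; dividing by z makes
	// delta the uv offset the ray accumulates per unit of depth.
	vec2 delta = fReliefScale * -normTanView.xy / normTanView.z;

	vec2 tc = castRay(uReliefMap, fTexCoord, delta);

	// ...fetch color/normal at the displaced tc and shade as usual...
	fragColor = vec4(texture(uReliefMap, tc).rgb, 1.0);
}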

Try disabling mipmapping for the texture(s) you are ray casting against. If the problem remains, that will at least rule out texcoord gradient calculation as a problem.
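
If it’s easier than touching the sampler state, you can also force the marching fetches to the base mip level from inside castRay, which rules out the gradient issue the same way (texture() inside non-uniform control flow has undefined derivatives anyway):

// Replace the fetches inside castRay's loops with explicit-LOD versions,
// so divergent texcoords in the loop can't produce bogus mip selection:
texelDepth = textureLod(rm, tc + (delta * rayDepth), 0.0).a;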

I remember when the author of that paper debuted the technique more or less in this forum. There was a long discussion where I probably said an embarrassing thing or two.

Does anyone know if this has come along… been used effectively in any big-name games? I was trying to recall just the other day whether the technique suffers at grazing angles like standard bump mapping does. It looks pretty good in the paper, but the silhouettes still look flat; a few of the plates do seem to pop out, though. It’s not entirely clear whether a discard is being done along the silhouette or not.

I was working on some stuff at the time, and I thought this would be a great technique for filling in the spaces that were just a few pixels wide in a regularly tessellated (in screen space) procedural mesh.

It was used in Crysis, and probably a couple others. There have been some papers released with variations on the original technique that use acceleration structures to speed up the ray cast (distance fields, more or less) at the cost of preprocessing time and/or more texture memory, but the original technique is still pretty effective. Performance became pretty good starting with DX10 hardware.
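
For a flavor of the distance-field idea, here’s a sketch (not code from any particular paper; it assumes a precomputed 3D texture whose texels store the distance from each point of the height volume to the nearest surface):

// Sketch of a distance-field ("sphere tracing") ray cast. Assumes distField
// is a precomputed 3D texture storing, per texel, the distance to the
// nearest surface -- the preprocessing/memory cost mentioned above.
vec3 castRayDist(in sampler3D distField, in vec3 origin, in vec3 dir) {
	const int nSteps = 16;
	vec3 p = origin;   // dir is normalized in texture space
	for(int i = 0; i < nSteps; i++) {
		// The stored distance is the largest step guaranteed not to cross
		// the surface, so far fewer samples are needed than a fixed march.
		float d = textureLod(distField, p, 0.0).r;
		p += d * dir;
	}
	return p;   // converges onto the first intersection along the ray
}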

For grazing angles… it’s a good idea to clamp the maximum angle you’ll ray cast at, for performance reasons: it avoids skipping around texture memory too much while searching. Silhouettes can be handled by combining discard with either geometry fins that stick out from the edges (which lets the silhouette extend beyond the surface) or no fins (the silhouette can only be inset into the surface). I doubt many games would actually implement this, though, because the discard would hurt performance by interfering with fast depth culling. The typical game use case is making rocks and other bumpy things stick out from the ground, where you wouldn’t notice the silhouette anyway.
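
The clamp can be as simple as capping the uv distance the ray is allowed to travel over the full depth range, e.g. (a sketch; uMaxDrift is a made-up tuning uniform):

uniform float uMaxDrift;   // made-up tuning uniform: max uv travel allowed

	// ...in main(), after computing delta as before...
	vec2 delta = fReliefScale * -normTanView.xy / normTanView.z;
	float drift = length(delta);
	if(drift > uMaxDrift)
		delta *= uMaxDrift / drift;   // keep the direction, cap the length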

So, just out of curiosity: how does it stack up against just tessellating the surface with a geometry shader nowadays? I don’t know much about geometry shaders; I just found out the other day that they apparently run after the vertex shader, which is not how I had imagined it. Anyway, is a geometry shader good for much in screen space? I guess the downside would be generating a bunch of self-defeating pixels along a silhouette?

The geometry shader isn’t designed to produce enough output triangles quickly enough to be used for that kind of detail tessellation. It has an upper limit on how many output triangles per input triangle it can produce, and performance also degrades the more you output from a geometry shader. Tessellation shaders from GL 4.x (control and evaluation shaders) can do a good job of replacing relief mapping with actual geometry with the same or better image quality.
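
To give an idea, a bare-bones GL 4.x tessellation evaluation shader that displaces the tessellated vertices by the same depth map might look like this (a sketch; the inputs and the flat-patch assumption are illustrative):

#version 400

// Sketch: displace the densely tessellated surface by the relief map's
// depth channel. Assumes a flat patch displaced along -z, for brevity.
layout(triangles, equal_spacing, ccw) in;

in vec2 tcTexCoord[];             // uv passed through the control shader

uniform sampler2D uReliefMap;     // depth in the alpha channel, as before
uniform float fReliefScale;
uniform mat4 uMVP;

void main() {
	// Interpolate position and uv across the patch with the barycentric
	// coordinates the tessellator provides.
	vec4 pos = gl_TessCoord.x * gl_in[0].gl_Position
	         + gl_TessCoord.y * gl_in[1].gl_Position
	         + gl_TessCoord.z * gl_in[2].gl_Position;
	vec2 uv  = gl_TessCoord.x * tcTexCoord[0]
	         + gl_TessCoord.y * tcTexCoord[1]
	         + gl_TessCoord.z * tcTexCoord[2];

	// Push the vertex into the surface by the stored depth (explicit LOD,
	// since implicit derivatives don't exist outside the fragment stage).
	pos.z -= textureLod(uReliefMap, uv, 0.0).a * fReliefScale;

	gl_Position = uMVP * pos;
}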

Thanks, I will have to look into the tessellation shaders. I kind of assumed that was the main thing people were using geometry shaders for, and that tessellation was for parametric surfaces (patches) only. Good to know.

So what about relief mapping? Is it still the win for per-pixel detail? I imagine that rendering pixel-sized triangles is never a good idea. Anyone?

I kind of assumed that was the main thing people were using geometry shaders for

Geometry shaders are primarily used for:

  1. Specialized point-to-quad conversion operations (see the sketch after this list).

  2. Layered rendering. Writing different primitives to different layers of a layered framebuffer.

  3. Feeding transform feedback operations with specialized data, including multi-stream output. This probably won’t be used as often now that we have Compute Shaders, but on pre-compute-shader hardware (GL 3.x), some of it could still be useful.
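
For (1), the classic example is expanding each point into a screen-aligned quad (a sketch; uHalfSize is a made-up uniform for the quad’s clip-space half size):

#version 150

// Sketch of the classic point-to-quad expansion in a geometry shader.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float uHalfSize;   // made-up uniform: half the quad's clip-space size

out vec2 gTexCoord;

void main() {
	vec4 c = gl_in[0].gl_Position;
	// Emit the four corners of a screen-aligned quad around the point.
	gTexCoord = vec2(0.0, 0.0);
	gl_Position = c + vec4(-uHalfSize, -uHalfSize, 0.0, 0.0);
	EmitVertex();
	gTexCoord = vec2(1.0, 0.0);
	gl_Position = c + vec4( uHalfSize, -uHalfSize, 0.0, 0.0);
	EmitVertex();
	gTexCoord = vec2(0.0, 1.0);
	gl_Position = c + vec4(-uHalfSize,  uHalfSize, 0.0, 0.0);
	EmitVertex();
	gTexCoord = vec2(1.0, 1.0);
	gl_Position = c + vec4( uHalfSize,  uHalfSize, 0.0, 0.0);
	EmitVertex();
	EndPrimitive();
}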