Blur offscreen buffer (Soft Shadows)

I am working on a rendering engine that has shadow mapping implemented, and I am attempting to blur my shadow map to give the “illusion” of soft shadows and to reduce aliasing issues. I am basically trying to implement the following article:

http://www.gamedev.net/reference/articles/article2193.asp

I am creating the shadow map just like you normally would (GL_DEPTH_COMPONENT), then I render only the shadowed areas to an off-screen buffer. The problem I am having is that I cannot figure out how to “blur” that off-screen buffer. I am trying to accomplish this with a convolution (glConvolutionFilter2D), but have not been successful. I have also been reading that convolution is slow and not very well supported.

Does anyone have any suggestions/ideas on how I could better implement the previously mentioned article, OR how to blur an off-screen buffer?

Thanks a bunch

The best way to blur is a two-pass render-to-texture approach. Each pass uses a simple fragment program: blur horizontally in the first pass and then vertically in the second. If you look up glow effects you will find lots of detail on how to do it.

I use it in a glow effect I have and it works well. I have two pbuffers and render from the first to the second blurring horizontally, then from the second to the first blurring vertically. Then I use the resulting texture in the final render.

Here is a simple Cg shader for such a blur:

struct vert2frag
{
	float4 hpos : POSITION;
	float4 color : COLOR0;
	float2 texcoord : TEXCOORD0;
};	

struct frag2screen
{
	float4 color : COLOR;
};

#define NUMPOINTS 7

frag2screen main_blur(
	vert2frag IN,
	uniform sampler2D texture:TEXUNIT0,
	uniform float3 dispuv[NUMPOINTS])
{
	frag2screen OUT;

	// Accumulate NUMPOINTS weighted taps: dispuv.xy is the texel offset,
	// dispuv.z the weight.
	float3 tex=float3(0.0,0.0,0.0);
	for( int i=0;i<NUMPOINTS;i++ )
		tex+=f3tex2D(texture,IN.texcoord.xy+dispuv[i].xy)*dispuv[i].z;

	OUT.color.xyz=tex;
	OUT.color.w=1.0;
	
	return OUT;
}

I pass an array of displacement values for the texture lookup, so you can select a horizontal or vertical blur by setting the x or y component to 0. The z component holds the weight, 1.0/NUMPOINTS, or you can use more complex weighting such as a Gaussian.
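
For reference, here is a minimal host-side sketch of how such an array could be filled (the names fillBlurOffsets and texSize are just illustrative, not from the original code); upload the result to the shader's dispuv parameter with whatever Cg runtime call you normally use:

// Hypothetical helper: fills NUMPOINTS offset/weight triplets for one blur pass.
// For the horizontal pass the offset goes into x, for the vertical pass into y.
const int NUMPOINTS = 7;

void fillBlurOffsets(float dispuv[NUMPOINTS][3], int texSize, bool horizontal)
{
	for (int i = 0; i < NUMPOINTS; ++i)
	{
		// Taps centered on the current texel, one texel apart.
		float offset = (i - NUMPOINTS / 2) / (float)texSize;
		dispuv[i][0] = horizontal ? offset : 0.0f;  // x displacement
		dispuv[i][1] = horizontal ? 0.0f : offset;  // y displacement
		dispuv[i][2] = 1.0f / NUMPOINTS;            // weight (box filter; substitute Gaussian weights if you prefer)
	}
}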

Originally posted by fpo:
Each pass uses a simple fragment program: blur horizontally in the first pass and then vertically in the second.

I don’t have much experience with fragment programs, so I was hoping to accomplish this without using them. But it looks like this might be my best solution. Do you have any suggestions on how to accomplish this w/o fragment programs?

Thanks for the quick reply

This is a glow effect without a fragment program/shader. This code uses the RenderTexture class (you can find it on SourceForge). The trick is to ping-pong between two pbuffers: first do the horizontal blur, then the vertical one, using the same Gaussian coefficients.

  
// Separable blur via ping-pong between two pbuffers: num_passes horizontal
// passes followed by num_passes vertical ones, using the len+1 kernel
// coefficients in c. Returns the pbuffer holding the final result.
RenderTexture* MakeGlow(RenderTexture* src, RenderTexture* rtt, int w, int h, int num_passes, float* c, int len, float scale)
{
	int i,j;
	src->BeginCapture();
	{
		glClear(GL_DEPTH_BUFFER_BIT);		
		glMatrixMode(GL_PROJECTION); glLoadIdentity();
		glMatrixMode(GL_MODELVIEW); glLoadIdentity();
	}
	src->EndCapture();
	rtt->BeginCapture();
	{
		glClear(GL_DEPTH_BUFFER_BIT);		
		glMatrixMode(GL_PROJECTION); glLoadIdentity();
		glMatrixMode(GL_MODELVIEW); glLoadIdentity();
	}
	rtt->EndCapture();
	float pola = len/2.0f;
	float dx = 1.0f/(float)w;
	float dy = 1.0f/(float)h;
	RenderTexture* t;

	// Horizontal blur passes.
	for (i=0; i<num_passes; i++)
	{
		rtt->BeginCapture();
		{
			glClear(GL_COLOR_BUFFER_BIT);		
			glEnable(GL_TEXTURE_2D);
			src->Bind();
			glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
			glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
			// Additive blending accumulates one shifted, weighted copy of the source per kernel tap.
			glEnable(GL_BLEND);	glBlendFunc(GL_ONE, GL_ONE);
			glBegin(GL_QUADS);
			for (j=0; j<=len; j++)
			{
				float xofs = (float)(j-pola)*dx*scale;
				glColor4f(c[j], c[j], c[j], 1.0);
				glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f+xofs, -1.0f,  0.0f);
				glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f+xofs, -1.0f,  0.0f);
				glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f+xofs,  1.0f,  0.0f);
				glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f+xofs,  1.0f,  0.0f);
			}
			glEnd();
			glDisable(GL_BLEND);
		}
		rtt->EndCapture();
		t = src; src = rtt; rtt = t;	// ping-pong: the result becomes the next source
	}
	// Vertical blur passes.
	for (i=0; i<num_passes; i++)
	{
		rtt->BeginCapture();
		{
			glClear(GL_COLOR_BUFFER_BIT);		
			glEnable(GL_TEXTURE_2D);
			src->Bind();
			glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
			glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
			glEnable(GL_BLEND);	glBlendFunc(GL_ONE, GL_ONE);
			glBegin(GL_QUADS);
			for (j=0; j<=len; j++)
			{
				float yofs = (float)(j-pola)*dy*scale;
				glColor4f(c[j], c[j], c[j], 1.0);
				glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f+yofs,  0.0f);
				glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f+yofs,  0.0f);
				glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f+yofs,  0.0f);
				glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f+yofs,  0.0f);
			}
			glEnd();
			glDisable(GL_BLEND);
		}
		rtt->EndCapture();
		t = src; src = rtt; rtt = t;
	}
	
	return src;
}


And use it:
// size of texture
#define RTTX 128
#define RTTY 128

// select your convolution kernel size
#define BLUR_KERNEL_LEVEL 10
float blurKernel[BLUR_KERNEL_LEVEL+1] = { /* place your normalized Gaussian coefficients here */ };

float blur_scale = 1.0f; // optional param for overbright control

rtt1->BeginCapture();
{
 RenderYourScene();
}
rtt1->EndCapture();
RenderTexture* BLUR = MakeGlow(rtt1, rtt2, RTTX, RTTY, 1, blurKernel, BLUR_KERNEL_LEVEL, blur_scale);

// now BLUR contains the blurred scene
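
If you need the normalized Gaussian coefficients for blurKernel, something along these lines should do (computeGaussianKernel and sigma are just illustrative names, not part of the RenderTexture code):

#include <cmath>

// Hypothetical helper: fills len+1 Gaussian weights centered on len/2 and
// normalizes them so they sum to 1.
void computeGaussianKernel(float* c, int len, float sigma)
{
	float sum = 0.0f;
	for (int j = 0; j <= len; ++j)
	{
		float x = j - len / 2.0f;
		c[j] = std::exp(-(x * x) / (2.0f * sigma * sigma));
		sum += c[j];
	}
	for (int j = 0; j <= len; ++j)
		c[j] /= sum;
}

For example, call computeGaussianKernel(blurKernel, BLUR_KERNEL_LEVEL, 2.5f) before calling MakeGlow.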

yooyo

Just as a side note: I implemented soft shadows as described in the article, but it doesn’t look really great:

  • Unshadowed regions can suddenly become slightly shaded (you blur no matter how much the z-values change).
  • Shadowed regions can look too bright (if a neighbouring area, e.g. in front of the object, is not in shadow, the light blurs into areas that should be dark; this leads to a glowing effect).
  • Softness is independent of the distance between the shadow caster and the shadow receiver, which looks odd.
  • Softness is given in screen-space pixels, which looks strange when you zoom in (the edges of the shadow appear less soft when zooming in).

But it’s a lot better than just leaving the shadow maps with their artefacts.
A whole lot better.

stefan,

How did you do step 3 (“Blurring the screen buffer”) from the previously mentioned article?

Did you use a fragment program/shader?

I haven’t tried this out yet, but it might work and add some quality improvement when doing two-dimensional blurring of shadow maps.

First you render the depth shadow map from the light’s point of view as you normally would. In a second pass we generate an RGBA8 (let’s call it FAT) shadow mask buffer.

In the vertex shader we calculate the normalized vector from the point of view to the currently processed vertex and multiply it by the world-space normal vector. The 1-abs(x) and 1-abs(y) values are sent down to the fragment shader, where x and y are the first two components of the result vector of the previous operation.

We do the shadow comparison in the fragment shader; if the rendered pixel is in shadow we put 0 into the red component, and if it’s lit we put 255.
The 1-abs(x) and 1-abs(y) values from the vertex shader are stored in the g and b channels. The alpha channel is used for storing the occluder-receiver distance.

The blurring is done in the next pass. We bind the FAT buffer as a texture and use a two-dimensional separable blur filter to soften the shadow mask boundaries stored in the FAT buffer’s red channel. The filter kernel has a constant maximum size. In the horizontal pass we multiply the kernel size by the values sampled from the g and alpha channels. We do the same for the vertical pass, but this time the values are sampled from the b and alpha channels.
The values in the g and b channels make the kernel size adjust with the viewing angle.

After this the shadow map could be used for darkening the pixels of the finished image, but this is not physically correct. Instead, maybe we can bind this shadow buffer as an additional texture when rendering the scene objects and use the screen-space positions of the rendered pixels to sample the grayscale, filtered shadow map.
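
To make the channel layout concrete, here is a CPU-side sketch of the encoding and of the per-pixel kernel scaling described above (the struct and function names are made up for illustration; in practice the fragment shader writes these values into the RGBA8 FAT buffer and the blur pass reads them back):

// Illustration only, with hypothetical names.
struct FatTexel
{
	float r;	// shadow mask: 0 = in shadow, 1 = lit (255 in the RGBA8 buffer)
	float g;	// 1 - abs(x) of the view-to-vertex vector multiplied by the world-space normal
	float b;	// 1 - abs(y) of the same vector
	float a;	// occluder-receiver distance
};

// Per-pixel kernel size for the separable blur: the constant maximum size is
// scaled by the view-angle term and the occluder-receiver distance.
float horizontalKernelSize(const FatTexel& t, float maxKernelSize)
{
	return maxKernelSize * t.g * t.a;
}

float verticalKernelSize(const FatTexel& t, float maxKernelSize)
{
	return maxKernelSize * t.b * t.a;
}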

@burnettryan: I’ve implemented it that way (I’m not in front of the program right now, so I hope I’m remembering correctly):

  1. Draw the scene as usual from the light view into a pbuffer and store the depth values
  2. Use a fragment program to draw a black and white scene from the camera using the standard depth comparison using the values from #1
  3. Copy the back buffer to a texture and use it to blur horizontally with a fragment program (see the sketch after this list).
  4. Copy the back buffer to a texture and use it to blur vertically with a fragment program.
  5. Copy the back buffer to a texture and use that texture to modulate lighting in the scene.
    (That’s pretty much as described in the article)
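
As a rough sketch of steps 3 and 4, the copy-back-buffer part can be done with glCopyTexSubImage2D into a pre-created texture of window size; the names here (shadowTex, copyAndBlurPass) are only illustrative, and the projection/modelview matrices are assumed to be identity:

#include <GL/gl.h>

// Hypothetical helper for one blur pass: grab the back buffer, then draw a
// full-screen quad with the horizontal or vertical blur fragment program bound.
void copyAndBlurPass(GLuint shadowTex, int width, int height)
{
	glBindTexture(GL_TEXTURE_2D, shadowTex);
	glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

	// Bind the blur fragment program here, then render the quad sampling
	// shadowTex; the result ends up in the back buffer again, ready for the
	// next copy/blur or for modulating the final lighting pass.
	glBegin(GL_QUADS);
	glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
	glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
	glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
	glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
	glEnd();
}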

@knackered: If you have PSM or TSM working nicely, this will introduce new and ugly artefacts. Maybe it helps if you suffer badly from aliasing (and choose a very small radius).