Shadow- and cubemaps

There are plenty of good tutorials on how to implement shadow mapping with spot lights, but I haven’t found any that show how to do it with cubemaps for omnidirectional lights. So if anyone knows of one, or could explain it, that’d be great.
I know how to read the depth values into textures, and with cubemaps I guess I just have to do it for each of the sides. But the depth comparison is quite unclear to me. Can I perform it in fragment programs with cubemaps?

Also, how would I go about packing more depth precision into the RGB channels?

Well, for cubemaps you have to pack the depth information into RGB(A), as they don’t currently support GL_DEPTH components of any sort. The problem is described in the spec for ARB_depth_texture:

(5) What about 1D, 3D and cube map textures? Should depth textures
be supported?

  RESOLVED:  For 1D textures, yes, for orthogonality.  For 3D and cube map
  textures, no.  In both cases, the R coordinate that would ordinarily
  be used for a shadow comparison is needed for texture lookup and won't
  contain a useful value.  In theory, the shadow functionality could be
  extended to provide useful behavior for such targets, but this
  enhancement is left to a future extension.

Hm, currently Humus’ page is inaccessible for me, but there should be some demos there on how to pack one float into the 4 channels of an ordinary RGBA texture and unpack it again, to do high-depth-precision cubic shadow mapping.

Check Humus’ page. But possibly he’s getting a new host, at ATI or so, and that’s why it’s down currently…
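The packing trick itself is roughly this — a minimal GLSL sketch of the usual power-of-255 encoding, not Humus’ actual code (packFloat/unpackFloat are made-up names):

// Pack a float in [0, 1) into the four 8-bit channels of an RGBA texture
vec4 packFloat(float v)
{
	vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * v);
	// Subtract the part that the next channel already encodes
	enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
	return enc;
}

// Recover the float from the packed RGBA value
float unpackFloat(vec4 rgba)
{
	return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}

Write gl_FragColor = packFloat(depth); when rendering the shadow map, and decode with unpackFloat(textureCube(...)) in the light shader.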

Well, for cubemaps you have to pack the depth information into RGB(A), as they don’t currently support GL_DEPTH components of any sort.
How to do it? Can I get the pixel depth value in a fragment program, return it as RGB to be rendered, and later read it back into a cubemap face?

Originally posted by blender:
How to do it? Can I get the pixel depth value in a fragment program, return it as RGB to be rendered, and later read it back into a cubemap face?
Easy: pass the camera position as a uniform and calculate the distance for each vertex (CamPos - gl_Vertex.xyz).

This code stores the squared distance in an RGBA texture. Note that no packing is performed.

// Vertex Shader
uniform vec3 uPOV; // Point of view
varying vec3 vDistanceVector;

void main()
{
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	vDistanceVector = uPOV - gl_Vertex.xyz;
}

// Fragment Shader
uniform float uInvSqRange; // 1.0 / LightRange^2
varying vec3 vDistanceVector;

void main()
{
	gl_FragColor	= vec4(dot(vDistanceVector,vDistanceVector) * uInvSqRange);
}

Set up a cubemap rendertarget (pbuffer) and render each face.

Sunray: so basically that’s the distance relative to the light range, in [0, 1]?

Then, in my light shader, do I look up the stored distance from the cubemap using the light-to-pixel vector, and check which one is greater?

Correct. Remember to multiply dot(light->pixel, light->pixel) by 1.0 / LightRange^2 in the light shader.
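In GLSL it would look something like this — just a minimal sketch, with made-up uniform/varying names and an illustrative depth bias:

// Fragment shader (light pass)
uniform samplerCube uShadowCube;	// stores dist^2 / range^2, as rendered above
uniform float uInvSqRange;		// 1.0 / LightRange^2
varying vec3 vLightVec;			// light position - fragment position

void main()
{
	// The cubemap was rendered from the light, so look up along light->fragment
	float storedSqDist  = textureCube(uShadowCube, -vLightVec).r;
	float currentSqDist = dot(vLightVec, vLightVec) * uInvSqRange;
	// Small bias avoids self-shadowing ("shadow acne")
	float lit = currentSqDist > storedSqDist + 0.005 ? 0.0 : 1.0;
	gl_FragColor = vec4(lit);	// modulate your diffuse/specular terms by this
}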

One thing though: I’m using Cg, and in the shadow vertex program I should pass the light vector to the fragment program, but as what? If I make it a texture coordinate, won’t it get clamped to the range [-1, 1]? I could calculate the relative distance in the vertex shader and pass it as a texcoord, but would that do the trick?

Yeah, that’s an advantage of GLSL: you don’t have to specify an interpolator.

It won’t be clamped. Only color is clamped after vertex processing. (EDIT: Hmm, not sure if color is clamped, I’m probably wrong about that)

Everything appears shadowed when I try it (not surprised).

Time for an overview of my doings:

  1. Draw the scene into the cubemap from the light’s point of view, with the shadow vertex and fragment programs enabled
    (can’t use a pbuffer, but this is done before anything else gets rendered to the screen)

  2. Draw ambient pass

  3. Draw scene with diffuse+specular light shaders enabled (shadow cubemap bound)

Here’s a code snippet of the cubemap rendering:

void CGLRenderer::PreRender()
{

	glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
	SetRenderMode(RENDER_FILLED);
	glDisable(GL_BLEND);
	glDepthMask(GL_TRUE);	// enable depth writes (glDepthMask takes a GLboolean)

	CLight::EnableShadowmapRendering();

	for(int i=0; i<CLight::GetLightCount(); i++)
	{
		CLight	*pLight	= CLight::GetLight(i);

		int	Size	= pLight->GetShadowmapSize();

		glViewport(0, 0, Size, Size);
		glMatrixMode(GL_PROJECTION);
		glLoadIdentity();
		gluPerspective(90, 1.0, 1, 5000);	// 90 degree FOV, aspect 1.0: exactly one cube face
		glMatrixMode(GL_MODELVIEW);

		for(int j=0; j<6; j++)
		{
			Clear();
			pLight->SetupCubeShadowmap(j);
			StaticMeshContext.Draw();
			pLight->ReadCubeShadowmap(j);
		}
	}

	SetRenderMode();
	CLight::DisableShadowmapRendering();
}

void CLight::SetupCubeShadowmap(int Side)
{

	cgGLSetParameter1f(CLight::ShadowSquareLightIntensity, SQUARE(Intensity));

	glLoadIdentity();
	// Orient the view for this cube face; case order must match CubemapSides[]
	switch( Side )
	{
		case 0:
			glRotatef(90, 0, 1, 0);
			break;

		case 1:
			glRotatef(-90, 0, 1, 0);
			break;

		case 2:
			glRotatef(90, 1, 0, 0);
			break;

		case 3:
			glRotatef(-90, 1, 0, 0);
			break;

		case 4:
			break;

		case 5:
			glRotatef(180, 0, 1, 0);
			break;
	}
	glTranslatef(Position[0], Position[1], Position[2]);
}



void CLight::ReadCubeShadowmap(int Side)
{

	glEnable(GL_TEXTURE_CUBE_MAP_ARB);
	glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, CubeShadowmap->GetID());

	// Copy the framebuffer contents into this cubemap face
	glCopyTexSubImage2D(CubemapSides[Side], 0, 0, 0, 0, 0, CubeShadowmap->GetSize(),
		CubeShadowmap->GetSize());

	glDisable(GL_TEXTURE_CUBE_MAP_ARB);
}

Here are the shaders:

Shaders

<LAME_QUESTION>
Sorry for that…
I never played with shadow mapping, but this method of cubemap rendering seems good to me.

What are the improvements compared to classic shadow mapping? Is all this mess about hardware acceleration? I heard about the “dual paraboloid” method, which is supposed to handle omni lights; is that all about having a specific shadow-texture format that isn’t feasible with cubemaps?
</LAME_QUESTION>

Sorry again :)
SeskaPeel.

SeskaPeel, AFAIK in dual-paraboloid shadow mapping you have two depth textures (front and back), each with half the scene rendered through a ‘paraboloid-shaped view frustum’. It produces some artifacts with poorly tessellated scenes, and all the demos I’ve seen looked awful. Though I haven’t tried it out myself.
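For reference, the paraboloid projection itself is only a few lines; here’s a minimal GLSL vertex-shader sketch for the front map (uNear/uFar are made-up clip-range uniforms). The artifacts come precisely from the fact that this nonlinear projection is applied per vertex and then interpolated linearly across triangles:

// Vertex shader: dual-paraboloid projection, front hemisphere
// Assumes the modelview matrix puts the light at the origin looking down +Z;
// triangles behind the paraboloid (z < 0) must be culled or clipped separately
uniform float uNear;
uniform float uFar;

void main()
{
	vec4 pos = gl_ModelViewMatrix * gl_Vertex;	// light space
	float dist = length(pos.xyz);
	pos.xyz /= dist;			// project onto the unit sphere
	pos.z += 1.0;
	pos.x /= pos.z;				// paraboloid: divide x, y by 1 + z
	pos.y /= pos.z;
	pos.z = (dist - uNear) / (uFar - uNear);	// normalized distance as depth
	pos.w = 1.0;
	gl_Position = pos;
}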

Blender: unless I’m mistaken, you are not scaling your to-light vector in the vertex shader. This vector has to be in the [0, 1] range for it to work.

Y.

Ysaneya, so they are clamped?!

Then obviously I can’t calculate the light vector length in the fragment program.

Blender, thanks for the answer, but it doesn’t match my question, which was rather asking for a comparison between classic shadow mapping and cubemap shadow mapping, where it was just stated that the latter isn’t hardware accelerated.

I’m asking for more detail, and maybe even how much slower it could be to use cubemaps instead of 2D hardware-accelerated depth textures, and why.

SeskaPeel.

SeskaPeel, I think this cubemapping technique is pretty much HW accelerated. Or maybe you mean that regular shadow mapping is directly HW accelerated, i.e. there are dedicated extensions to handle it, while with cubemaps you have to deal with things that are not directly shadow-rendering related.
If I just manage to get this to work, I might benchmark it. I believe it would be fast, and those cubemaps wouldn’t have to be updated every frame (cubemap updating is probably the biggest performance eater in this case), just when the light source or objects move.

I was also thinking about how to get soft-edged shadows with this:
If I pass e.g. 4 jittered light sources to the shaders instead of one, then do a shadow compare for each one and average the results, wouldn’t that “smoothen” the shadow edges slightly?
At least that’s what I did in my lightmapper once, and it seemed to work just fine.
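In a GLSL sketch you could equivalently jitter the cubemap lookup around the single light vector and average the comparisons (the offset values and names here are made up):

// Fragment shader: average 4 jittered shadow comparisons
uniform samplerCube uShadowCube;
uniform float uInvSqRange;	// 1.0 / LightRange^2
varying vec3 vLightVec;		// light position - fragment position

void main()
{
	// Hypothetical fixed jitter offsets; tune the scale to the light size
	vec3 offsets[4];
	offsets[0] = vec3( 0.01,  0.01,  0.01);
	offsets[1] = vec3(-0.01,  0.01, -0.01);
	offsets[2] = vec3( 0.01, -0.01, -0.01);
	offsets[3] = vec3(-0.01, -0.01,  0.01);

	float currentSqDist = dot(vLightVec, vLightVec) * uInvSqRange;
	float lit = 0.0;
	for (int i = 0; i < 4; i++)
	{
		float stored = textureCube(uShadowCube, -vLightVec + offsets[i]).r;
		lit += currentSqDist > stored + 0.005 ? 0.0 : 0.25;
	}
	gl_FragColor = vec4(lit);	// 0, 0.25, 0.5, 0.75 or 1.0: softened edge
}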

Blender: the only thing you have to do is divide your light vector by the light radius. A vertex outside the light radius will get a value higher than 1, but it doesn’t matter, because there is no lighting contribution outside the light radius anyway.
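In GLSL terms that’s a one-line change to the vertex shader posted above (uInvRange = 1.0 / LightRange is an assumed uniform):

// Vertex shader: scale the to-light vector so it interpolates within [0, 1]
uniform vec3 uPOV;		// light position
uniform float uInvRange;	// 1.0 / LightRange
varying vec3 vDistanceVector;

void main()
{
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	vDistanceVector = (uPOV - gl_Vertex.xyz) * uInvRange;
}

Then dot(vDistanceVector, vDistanceVector) in the fragment shader already gives dist^2 / range^2, with no extra multiply needed.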

It’s pretty fast too, but a bit slower than standard 2D shadow mapping. However, with 2D shadow mapping you’d need to generate 6 90° spot lights (or 2 180° ones, i.e. dual paraboloids), so it’s pretty much always a win in the end. Sampling a cube texture is slower than a normal 2D texture, and you have to do the distance calculations/comparisons in the pixel shader, but that’s not as bad as you’d expect. However, one of the big performance problems is that you can no longer benefit from NVidia’s hardware PCF (if you’re using an NVidia card), and to antialias your shadow you need to average N samples around a fetched texel in the pixel shader. Then shading becomes a real bottleneck.

Y.

I’m using dual-paraboloid shadow maps in my engine, and they don’t look like crap. I don’t have an up-to-date demo available, but here are a couple of screenshots:
http://www.hut.fi/~ikuusela/images/Image1.jpg
http://www.hut.fi/~ikuusela/images/trouble.jpg

The real difference between the two techniques remains unclear until somebody bothers to write a test app using both of the techniques in a similar, real-world situation. In general:

-DP shadows are heavy on geometry, cubemaps on fragments. This difference is made bigger by the fact that keeping the geometry well tessellated allows you to move computations from the fragment level to the vertex level. The interesting thing is that for low-poly models you’re practically always fill-limited, so the one with the fastest fillrate wins. When the amount of geometry increases, the need for additional tessellation for DP decreases, closing the gap on the geometry side.

-Dual-paraboloid maps are faster to update than cubemaps (unless retessellation is required). With cubemaps it’s likely that several objects occupy more than one cubemap face, and they have to be drawn more than once. If you divide a DP map into top/bottom maps, often all moving objects belong to the bottom map.

-With DP maps it’s easy to use different resolutions for the two maps; I often use a smaller resolution for the top map, since most of the detail is below the lights.

-Cubemaps always give better or equal quality than DP maps, and they’re practically free of any distortion problems.

-More seams with cubemaps. Doesn’t matter for a basic implementation, but it can hurt some special effects (it’s hard to blur a cubemap).

-DP maps are harder to get working than cubemaps.

-The ability to exploit NVidia’s PCF is a minor advantage, since it doesn’t work on all cards anyway. Besides, there are other ways to antialias the shadows, for example the penumbra maps used in my screenshots. They generally give superior quality as long as the shadowmap resolution is good enough to capture all the details. If not, things’ll look ugly :( They’re slower for shadowmap updates, but very fast for static shadows.

So… No real answers here, but at least some points to consider.

-Ilkka

I’ve been examining the images in the cubemap textures to see if they are formed correctly, and they’re not. According to the formula Dist^2/Rad^2, the furthest pixels should be white and the closest black. Instead I’m getting images where the scene is white in the back AND in front, and black in the centre (occasionally). I believe it’s caused by bad shading: since I can’t pass the light vector with its full length to the fragment shader (texture coordinates being clamped to [-1, 1], as I assumed), I have to calculate the relative distance in the vertex shader, and it gets interpolated across surfaces. With bad tessellation, the distances come out distorted.
Let’s say the light is above a huge polygon. The vertices in front of the light and behind it are all within some distance and get shades of gray, so close to the camera there is no black, only some gray, because of the interpolation.

EDIT: Hmm, not so sure it’s just the shading, since some separate objects close to the light seem quite white too. :/

Texture coordinates are not clamped to [-1, 1]. You should calculate the distance in the pixel shader, not in the vertex shader.

Y.