
View Full Version : Shadow- and cubemaps



blender
07-22-2004, 02:10 AM
There are plenty of good tutorials on how to implement shadow mapping with spot lights, but I haven't found any that show how to do it with cubemaps for omnidirectional lights. So if anyone knows of one, or could explain it, that'd be great.
I know how to read the depth values into textures, and with cubemaps I guess I just have to do it for each of the sides. But the depth comparison is quite unclear to me. Can I perform it in fragment programs with cubemaps?

Also, how would I go about packing more depth precision into RGB channels?

paulc
07-22-2004, 02:43 AM
Well, for cubemaps you have to pack the depth information into RGB(A), as they currently don't support GL_DEPTH components of any sort. The problem is described in the spec for ARB_depth_texture:

(5) What about 1D, 3D and cube maps textures? Should depth textures
be supported?

RESOLVED: For 1D textures, yes, for orthogonality. For 3D and cube map
textures, no. In both cases, the R coordinate that would be ordinarily
be used for a shadow comparison is needed for texture lookup and won't
contain a useful value. In theory, the shadow functionality could be
extended to provide useful behavior for such targets, but this
enhancement is left to a future extension.

davepermen
07-22-2004, 03:25 AM
Hm. The Humus page is currently inaccessible for me, but there are some demos there on how to pack one float into the four channels of an ordinary RGBA texture, and unpack it again, to do high depth precision cubic shadow mapping.

Check Humus Page (http://esprit.campus.luth.se/~humus). But possibly he's getting a new host, at ATI or so, and that's why it's down currently..

blender
07-22-2004, 03:27 AM
Originally posted by paulc:
Well, for cubemaps you have to pack the depth information into RGB(A), as they currently don't support GL_DEPTH components of any sort.

How do I do it? Can I get the pixel depth value in a fragment program and then return it as RGB to be rendered, and later read it back into a cubemap face?

Sunray
07-22-2004, 03:46 AM
Originally posted by blender:
How to do it? Can I get the pixel depth value in a fragment program and then return it as a RGB to be rendered, and later read back into cubemap face?

Easy, pass the camera position as a uniform and calculate the distance for each vertex (CamPos - gl_Vertex.xyz).

This code stores the squared distance in an RGBA texture. Note, no packing is performed.


// Vertex Shader
uniform vec3 uPOV; // Point of view
varying vec3 vDistanceVector;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vDistanceVector = uPOV - gl_Vertex.xyz;
}

// Fragment Shader
uniform float uInvSqRange; // 1.0 / LightRange^2
varying vec3 vDistanceVector;

void main()
{
    gl_FragColor = vec4(dot(vDistanceVector, vDistanceVector) * uInvSqRange);
}

Set up a cubemap rendertarget (pbuffer) and render each face.

blender
07-22-2004, 04:10 AM
Sunray: so basically that's the distance relative to the light range, in the range [0, 1]?

Then, in my light shader, do I look up the distance from the cubemap with light vector to a pixel and then check which one is greater?

Sunray
07-22-2004, 05:52 AM
Correct. Remember to multiply dot(light->pixel, light->pixel) by 1.0 / LightRange^2 in the light shader.

blender
07-22-2004, 06:00 AM
One thing though: I'm using Cg, and in the shadow vertex program I should pass the light vector to the fragment program, but as what? If I make it a texture coordinate, won't it get clamped to the range [-1, 1]? I could calculate the relative distance in the vertex shader and pass it as a tc, but would it do the trick?

Sunray
07-22-2004, 06:13 AM
Yeah, that's an advantage of GLSL: you don't have to specify an interpolator.

It won't be clamped. Only color is clamped after vertex processing. (EDIT: Hmm, not sure if color is clamped, I'm probably wrong about that)

blender
07-22-2004, 06:54 AM
Everything appeared shadowed when I tried it (not that I'm surprised).

Time for an overview of my doings:

1. Draw the scene into cubemap from the light's point of view with shadow vertex and fragment programs enabled
(can't use a pbuffer, but this is done before anything else gets rendered to the screen)

2. Draw ambient pass

3. Draw scene with diffuse+specular light shaders enabled (shadow cubemap bound)

Here's a code snippet for rendering the cubemaps:

void CGLRenderer::PreRender()
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    SetRenderMode(RENDER_FILLED);
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE); // takes a GLboolean, not a float

    CLight::EnableShadowmapRendering();

    for(int i=0; i<CLight::GetLightCount(); i++)
    {
        CLight *pLight = CLight::GetLight(i);

        int Size = pLight->GetShadowmapSize();

        glViewport(0, 0, Size, Size);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(90, 1.0, 1, 5000);
        glMatrixMode(GL_MODELVIEW);

        for(int j=0; j<6; j++)
        {
            Clear();
            pLight->SetupCubeShadowmap(j);
            StaticMeshContext.Draw();
            pLight->ReadCubeShadowmap(j);
        }
    }

    SetRenderMode();
    CLight::DisableShadowmapRendering();
}

void CLight::SetupCubeShadowmap(int Side)
{
    cgGLSetParameter1f(CLight::ShadowSquareLightIntensity, SQUARE(Intensity));

    glLoadIdentity();
    switch( Side )
    {
    case 0:
        glRotatef(90, 0, 1, 0);
        break;

    case 1:
        glRotatef(-90, 0, 1, 0);
        break;

    case 2:
        glRotatef(90, 1, 0, 0);
        break;

    case 3:
        glRotatef(-90, 1, 0, 0);
        break;

    case 4:
        break;

    case 5:
        glRotatef(180, 0, 1, 0);
        break;
    }
    glTranslatef(Position[0], Position[1], Position[2]);
}



void CLight::ReadCubeShadowmap(int Side)
{
    glEnable(GL_TEXTURE_CUBE_MAP_ARB);
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, CubeShadowmap->GetID());

    glCopyTexSubImage2D(CubemapSides[Side], 0, 0, 0, 0, 0,
                        CubeShadowmap->GetSize(), CubeShadowmap->GetSize());

    glDisable(GL_TEXTURE_CUBE_MAP_ARB);
}

Here are the shaders:

Shaders (http://www.nomorepasting.com/paste.php?pasteID=16974)

SeskaPeel
07-22-2004, 07:26 AM
<LAME_QUESTION>
Sorry for that ...
I never played with shadow mapping, but this method of cube map rendering seems good to me.

What are the improvements compared to classic shadow mapping? Is all this mess about hardware acceleration? I heard about the "dual paraboloid" method, which is supposed to handle omnis; is this all about having a specific format for shadow textures that isn't feasible with cube maps?
</LAME_QUESTION>

Sorry again :)
SeskaPeel.

blender
07-22-2004, 08:07 AM
SeskaPeel, AFAIK in dual paraboloid shadow mapping you have two depth textures (front and back) that each have half the scene rendered with a 'paraboloid-shaped view frustum'. It produces some artifacts with poorly tessellated scenes, and all the demos I've seen looked awful. Though I haven't tried it out myself.

Ysaneya
07-22-2004, 08:58 AM
Blender: unless I'm mistaken, you are not scaling your to-light vector in the vertex shader. This vector has to be in the [0, 1] range for it to work.

Y.

blender
07-22-2004, 09:04 AM
Ysaneya, so they are clamped?!

Then obviously I can't calculate the light vector length in the fragment program.

SeskaPeel
07-22-2004, 10:37 AM
Blender, thanks for the answer, but it doesn't match my question, which was rather asking for a comparison between classic shadow mapping and cube map shadow mapping, where it was just stated that the latter doesn't support hardware acceleration.

I'm asking for more details, and maybe even how slow it could be to use cube maps instead of 2D hardware-accelerated depth textures, and why.

SeskaPeel.

blender
07-22-2004, 11:08 AM
SeskaPeel, I think this cubemapping technique is pretty much HW accelerated. Or maybe you mean that regular shadow mapping is *directly* HW accelerated, i.e. there are extensions to handle it directly, whereas with cubemaps you have to deal with things that are not directly shadow-rendering related.
If I manage to get this to work, I might benchmark. I believe it would be fast, and those cubemaps wouldn't have to be updated every frame (cubemap updating is probably the biggest performance eater in this case), just when the source or objects move.

I was also thinking about how to get soft-edged shadows with this:
If I pass e.g. 4 jittered light sources to the shaders instead of one, then do a shadow compare for each one and average the results, wouldn't that smooth the shadow edges slightly?
At least that's what I did in my lightmapper once and it seemed to work just fine.

Ysaneya
07-22-2004, 01:15 PM
Blender: the only thing you have to do is divide your light vector by the light radius. A vertex outside the light radius will get a value higher than 1, but it doesn't matter because there will be no lighting contribution outside the light radius.

It's pretty fast too, but a bit slower than standard 2D shadow mapping. However, with 2D shadow mapping you'd need to generate six 90° spot lights (or two 180° ones, i.e. dual paraboloids), so it's pretty much always a win in the end. Sampling a cube texture is slower than a normal 2D texture, and you have to do the distance calculations/comparisons in the pixel shader, but that's not as bad as you'd expect. However, one of the big performance problems is that you can no longer benefit from NVidia's hardware PCF (if you're using an NVidia card), and to antialias your shadow you need to average N samples around a fetched texel in the pixel shader. Then shading becomes a real bottleneck.

Y.

JustHanging
07-23-2004, 02:55 AM
I'm using dual-paraboloid shadow maps in my engine, and they don't look like crap. I don't have an up-to-date demo available, but here are a couple of screenshots:
http://www.hut.fi/~ikuusela/images/Image1.jpg
http://www.hut.fi/~ikuusela/images/trouble.jpg

The real difference between the two techniques remains unclear until somebody bothers to write a test app using both of the techniques in a similar, real-world situation. In general:

-DP shadows are heavy on geometry, cubemaps on fragments. This difference is made bigger by the fact that having the geometry always well tessellated allows you to move computations from the fragment level to the vertex level. The interesting thing is that for low-poly models you're practically always fill-limited, so the one with the fastest fillrate wins. When the amount of geometry increases, the need for additional tessellation for DP decreases, closing the gap on the geometry side.

-Dual-paraboloid maps are faster to update than cubemaps (unless retessellation is required). With cubemaps it's likely that several objects occupy more than one cubemap face, so they have to be drawn more than once. If you divide a DP map into top/bottom maps, often all moving objects belong to the bottom map.

-With DP maps it's easy to use different resolutions for the two maps; I often use a smaller resolution for the top map, since most of the detail is below the lights.

-Cubemaps always give better or equal quality than DP maps, and they're practically free of any distortion problems.

-More seams on cubemaps. Doesn't matter for basic implementation, but can hurt some special effects (it's hard to blur a cubemap).

-DP maps are harder to get working than cubemaps.

-The ability to exploit NVidia's PCF is a minor advantage, since it doesn't work on all cards anyway. Besides, there are other ways to antialias the shadows, for example the penumbra maps used in my screenshots. They generally give superior quality as long as the shadowmap resolution is good enough to capture all the details. If not, things'll look ugly :( They're slower for shadowmap updates, but very fast for static shadows.

So... No real answers here, but at least some points to consider.

-Ilkka

blender
07-23-2004, 03:02 AM
I've been checking whether the images in the cubemap textures are formed correctly, and they're not. According to the formula Dist^2/Rad^2, the furthest pixels should be white and the closest black. Well, I'm getting images where the scene is white in the back AND in front, and black in the centre (occasionally). I believe it's caused by bad shading. Since I can't pass the light vector with its full length to the fragment shader (since tcs are clamped to [-1, 1]), I have to calculate the relative distance in the vertex shader, and it gets interpolated across surfaces. With bad tessellation, the distances appear distorted.
Let's say the light is above a huge polygon. The vertices in front of the light and behind it are within a distance, and get shades of gray, and thus close to the camera there is no black, but some gray, because of interpolation.

EDIT: Hmm, not so sure if it's just shading, since some separate objects close to the light seem quite white too. :confused:

Ysaneya
07-23-2004, 03:16 AM
Texture coordinates are not clamped to [-1, 1]. You should calculate the distance in the pixel shader, not in the vertex shader.

Y.

blender
07-23-2004, 03:35 AM
Tried doing it in the fragment shader. Didn't change much. God damn it! Sometimes I really wonder why nothing ever works when I try implementing things.

blender
07-23-2004, 04:02 AM
Picture of negative Z cube face (http://koti.mbnet.fi/blender/poista/figureA.jpg)

Ysaneya
07-23-2004, 06:53 AM
Looking at your shaders, there seems to be quite a few problems.

First of all, you should really make sure that when outputting the color to the cube shadow map in your fragment program, the results are not outside the [0, 1] range. Adding a saturate command might help here, although I'm not sure why your near objects would appear brighter than your far objects, as you're supposedly writing the light-space distance.

You are using the squared distance - not that it's a bad thing - but then you should use 32 bits to store it, not only 8 bits. You are writing the same scalar value to all the RGBA components of the cube map, which effectively means that you're encoding a squared distance with only 256 values - you're not going to get anything of quality there, if it even works. To use 32 bits, you should pack the float value into the 4 components via a FRC instruction, and then extract it later with a DP4 instruction. Humus has a demo with shader code in ASM that shows this on his website; if it's still down, just let me know your email address and I'll send it to you.

Finally, last but not least, I do not really understand how you're performing the comparison when rendering your scene with Phong lighting and the cube shadow map. In your current code, you set your "Distance" variable to 1.0/IN.att (which I suppose from the name holds an attenuation value), while you should instead be comparing against the to-light distance (basically using the same formula you used to generate the cube shadow map values).

My own shader looks like this (in ASM, sorry):

For the vertex shader, final lighting equation:

# cube map shadow lookup:
TEMP r0;
SUB r0, lightPosS, inPos;
MUL r0, r0, scale.x;
MOV outTex3, -r0;

(lightPosS is the light position, inPos is the vertex position, scale.x is 1.0 / Max Light radius).

Then the pixel shader:

TEX shadow, fragment.texcoord[3], texture[3], CUBE;
DP4 shadow, shadow, extract; # extract RGBA to float

TEMP dist;
DP3 dist.w, fragment.texcoord[3], fragment.texcoord[3];
RSQ dist.x, dist.w;
MUL dist, dist.x, dist.w;
MUL dist, dist, 0.99; # bias to avoid Z-fighting
SGE shadow, shadow, dist; # comparison

Where fragment.texcoord[3] is the vertex-to-light vector interpolated per-pixel (NOTE: it should NOT be normalized in the vertex shader!), and extract is a constant vector containing the values (1, 1/256, 1/256^2, 1/256^3) - see Humus' explanations about packing/unpacking a float to RGBA and vice versa.

Y.

Humus
07-23-2004, 04:15 PM
Originally posted by davepermen:
Hm. The Humus page is currently inaccessible for me, but there are some demos there on how to pack one float into the four channels of an ordinary RGBA texture, and unpack it again, to do high depth precision cubic shadow mapping.

Check Humus Page (http://esprit.campus.luth.se/~humus). But possibly he's getting a new host, at ATI or so, and that's why it's down currently..

It's just Apache that has died for whatever reason, and I don't have root access to restart the machine either. I'm gonna get my old roommate to restart it for me whenever I can reach him.

blender
07-24-2004, 02:08 AM
Ysaneya, thanks for your detailed reply. I'm sorry about that messed-up shader code; I had made several changes to it and it became a bit confusing. The 1.0/Att thing was there because I needed the attenuation value, and I calculated it the same way as the distance, only inverted.
I'll try that packing ASAP, and I think I get the idea (if I'm not mistaken, the BMP format also used something like that for some reason).

bionicman
07-25-2004, 12:36 PM
Can you guys please confirm for me that the same technique of packing/unpacking the vertex distance into an RGBA texture would work for regular shadow maps? I.e. not using a cubemap, but simply projecting the generated shadowmap (with the vertex-to-light distance packed into RGBA), and then during the shadow determination/lighting pass, passing the vertex-to-light distance in the same fashion and comparing it with the unpacked RGBA value from the shadowmap?

Ysaneya
07-25-2004, 12:45 PM
Yes, there's no reason for it to not work.

SeskaPeel
07-26-2004, 01:38 AM
Originally posted by Ysaneya:
To use 32 bits, you should pack the float value into the 4 components via a FRC instruction, and then extract it later with a DP4 instruction.

Would it be possible to have a deeper explanation of this?

SeskaPeel.

Humus
07-26-2004, 01:43 AM
Cut'n'pasted from my "shadows that rocks" demo:

Pack:

PARAM packFactors = { 1.0, 256.0, 65536.0, 16777216.0 };

MUL dist, dist, packFactors;
FRC output, dist;

Unpack:

PARAM extract = { 1.0, 0.00390625, 0.0000152587890625, 0.000000059604644775390625 };

DP4 shadow, shadow, extract;

SeskaPeel
07-26-2004, 03:01 AM
Thanks Humus, got that one.

Let's continue with the lameness: what improvements can the GL shadow extensions provide compared to packing a float into an RGBA texture?

Is PCF for perspective-correct filtering? What is it supposed to do for me?

SeskaPeel.

blender
07-27-2004, 02:26 AM
I'm probably also having some issues with Cg itself. I don't know if it's buggy or something (I've had some weird lighting results that no one could really figure out), so I'll give plain asm programs a shot, just for testing. However, I'm not very familiar with how to set them up and what the correct syntax is for the ARB-specific programs (I only have a pdf for the NV equivalent, which seems to differ a bit).

This is how I load a vertex program:
glEnable(GL_VERTEX_PROGRAM_ARB);
glGenProgramsARB(1, &ID);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, ID);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, Size, Data);
glDisable(GL_VERTEX_PROGRAM_ARB);

And this is how I render:
glEnable(GL_VERTEX_PROGRAM_ARB);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, ID);
// Set up local vp variables
// Render
glDisable(GL_VERTEX_PROGRAM_ARB);

The same thing for fragment programs, except I change the type identifier.

Any good doc for ARB vp/fp? The instructions are AFAIK pretty much the same as with NV, but setting up local variables etc. seems to differ.

blender
07-27-2004, 02:35 AM
Found a pdf for arb from the nv's site :D .