why no shadow cubemaps?

One thing that's bothered me for ages is why we don't have shadow cubemaps.
OK, maybe today's cards can't do it, but that doesn't explain why in e.g. GLSL there's no shadowCube sampler.

Aren't shadow cubemaps a logical feature?

How would they work?

Shadow textures work by doing a texture lookup with the (s,t) coordinate pair and comparing this to the r value of the texcoords. But for cubemaps and 3D textures you need the (s,t,r) coordinates for doing the lookup, so you’re running out of texcoord components :wink:

Of course you can use a standard texture target and do the comparison yourself.
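Doing the comparison yourself amounts to something like the following CPU-side sketch in Python (assuming a setup where the cubemap stores the squared distance from the light to the nearest occluder; the function name and bias value are made up for illustration):

```python
# Manual omnidirectional shadow test, mimicking what a hypothetical
# shadowCube sampler would do in hardware. Assumes the cubemap was
# rendered to store squared occluder distance from the light.
def shadow_test(stored_dist_sq, light_to_frag, bias=0.05):
    """Return 1.0 (lit) or 0.0 (shadowed)."""
    frag_dist_sq = sum(c * c for c in light_to_frag)  # |L - P|^2
    return 1.0 if frag_dist_sq <= stored_dist_sq + bias else 0.0

# A fragment nearer the light than the stored occluder is lit:
print(shadow_test(stored_dist_sq=4.0, light_to_frag=(1.0, 1.0, 1.0)))  # 1.0
print(shadow_test(stored_dist_sq=4.0, light_to_frag=(2.0, 2.0, 2.0)))  # 0.0
```

The (s,t,r) vector does double duty here: it selects the cube face for the fetch and, via its length, supplies the value to compare against.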

I understand the general motivation for such a thing, but I was guessing that Zed was hoping for an opportunity to dazzle us with the details :wink:

I imagine there are some practical reasons why we haven't seen such a thing. A good understanding of how something like this might be implemented may shed some light on things. Hardware, cost, spec, interactions, usage models, advantages, disadvantages… the devil is in the details.

I really haven’t given it much thought.

Why not add q in as well as (s,t,r)? Surely that should work.
The reasoning behind having cube shadow maps is, I believe, that some hardware (NVIDIA cards, for example) renders twice as fast if it only renders the depth info.
This matters because at the moment I'm doing shadows with cubemaps but doing the comparison in the color buffer, effectively working at half speed (or even slightly worse than that).

God, didn’t we have this conversation often enough??

  1. The r coordinate is already in use.
  2. You can do it by using float textures and comparing manually in a shader.

Now to the more important stuff:
Shadow cubemaps are not efficient, because:

  3. All sides are sized equally.
  4. You can't easily change the detail of the cubemap (or individual faces of the cubemap), because you would need several differently sized cubemaps, which burns memory.
  5. You need to recreate the cubemap every frame, because there is not enough memory to store each shadow cubemap => you really, really don't want to do more work to create it than necessary.
  6. Use 6 2D textures instead; that's more space-efficient (several sizes possible), and you can use lower detail on less important faces.
  7. Actually you don't even need 6 textures, but only 1, which you reuse several times.
  8. You get the full speed of traditional 2D shadow mapping, without additional extensions.
  9. You can cull each face individually.
  10. Some other stuff I can't explain properly in English.

Did I forget something?

Good night,
Jan.

God, didn’t we have this conversation often enough??
Sorry, I've missed it before.

Jan, you're a smart guy, but your points 3-10 aren't really valid.
Have you implemented both methods? (I have, btw.)

3/ OK, this is true, but is anyone actually using differently sized textures for the different viewpoints? At the moment I measure the light's area onscreen and then pick the appropriately sized (preexisting) cubemap to fill.
4/ What memory? A 256-pixel cubemap isn't exactly expensive, especially considering the effect it produces, i.e. shadows. Compare that with the many games that use maybe a 1024x1024 texture for a character's head.
5/ Memory again? If your lights are dynamic (or include moving objects), yes, you need to recreate the cubemap each frame.
6/ Ditto.
7/ Huh? That doesn't seem efficient.
9/ Nothing stops you from doing this with cubemaps.
At the moment I'm only rendering into the cubemap faces that contain shadow-caster polygons, i.e. you might only update one side of the cubemap.

Will shadow-cubemaps be used in professional games?

The thing is, nVidia and ATI won't waste resources on implementing something that is not important.

And, no, professional games won't use shadow cubemaps. They will all use the 2D texture approach. Stalker does (see GPU Gems 2). The next id engine almost certainly does (see John Carmack's video interview). And if the Unreal 3 engine uses shadow mapping at all (which I don't know), then it will use this approach, too.

And all the others? Well, everybody who wants to use omnidirectional shadow mapping can do it with 2D texturing, even if they don't use advanced culling or LOD techniques. So there is no MISSING feature. But the moment programmers want to heavily optimize their shadows, they will use the 2D approach.

Therefore the decision is simple. There will be no shadow-cubemaps. Ever. Well - at least not in the “near” future.

That’s it. And therefore it does not make sense to discuss it any further.

BTW: You want to use 256^2 sized shadow maps? OK, I agree, in that case memory is not such a big concern. However, I don't know what kind of application you are writing. In a first-person engine a 256^2 shadow map will, in most cases, not give very pleasing results. I was thinking about using 1024^2 textures, at least for important lights (I know that costs a lot of processing power). However, one single cubemap would then take 18 MB (24-bit).

With 2D textures you could use two 1024^2 textures (6 MB) and first render to texture A and do shadow mapping with texture A, then use texture B, and then texture A again - to address the issue that frequently reusing the SAME texture for both rendering and texturing might not be efficient, as you stated above.
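The arithmetic behind the 18 MB and 6 MB figures above, as a quick Python sketch (assumes tightly packed 24-bit texels, ignoring any padding or mipmap overhead a real driver may add):

```python
# Memory math for the figures above: 24-bit depth = 3 bytes per texel.
def shadow_map_bytes(size, faces=1, bytes_per_texel=3):
    return size * size * faces * bytes_per_texel

cube_1024 = shadow_map_bytes(1024, faces=6)  # one 1024^2 cubemap, 6 faces
two_2d = 2 * shadow_map_bytes(1024)          # two 1024^2 2D maps (A and B)

print(cube_1024 / 2**20)  # 18.0 (MB)
print(two_2d / 2**20)     # 6.0 (MB)
```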

Jan.

Hey, what about Dual-Paraboloid Shadow Mapping, by the way?
http://www.doomiii.jp/slang/archives/flx_demo_pa_1_0.htm

It’s nonlinear though. :stuck_out_tongue:

What's the difference between that method and using standard shadow mapping for two 180-degree-FOV spotlights pointing in opposite directions?

I'd love to see the source for this… I was totally stunned when I saw it the first time.

@knackered: standard linear projection (3D to 2D plane) can’t do 180° fov.
http://wouter.fov120.com/gfxengine/fisheyequake/compare.html

knackered,

As ZbuffeR already mentioned, it is not possible to make a 180 degree fov shadow map using linear projections like gluPerspective(). So, in order to do that, you have to take a nonlinear method such as the parabolic projection. (i.e. making a paraboloid map)

Nonlinear projections can be achieved by vertex shaders. In the case of the Dual-Paraboloid, you can see my GLSL shaders of the demo that you can download from the webpage I linked above.

However, the result of the paraboloid shadow map will be pretty bad if meshes are not tessellated enough. That's because the precision of the linearly interpolated fragment depth depends on the number of vertices. Consequently, this is a big problem in games, and it's why the Dual-Paraboloid is not used in today's games. (Certainly, the number of vertices in games is still increasing, though.)
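For reference, the paraboloid warp itself is simple. Here is a minimal CPU-side sketch in Python of the front-hemisphere mapping (the actual demo shaders are in GLSL; the function name is mine):

```python
import math

# Paraboloid projection for the front hemisphere (z >= 0); the "dual"
# scheme uses a mirrored second map for directions with z < 0.
def paraboloid_project(x, y, z):
    """Map a light-space direction to 2D paraboloid-map coordinates."""
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length   # normalize direction
    return (x / (1.0 + z), y / (1.0 + z))          # nonlinear warp

print(paraboloid_project(0.0, 0.0, 1.0))   # straight ahead -> (0.0, 0.0)
print(paraboloid_project(1.0, 0.0, 0.0))   # 90 degrees off-axis -> (1.0, 0.0)
```

The catch discussed above is that the hardware interpolates these warped coordinates linearly across each triangle, so coarsely tessellated meshes produce visible error.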

Vexator,

I'm sorry, but I won't be releasing the source code of the Dual-Paraboloid Shadow Mapping demo. However, as I said, the shader source files used in the demo are included in "data\shaders".

  6. Use 6 2D textures instead; that's more space-efficient (several sizes possible), and you can use lower detail on less important faces.
  7. Actually you don't even need 6 textures, but only 1, which you reuse several times.
  8. You get the full speed of traditional 2D shadow mapping, without additional extensions.
  9. You can cull each face individually.
    Considering that this rendering methodology utterly strips away from shadow mapping 100% of its advantages over shadow volumes, why would you bother? The principal advantages of shadow mapping are that it only requires N + 1 passes, while shadow volumes require 2N + 1 passes. The only way to retain this for point lights is to either use cube maps or use 6 independent textures.
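The pass counts being compared, as a trivial Python sketch (N counts shadow-casting lights; the exact per-light cost varies by implementation):

```python
# Pass counts for N shadow-casting lights, per the argument above.
def shadow_map_passes(n):
    # one shadow-map render per light, plus one final shading pass
    return n + 1

def shadow_volume_passes(n):
    # one ambient/depth pre-pass, then a stencil-volume pass and a
    # lighting pass for each light
    return 2 * n + 1

print(shadow_map_passes(4), shadow_volume_passes(4))  # 5 9
```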

Hey, what about Dual-Paraboloid Shadow Mapping, by the way?
Not good; the geometry has to be tessellated quite well.

Therefore the decision is simple. There will be no shadow-cubemaps. Ever. Well - at least not in the “near” future
Two major points in favour of cubemaps over 2D shadow maps (from my testing, even with cube shadow maps being 2x slower than rendering just the depth, they are still a lot quicker than emulating a point light with standard 2D shadow maps):

A/ They do NOT suffer from back-projection, and this allows them to run faster.

B/ With standard SMs, when you draw the shadow receivers, what do you do when a mesh straddles the SMs' frustums, e.g. in the down+left directions?
You either have to bind 2 (or more) SMs and draw the geometry, or draw the shadow receiver multiple times.
This is obviously slower than the cubemap method of one texture bind and one receiving render pass.

BTW: You want to use 256^2 sized shadow-maps? Ok, i agree, in that case memory is not such a big concern. However, i don’t know what kind of application you are doing. In a first-person engine a 256^2 sized shadow-map will, in most cases, not give very pleasing results. I was thinking about using 1024^2 sized textures, at least for important lights (i know, that costs a lot of processing power). However, one single cubemap would then take 18 MB (24 Bit)
OK, this is true. In my case I'm using smaller SM sizes than most people because A/ mine is a fast action game and B/ there are a lot of lights (btw, I'm also using 8-bit lights for extra speed), but I think you're overestimating the importance of memory.

Thanks for the pointer, slang.

Originally posted by Korval:
The principal advantages of shadow mapping are that it only requires N + 1 passes, while shadow volumes require 2N + 1 passes.
I always thought it was that they don't require the huge amounts of fill that stencil volumes do, and need virtually no scene processing on either the CPU or GPU, unlike stencil volumes. There's also the fact that you can use decently tessellated meshes for your dynamic objects… and they also enable cookie-cut textures to correctly cast shadows.
I can't remember the number of passes ever being singled out as a significant issue, not compared to the other overwhelming disadvantages of stencil volumes anyhow.

they didn’t require the huge amounts of fill as stencil volumes
The hardware takes care of that by running at greater speeds when not drawing colors.

I can't remember the number of passes ever being singled out as a significant issue, not compared to the other overwhelming disadvantages of stencil volumes anyhow.
Singled out by whom?

And you can’t deny that being able to go from 2n + 1 to n + 1 passes is an incredible performance boost in and of itself. It may as well be a 2x performance increase. Yes, you could stick with 2n+1 for shadow maps, and you would still get some performance increase. But the increase from an n+1 approach is still greater than what you get with 2n+1. Not only that, you get far fewer shader swaps with n+1 vs. 2n+1.

In terms of data plumbing and coordinate math, there are enough differences between cube maps and shadow cube maps to have delayed their direct hardware implementation. That much is obvious.

I am sure there will eventually be direct hardware support though, because it’s obviously useful and not all that difficult to support.

Edit: I should preview and proofread my posts, but I never do.

Originally posted by Korval:
The hardware takes care of that by running at greater speeds when not drawing colors.
I’ll take that as a joke at the expense of scalability.

Originally posted by Korval:
Signalled out by whom?
By every single person that has ever done a comparison between the two techniques, including myself, John Carmack, Charles Bloom, etc. etc.

Cass, the problem is quite interesting though. You have the classic comparison of texture vs. r with a depth map, but with a shadow cube map you have the comparison against the major axis. It seems like the per-fragment selection of the test interpolant isn't such a big hurdle, because the major-axis interpolant is already selected per pixel prior to the coordinate divide with a cube map anyway. I don't think cubemaps per se are the issue, but the classic depth-test texture doesn't fit neatly into that spec: specifically, the major axis instead of the r coordinate is used in the texture-unit comparison. Since it is always known prior to the fetch, though, it would seem that the right design could support this for 'free'. The required hardware support seems trivial, requiring only some forethought, whether you handle cube edges correctly or not (multiple separate Ma per pixel offers its own solution).

The most significant issue is mapping linear Ma coordinates to frustum depth with suitable precision; or, more realistically, rendering the individual cube-face depth maps with matching axis-linear z (as opposed to screen-linear z). Something like a vertex shader that assigns a perspective-correct varying of suitable precision, linear in eye z, would do it. You could z-buffer the shadow-map face rendering as normal, but the depth texture would come from the axis-linear, perspective-correct varying.

That's probably the biggest reason they aren't currently supported: the lack of projection support to get from a linear cube-map Ma coordinate to frustum z on a cube face. It's not as trivial as it first seems, although I think my workaround could get us there; it's not a huge departure from what's been done by others before.
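The per-fragment major-axis selection described above can be sketched on the CPU like this (Python; the axis-linear stored depth and the bias are assumptions for illustration):

```python
# Sketch of the per-fragment major-axis (Ma) selection: the same
# selection a cube-map fetch already performs before its coordinate
# divide, reused here as the depth-test interpolant.
def major_axis(x, y, z):
    """Return the magnitude of the dominant component of (x, y, z)."""
    return max(abs(x), abs(y), abs(z))

def cube_shadow_test(stored_face_depth, light_to_frag, bias=0.01):
    """Compare the fragment's Ma against the depth stored on the
    selected cube face (assumed rendered axis-linear, as proposed)."""
    ma = major_axis(*light_to_frag)
    return 1.0 if ma <= stored_face_depth + bias else 0.0

print(cube_shadow_test(5.0, (1.0, -4.0, 2.0)))  # Ma = 4.0 -> 1.0 (lit)
print(cube_shadow_test(3.0, (1.0, -4.0, 2.0)))  # Ma = 4.0 -> 0.0 (shadowed)
```

The hard part, as noted, isn't this selection but producing stored face depths that are linear in Ma rather than in screen z.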