Cube Depth Textures



Nakoruru
08-09-2002, 12:31 PM
I just wanted to ask if there were plans by any of the IHVs to support cube depth textures in the near future. Without them, directional lights are the only light type that needs no special cases. If you use the new perspective shadow map technique, spotlight angles are restricted to 180 - FOV degrees (where FOV is the field of view angle of the camera). Omni-directional lights are possible, but the viewer can never actually look in the direction of the light.

Cube depth textures are needed to make everything more general. When you have cube depth textures, using a 2D texture just becomes an optimization of some special cases.

I'm sure this can be implemented easily enough using the next generation shading languages, but I would rather not have to implement percentage closer filtering myself ^_^

vincoof
08-28-2002, 12:27 PM
You can still simulate a "cube depth texture" by using 6 different 2D depth textures. Of course it is very slow, but it's still a solution and can be handy for non-realtime cases.

Can't you spheremap depth textures?
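The six-texture workaround above amounts to rendering the scene from the light once per cube face. As a minimal sketch, here are the six view directions such a pass would iterate over, following the usual +X, -X, +Y, -Y, +Z, -Z cube-map face convention (the function name is mine, not from any API):

```c
#include <assert.h>

/* Forward direction for one of the six per-face depth passes.
   Face order follows the conventional cube-map layout:
   0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z. */
void face_dir(int face, float dir[3])
{
    static const float dirs[6][3] = {
        { 1.0f, 0.0f, 0.0f}, {-1.0f, 0.0f, 0.0f},
        { 0.0f, 1.0f, 0.0f}, { 0.0f,-1.0f, 0.0f},
        { 0.0f, 0.0f, 1.0f}, { 0.0f, 0.0f,-1.0f},
    };
    dir[0] = dirs[face][0];
    dir[1] = dirs[face][1];
    dir[2] = dirs[face][2];
}
```

Each pass would point the light's "camera" down one of these axes with a 90-degree FOV so the six depth maps tile the full sphere.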

Nakoruru
08-28-2002, 01:13 PM
No, because the depth comparison done in hardware depends on the way that the texture coordinates are generated.

With 2D depth maps, the hardware compares the r texture coordinate against the stored depth texel to get the shadow result.

If you had a cube depth map you would also need some way to calculate the x,y,z position of each fragment relative to the origin of the light. From this you could calculate distance and compare it to the fragment depth and get the shadow result.

I would be happy with a pixel shader instruction to get the x, y, z position of a fragment in eye space without wasting a texture coordinate or vertex program registers. With that I could write a pixel shader program to do cube depth mapping (I assume such hardware would come with 16- and 32-bit single channel textures that could be used as cube depth maps).

Failing that, I could waste some registers or a texture unit to get distance from the light.

Fragment position is just the more general solution. If they created it as an extension to ARB_depth_texture then that would be cool as well, but I am beginning to favor programmable solutions over fixed function. As an example, I would like to see the fixed function stencil and depth buffers become obsolete (but I realize that it will be a while).

vincoof
08-28-2002, 10:14 PM
If you don't want to waste a texture coordinate or vertex program registers, there aren't many other ways for the information to come in. If you don't use standard OpenGL lighting you could probably use the glColor command, but then you need the CPU to compute the values to send through glColor.

Humus
08-29-2002, 03:46 AM
With the R9700 I suppose you could render to a 16bit/channel cubemap and compute the radial distance in a pixel shader.
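A sketch of what a 16-bit-per-channel radial distance map implies: the pass rendering the cube map would write the distance normalized by some application-chosen light range, quantized to 16 bits, and the shadow pass would decode it for comparison. Function names and the `light_range` parameter are my own illustration:

```c
#include <assert.h>
#include <math.h>

/* Encode a light-to-fragment distance into one 16-bit channel,
   normalized by an assumed application-chosen light range. */
unsigned short encode_dist(float dist, float light_range)
{
    float n = dist / light_range;
    if (n > 1.0f) n = 1.0f;                     /* clamp beyond the light's range */
    return (unsigned short)(n * 65535.0f + 0.5f);
}

/* Recover the approximate distance for the shadow comparison. */
float decode_dist(unsigned short v, float light_range)
{
    return ((float)v / 65535.0f) * light_range;
}
```

With a 100-unit range the quantization step is about 0.0015 units, which is why 16 bits per channel (rather than 8) matters for this trick.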

Nakoruru
08-29-2002, 04:15 AM
vincoof, well, the whole point of the ARB_depth_texture extension is that you do not waste textures or vertex program registers/instructions. The comparison is done by the texture unit. Cube depth maps would just be an extension where the appropriate calculations are also done by the texture unit. The calculations are different from those for a 2D depth map, and that is why cube depth maps are not supported in hardware and require you to use ordinary textures and vertex/pixel programs to emulate them.

Humus, I think I said that that is what I would do in my own post. But at the same time, if it were supported in the texture unit there would be no need to 'waste' resources on it. The reason I put 'waste' in quotes is that I think hardware designers will eventually start insisting that they not be required to add any more fixed function operations like ARB_depth_texture. But that won't happen until we have a lot more resources to play with.

Nakoruru
08-29-2002, 08:13 AM
It looks like the NV30's 'window position' is what I was looking for! At first I thought it was the 2D x/y of the fragment in the frame buffer, but it turns out to be the x, y, z, 1/w of the fragment. This means that with a transformation and some math you can get the distance from the light source. Compare that to the cube map depth and you can determine shadow.
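The "some math" step can be made concrete. Assuming a standard glFrustum-style perspective projection and the default [0,1] depth range, recovering eye-space z from the window-space depth would look something like this (the function name is my own; x and y follow by a similar inverse of the viewport and projection transforms):

```c
#include <assert.h>
#include <math.h>

/* Invert the depth part of the fixed-function transform:
   window z in [0,1] -> NDC z in [-1,1] -> eye-space z,
   for a standard glFrustum projection with near plane n and far plane f.
   Eye space looks down -z, so the result is negative. */
float eye_z_from_window_z(float z_win, float n, float f)
{
    float z_ndc = 2.0f * z_win - 1.0f;
    return 2.0f * f * n / ((f - n) * z_ndc - (f + n));
}
```

A point on the near plane (z_win = 0) maps back to -n, and one on the far plane (z_win = 1) maps back to -f, as expected.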

Too bad that floating point texture maps are only available as rectangle maps on the NV30, otherwise they could be used as cube depth maps. Maybe the NV35 will have them ^_^