Help reconstructing pixel position from depth

Hi everyone,

I’m making changes to my deferred system. I want to drop the position texture (GL_RGB32F is too expensive), so in the light pass I have to reconstruct the pixel position from depth.
I’ve been trying for a few days without luck, so now I’m somewhat frustrated.

What works (old position texture):
Geometry pass - Vertex:


vsPosition = ( ModelViewMatrix * vec4(in_Position, 1.0) ).xyz;
...
gl_Position =  MVP * vec4(in_Position, 1.0);

Geometry pass - Fragment:


fsPosition = vsPosition;

This is how I store the pixel positions in the geometry pass.

So the next step is to figure out how to get this without the texture. I tried MANY things that I found around the net (without luck…); what I have now is:

Light pass - fragment:


vec2 calcTexCoord()
{
    return gl_FragCoord.xy / ScreenSize;
}

vec3 positionFromDepth()
{    
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x * 2.0 - 1.0;
    vec4 pos = InvProjectionMatrix * vec4( sp*2.0-1.0, depth, 1.0);
    return pos.xyz/pos.w;
}
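As an aside, the inverse-projection route in this function is mathematically sound: applying P⁻¹ to an NDC point with w = 1 yields the eye-space point up to a homogeneous scale, and the final divide by pos.w removes that scale. A standalone C++ sketch of the round trip, using a hypothetical symmetric GL frustum (all near/far/right/top values are made up for the check):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Forward path: project an eye-space point to NDC with a standard
// symmetric OpenGL perspective matrix (n = near, f = far, r = right,
// t = top, both measured at the near plane).
Vec3 projectToNdc(Vec3 eye, double n, double f, double r, double t) {
    double A = -(f + n) / (f - n);      // P[2][2]
    double B = -2.0 * f * n / (f - n);  // P[2][3]
    double clipW = -eye.z;              // P[3][2] = -1
    return { (n / r) * eye.x / clipW,
             (n / t) * eye.y / clipW,
             (A * eye.z + B) / clipW };
}

// What the shader does: InvProjectionMatrix * vec4(ndc, 1), then xyz / w.
// The inverse projection is applied analytically here.
Vec3 reconstructEye(Vec3 ndc, double n, double f, double r, double t) {
    double A = -(f + n) / (f - n);
    double B = -2.0 * f * n / (f - n);
    double x = (r / n) * ndc.x;   // inverse of the x row
    double y = (t / n) * ndc.y;   // inverse of the y row
    double z = -1.0;              // e.z = -c.w = -1
    double w = (ndc.z + A) / B;   // from c.z = A*e.z + B*e.w with e.z = -1
    return { x / w, y / w, z / w };
}
```

With window-space (0…1) inputs this round trip would not hold, which is why the `*2.0 - 1.0` on both `sp` and `depth` matters: it is what makes the shader’s input a valid NDC point.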

vec3 computePhongPointLight()
{
    vec3 color = vec3(0.0);
    vec2 texCoord = calcTexCoord();
    
    //vec3 position = texture2D( Texture0, texCoord ).xyz;
    vec3 position = positionFromDepth();
    vec3 difColor = texture2D( Texture1, texCoord ).xyz;
    vec3 specColor = texture2D( Texture2, texCoord ).xyz;
    vec3 normColor = texture2D( Texture3, texCoord ).xyz;
    
    vec3 lightDir = Light.position.xyz - position;
    vec3 lightDirNorm = normalize( lightDir );
    float sDotN = max( dot(lightDirNorm, normColor), 0.0);
    
    float att = 1.0;
    float distSqr = dot(lightDir, lightDir);
    float invAtt =  (Light.constantAtt + 
                    (Light.linearAtt*sqrt(distSqr)) + 
                    (Light.quadraticAtt*distSqr));
    att = 0.0;
    if (invAtt != 0.0)
        att = 1.0/invAtt;
    
    vec3 diffuse = difColor.rgb * Light.diffuse * sDotN;
    vec3 ambient = difColor.rgb * Light.ambient; // Cheat here
    
    vec3 vertexToEye = normalize(position);
    vec3 r = normalize(reflect(lightDirNorm, normColor));
    
    // SpecularPower
    vec3 specular = vec3(0.0);
    if ( sDotN > 0.0 )
    {                
        specular =     Light.specular.rgb * 
                    specColor * 
                    pow( max( dot(vertexToEye, r), 0.0), 60.0 ); // Change specular here!!! The value 60 should be a uniform
    }
    
    return (diffuse + specular + ambient)*att;
}

Where Texture0 is old position texture, Texture4 is depth texture, InvProjectionMatrix is the inverse projection matrix, and Light.position is computed as ViewMatrix * light_position.

I did some debugging and output the absolute difference:


vec3 pos1 = positionFromDepth();
vec3 pos2 = texture2D(Texture0, calcTexCoord()).xyz;
fragColor = vec4(abs(pos2.x-pos1.x), abs(pos2.y-pos1.y), abs(pos2.z-pos1.z), 1.);

The output is all black, except for the empty zones, which are white.

I think it’s a space error, something like: the light position is in view space and the pixel position is in some other space, so “vec3 lightDir = Light.position - position” gives bad results. But I can’t figure out what’s happening, or how to solve it…
I would really appreciate any help (because I don’t know what more to do), and I hope that someone with better math can help me :slight_smile:

Sorry for my bad English and thanks in advance.

See this active thread, and the solution I suggest:

One thing I notice about your approach that seems strange is that you are feeding NDC-SPACE positions into the inverse PROJECTION matrix, instead of feeding in CLIP-SPACE positions. And then you’re doing a perspective divide “after” getting back into EYE-SPACE (?) Gut says this is not equivalent, but haven’t cranked through anything on paper.

Yes, I don’t really understand what’s going on, so I just searched documentation and tested things; I’m really lost.
You are suggesting that I replace the position texture with another depth texture (with linear depth), so the render target will have two depth textures, right?
Is it not possible to get it from the normal depth texture, or is it just hard?

Thanks for your time

PS: I was trying what you said. CLIP-SPACE is without the (*2.0 - 1.0) part, right?
With something like this:



vec3 positionFromDepth()
{
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x;
    vec4 pos = InvProjectionMatrix * vec4( sp, depth, 1.0);
    return pos.xyz;
}

Will I have the position in view space? I’m trying to undo the steps made in the old system.
What I have in my head is:

1- Get coordinates for the texture lookup
2- Retrieve depth
3- Create a point in projection space
4- Transform it into view space using the inverse projection matrix
5- Use it

Something (or many things) must be wrong with this, because it doesn’t work.

I tried your PositionFromDepth_DarkPhoton function without success.

What I get is:
When the camera gets close to the light, the whole scene gets illuminated; when I move away from the light, the whole scene gets darker.
I don’t understand why, because if I’m not wrong, the pixel position from your function and the light position are both in eye space.
The light position is calculated in C++ as: ViewMatrix * world_position_light.

You’ll get it. Just go slow and make small incremental changes, backing up when you get unexpected results from the last micro-change.

Here’s a good pictorial reference for the space transformations:

You are suggesting that I have to replace the position texture with another depth texture (with linear depth)

No, not with linear depth (such as EYE-SPACE depth). Instead, with a standard WINDOW-SPACE depth texture. You get this by default by just rendering your depth information to a DEPTH or DEPTH_STENCIL texture (e.g. GL_DEPTH_COMPONENT24, GL_DEPTH24_STENCIL8, etc.) by attaching one of these textures to the depth attachment of your FBO and rendering like normal.

…so now the render target will have two depth textures, right?

No, just one. And it’s a single channel (e.g. GL_DEPTH_COMPONENT24).

Is it not possible to get it from the normal depth texture, or is it just hard?

I’m suggesting you just use a normal depth texture.

PS: I was trying what you said. CLIP-SPACE is without the (*2.0 - 1.0) part, right?

The x*2.0-1.0 converts a 0…1 value (e.g. WINDOW-SPACE) back into a -1…1 value (e.g. NDC-SPACE). CLIP-SPACE is one more space back from NDC, before the perspective divide. See the diagram in the link above for details.

With something like this:

vec3 positionFromDepth()
{
vec2 sp = calcTexCoord();
float depth = texture2D(Texture4, sp).x;
vec4 pos = InvProjectionMatrix * vec4( sp, depth, 1.0);
return pos.xyz;
}

Problems here are that your sp and depth most likely are 0…1 values (WINDOW-SPACE) while what you would need to feed into the inverse projection matrix should be CLIP-SPACE.

Did you apply it to a standard WINDOW-SPACE depth value, as written by the pipeline to a standard depth attachment?

If so, show more code and tell me what you did. A short stand-alone GLUT test program would be even better.

Also, it can be very helpful for debugging to have a debug screen that displays a linearized depth value, where 0 (black) = near clip plane and 1 (white) = far clip plane. You can use the function I mentioned to produce an EYE-SPACE depth value. Then map -near…-far to 0…1 to get black…white.
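That debug view can be prototyped off-GPU first. A small C++ sketch of the suggested mapping (near/far values are hypothetical): `windowDepth` is the 0…1 value sampled from the depth texture, and the result is the 0…1 gray level, 0 at the near plane and 1 at the far plane.

```cpp
#include <cassert>
#include <cmath>

// Window-space depth (0..1) -> eye-space Z (negative: -n at near, -f at far),
// then remapped to a 0..1 gray for display. n = near plane, f = far plane.
double linearizedDepthGray(double windowDepth, double n, double f) {
    // Standard window-depth linearization: n*f / (d*(f-n) - f)
    double eyeZ = n * f / (windowDepth * (f - n) - f);
    // Map -n..-f to 0..1.
    return (-eyeZ - n) / (f - n);
}
```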

…and light position are in eye space. Light position is calculated in C++ as: ViewMatrix * world_position_light.

A suggestion: you’ve got too many fish in the air right now. I’d forget the lighting and light position, and just render the linearized depth value on-screen for each pixel (black…white meaning Z=-near…-far) and get that 100% perfect before you throw lighting into this.

First of all, thanks for your time and your patience!

Okay, I will try to explain step by step what I have right now.

1 - A depth texture attached to an FBO. It has internal format GL_DEPTH24_STENCIL8, format GL_DEPTH_COMPONENT, and type GL_FLOAT.
It also has GL_TEXTURE_COMPARE_MODE at the default (I think it’s GL_NONE), so I can read from it.
The depth and the stencil both work fine, because I use them all the time without problems (writing, for sure).

2 - The old position texture that works (GL_RGB32F) was computed as:
ViewMatrix * in_Position, where in_Position is the input vertex.
ViewMatrix is computed as inverse(camera_full_transform). I’m using the GLM library.
So I write pixel positions in view space, right?

3 - Pixel position reconstruction (your function):


vec3 positionFromDepth()
{
    vec2 ndc;
    vec3 eye;

    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x;

    eye.z = ClipDistance.x * ClipDistance.y / ((depth * (ClipDistance.y - ClipDistance.x)) - ClipDistance.y);

    ndc.x = ((gl_FragCoord.x / ScreenSize.x) - 0.5) * 2.0;
    ndc.y = ((gl_FragCoord.y / ScreenSize.y) - 0.5) * 2.0;

    eye.x = ( (-ndc.x * eye.z) * (Right-Left) / (2*ClipDistance.x)
          - eye.z * (Right+Left) / (2*ClipDistance.x) );
    eye.y = ( (-ndc.y * eye.z) * (Top-Bottom) / (2*ClipDistance.x)
          - eye.z * (Top+Bottom) / (2*ClipDistance.x) );

    //eye.x = (-ndc.x * eye.z) * Right/ClipDistance.x;
    //eye.y = (-ndc.y * eye.z) * Top/ClipDistance.x;

    return eye;
}
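As a sanity check on the eye.x/eye.y unprojection in that function: it follows from inverting the first row of the glFrustum projection matrix, and can be verified numerically in plain C++ (the frustum values below are made up, including an asymmetric Left/Right pair):

```cpp
#include <cassert>
#include <cmath>

// Forward: eye.x -> ndc.x through the first row of glFrustum's matrix
// (n = near, l = Left, r = Right).
double ndcXFromEye(double eyeX, double eyeZ, double n, double l, double r) {
    double clipX = (2.0 * n / (r - l)) * eyeX + ((r + l) / (r - l)) * eyeZ;
    return clipX / (-eyeZ);  // perspective divide, clip.w = -eye.z
}

// Inverse: the same expression the shader uses for eye.x (and, with
// Top/Bottom substituted, for eye.y).
double eyeXFromNdc(double ndcX, double eyeZ, double n, double l, double r) {
    return (-ndcX * eyeZ) * (r - l) / (2.0 * n)
           - eyeZ * (r + l) / (2.0 * n);
}
```

If the algebra checks out numerically like this, the remaining suspects are the inputs (the sampled depth value, the uniforms, the bound textures) rather than the formula itself.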

I have gDEBugger, so I’ve checked everything many times (uniforms, etc.).

4 - The light position is passed as a uniform as: ViewMatrix * light_world_position, so again it is in view space.
For testing purposes I simplified my Phong light function to just attenuate by distance; here it is:


vec3 computePhongPointLight()
{
    vec3 color = vec3(0.0);
    vec2 texCoord = calcTexCoord();

    vec3 difColor = texture2D( Texture1, texCoord ).xyz;

    vec3 position_depth = positionFromDepth();
    vec3 position_texture = texture2D( Texture0, texCoord ).xyz;

    //vec3 position = position_texture; // <--- WORKS!
    vec3 position = position_depth;  // <-- DOESN'T WORK!

    vec3 lightDir = Light.position.xyz - position;

    float att = 1.0;
    float distSqr = dot(lightDir, lightDir);
    float invAtt =  (Light.constantAtt + 
                    (Light.linearAtt*sqrt(distSqr)) + 
                    (Light.quadraticAtt*distSqr));
    att = 0.0;
    if (invAtt != 0.0)
        att = 1.0/invAtt;

    return difColor*att;
}

What did I test here?
a) position = position_texture; <---- WORKS
b) position = position_depth; <---- FAILS
c) position = vec3(position_texture.x, position_texture.y, position_depth.z); <---- FAILS
d) position = vec3(position_depth.x, position_depth.y, position_texture.z); <---- FAILS

I also tested displaying the depth (to visualize it) and the old position texture on a full-screen quad.

fragColor = vec4(position_depth.x, position_depth.y, position_depth.z, 1.0);
fragColor = vec4(position_texture.x, position_texture.y, position_texture.z, 1.0);

fragColor = vec4(abs(position_depth.x), abs(position_depth.y), abs(position_depth.z), 1.0);
fragColor = vec4(abs(position_texture.x), abs(position_texture.y), abs(position_texture.z), 1.0);

Notes: nearly identical output for both. They differ in the empty spaces: the texture method displays them as black, and the depth method displays them as white. So I suppose the first stores (0, 0, 0) for empty spaces and the depth stores 1.0 for empty spaces/infinity?
Anyway, apart from the empty spaces (sky), the output was the same.

I also tested the absolute difference of the two to see what’s different, like:
fragColor = vec4(abs(position_depth.x-position_texture.x), abs(position_depth.y-position_texture.y), abs(position_depth.z-position_texture.z), 1.0);

And the output was:
[attached screenshot: the absolute-difference output]

So what I can deduce is that only the sky is shifted, and the geometry must be the same. But no, because the illumination doesn’t work… :frowning:
(The thin white lines are for debugging AABBs; it’s a post-deferred pass, which is why they’re displayed.)

Apart from this I tested many things, but nothing made sense.

I really appreciate your help, and if you need any more information (states, uniforms, anything) please ask.

I can’t post the program because it’s ~30,000 lines long… :slight_smile:

I think I found my problem!!!

Here goes:
When I compute lighting, I first do a stencil-test pass and then perform the light draw pass (for spot and point lights). The problem is that you cannot read from the depth texture while you have the stencil test enabled and the depth_stencil texture bound for stencil testing. I think this is why my quad-mesh tests output correct results, but the values were wrong when I did the light computation.

Question: I have to remove the stencil-test optimization, right?

Finally, after all these days figuring out what was wrong… I was going mad because I thought my math was really, really bad (it’s just bad).

Thanks very much, Dark Photon, for your help, I really appreciate it!
Out of my happiness I decided to compress normal values as described in Compact Normal Storage for small G-Buffers · Aras’ website. To my surprise it was really easy.

:slight_smile:

That’s possible. Reading depth while testing (aka reading) stencil might do it. Though reading depth while “writing” stencil (which you’re probably also doing) is also likely to cause problems.

The spec does specify undefined behavior in terms of “textures” bound as render targets and shader inputs rather than as specific attachments, which suggests that your supposition is correct.

Question: I have to remove the stencil-test optimization, right?

Would suggest you make sure this is the problem first. Establishing this is easy: after rendering your opaque depth buffer, just glBlitFramebuffer a copy of it over to another texture with the same res/format, and feed that copy into your shader sampler input, using the existing one as the depth/stencil attachment of your FBO. Then do your lighting passes (with stencil magic). If the problem goes away, then that’s probably it.

Rasterizing another G-buffer channel for a copy of depth is another option, though likely more expensive.

But re stencil-test optimization, let’s take a step back. Consider that it might be cheaper just to throw light quads at the GPU rather than do a bunch of state changes and batches per light rendered (possibly in general, depending on your scene/lighting/CPU/GPU, but especially with tile-based deferred). That allows you to batch a bunch of lights together and throw them at the GPU in one batch (possibly with one-sided depth test), with no state changes in between (just uniform updates before each batch) – very easy on the CPU and no pipeline bubbles. This is mentioned in a number of places but for instance, see Deferred Rendering for Current and Future Rendering Pipelines (Lauritzen, SIGGRAPH 2010).

…also re your white vs. black for “background” areas. You need to clear your depth buffer for the main depth buffer (gets cleared to FAR = 1.0 = white). If you rasterize a separate depth channel in the G-buffer, then you need to clear it to the FAR value as well OR ensure that all fragments on the G-buffer (that you care about) are overwritten so there are no leftover “trash” values.

Would suggest you make sure this is the problem first. Establishing this is easy: after rendering your opaque depth buffer, just glBlitFramebuffer a copy of it over to another texture with the same res/format, and feed that copy into your shader sampler input, using the existing one as the depth/stencil attachment of your FBO. Then do your lighting passes (with stencil magic). If the problem goes away, then that’s probably it.

I didn’t do a really in-depth test to be 100% sure of what I said, but I’m 99.9% sure: reading from depth while reading from stencil is what fails. I think it’s because you have to bind the depth texture as a render target (for the stencil test) and as an input texture (to read depth values). I suppose reading from depth and writing to stencil also fails.
I’m using Fedora 17 x64 with proprietary NVIDIA drivers (up to date).

…also re your white vs. black for “background” areas. You need to clear your depth buffer for the main depth buffer (gets cleared to FAR = 1.0 = white). If you rasterize a separate depth channel in the G-buffer, then you need to clear it to the FAR value as well OR ensure that all fragments on the G-buffer (that you care about) are overwritten so there are no leftover “trash” values.

The black sky was for the position texture because I clear it with (0, 0, 0, 0); the depth buffer is cleared correctly.

Anyway, I removed the stencil test. What I did (as a replacement) was: enable front-face culling, enable depth test, disable depth write, set the depth function to less-or-equal, and render a cube. And now it seems to work.

I have some more questions,

  1. Do you know a better way to compute the point/spot light radius? Right now I just solve the quadratic equation for a given threshold. Is the current threshold (1/16) the optimum?

void PointLight::updateRadius()
{
    // radius = ( -(th*l) +/-sqrt(D) ) / ( 2*(th*q) )
    // D = (th*l)² - 4*(th*q) * (th*c-dif)

    float32 th = 1.f/16.f; // 1/16
    //float32 th = 1.f/8.f;
    //float32 th = 1.f/12.f;
    //float32 th = 1.f/14.f;

    float32 dif = Core::max(Core::max(_diffuse.x, _diffuse.y), _diffuse.z);
    float32 D = (th*_linearAtt)*(th*_linearAtt) - 4*(th*_quadraticAtt) * (th*_constantAtt-dif);

    if (D < 0.f)
    {
        _radiusCache = 0.f;
        return;
    }

    float32 div = 2 * (th*_quadraticAtt);

    if (div == 0.f)
    {
        _radiusCache = 0.f;
        return;
    }

    float32 u = -(th*_linearAtt) / ( 2*th*_quadraticAtt);
    float32 v = sqrt(D) / (2*th*_quadraticAtt);

    if ( (u+v) > (u-v) )
        _radiusCache = u+v;
    else
        _radiusCache = u-v;
}
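A quick check of the quadratic above: dividing the threshold equation through by th gives q·r² + l·r + (c − dif/th) = 0, and at the resulting radius the attenuated intensity should land exactly on the threshold. A small standalone C++ sketch of that simpler equivalent form (the coefficients below are hypothetical, not from the engine):

```cpp
#include <cassert>
#include <cmath>

// Solve dif / (c + l*r + q*r^2) = th for r, taking the larger root.
// Returns 0 when there is no real solution or the quadratic term vanishes.
double pointLightRadius(double c, double l, double q, double dif, double th) {
    if (q == 0.0) return 0.0;
    double D = l * l - 4.0 * q * (c - dif / th);
    if (D < 0.0) return 0.0;
    return (-l + std::sqrt(D)) / (2.0 * q);  // larger root of the quadratic
}
```

There is no single “optimal” threshold, though: it trades fill cost against a visible cut-off edge at the light boundary, so the right value depends on the scene and on tone mapping.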

  2. I’m interested in tile-based deferred rendering; have you got more information about this technique? (I’m interested in any optimized technique.)

Edit: In the end I think I will go with two depth buffers, because I need the stencil test for deferred decals.
Which format would be better: a single 32-bit floating-point texture, or encoding into the 8-bit channels of an RGBA texture?

Have you tried using a separate depth and a separate stencil attachment? In that case you only have to have the depth texture bound for texturing and for depth test (read-only) and the stencil texture bound for stencil test (read/write). This probably should work fine.

I didn’t know that I could separate the two; I believed they always came together. Right now I’m packing linear depth into a 24-bit RGB texture, but I’m not really comfortable with this.
So it’s possible to bind depth as a render target (for the stencil test) and as a texture (to reconstruct positions) if both operations are read-only? (I mean with separate depth/stencil.)

I was thinking, for example, of deferred decals, where I perform a stencil operation to draw onto a specific mesh (and don’t paint the others in the volume). Is that possible with only one depth texture?

[QUOTE=Junky;1244976]I have some more questions,

  1. Do you know a better way to compute the point/spot light radius? Right now I just solve the quadratic equation for a given threshold. Is the current threshold (1/16) the optimum?[/quote]

Looking at your math I’m not 100% sure exactly what you’re doing here, but intuitively I think you’re asking about how to define a bounding solid (sphere, cone, something like that) around the light source’s area of influence to use for the lighting pass.

If you use a cone, two issues: cone angle, and cone length.

Regarding cone angle, I can tell you I really dislike OpenGL’s point light source cone angle attenuation function, because it never fades out to 0 until 180 deg. This means the actual angle you need for a bounding cone may vary all over the map depending on your tone mapping function, overlapping lights, etc. So I don’t use it. Instead I use the D3D9 cone angle attenuation function because it is, guaranteed (no fooling), 100% gone by the outer cone angle.

Regarding cone length, that’s a bit tricky, so you just have to be conservative. The thing is, how far out the light is significant depends on tone mapping and other factors.

There’s been a lot of stuff over the years. Just search any SIGGRAPH or GDC presentations in the last 5 years for deferred shading stuff. Pair your deferred shading search with DICE, Crytek, or other game shops to increase your hits. Here’s some random stuff in the last few years that mentions it:


Battle-tested Deferred Rendering on PS3, XBox 360, and PC
S.T.A.L.K.E.R.: Clear Sky
Deferred Lighting and Post-processing on PS3
Parallel Graphics in Frostbite - Current & Future
Rendering Tech at Black Rock Studios
Crytek: Future Graphics in Games - Notes
Bending the Graphics Pipeline
CryEngine3: Reaching the Speed of Light
Deferred Rendering
Screen Space Classification for Efficient Deferred Shading

The concept is fairly simple: instead of taking a G-buffer read and lighting buffer write fill hit for EVERY light source, bin the light sources by which tile(s) of the screen they cover (tile = MxN pixel block), and render ALL of the light sources for each bin at once (i.e. read G-buffer -> +light+light+light+light+light+light -> write/blend lighting buffer). Essentially, it’s just batching light sources that overlap influence regions together.
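The binning step described above can be sketched in a few lines of C++ (the tile size, screen size, and Rect type here are all hypothetical; a real implementation would derive each light’s screen rectangle from its bounding sphere or cone):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Screen-space bounding rectangle of a light, in pixels (inclusive bounds).
struct Rect { int x0, y0, x1, y1; };

// For each MxM pixel tile, collect the indices of the lights whose
// rectangle touches it. Each tile's list can then be shaded in one batch.
std::vector<std::vector<int>> binLights(const std::vector<Rect>& lights,
                                        int screenW, int screenH, int tile) {
    int tilesX = (screenW + tile - 1) / tile;
    int tilesY = (screenH + tile - 1) / tile;
    std::vector<std::vector<int>> bins(tilesX * tilesY);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Rect& r = lights[i];
        int tx0 = std::max(r.x0 / tile, 0);
        int ty0 = std::max(r.y0 / tile, 0);
        int tx1 = std::min(r.x1 / tile, tilesX - 1);
        int ty1 = std::min(r.y1 / tile, tilesY - 1);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;
}
```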