somboon

02-22-2010, 05:35 AM

I'm trying to understand the process of reconstructing the view-space vertex position (i.e. after the camera transform) from a depth texture, for use in deferred shading.

Actually, I have implemented the process using a method that involves interpolating a view vector, but I don't fully understand it, and it doesn't work with a bounding-box light pass (maybe it actually works but my implementation is wrong).

The simplest way I've read about in many articles states that I just unproject the vertex (x/w, y/w, z/w, 1.0) with the inverse projection matrix.

x/w and y/w can be retrieved easily by noperspective interpolation of gl_Position.xy/gl_Position.w; I already use this to calculate screen-space texture coordinates (please correct me if I am wrong).
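For reference, the screen-space texture coordinate step I describe looks something like this in the vertex shader (a sketch only; the varying and uniform names are my own):

```glsl
// vertex shader (sketch)
noperspective varying vec2 vScreenUV; // hypothetical varying name

uniform mat4 uModelViewProjection;    // hypothetical uniform name
attribute vec3 inPosition;

void main()
{
    vec4 clipPos = uModelViewProjection * vec4(inPosition, 1.0);
    gl_Position  = clipPos;
    // After the perspective divide, clipPos.xy/clipPos.w is NDC in [-1, 1];
    // remap to [0, 1] so it can be used directly as a texture coordinate.
    vScreenUV = clipPos.xy / clipPos.w * 0.5 + 0.5;
}
```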

But how can I calculate z/w? Is it already stored in the depth texture, or do I have to compute it somehow?
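To show what I mean, here is roughly how I imagine the fragment-shader side would go, assuming the depth texture stores window-space depth in [0, 1] (the default glDepthRange) and that uInvProjection is the inverse of the projection matrix (both names are my own, and this may well be where my misunderstanding is):

```glsl
// fragment shader (sketch) -- reconstruct view-space position from depth
uniform sampler2D uDepthTex;      // hypothetical: depth texture from the G-buffer pass
uniform mat4 uInvProjection;      // hypothetical: inverse projection matrix

vec3 viewPosFromDepth(vec2 screenUV)
{
    // Window-space depth in [0, 1] as written by the depth buffer.
    float depth = texture2D(uDepthTex, screenUV).r;

    // Remap xy and depth from [0, 1] back to NDC [-1, 1].
    // NDC z is exactly the z/w I am asking about.
    vec4 ndc = vec4(screenUV, depth, 1.0) * 2.0 - 1.0; // w stays 1.0

    // Unproject, then divide by w to undo the perspective divide.
    vec4 viewPos = uInvProjection * ndc;
    return viewPos.xyz / viewPos.w;
}
```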
