# Thread: Help reconstructing pixel position from depth

1. Originally Posted by Junky
Question: I have to delete the stencil test optimization, right?
Have you tried using a separate depth attachment and a separate stencil attachment? In that case you only need the depth texture bound for texturing and for the depth test (read-only), and the stencil texture bound for the stencil test (read/write). That should work fine.

I didn't know I could separate the two; I believed they always came together. Right now I'm packing linear depth into a 24-bit RGB texture, but I'm not really comfortable with that.
So is it possible to bind depth as a render target (for the stencil test) and as a texture (to reconstruct positions) if it's read-only in both operations? (I mean with separate depth/stencil.)

I was thinking, for example, of deferred decals, where I perform a stencil operation to draw the decal on a specific mesh (and not paint the other meshes inside the volume). Is that possible with only one depth texture?

3. Originally Posted by Junky
I have some more questions,

1) Do you know a better way to compute the point/spot light radius? Right now I just solve the second-degree (quadratic) attenuation equation for a given threshold. What threshold is optimal (currently 1/16)?
Looking at your math I'm not 100% sure exactly what you're doing here, but intuitively I think you're asking how to define a bounding solid (sphere, cone, something like that) around the light source's area of influence to use for the lighting pass.

If you use a cone, there are two issues: the cone angle and the cone length.

Regarding cone angle, I can tell you I really dislike OpenGL's spot light cone-angle attenuation function, because it never fades to 0 until 180 degrees. This means the actual angle you need for a bounding cone may vary all over the map depending on your tone mapping function, overlapping lights, etc. So I don't use it. Instead I use the D3D9 cone-angle attenuation function, because it is guaranteed (no fooling) to be 100% gone by the outer cone angle.

Regarding cone length, that's a bit tricky, so you just have to be conservative. The thing is, how far out the light is significant depends on tone mapping and other factors.

4. Originally Posted by Junky
There's been a lot of stuff over the years. Just search SIGGRAPH or GDC presentations from the last 5 years for deferred shading material. Pair your deferred shading search with DICE, Crytek, or other game studios to increase your hits. Here's some assorted material from the last few years that mentions it:

Code :
```
Battle-tested Deferred Rendering on PS3, XBox 360, and PC
S.T.A.L.K.E.R.: Clear Sky
Deferred Lighting and Post-processing on PS3
Parallel Graphics in Frostbite - Current & Future
Rendering Tech at Black Rock Studios
Crytek: Future Graphics in Games - Notes
Bending the Graphics Pipeline
CryEngine3: Reaching the Speed of Light
Deferred Rendering
Screen Space Classification for Efficient Deferred Shading
```

The concept is fairly simple: instead of taking a G-buffer read and lighting-buffer write fill hit for EVERY light source, bin the light sources by which tile(s) of the screen they cover (tile = MxN pixel block), and render ALL of the light sources for each tile at once (i.e. read G-buffer -> +light+light+light+light -> write/blend lighting buffer). Essentially, it's just batching together light sources whose influence regions overlap.
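The binning step described above can be sketched on the CPU like this. The tile size, the screen-space light struct, and the per-tile light cap are all made-up example values, and real implementations often do this on the GPU against projected bounding volumes rather than 2D circles:

```c
#include <string.h>

#define TILE 16                 /* assumed MxN tile: 16x16 pixels  */
#define MAX_LIGHTS_PER_TILE 64  /* arbitrary cap for illustration  */

/* Light already projected to screen space with a bounding radius. */
typedef struct { float x, y, radius; } Light2D;

/* Bin each light into every tile its screen-space bounding box
 * touches. The lighting pass then reads the G-buffer once per tile
 * and accumulates all lights in that tile's list. */
void bin_lights(const Light2D *lights, int nlights,
                int screen_w, int screen_h,
                int counts[], int lists[][MAX_LIGHTS_PER_TILE])
{
    int tiles_x = (screen_w + TILE - 1) / TILE;
    int tiles_y = (screen_h + TILE - 1) / TILE;
    memset(counts, 0, sizeof(int) * tiles_x * tiles_y);

    for (int i = 0; i < nlights; ++i) {
        /* Tile range covered by the light's bounding box, clamped. */
        int x0 = (int)((lights[i].x - lights[i].radius) / TILE);
        int x1 = (int)((lights[i].x + lights[i].radius) / TILE);
        int y0 = (int)((lights[i].y - lights[i].radius) / TILE);
        int y1 = (int)((lights[i].y + lights[i].radius) / TILE);
        if (x0 < 0) x0 = 0;
        if (y0 < 0) y0 = 0;
        if (x1 >= tiles_x) x1 = tiles_x - 1;
        if (y1 >= tiles_y) y1 = tiles_y - 1;

        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx) {
                int t = ty * tiles_x + tx;
                if (counts[t] < MAX_LIGHTS_PER_TILE)
                    lists[t][counts[t]++] = i;
            }
    }
}
```

The payoff is that each G-buffer texel is fetched once per tile instead of once per light, which is exactly the batching the presentations above are describing.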