position from depth

I was following the code from:
website

I’m a bit confused about these lines from the shader:

out_vTexCoord = in_vTexCoordAndCornerIndex.xy;
out_vFrustumCornerVS = g_vFrustumCornersVS[in_vTexCoordAndCornerIndex.z]; // this line in particular

The shader is HLSL, but it’s really no big deal to convert this to GLSL. I’m just a little confused about using in_vTexCoordAndCornerIndex.z for the index lookup.
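
Since I had to convert it anyway, here is roughly what I ended up with in GLSL (the declarations and the fullscreen-quad position are my guesses at the surrounding code, not from the original):

#version 330

// vertex shader (my GLSL conversion, names guessed)
in vec2 in_vPosition;                 // assumed: fullscreen quad already in NDC
in vec3 in_vTexCoordAndCornerIndex;   // xy = texcoord, z = corner index

uniform vec3 g_vFrustumCornersVS[4];  // view-space far-plane corners, set by the app

out vec2 vTexCoord;
out vec3 vFrustumCornerVS;

void main()
{
    vTexCoord = in_vTexCoordAndCornerIndex.xy;
    // GLSL wants an explicit int cast for the array index
    vFrustumCornerVS = g_vFrustumCornersVS[int(in_vTexCoordAndCornerIndex.z)];
    gl_Position = vec4(in_vPosition, 0.0, 1.0);
}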

Is it that the texcoord is an interpolation of the depth they stored in the texcoord?

I don’t read HLSL well, but it looks like this vertex shader uses a precalculated table to convert a corner index (stored as the texcoord’s .z) to a “frustum corner”.

The relevant explanation is below, in the blog post:

The farFrustumCornersVS array is what I send to my vertex shader as shader constants. Then you just need to have an index in your quad vertices that tells you which vertex belongs to which corner (which you could also do with shader math, if you want). Another approach would be to simply store the corner positions directly in the vertices as texCoord’s

(no interpolation here, as we are on the vertex shader)
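
The interpolation happens later, when the rasterizer interpolates the vertex shader outputs across the quad. In the pixel shader the interpolated corner acts as a ray to the far plane. A minimal GLSL sketch of that end (my guess at the setup, assuming a linear view-space depth in [0,1] is stored in the texture):

#version 330

// fragment shader (sketch, not the original code)
uniform sampler2D g_DepthTexture;  // assumed: linear view-space depth in [0,1]

in vec2 vTexCoord;
in vec3 vFrustumCornerVS;          // interpolated across the quad by the rasterizer

out vec4 fragColor;

void main()
{
    float depth = texture(g_DepthTexture, vTexCoord).r;
    // the interpolated corner is a ray to the far plane, so scaling it by the
    // normalized linear depth gives back the view-space position
    vec3 positionVS = vFrustumCornerVS * depth;
    fragColor = vec4(positionVS, 1.0);
}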

Thanks ZbuffeR for responding.

The precalculated table is what I’m confused about. I understand that the 4 corners of the frustum are passed into the shader, but I’m not sure what they passed into texcoord.z. If I think about it harder, is it that they store [0, 1, 2, 3] in texcoord.z, corresponding to the actual texcoords of the screen quad, like this:

(x, y) : z

(0, 0) : 0, which would match the lower left of the far plane
(0, 1) : 1, which would match the upper left of the far plane
(1, 1) : 2, which would match the upper right of the far plane
(1, 0) : 3, which would match the lower right of the far plane

The xy values are the actual texcoords, and they use z to store a sort of index that is meant to match the far plane points.

So they are matching the orientation of the far plane points with the quad’s texcoords when using an OBB. Does this sound like what they are doing?
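
If that is right, then the “shader math” alternative the blog mentions would just derive the index from the texcoords instead of storing it in .z. Something like this in the vertex shader, assuming the [0, 1, 2, 3] ordering from my table above (just a sketch to check my understanding):

// replaces the g_vFrustumCornersVS[...z] lookup; in_vTexCoord is now a plain vec2
int index;
if (in_vTexCoord.x < 0.5)
    index = int(in_vTexCoord.y);        // (0,0) -> 0, (0,1) -> 1
else
    index = 3 - int(in_vTexCoord.y);    // (1,1) -> 2, (1,0) -> 3
vFrustumCornerVS = g_vFrustumCornersVS[index];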