Quote Originally Posted by saski
Code :
...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, dim[0], dim[1], 0,
            GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

...I'm confused about the GL_UNSIGNED_INT_24_8. Sampling the depth texture gives us a value between 0..1, yet the format is unsigned int. So how is this supposed to work? Does the shader internally convert the unsigned int to float when sampling with texture()?
The format and type args are just used when you provide texel data as input via the last arg. You provided NULL, so in practice they don't matter.
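For example, here's a rough sketch (not from the original post; the dim array and the malloc usage are just assumptions to mirror the quoted call) of what the format/type pair would describe if you did pass data:

Code :
/* Sketch: with a non-NULL last arg, GL_DEPTH_STENCIL + GL_UNSIGNED_INT_24_8
   tell GL how to unpack this client memory. */
GLuint *pixels = malloc(dim[0] * dim[1] * sizeof *pixels);
for (int i = 0; i < dim[0] * dim[1]; ++i)
    pixels[i] = (0xFFFFFFu << 8) | 0x00;  /* depth = 1.0 in the high 24 bits, stencil = 0 in the low byte */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, dim[0], dim[1], 0,
        GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, pixels);
free(pixels);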

Internally, DEPTH24_STENCIL8 is a 32-bit packed format where 24 bits are fixed-point depth (0..2^24-1) and 8 bits are fixed-point stencil (0..2^8-1). You end up with 0..1 depth values because the 0..2^24-1 range is mapped to 0..1 (effectively a division by 2^24-1) whenever the pipeline operates on it, so sampling the depth with texture() does hand you a normalized float.
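If you want to convince yourself of the mapping, a quick sketch (again assuming the same dim, a current GL context, and a bound texture that already has defined contents, e.g. after rendering to it) is to read the same texels back in both representations:

Code :
/* Sketch: the packed readback holds the raw 24-bit depth in the high bits
   and the stencil byte in the low bits; the float readback is what the
   shader sees after normalization. */
GLuint  *packed = malloc(dim[0] * dim[1] * sizeof *packed);
GLfloat *depth  = malloc(dim[0] * dim[1] * sizeof *depth);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_STENCIL,   GL_UNSIGNED_INT_24_8, packed);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT,             depth);
/* For any texel i: depth[i] ~= (packed[i] >> 8) / 16777215.0f   (2^24 - 1) */
free(packed);
free(depth);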