I know this kind of question gets asked a lot, but any hints would be welcome.
My shadow mapping works fine in a small standalone demo. But when I moved the same code into a larger application, it simply stops working.
I don’t know whether the depth map is actually being filled (it seems not, but I can’t verify it). What I can tell is that when I use RTT, the view from the light point looks correct: I can see everything. But as soon as I switch to the FBO used for shadow mapping, nothing works.
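One way I could try to verify whether the depth map is filled (suggested in some FBO tutorials) is to check framebuffer completeness and read a sample back after the depth pass. A sketch using the EXT framebuffer names of that era; `shadowFBO` and `shadowMapSize` are placeholders for my own variables:

```
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, shadowFBO);

/* A depth-only FBO must disable color draw/read buffers,
   or many drivers report it incomplete. */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    fprintf(stderr, "shadow FBO incomplete: 0x%x\n", status);

/* ... render the depth pass here ... */

/* Read one sample back: 1.0 everywhere usually means nothing was drawn. */
GLfloat depth;
glReadPixels(shadowMapSize / 2, shadowMapSize / 2, 1, 1,
             GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
printf("center depth = %f\n", depth);
```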
I kept the usual transformations for the texture matrix: scale, bias, and so on. Do you think the problem could come from one of them?
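For reference, the fixed-function texture matrix chain I am talking about is bias × lightProjection × lightModelview, set on the unit that holds the depth texture. A sketch, assuming `lightProj` / `lightView` were captured earlier with `glGetFloatv` while the light’s matrices were current:

```
glActiveTexture(GL_TEXTURE0);     /* unit bound to the depth texture */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.5f);   /* remap NDC [-1,1] to [0,1] */
glScalef(0.5f, 0.5f, 0.5f);
glMultMatrixf(lightProj);         /* light's GL_PROJECTION_MATRIX */
glMultMatrixf(lightView);         /* light's GL_MODELVIEW_MATRIX */
glMatrixMode(GL_MODELVIEW);
```

As I understand it, this assumes `GL_EYE_LINEAR` texgen with identity eye planes: GL multiplies the eye planes by the inverse of the modelview that is current when `glTexGen` is called, which takes care of the camera transform.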
For more information: I don’t use shaders yet (they are really slow on my machine), the light is placed at (400, 400, 400, 1) and looks at (0, 0, 0, 1), and the bias matrix is:
0.9, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 0.9, 0.0,
0.0, 0.0, 0.0, 1.0
As some of you said, and as I’ve read on several pages, most people don’t use a “real bias matrix”: they simply remap the coordinates so they fit into the range [0, 1]. To me that isn’t really a bias matrix (maybe I’m wrong?).
Using the bias matrix I posted above gives good results for one particular viewpoint, but as soon as I change the viewpoint the results are simply wrong. Am I supposed to modify that bias matrix for each viewpoint? If so, is there a way to compute it? According to the NVIDIA documentation on shadow mapping, bias is important, but I don’t understand how to calculate it.
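From what I gather, the “bias” in the NVIDIA doc is not the texture matrix at all: it is a small depth offset applied while rendering the depth map, so that surfaces don’t shadow themselves (the “acne” artifacts). In fixed-function GL that is usually done with `glPolygonOffset`; the factor and units below are just common starting values to tune, not definitive ones:

```
/* During the depth-map pass only: */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);   /* slope-scaled factor, constant units */
/* ... draw occluders into the shadow map ... */
glDisable(GL_POLYGON_OFFSET_FILL);
```

If that is right, then the texture matrix stays the same for every viewpoint, and only this depth offset needs tuning.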