I have created a depth-texture-only FBO (no renderbuffer).
OpenGL reports it as a valid (complete) FBO, but when I bind it and attempt to render anything to it, the results are actually drawn to the standard back buffer’s depth buffer instead.
My FBO is valid and bound… am I missing something? FBOs have always worked fine for me before, but I have never done a depth-only one…
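For reference, here is a minimal sketch of how a depth-only FBO is usually set up in the EXT_framebuffer_object era (texture size and identifier names are illustrative, not from the original post). The `glDrawBuffer(GL_NONE)` / `glReadBuffer(GL_NONE)` calls are the step most often missed: without them, a color-less FBO can be reported incomplete or misbehave on some drivers.

```c
/* Sketch only: assumes a current GL context and EXT_framebuffer_object.
   Names (fbo, depthTex) and the 1024x1024 size are illustrative. */
GLuint fbo, depthTex;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);

/* Critical for depth-only FBOs: declare that no color buffer is
   drawn to or read from while this FBO is bound. */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

/* Verify completeness before rendering. */
assert(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
       == GL_FRAMEBUFFER_COMPLETE_EXT);
```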
The corruption appears as blocky, low-res patterns of junk, while most of the screen is OK. When I resize the main window so its width matches the size of my shadow-map depth buffer, the junk aligns so that I can almost see the pixelated contents of my shadow map on the screen, as if it were erroneously used as the main depth buffer. It looks as if the driver allocated the same memory region for both depth buffers. Too aggressive an optimization?
I discovered two ways to modify the code so that the corruption does not occur. One is to attach a dummy color buffer to the FBO, so that the FBO is no longer depth-only (just be careful not to attach a texture you might be reading from while the FBO is bound). The other workaround is more interesting, but a bit more complicated to describe.
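The first workaround might look like the following sketch; `dummyColor` is an illustrative name, and the texture must match the dimensions of the depth attachment. It is never sampled, it only exists so the FBO has a color attachment.

```c
/* Sketch only: assumes an existing, bound FBO with a 1024x1024 depth
   texture attached. 'dummyColor' is an illustrative name. */
GLuint dummyColor;
glGenTextures(1, &dummyColor);
glBindTexture(GL_TEXTURE_2D, dummyColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Attach the throwaway color texture so the FBO is no longer depth-only. */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, dummyColor, 0);

/* Caveat from the post above: never sample dummyColor from a shader
   while this FBO is bound. */
```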
For my part, I haven’t run into any problems with depth-only FBOs on any NVIDIA hardware with reasonably fresh drivers.
One thing to mention: when I create the FBO, I first create a dummy color texture and bind it to the color attachment, then bind texture zero there instead. So, in fact, I end up with a depth-only FBO.
I don’t actually remember why I did this magic with the dummy texture binding/unbinding.
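If I understand the trick correctly, it would look something like this sketch: attach a dummy color texture, then immediately replace it with texture 0, leaving the FBO effectively depth-only (`fbo` and `dummyColorTex` are illustrative names, not from the original post).

```c
/* Sketch only: 'fbo' is the depth FBO and 'dummyColorTex' a previously
   created color texture; both names are illustrative. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

/* Step 1: attach the dummy color texture to the color attachment point. */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, dummyColorTex, 0);

/* Step 2: detach it again by binding texture 0, so the FBO ends up
   depth-only after all. */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, 0, 0);
```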