Non-deterministic results from render to texture

Hello everybody,

We are developing an application that applies filters to images, and we make extensive use of the GPU via OpenGL and GLSL shaders for that. We are currently experiencing a strange anomaly: the result of one of our most complex shaders is sometimes darker than it should be, or even completely black. “Sometimes” means that out of roughly 50 processed images, 3-4 will contain the anomaly (never the same ones). If we process the “faulty” images again, they will most likely come out OK.

Now for a quick description of how we use OpenGL for filtering (a rough code sketch follows the list):

  • bind the result texture to an FBO;
  • compile/link/set the GLSL shaders to be used (vertex shader and fragment shader, GLSL v1.1);
  • bind the needed texture units (11 texture units in the case of the shader exhibiting the anomaly);
  • set the uniforms used by the shader;
  • finally, render to the FBO;
  • none of the textures used is bigger than 1024x1024 (neither the input textures nor the texture bound to the FBO); when the anomaly is present it affects the whole output texture uniformly (it is either completely black or uniformly darker than it should be);
  • we check for OpenGL errors after practically every call.
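In code, one filtering pass looks roughly like the sketch below. The helper and uniform names (runFilterPass, checkGL, tex0…tex10) are made up for illustration, and it assumes GLEW for extension loading; only the call sequence matters, not the exact names:

```cpp
#include <GL/glew.h>
#include <cstdio>

static void checkGL(const char* where)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}

// outputTex is a 1024x1024 texture that receives the filtered image;
// inputTex[] are the (up to 11) input textures; program is the linked GLSL program.
void runFilterPass(GLuint fbo, GLuint outputTex,
                   const GLuint* inputTex, int numInputs,
                   GLuint program)
{
    // 1. Attach the result texture to the FBO and bind it as the render target.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, outputTex, 0);
    checkGL("bind FBO");

    // 2. Select the compiled/linked shader program.
    glUseProgram(program);

    // 3. Bind the required texture units.
    for (int i = 0; i < numInputs; ++i) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, inputTex[i]);
    }
    checkGL("bind textures");

    // 4. Set the sampler uniforms (other uniforms are set the same way).
    for (int i = 0; i < numInputs; ++i) {
        char name[16];
        std::sprintf(name, "tex%d", i);   // hypothetical uniform names
        glUniform1i(glGetUniformLocation(program, name), i);
    }

    // 5. Render a full-screen quad into the FBO.
    glViewport(0, 0, 1024, 1024);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    checkGL("draw");

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```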

We checked all of the shader uniforms and input textures, both for the “darker”/black results and for the successful cases, and they were correct. Only the output texture differs.

The anomaly seems to be GPU-independent (it happens on both NVIDIA and ATI) and also OS-independent (Windows Vista, Windows 7, Mac OS X 10.5 and 10.6).

Does anybody have an idea of what we may be dealing with here? Any suggestions on how we could nail this problem down? We have pretty much run out of ideas, so any help would be greatly appreciated.

Best regards to everyone!

If we process the “faulty” images again, they will most likely come out OK.

Using the exact same code as before?

The anomaly seems to be GPU-independent (it happens on both NVIDIA and ATI) and also OS-independent (Windows Vista, Windows 7, Mac OS X 10.5 and 10.6).

If it’s consistently happening cross-platform, then we’re going to have to know a bit more about your code before we can really be of help.

Yes, using exactly the same code. On the next run we will not get the same series of faulty images; most likely, the exact same images that previously were not filtered correctly will yield correct results in a subsequent run.

I will post the code that seems to generate the problem tomorrow. I’m currently working on a small GLUT application that filters the same set of input pictures over and over again, many times, using the complex shader that seems to cause the issue. If we manage to reproduce the problem outside our filtering framework, we will be 100% sure that it is not something in our environment that causes it.
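The skeleton of the repro app will be roughly the following; setupFilterResources() and runFilterPass() are placeholders for the real filtering code, so treat this as a sketch of the stress loop rather than the actual application:

```cpp
#include <GL/glew.h>
#include <GL/glut.h>
#include <cstdio>

// Placeholders for the real filtering code (not from the original post).
GLuint setupFilterResources();        // creates textures, shaders, and the FBO
void   runFilterPass(GLuint fbo);     // renders one filtered image into the FBO

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutCreateWindow("filter repro");  // we only need a GL context, not a UI
    glewInit();

    GLuint fbo = setupFilterResources();

    unsigned char pixel[4];
    for (int run = 0; run < 1000; ++run) {
        runFilterPass(fbo);

        // Read back one pixel of the result with the FBO bound; a value near
        // zero on an image that should not be black flags the anomaly.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        if (pixel[0] < 8 && pixel[1] < 8 && pixel[2] < 8)
            std::printf("run %d: suspiciously dark output\n", run);
    }
    return 0;
}
```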

OpenGL is a state machine… if some part of the code is setting a state (such as changing the colour, blend mode, lights - anything) it can have unexpected results when combined with other state changes further down the line.

“Why aren’t these objects textured!?”
“Oh, yeah, I glDisabled textures to draw the grid :p”
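In other words, something along these lines (illustrative compatibility-profile code, not from anyone's actual application):

```cpp
#include <GL/glew.h>

void drawGrid()
{
    glPushAttrib(GL_ENABLE_BIT);   // save enable flags, including GL_TEXTURE_2D
    glDisable(GL_TEXTURE_2D);
    // ... draw the grid with untextured lines ...
    glPopAttrib();                 // without this, everything drawn afterwards
                                   // would also come out untextured
}
```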

Problem is now solved.

I wrote that GLUT application to try to reproduce the problem, but the problem did not reproduce there. So it became much more likely that the cause was not the shader or the render-to-texture process but something else. I logged the entire GL state just before the shader is executed and realized that some GL state was being incorrectly cached by our framework under certain interesting circumstances: we were binding one of the input textures, but the bind call never reached GL because our cache thought that texture was already bound.
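For anyone who runs into something similar, the kind of check that exposed it looks roughly like this; g_cachedBinding is a made-up stand-in for our framework's cache, so this is a sketch rather than our actual code:

```cpp
#include <GL/glew.h>
#include <cstdio>

extern GLuint g_cachedBinding[11];   // hypothetical: what the framework thinks is bound

// Compare the cached texture bindings against what GL actually has bound
// on each texture unit; any mismatch points at a stale cache entry.
void verifyTextureBindings()
{
    for (int unit = 0; unit < 11; ++unit) {
        glActiveTexture(GL_TEXTURE0 + unit);

        GLint actual = 0;
        glGetIntegerv(GL_TEXTURE_BINDING_2D, &actual);

        if ((GLuint)actual != g_cachedBinding[unit])
            std::fprintf(stderr,
                "unit %d: cache says texture %u, GL says %d\n",
                unit, g_cachedBinding[unit], actual);
    }
}
```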

So, thank you everyone for your feedback!