I’m using OpenGL 2.0+ (an ATI X600 card) on Linux.
I was getting texture corruption (and system hangs) when playing a video to a texture. The video player decompresses each frame into a chunk of memory; I then upload the new video frame to a texture using the standard calls (done outside the rendering loop):
glEnable( GL_TEXTURE_RECTANGLE_EXT );
glBindTexture( GL_TEXTURE_RECTANGLE_EXT, textureID );
etc. etc.
glTexImage2D( GL_TEXTURE_RECTANGLE_EXT, …, imagePtr );
glDisable( GL_TEXTURE_RECTANGLE_EXT );
Inside the rendering loop, I draw the texture as a quad.
Depending on system load, I sometimes get corruption in the video image and/or Linux hangs. I added a bunch of cerr calls before and after the video codec calls, which mysteriously fixed the problem and led me to believe it was a synchronization issue. So in the code above, I added a glFinish() call before uploading the video frame’s pixels to the texture, and that fixed the problem. My thinking is that the draw code’s OpenGL calls were being queued up, and the texture would sometimes get changed while it was still being drawn. Is this correct? I can understand the image corruption, but why would it hang? Wouldn’t this upload code still be legal?
Would a double-buffered texture upload scheme fix the problem without having to call glFinish()?
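Something like this is what I have in mind — a minimal sketch of the ping-pong indexing such a scheme would use. The GL calls are shown as comments only (the real code would glGenTextures two names, allocate them once, and then glTexSubImage2D into the back one each frame); the names tex, front, back_index, and upload_frame are placeholders, not from my actual code:

```c
#include <assert.h>

/* Two texture "names"; the real code would get these from
 * glGenTextures and allocate storage once with glTexImage2D. */
static unsigned int tex[2] = { 1, 2 };
static int front = 0;          /* texture the render loop draws      */

/* The texture it is safe to upload into: the one NOT being drawn. */
static int back_index(int front_idx) { return 1 - front_idx; }

/* Called once per decoded video frame; returns the texture the
 * render loop should draw next. */
unsigned int upload_frame(const void *imagePtr)
{
    int back = back_index(front);
    /* real code:
     *   glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tex[back]);
     *   glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, w, h,
     *                   format, type, imagePtr);
     */
    (void)imagePtr;
    front = back;              /* publish the freshly filled texture */
    return tex[front];
}
```

The idea is that the texture being uploaded to is never the one currently bound for drawing, so the driver shouldn’t have to stall — though I suppose the frame drawn two frames ago could still, in principle, be in flight when its texture becomes the back buffer again.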
thanks for any help.
John