Problem with textures

My application displays images using textures. It breaks the image into 512x512 blocks and creates a texture for each one. The problem occurs on a multiprocessor SPARC running Solaris 8; I generate the textures across multiple processors via pthreads. The OpenGL widget is embedded in a Qt GUI.

It seems that the first texture I generate during a run of the software is corrupted: it contains image data from the second texture I generate. If I use an image smaller than 512x512, and thus a single texture, it looks fine until I create another texture while the first is still being displayed; then I see the same problem.

Some strange things I see: the problem only occurs when running on the console. If I run the application over the network and display on a Windows machine with an X server (I tried Cygwin/X and Xmanager Enterprise), I don’t see the problem. If I run it natively on Windows, I also don’t see the problem. I originally thought it might be a problem with the Solaris OpenGL implementation. The code that generates the textures is:

glGenTextures(1, &handle);              /* one texture object per 512x512 block */
glBindTexture(GL_TEXTURE_2D, handle);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, 4, texSizeX, texSizeY, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);  /* internalformat 4 = four components (RGBA) */

I verified that the proper handles are associated with each texture. Since the program works fine except for the first image displayed during a run, I tried a workaround: generate a texture with random (uninitialized) data and immediately delete it, to stand in for that initial texture generation. When I did this, every texture generated afterwards was displayed as the first “real” (second) texture generated. This behavior happened on both Solaris and Windows, which made me believe there is something wrong with what I’m doing.

Any suggestions?

My rule of thumb for MP OpenGL is that only one thread should do all the OpenGL calls. A context can only be current in one thread at any given time anyway, so that seems natural. In theory you can have multiple contexts sharing textures and display lists, but it all ends up on one card anyway, so there’s no significant benefit, and it’s a part of the drivers that probably nobody ever tests.

You don’t say how you handle the OpenGL part of the work, but my recommendation would be to make sure that only one thread has an OpenGL context and handles all OpenGL calls.
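
For example, you could keep the decoding of the 512x512 blocks on the worker threads and only hand the finished pixel buffers over to the thread that owns the context (the Qt GUI thread, say), which then does all the gl* calls. Here is a minimal sketch of that idea with pthreads; the names (tile_job, submit_tile, upload_next_tile) are just placeholders I made up, not anything from your code:

#include <pthread.h>
#include <stdlib.h>
#include <GL/gl.h>

typedef struct tile_job {
    GLsizei w, h;
    unsigned char *pixels;        /* RGBA data, filled in by a worker thread */
    struct tile_job *next;
} tile_job;

static tile_job *queue_head = NULL;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from the worker threads: no OpenGL here, just queue the data. */
void submit_tile(GLsizei w, GLsizei h, unsigned char *pixels)
{
    tile_job *job = malloc(sizeof *job);
    job->w = w; job->h = h; job->pixels = pixels;

    pthread_mutex_lock(&queue_lock);
    job->next = queue_head;
    queue_head = job;
    pthread_mutex_unlock(&queue_lock);
}

/* Called only from the thread that owns the GL context (e.g. from a timer
 * or paint handler in the GUI thread). Returns 0 if nothing is queued. */
GLuint upload_next_tile(void)
{
    tile_job *job;
    GLuint handle;

    pthread_mutex_lock(&queue_lock);
    job = queue_head;
    if (job != NULL)
        queue_head = job->next;
    pthread_mutex_unlock(&queue_lock);

    if (job == NULL)
        return 0;                 /* nothing ready yet; try again later */

    glGenTextures(1, &handle);
    glBindTexture(GL_TEXTURE_2D, handle);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, job->w, job->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, job->pixels);

    free(job->pixels);
    free(job);
    return handle;
}

That way the threads still do the CPU-heavy work in parallel, but every OpenGL call comes from a single thread with a single context.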

Hope it helps

Dirk

Any chance of a sample (Windows) app that has this problem?

It seems pretty clear that the OpenGL driver is not threadsafe. It’s quite possible that on a uniprocessor system you’ll get very sporadic corruption as well, if one rendering thread is preempted for another one during the upload.

You could ask tech support at the manufacturer of the card (I don’t know if it’s Sun or another manufacturer), but as far as I know, OpenGL implementations are not required to be threadsafe, and making them so would impose a performance penalty on every program, whether it uses threaded OpenGL or not.

The reason you’re not getting problems over the network is that with indirect rendering all the GL commands are funneled through the GLX protocol on the X connection, which serializes them, so the uploads arrive at the server intact and in order.

What you can do:
protect the OpenGL upload call with a mutex.

  1. obtain the mutex (this blocks until the mutex becomes available if another thread holds it)
  2. do the upload
  3. release the mutex

Some code:

#include <synch.h>
mutex_t TextureUpload;

int main()
{
 /* USYNC_THREAD: the mutex is shared only between the threads of this process */
 mutex_init(&TextureUpload, USYNC_THREAD, NULL);

}


void uploadtexture()
{
 if (mutex_lock(&TextureUpload) == 0) {
  /* locked - now upload the texture */
   ...
  mutex_unlock(&TextureUpload);
 }
 else { /* error - mutex_lock() failed, e.g. the mutex is invalid */
  ...
 }
}
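
Since you’re already using pthreads, the same guard can also be written with a pthread mutex instead of the Solaris <synch.h> one. A minimal sketch; upload_texture_locked is just a placeholder name for wherever your glTexImage2D call lives:

#include <pthread.h>
#include <GL/gl.h>

static pthread_mutex_t texture_upload = PTHREAD_MUTEX_INITIALIZER;

/* placeholder wrapper -- only one thread at a time gets to do the upload */
void upload_texture_locked(GLuint handle, GLsizei w, GLsizei h, const void *data)
{
    pthread_mutex_lock(&texture_upload);   /* blocks until the lock is free */

    glBindTexture(GL_TEXTURE_2D, handle);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data);

    pthread_mutex_unlock(&texture_upload);
}

Either way, the point is the same: only one thread should be talking to OpenGL at any given moment.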