Using a thread to load/create textures

What I would like to do is, every time I need to load a texture on the fly, spawn a thread, have it load and then create the texture while the main rendering process runs. After the thread does its work, it should die. This raises a few issues: keeping the texture around after the thread is destroyed, and sharing the loaded data between the main process and this thread.

Currently I have the thread load the data and the main thread do the glTexImage2D/gluBuild2DMipmaps call. The pause is still too great, so is it at all possible to make the GL calls that build the texture in the thread? I noticed this little tidbit on the MSDN wglShareLists page: “You can only share display lists with rendering contexts within the same process.” This really puts a damper on my plans.

>> gluBuild2DMipmaps
Avoid using this! It is only a utility function that you can replace with your own code. If you use it for resizing to power-of-two sizes, stop now, it is ugly :slight_smile:
If it is about mipmaps, consider building them yourself (in your worker thread) or telling the card to build them in hardware:
glTexParameteri( GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE );

Still, the glTexImage2D has to be in the OpenGL thread. You may upload one mipmap level per frame to ease it out. Or reuse a texture and do glTexSubImage2D to change only the texel data (sometimes much faster for the same number of texels).

Does this help?

Yeah, I do have a replacement for gluBuild2DMipmaps; I just listed it because it is commonly known to build the set of mipmaps.

So there is no way to move the glTexImage2D to another thread? With some of my larger textures this causes a 0.5 second pause, which is something I would really like to get rid of.

Don’t know if it works, but I have heard about sending a NULL pointer to glTexImage2D to just create the texture without actually initialising it, then using glTexSubImage2D to update parts, i.e. only a quarter of the full texture each frame.

What hardware do you use? And what size are your large textures?

Thanks for the replies.

It doesn’t matter what hardware I’m running, as I would like my software to run on any GeForce 2+. But the larger textures are 1024x1024, and I need to build a full set of mipmaps for them.

I’ll mess around with sending it a NULL pointer; if I can load bits at a time, I think that would be an acceptable solution.

Personally, I suggest just leaving out the GL-in-multiple-threads stuff.

I think a somewhat better design would be to make “loading requests” to a thread. The thread does everything it needs and then asynchronously sends a “loaded data block” back to the requestor. No copy is needed, since the threads share the same address space.
The loading thread does nothing but pull from disk.
The main thread then does all the work of uploading the texture to GL, while the loader doesn’t even know what will happen next.

The problem of glTexImage2D blocking could be solved by using pixel buffer objects… anyway, I don’t think it’s a problem. What are you planning to do until the texture gets uploaded? Most games simply draw a wait screen with some feedback.
Why do you have this problem in the first place?

Yes, I will have a wait screen for the base set of textures, but I have too many units (each with 4 textures) to load everything at once. So I load them dynamically, on demand.

You asked what will be done while it’s loading? I’ll use a preloaded texture until the dynamic one is ready.

Now I see the problem is real.
I still suggest trying to minimize seek time before optimizing GL. Since disk access is slow, you may get more of a speedup by parallelizing the IO.
For win32 there are two main alternatives.
Asynchronous IO is quite good. I have only played with it a little, but it is not as complicated as people think. Do not even try to do the same on Linux; I find it too badly documented. My manpages, however, are out of date; maybe they have been improved.

The other way to parallelize IO is to use multiple threads. This is possibly The Right Way, since it allows you to use third-party libraries which may use synchronous IO.

Can you measure the amount of time spent on disk IO versus the time actually spent uploading the texture?

By the way, what ZbuffeR said sounds like a good idea to me. Maybe you could make a hybrid solution of those two.

Don’t know if you’ve seen this, but the same idea has been discussed some threads below:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=9;t=000302

Sorry for the very slow reply; things got chaotic and I did not have time to work on this until now.

Originally posted by Relic:
[b]Create your window,
select and set a pixelformat (once!),
create the main rendering thread,
create your main rendering OpenGL context,
wglMakeCurrent in the main thread,
create your texture loading OpenGL context,
NOW: wglShareLists(mainContext, textureLoadingContext)!
Create the texture loading thread,
wglMakeCurrent of the texture loading OpenGL context to the texture loading thread.

Now both OpenGL contexts run on the same window, bound to different threads, BUT texture objects are shared between the contexts.[/b]
I have a few questions on this. Firstly, you create the window, then the main thread, then make two contexts, and then make the texture loading thread. Is this order right? Would you not want to make the thread that loads the texture first, then its context? Also, is it possible to do this with the rendering in the same process as the main window?

Thanks for the help

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.