glGenTextures() lags when vsync is enabled

I’m involved in the development of a frontend that periodically reloads batches of images of different sizes, which takes around 40-60 ms when vsync is enabled. When I disable vsync it takes 1-2 frames less time. I’ve identified the function that causes this lag: glGenTextures() hangs the thread for 1-2 frames. Is there any way to get rid of this lag while keeping vsync enabled?

What GPU and GL driver version?

You generally shouldn’t be creating new texture handles and allocating texture storage during your render loop. What happens if you move the creation of texture handles and texture storage to startup, before your render loop? Pre-allocate your texture storage up-front and then just subload the texels into those pre-allocated textures in your runtime render loop.
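Roughly like this (a minimal sketch, assuming GL 4.2+ for glTexStorage2D and that a GL loader header is already included; maxWidth/maxHeight, imgWidth/imgHeight, and pixels are placeholder names for whatever your asset budget and image data are):

```cpp
// Startup: create the handle and allocate immutable storage once.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, maxWidth, maxHeight);

// Render loop: no allocation, just subload the new texels.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                imgWidth, imgHeight,          // may be smaller than the storage
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```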

The problem with preallocating the textures is that each loaded texture has different dimensions, and it’s hard to predict the size of the largest one. There may be more than 30,000 images in the folder the textures are loaded from.

Then fix that. High-performance graphics applications gain their performance in part by regularizing their input. They don’t just load whatever the artist coughed up; artists live within restrictions that the engine imposes.

Allocating texture objects and storage is going to cause framerate hitches. The only mitigation strategy you can employ is to not do those things during high-performance times (i.e., during a level). And if that means all diffuse textures have to be the same size, with the same format, then that’s what you have to do.

Now, your scene doesn’t have to be that rigid. But it needs to be rigid enough, particularly when streaming, that you never have to create new texture objects in high-performance situations.
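To make that concrete: if every diffuse texture really is the same size and format, a texture array allocated at startup works as a fixed pool, and streaming degenerates to subloads into free layers. A sketch (TEX_SIZE, POOL_LAYERS, slot, and pixels are names invented for this example):

```cpp
// Startup: one immutable array texture acts as the whole pool.
GLuint pool;
glGenTextures(1, &pool);
glBindTexture(GL_TEXTURE_2D_ARRAY, pool);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8,
               TEX_SIZE, TEX_SIZE, POOL_LAYERS);

// Runtime: stream an image into a free layer; no object creation involved.
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                0, 0, slot,                  // x, y, layer
                TEX_SIZE, TEX_SIZE, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```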

It should also be noted that Vulkan in particular shines here, since it allows you more direct access to the memory behind these allocations. So for streaming situations, you don’t have to pick a particular texture size/format; you instead budget for how much storage the textures in a streaming block take up in aggregate, rather than sizing each texture individually.
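For illustration only (the function name and inputs are invented here, and none of this is reachable through SFML), aggregate allocation in Vulkan amounts to binding many images into one VkDeviceMemory block:

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Back a whole streaming batch of images with a single allocation,
// placing each image at a properly aligned offset.
VkDeviceMemory allocateBatchMemory(VkDevice device,
                                   const std::vector<VkImage>& images,
                                   uint32_t memoryTypeIndex)
{
    // First pass: total size of the batch, respecting each image's alignment.
    VkDeviceSize total = 0;
    std::vector<VkDeviceSize> offsets(images.size());
    for (size_t i = 0; i < images.size(); ++i) {
        VkMemoryRequirements req;
        vkGetImageMemoryRequirements(device, images[i], &req);
        total = (total + req.alignment - 1) & ~(req.alignment - 1);
        offsets[i] = total;
        total += req.size;
    }

    // One allocation for the whole streaming block.
    VkMemoryAllocateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    info.allocationSize = total;
    info.memoryTypeIndex = memoryTypeIndex;
    VkDeviceMemory memory;
    vkAllocateMemory(device, &info, nullptr, &memory);

    // Bind every image into the shared block at its offset.
    for (size_t i = 0; i < images.size(); ++i)
        vkBindImageMemory(device, images[i], memory, offsets[i]);
    return memory;
}
```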

You see, if I had any control over it I would do it, but because it’s a launcher I have no control over what images the end user puts inside. Besides, the owner of the project chose SFML, so Vulkan is unfortunately out of the question. I’ll have to deal with what I’ve got. I can commit changes to SFML as I did before, but so far I have no clue what to do about those framerate hiccups.

You mention limitations. So how much of the application can you actually change? Is it a streaming case where you can just start using a texture when it’s ready, or do you need the data at that exact next frame?

For streaming, the luxurious solution would be to use a second thread with its own context (but with shared GL objects).

You can do all the load/create/upload work there, set a fence, and glFlush. Then send that fence object over to the render thread and start using the new assets as soon as the fence signals.
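In code, the handshake is roughly this (sketch only; how you carry the fence between threads is up to you):

```cpp
// Loader thread, after the glTexSubImage* calls for a batch:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();  // make sure the fence command actually reaches the driver
// ...hand `fence` to the render thread, e.g. through a thread-safe queue...

// Render thread, once per frame, polling without blocking:
GLint status = 0;
glGetSynciv(fence, GL_SYNC_STATUS, sizeof(status), nullptr, &status);
if (status == GL_SIGNALED) {
    glDeleteSync(fence);
    // safe to start sampling the new textures from here on
}
```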

The next-best option would be to just wait a few frames before you start using a texture and hope it’s done, or mostly done, with creation/copying. But you still have a few GL calls that will probably block while copying memory from your application into OpenGL-managed memory. So you can only hide the object creation and part of the transfer to VRAM in the background, and even that only in the best case. Drivers do not always do what you want; the worst case would be a driver that only fully creates the texture when you first use it.
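That variant could look something like this (the 3-frame delay, the struct, and the useTexture hook are all arbitrary names for the example):

```cpp
#include <cstdint>
#include <deque>

void useTexture(GLuint id);   // hypothetical: hands the texture to drawing code

struct PendingTexture {
    GLuint   id;
    uint64_t readyFrame;      // frame after which we dare to use it
};

std::deque<PendingTexture> pending;

// Call when the upload calls for a texture have been issued.
void deferTexture(GLuint tex, uint64_t currentFrame)
{
    pending.push_back({tex, currentFrame + 3});   // hope 3 frames is enough
}

// Call once per frame; promotes textures whose grace period has passed.
void promoteReadyTextures(uint64_t currentFrame)
{
    while (!pending.empty() && pending.front().readyFrame <= currentFrame) {
        useTexture(pending.front().id);
        pending.pop_front();
    }
}
```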

The second variant could also be broken up by uploading only part of a big image each frame, to hide the transfer to OpenGL-managed memory a bit more.
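Sketched out, with the Upload struct and the 256-row-per-frame budget invented for the example:

```cpp
#include <algorithm>
#include <cstddef>

struct Upload {
    GLuint tex;
    int width, height;
    int nextRow = 0;
    const unsigned char* pixels;   // tightly packed RGBA source image
};

// Call once per frame until nextRow reaches height.
void uploadSlice(Upload& u)
{
    const int rows = std::min(256, u.height - u.nextRow);  // per-frame budget
    glBindTexture(GL_TEXTURE_2D, u.tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, u.nextRow,                 // x, y offset of the slice
                    u.width, rows,                // one horizontal strip
                    GL_RGBA, GL_UNSIGNED_BYTE,
                    u.pixels + static_cast<std::size_t>(u.nextRow) * u.width * 4);
    u.nextRow += rows;
}
```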