
Thread: glGenTextures() lags when vsync is enabled

  1. #1
    Newbie
    Join Date
    Jul 2018
    Posts
    3

    glGenTextures() lags when vsync is enabled

    I'm involved in the development of a frontend that periodically reloads batches of images of different sizes, which takes around 40-60 ms when vsync is enabled. When I disable vsync it takes 1-2 frames less time. I've identified the function that causes this lag: it's glGenTextures(), which hangs the thread for 1-2 frames. Is there any way to get rid of this lag while keeping vsync enabled?

  2. #2
    Senior Member OpenGL Guru Dark Photon
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,435
    Quote Originally Posted by Oomek77
    I’ve identified the function that causes this lag. It’s glGenTextures() that hangs the thread for 1-2 frames.
    What GPU and GL driver version?

    You generally shouldn't be creating new texture handles and allocating texture storage during your render loop. What happens if you move the creation of texture handles and texture storage to startup, before your render loop? Pre-allocate your texture storage up-front and then just subload the texels into those pre-allocated textures in your runtime render loop.
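    Something along these lines, as a rough, untested sketch. It assumes GL 4.2+ (or ARB_texture_storage) for glTexStorage2D, and the pool size and max dimensions are placeholder numbers, not values from your app:
    Code:
    // Pre-allocate immutable texture storage once at startup, then only
    // subload texels in the render loop. kMaxTextures/kMaxW/kMaxH are
    // hypothetical placeholders.
    #include <GL/glew.h>
    #include <vector>

    const int kMaxTextures = 32;          // size of the reusable pool
    const int kMaxW = 1024, kMaxH = 1024; // largest image you expect

    std::vector<GLuint> gPool(kMaxTextures);

    void initTexturePool() // call once at startup, before the render loop
    {
        glGenTextures(kMaxTextures, gPool.data());
        for (GLuint tex : gPool) {
            glBindTexture(GL_TEXTURE_2D, tex);
            // Immutable storage: allocated once, never reallocated.
            glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, kMaxW, kMaxH);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        }
    }

    // In the render loop: reuse a pool slot and overwrite its texels.
    void reloadImage(GLuint tex, int w, int h, const void* pixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        // Sample only the w x h sub-rectangle (or scale the texture
        // coordinates), since the storage may be larger than the image.
    }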
    Last edited by Dark Photon; 07-18-2018 at 04:34 AM.

  3. #3
    Newbie
    Join Date
    Jul 2018
    Posts
    3
    The problem with preallocating the textures is that each loaded texture has different dimensions, and it's hard to predict the maximum size of the largest one. There may be more than 30,000 images in the folder the textures are loaded from.

  4. #4
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    6,021
    Quote Originally Posted by Oomek77
    The problem with preallocating the textures is that each loaded texture has different dimensions, and it's hard to predict the maximum size of the largest one.
    Then fix that. High-performance graphics applications gain their performance in part by regularizing their input. They don't just load whatever the artist coughed up; artists live within restrictions that the engine imposes.

    Allocating texture objects and storage is going to cause framerate hitches. The only mitigation strategy you can employ is to not do those things during high-performance times (i.e., during a level). And if that means all diffuse textures have to be the same size, with the same format, then that's what you have to do.

    Now, your scene doesn't have to be that rigid. But it needs to be rigid enough, particularly in streaming situations, that you never have to create new texture objects in high-performance situations.

    It should also be noted that Vulkan in particular shines here, since it gives you more direct access to the memory behind these allocations. In streaming situations you don't have to pick a particular texture size/format; instead, you budget for how much storage the textures in a streaming block take up in aggregate, rather than per individual texture.
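    For illustration, here is a rough, untested sketch of that aggregate approach in Vulkan. It assumes a VkDevice and a suitable memory type index have already been set up elsewhere, and the 64 MiB budget is just a placeholder:
    Code:
    // One large device-memory block sized for a streaming budget, with
    // individual images bound at offsets inside it.
    #include <vulkan/vulkan.h>

    const VkDeviceSize kStreamingBlockSize = 64ull * 1024 * 1024; // hypothetical budget

    VkDeviceMemory allocStreamingBlock(VkDevice device, uint32_t memoryTypeIndex)
    {
        VkMemoryAllocateInfo info = {};
        info.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
        info.allocationSize  = kStreamingBlockSize;
        info.memoryTypeIndex = memoryTypeIndex;

        VkDeviceMemory block = VK_NULL_HANDLE;
        vkAllocateMemory(device, &info, nullptr, &block);
        return block;
    }

    // Bind an already-created image into the block at `offset`, honoring
    // its alignment; returns the offset just past the image so the caller
    // can pack the next one right after it.
    VkDeviceSize bindImageAt(VkDevice device, VkImage image,
                             VkDeviceMemory block, VkDeviceSize offset)
    {
        VkMemoryRequirements reqs;
        vkGetImageMemoryRequirements(device, image, &reqs);
        offset = (offset + reqs.alignment - 1) & ~(reqs.alignment - 1);
        vkBindImageMemory(device, image, block, offset);
        return offset + reqs.size;
    }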

  5. #5
    Newbie
    Join Date
    Jul 2018
    Posts
    3
    You see, if I had any control over it I would do it, but because it's a launcher I have no control over what images the end user puts inside. Besides, the owner of the project chose SFML, so Vulkan is out of the question, unfortunately. I'll have to deal with it with what I've got. I can commit some changes to SFML as I did before, but so far I have no clue what to do with those framerate hiccups.
    Last edited by Oomek77; 07-19-2018 at 12:05 PM.

  6. #6
    Junior Member Regular Contributor
    Join Date
    May 2013
    Posts
    140
    You mention limitations. So how much of the application can you actually change? Is it a streaming case where you can just start using a texture when it's ready, or do you need the data on that exact next frame?

    For streaming, the luxurious solution would be to use a second thread with its own context (but with shared GL objects).

    You can do all the load/create/upload work there, set a fence, and glFlush. Then send that fence object over to the render thread and start using the new assets as soon as the fence signals.
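    Roughly like this (an untested sketch using standard GL 3.2+ sync objects; the function names and parameters are just illustrative):
    Code:
    #include <GL/glew.h>

    // Loader thread (own GL context, sharing objects with the render
    // context): upload, then fence and flush so the fence is submitted.
    GLsync submitUpload(GLuint tex, int w, int h, const void* pixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        glFlush(); // without a flush the fence may never be signaled
        return fence;
    }

    // Render thread: poll the fence each frame without blocking; use the
    // texture only after the fence has signaled.
    bool uploadFinished(GLsync fence)
    {
        GLint status = 0;
        glGetSynciv(fence, GL_SYNC_STATUS, sizeof(status), nullptr, &status);
        if (status == GL_SIGNALED) {
            glDeleteSync(fence);
            return true;
        }
        return false;
    }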

    The next best option would be to just wait a few frames before you start using a texture and hope it's done, or mostly done, with creation/copying. But you still have a few GL calls that will probably block while copying memory blocks from your application into OpenGL-managed memory. So you can only hide the object creation and part of the memory transfer to VRAM in the background, and only in the best case; drivers do not always do what you want. The worst case would be a driver that only fully creates the texture when you first use it.

    The second variant could also be split up by uploading only part of a big image each frame, to hide the transfer into OpenGL-managed memory a bit more.
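    For example, something like this once per frame (untested sketch; kRowsPerFrame is an arbitrary tuning value, and it assumes tightly packed RGBA8 data):
    Code:
    #include <GL/glew.h>
    #include <algorithm>

    const int kRowsPerFrame = 256; // hypothetical band height per frame

    struct PendingUpload {
        GLuint tex;
        int width, height;
        int nextRow;                 // first row not yet uploaded
        const unsigned char* pixels; // tightly packed RGBA8
    };

    // Call once per frame; returns true when the whole image is uploaded.
    bool uploadSlice(PendingUpload& u)
    {
        int rows = std::min(kRowsPerFrame, u.height - u.nextRow);
        glBindTexture(GL_TEXTURE_2D, u.tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, u.nextRow, u.width, rows,
                        GL_RGBA, GL_UNSIGNED_BYTE,
                        u.pixels + (size_t)u.nextRow * u.width * 4);
        u.nextRow += rows;
        return u.nextRow == u.height;
    }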
