Thread: Lock the Default Framebuffer

  1. #11
    Senior Member OpenGL Guru (Alfonse Reinheart)
    Join Date: May 2009
    Posts: 4,948
    Quote Originally Posted by kRogue View Post
    As a side note, I concur with Illian that iOS's way of doing things (make a renderbuffer and use FBO jazz to render to it) is better than what EGL/GLX/WGL currently provide.
    I'm not saying that it would be bad to have that. But it is simply not going to happen, so there's no point in asking for it. In order to make it work in the context of WGL, you have to remove the default framebuffer more or less entirely. And while OpenGL itself is ready to do that (3.0 added language for what happens when there is no default framebuffer), WGL is not.
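    For illustration, the iOS-style setup referred to above looks roughly like this (a sketch only; the calls that bind the renderbuffer to the window surface and present it are Objective-C/EAGL methods and are shown only as comments, and eaglContext/eaglLayer are assumed to exist):
    Code:
        // No default framebuffer exists; the window-sized color buffer is an
        // ordinary renderbuffer attached to an ordinary FBO.
        GLuint colorRb = 0, fbo = 0;

        glGenRenderbuffers(1, &colorRb);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
        // Storage comes from the window system rather than from GL:
        //   [eaglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRb);

        // ... render to the FBO as usual ...

        // Presentation also goes through the renderbuffer, not SwapBuffers:
        //   [eaglContext presentRenderbuffer:GL_RENDERBUFFER];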

    Quote Originally Posted by Eosie View Post
    I guess you could make a texture object from a framebuffer even without locking. It wouldn't be so easy, but it would certainly be possible. Or you could just say that the texture object created with glTextureDefaultFramebuffer isn't changed when the default framebuffer is resized. In other words, the texture object would keep referencing the same underlying texture despite the fact it's not part of the default framebuffer anymore. Such a solution would be a lot cleaner than using the lock.
    Cleaner to use, perhaps; not necessarily cleaner to implement. That would mean that, when the window is resized, the implementation needs to suddenly cleave this memory from the default framebuffer. That's probably not an easy thing to just do.

  2. #12
    Junior Member Regular Contributor (Eosie)
    Join Date: Jan 2004
    Location: Czech Republic, EU
    Posts: 190
    Quote Originally Posted by Alfonse Reinheart View Post
    Cleaner to use, perhaps; not necessarily cleaner to implement. That would mean that, when the window is resized, the implementation needs to suddenly cleave this memory from the default framebuffer. That's probably not an easy thing to just do.
    I've got no idea what you are talking about. Buffers, framebuffers, textures... all are just buffers and can live anywhere in memory. Resizing a framebuffer is just a buffer re-allocation, nothing more, nothing less. What I described is very easy to implement. The texture object would just reference the underlying buffer... and that's it. The framebuffer could be released completely, the OpenGL context could even be destroyed... but the kernel wouldn't release the buffer, because I hold a reference to it and I can use it anywhere I want, even in contexts I have not created yet. Or I can just map it (using the kernel interface) and read its memory...
    Last edited by Eosie; 09-08-2012 at 05:17 PM.
    (usually just hobbyist) OpenGL driver developer

  3. #13
    Senior Member OpenGL Guru (Alfonse Reinheart)
    Join Date: May 2009
    Posts: 4,948
    Quote Originally Posted by Eosie View Post
    Resizing a framebuffer is just a buffer re-allocation, nothing more, nothing less.
    Allow me to quote myself:

    Quote Originally Posted by me
    This will return a texture view of the 2D storage for the back buffer of the default framebuffer. So it's immutable.
    There's a reason why texture_storage exists. There's a reason why texture_view requires immutable textures.

    If the image you get back can undergo "buffer re-allocation" at any time, then not only can you not rely on its basic properties (size, etc), you also can't use it with texture_view and any other nifty techniques that come along that require immutable texture storage.

    "buffer re-allocation" is not a good thing.

  4. #14
    Junior Member Regular Contributor (Eosie)
    Join Date: Jan 2004
    Location: Czech Republic, EU
    Posts: 190
    I don't understand why you are telling me all this. Maybe I was not clear enough, so I'll try again.

    Let's say the back buffer is Buffer 1. The moment the texture view is created, both the back buffer and the texture view point to Buffer 1 (it has 2 references at that point). When the window is resized, the back buffer is reallocated, which means Buffer 1 is unreferenced and another buffer is created, let's call it Buffer 2, which is immediately used as a backing storage for the back buffer. Then, the back buffer points to Buffer 2, but the texture view still points to Buffer 1 (it has only one reference now). The texture view pretty much becomes an ordinary immutable texture and has nothing to do with the default framebuffer anymore.
    (usually just hobbyist) OpenGL driver developer
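    A minimal C++ sketch of the reference-counting scheme described in this post (all names are invented for illustration; a real driver would be tracking GPU allocations rather than host objects):
    Code:
        #include <memory>

        struct Storage { int width, height; /* handle to the GPU allocation */ };

        struct BackBuffer  { std::shared_ptr<Storage> storage; };
        struct TextureView { std::shared_ptr<Storage> storage; };

        int main() {
            BackBuffer  back;
            TextureView view;

            back.storage = std::make_shared<Storage>(Storage{1024, 768});   // "Buffer 1"
            view.storage = back.storage;        // both now reference Buffer 1

            // Window resize: the back buffer re-points to new storage ("Buffer 2").
            back.storage = std::make_shared<Storage>(Storage{1920, 1080});

            // Buffer 1 is not freed: the view still holds a reference, so it simply
            // becomes an ordinary texture detached from the default framebuffer.
            // It is released only when 'view' finally drops its reference.
            return 0;
        }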

  5. #15
    Senior Member OpenGL Guru (Alfonse Reinheart)
    Join Date: May 2009
    Posts: 4,948
    Quote Originally Posted by Eosie View Post
    The texture view pretty much becomes an ordinary immutable texture and has nothing to do with the default framebuffer anymore.
    What good is that compared to simply leaving it undefined? In both cases, in order to actually do something useful with that texture, the user has to detect that this has happened, unlock the framebuffer, and lock it again. Since they're going to do that anyway to get a useful result, why bother specifying that this is how it must be implemented?

    Or more to the point, what if the implementation allocated a larger backbuffer than you asked for and you resize it? Or better yet, what if the size got smaller on a resize? In either case, the resize doesn't require buffer reallocation; it would simply use a portion of the buffer's true size. But if you force buffer reallocation by requiring it in the spec, then you've forced a heavy-weight operation (buffer reallocation is not cheap) where one was simply not needed.

    The user still has to do an unlock/lock cycle as before, because in both cases they can't rely on it (in mine, it's implementation-defined. In yours, it's a guarantee of reallocation). But only in your case does the driver have to do this extra work for no reason.
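    To make the comparison concrete, the usage pattern looks the same under either rule; the entry points below are the ones being discussed in this thread, and the exact names and signatures are hypothetical:
    Code:
        // Acquire a texture for the back buffer (hypothetical API from this thread).
        glLockDefaultFramebuffer();
        GLuint backTex = glTextureDefaultFramebuffer(GL_BACK);

        // ... use backTex like any other texture ...

        // After a window resize the old texture cannot be relied upon, whether its
        // contents are implementation-defined or guaranteed to keep the old storage.
        // Either way the application does the same thing: re-acquire it.
        glUnlockDefaultFramebuffer();
        glLockDefaultFramebuffer();
        backTex = glTextureDefaultFramebuffer(GL_BACK);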

  6. #16
    Member Regular Contributor
    Join Date: Apr 2004
    Posts: 251
    @kRogue: I assume by Illian you mean me?
    One more point: we could also have "viewable" textures, e.g. by substituting a special non-GL function for glTexStorage2D.
    We could use them both for rendering into and for sampling from. I don't know whether they can be implemented efficiently, though.
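    A rough sketch of that idea; the wglTexStorage2DViewable entry point below is purely hypothetical and only illustrates allocating texture storage that the window system would be allowed to present (hDC, fbo, width and height are assumed to exist):
    Code:
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // Hypothetical replacement for glTexStorage2D: immutable storage that the
        // window system can present, but otherwise an ordinary texture.
        wglTexStorage2DViewable(hDC, GL_TEXTURE_2D, 1, GL_SRGB8_ALPHA8, width, height);

        // Render into it through an FBO...
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        // ...and sample from it in a later pass like any other texture.
        glBindTexture(GL_TEXTURE_2D, tex);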

  7. #17
    Member Regular Contributor
    Join Date: Apr 2004
    Posts: 251
    @Alfonse: I don't think adding "viewable renderbuffers" would complicate WGL or any other GL glue API. For example, it could be done this way:
    We already have a mechanism for creating GL contexts with attributes. We could define a new attribute that means "this context does not use the default framebuffer". Such a context would still be activated with e.g. wglMakeCurrent, but the HDC parameter would just refer to a dummy window to keep the old APIs happy (it could be the same dummy window we already need in order to get a pointer to wglCreateContextAttribsARB). Using dummies is nothing new to us, is it? You get the idea.
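    Roughly, using the existing bootstrap (the WGL_CONTEXT_NO_DEFAULT_FRAMEBUFFER token below is hypothetical; the dummy window/context dance is what obtaining wglCreateContextAttribsARB already requires today, and dummyWnd is assumed to be a window with a trivial pixel format already set on its DC):
    Code:
        // Assumes <windows.h> and GL/wglext.h.
        #define WGL_CONTEXT_NO_DEFAULT_FRAMEBUFFER 0x9999        // hypothetical token

        HDC   dummyDC = GetDC(dummyWnd);
        HGLRC tempCtx = wglCreateContext(dummyDC);               // old-style context
        wglMakeCurrent(dummyDC, tempCtx);

        PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
            (PFNWGLCREATECONTEXTATTRIBSARBPROC)
                wglGetProcAddress("wglCreateContextAttribsARB");

        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
            WGL_CONTEXT_MINOR_VERSION_ARB, 3,
            WGL_CONTEXT_NO_DEFAULT_FRAMEBUFFER, 1,               // the proposed attribute
            0
        };
        HGLRC ctx = wglCreateContextAttribsARB(dummyDC, 0, attribs);

        // Still made current against the dummy HDC to keep the old API happy, but
        // all rendering would go to FBO-attached "viewable" textures/renderbuffers.
        wglMakeCurrent(dummyDC, ctx);
        wglDeleteContext(tempCtx);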

  8. #18
    Senior Member OpenGL Guru (Alfonse Reinheart)
    Join Date: May 2009
    Posts: 4,948
    I highly doubt something like that could even work. After all, the implementation needs to know the format of the image to be displayed. By doing what you suggest, you could display an sRGB buffer one frame, then a non-sRGB buffer the next. Is that even something hardware can handle?

  9. #19
    Member Regular Contributor
    Join Date: Apr 2004
    Posts: 251
    Why are you worried about the possibility of changing the display format every frame? After all, during a single frame the application itself can switch the color-buffer format by using FBOs and/or do format-converting blits as many times as it wants.
    One more such conversion would hardly be a problem. AFAIK all GPUs have flexible blitters that can convert most uncompressed formats on the fly at no higher cost than a simple copy operation.
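    For example, this is the kind of single-call, format-converting copy being referred to (fbo, width and height are assumed to exist; the read attachment might be, say, GL_RGBA16F while the window is 8-bit):
    Code:
        // Copy an FBO color attachment into the window's default framebuffer;
        // glBlitFramebuffer converts between the two color formats as it copies.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, width, height,
                          0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);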

  10. #20
    Senior Member OpenGL Guru (Alfonse Reinheart)
    Join Date: May 2009
    Posts: 4,948
    Why are you worried about the possibility of changing the display format every frame?
    It's not the "display format" that's the problem. It's the display.

    If you want to display an sRGB framebuffer, currently you must create an sRGB default framebuffer. The fact that the default framebuffer is sRGB is an intrinsic, inflexible part of the default framebuffer.

    What you're suggesting assumes that the "display" has no format. That an sRGB default framebuffer is ultimately no different from a linear RGB default framebuffer.

    There's a difference between "what you render to" and "what you show in a window." Unless there is evidence that "what you show in a window" is as flexible as "what you render to," I prefer to err on the side of getting the most useful functionality with minimal disruption of the current system.

    AFAIK all GPUs have flexible blitters that can convert most uncompressed formats on the fly at no higher cost than a simple copy operation.
    I don't want to do a copy operation. And with SwapBuffers, you don't have to. Why should we accept that (likely minor) performance penalty even when we don't use the feature? Why not get the best of both worlds: the ability to talk directly to default framebuffer images, and the fastest buffer-swapping performance possible?
