Thread: Lock the Default Framebuffer


  1. #1 Senior Member OpenGL Guru (Join Date: May 2009; Posts: 4,948)

    Lock the Default Framebuffer

    Currently, the default framebuffer can't be used in many of the ways that textures or renderbuffers can be. You can render to its buffers and copy to them with glBlitFramebuffer, but you can't do things like bind them as images or textures and so forth.

    One of the reasons for this is that the default framebuffer can be resized by the windowing system. Images need to have a specific, fixed size, so having them be resized can be an issue.

    To deal with this issue, I propose the ability to lock images within the default framebuffer. This will prevent the framebuffer from being resized. If a window is resized, then something implementation-dependent will happen. It may be rescaled, it may fill empty areas with garbage, whatever.

    It would be simple:

    Code :
    glLockDefaultFramebuffer();

    While the default framebuffer is locked, you can get a texture object for each of its images, like so:

    Code :
    GLuint backTex = glTextureDefaultFramebuffer(GL_BACK);

    This will return a texture view of the 2D storage for the back buffer of the default framebuffer, so it's immutable. Every call will return a new texture object (from an unused texture name, as if from `glGenTextures`), so repeated calls create new views of the same storage. Obviously, if the framebuffer is not locked, the call fails with an error.

    The sticky point is this:

    Code :
    glUnlockDefaultFramebuffer();

    Obviously, we would need that function. But what exactly does it mean? What happens to those texture views that were created? The simplest thing would be for any use of them (besides deletion) after unlocking the default framebuffer to become undefined behavior.

    An alternative implementation of this concept that avoids the pesky unlock issue is to add a WGL/GLX_CONTEXT_FIXED_BIT_ARB to the attribute flags. If this flag is set, then the context has a size fixed at the time of its creation. In such a context, `glTextureDefaultFramebuffer` will return a valid texture object as defined above; if the context wasn't created with this flag, it returns 0.
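
    Putting the pieces together, a rough usage sketch (every entry point here is of course hypothetical, and the image format passed to glBindImageTexture is just an assumption for illustration):

    Code :
    /* Hypothetical usage of the proposed API; none of these entry points exist today. */
    glLockDefaultFramebuffer();                           /* window may no longer be resized */

    GLuint back0 = glTextureDefaultFramebuffer(GL_BACK);  /* immutable view of the back buffer */
    GLuint back1 = glTextureDefaultFramebuffer(GL_BACK);  /* new name, same underlying storage */

    /* Use the view like any other texture, e.g. bind it as an image (format assumed here). */
    glBindImageTexture(0, back0, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
    /* ... dispatch compute, sample from it, etc. ... */

    glDeleteTextures(1, &back0);
    glDeleteTextures(1, &back1);
    glUnlockDefaultFramebuffer();                         /* any surviving views become undefined to use */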

  2. #2 Advanced Member Frequent Contributor (Join Date: Apr 2009; Posts: 578)
    In my opinion, something like this likely belongs in the land of EGL/GLX/WGL. For example, in EGL land the logic would be like this:
    1. Create EGLImage from framebuffer
    2. Create texture (or renderbuffer or whatever) from EGLImage
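
    Sketched out (the "framebuffer source" target for eglCreateImageKHR is made up here - existing EGLImage sources are native pixmaps and GL textures/renderbuffers - but glEGLImageTargetTexture2DOES is the real OES_EGL_image entry point):

    Code :
    /* Sketch only: EGL_FRAMEBUFFER_SOURCE is a hypothetical target name. */
    EGLImageKHR img = eglCreateImageKHR(dpy, ctx, EGL_FRAMEBUFFER_SOURCE /* hypothetical */,
                                        (EGLClientBuffer)0, NULL);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)img);  /* real OES_EGL_image call */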


    In locking the framebuffer, one would also need to introduce language about rendering to the framebuffer while also using it as a source, unless you only allow sourcing from the front buffer, which in turn assumes the system uses double buffering (some use single buffering, some triple buffering... the iOS platform is "interesting": you create a renderbuffer and use an iOS call to declare it as the content the program displays). Along those lines, additional language is required to make sure that if you hook a texture to the front buffer, the texture is read-only (because the windowing system is presenting that buffer). Moreover, in EGL land there is a config bit which, when enabled, basically says that the contents of the framebuffer are undefined after eglSwapBuffers (a performance benefit for all tile-based renderers).
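
    (For reference, the mechanism in EGL 1.4 is the EGL_SWAP_BEHAVIOR surface attribute; leaving the contents undefined is the default, and preserved contents have to be requested explicitly, roughly like this:)

    Code :
    /* By default eglSwapBuffers may leave the colour buffer contents undefined
     * (EGL_BUFFER_DESTROYED), which is the win for tile-based renderers mentioned above. */
    eglSurfaceAttrib(dpy, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);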

    Out of curiosity, why do you want this? Along those lines, why not just "fake" your own framebuffer via FBOs and, at the end of the frame, blit your renderbuffer/texture to the default framebuffer?
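
    For reference, "faking" it is already straightforward in core GL; a minimal sketch (the sizes, formats and variable names are just illustrative):

    Code :
    /* One-time setup: render into our own FBO instead of the default framebuffer. */
    GLuint fbo, color, depth;
    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenRenderbuffers(1, &depth);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depth);

    /* Per frame: draw into the FBO, then blit to the default framebuffer and swap. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    /* ... render the scene ... */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, winWidth, winHeight,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);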

  3. #3 Senior Member OpenGL Pro (Join Date: Apr 2010; Location: Germany; Posts: 1,099)
    I think Alfonse's complaint in regards to the inflexibility of the default FB is justified. However, personally I've been using the default FB like kRogue suggests for quite some time, i.e. only as the target of the final blit. I have the feeling that introducing new APIs and making changes to the window system bindings to achieve what we can already do in a different way should be warranted by some really convincing use cases.

    In regards to what kRogue calls "faking" your own framebuffer: in GL ES you can already drop the default framebuffer completely and work entirely with FBOs. Of course, this only lets you render off-screen, but it's essentially the same thing except for the annoying missing surface to blit to at the end. Now, providing that one surface would still allow on-screen rendering and window-system-handled resizing; everything else, like swapping and resizing FBOs accordingly, would be the developer's responsibility - unless they want to render at a lower resolution and do an interpolated blit to a higher- or lower-resolution surface. No depth, no stencil - just a plain RGB(A) surface. This adds complexity to the application but would leave at least the GL unchanged, though it would probably necessitate some changes to the window system bindings.
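
    A sketch of what that developer-side resize handling might look like (on_resize, fboColorTex and the size variables are all assumed application state, not GL API):

    Code :
    /* Assumed application globals describing its own offscreen FBO. */
    static GLuint fboColorTex;
    static int    fboWidth, fboHeight;

    /* Hypothetical callback invoked by the window system bindings on resize. */
    void on_resize(int newWidth, int newHeight)
    {
        /* Option 1: follow the window and reallocate the colour attachment. */
        fboWidth  = newWidth;
        fboHeight = newHeight;
        glBindTexture(GL_TEXTURE_2D, fboColorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* Option 2: keep rendering at the old resolution and rely on a GL_LINEAR
         * glBlitFramebuffer to the resized surface at the end of the frame. */
    }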

    In conclusion: Give us something that's almost like OES_surfaceless_context and EGL_KHR_surfaceless_context - only with a single RGB(A) surface we can blit to after rendering has been completed.

  4. #4 Senior Member OpenGL Pro (Join Date: Jan 2007; Posts: 1,138)
    Quote Originally Posted by Alfonse Reinheart
    If a window is resized, then something implementation-dependent will happen. It may be rescaled, it may fill empty areas with garbage, whatever.
    This is one major objection I have here. Defining implementation-dependent behaviour is dangerous - some implementations are going to crash, some are going to give garbage, some give black, some rescale, and some may even give what appears to be correct behaviour. That opens huge potential for programmer error. If resizing the framebuffer is to be disallowed then it should have well-defined error behaviour that all implementations must abide by in order to be conformant, and that should be testable-for in code so that the programmer can detect it happening and respond accordingly.

  5. #5 Senior Member OpenGL Guru (Join Date: May 2009; Posts: 4,948)
    Quote Originally Posted by mhagain
    This is one major objection I have here. Defining implementation-dependent behaviour is dangerous - some implementations are going to crash, some are going to give garbage, some give black, some rescale, and some may even give what appears to be correct behaviour. That opens huge potential for programmer error. If resizing the framebuffer is to be disallowed then it should have well-defined error behaviour that all implementations must abide by in order to be conformant, and that should be testable-for in code so that the programmer can detect it happening and respond accordingly.
    Implementation-dependent behavior doesn't mean crashing is acceptable. Indeed, even undefined behavior doesn't mean that crashing is on the table; the only time crashing is an acceptable option is if the spec explicitly states that program termination is a legitimate outcome.

    If you lock the framebuffer, it is on you, and your window-system API of choice, to ensure that the window is not resized. OpenGL should not have any means to tell you it's been resized, since your window system API already lets you know.
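
    For example, with a window-system API such as GLFW 3 (the lock call itself is still the hypothetical one from the proposal):

    Code :
    /* Create a window the user cannot resize, so the locked default framebuffer
     * can never change size underneath us. */
    glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
    GLFWwindow *win = glfwCreateWindow(1280, 720, "locked", NULL, NULL);
    glfwMakeContextCurrent(win);

    glLockDefaultFramebuffer();   /* hypothetical call from the proposal */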

  6. #6 Junior Member Regular Contributor (Join Date: Apr 2004; Posts: 228)
    My opinion is that the default FB should go. It's a useless remnant from pre-FBO times.

    A while ago I gave an example of how this can be done, and recently I learned that Apple has done exactly that in iOS. There is no default FB; instead, one creates a special renderbuffer that is displayable. This is done by substituting glRenderbufferStorage with a special non-GL function. From the point of view of OpenGL this special renderbuffer is indistinguishable from a regular one; it is special only to the outside (non-GL) APIs that take care of the display.
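
    For those who haven't seen it, the GL side of the iOS pattern looks roughly like this (the storage allocation and the present are Objective-C calls on the EAGL context, shown here only as comments):

    Code :
    GLuint colorRb, fbo;
    glGenRenderbuffers(1, &colorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    /* Instead of glRenderbufferStorage, the platform allocates the storage:
     *   [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
     * From GL's point of view colorRb is now an ordinary renderbuffer. */

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

    /* ... render ... then present via the platform rather than a GL/EGL swap:
     *   [context presentRenderbuffer:GL_RENDERBUFFER];  */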

    Some of the benefits of getting rid of the default FB are:
    - flexibility (the default FB is immutable)
    - it scraps the redundant non-GL APIs for configuring the default FB (pixel formats, visuals, etc.)
    - it makes the glue APIs (WGL, GLX, etc.) thinner, hence OpenGL less dependent on them
    - it needs no additions to OpenGL itself and hence does not complicate it

    Just take the example from Apple (I hope they don't have it patented).
    Last edited by l_belev; 09-05-2012 at 05:24 AM.

  7. #7 Senior Member OpenGL Pro (Join Date: Jan 2007; Posts: 1,138)
    Quote Originally Posted by Alfonse Reinheart
    Implementation-dependent behavior doesn't mean crashing is acceptable. Indeed, even undefined behavior doesn't mean that crashing is on the table; the only time crashing is an acceptable option is if the spec explicitly states that program termination is a legitimate outcome.
    Ignore the example of crashing and focus on the actual point being made, please.

  8. #8 Senior Member OpenGL Guru (Join Date: May 2009; Posts: 4,948)
    Quote Originally Posted by mhagain
    Ignore the example of crashing and focus on the actual point being made, please.
    If it's not going to crash, then there is nothing "dangerous" about it, and therefore, there is no specific need to have a defined behavior. The reason why the spec allows certain behavior to be implementation defined is to allow for the freedom of optimizing things as is best for that hardware. By defining it in a particular way, you remove the chance for those optimizations.

    And I see no need to do that once crashing is off the table. So what if some implementations display garbage or stretch the screen or whatever? Users should not be relying on any of those behaviors, because they entered into a contract with OpenGL that says the window will not be resized. That's what locking ultimately means.

    Also, please make sure that if you're going to give examples, give reasonable ones. Because your idea will be judged based on the importance of those examples.

  9. #9 Advanced Member Frequent Contributor (Join Date: Apr 2009; Posts: 578)
    As a side note, I concur with Illian that iOS's way of doing things (make a renderbuffer and use FBO jazz to render into it) is better than what EGL/GLX/WGL currently offer, though one will still need a system-dependent call to say "present this renderbuffer as my window/fullscreen contents". There is a reason why one may want the system to give you the render target: its format can be made to match the current display settings, so presenting the buffer can be done by a dedicated blitter (for example the B2R2 unit in the STE U8500). Right now, though, EGL is a bit of a mess (an EGLConfig encodes what kind of GL context one is to make, and that same config is needed to get the EGLSurface). But oh well, EGL could be worse: it could be WGL... which reminds me of one of the most hilarious bits in an extension that I have read in a long time (from http://www.opengl.org/registry/specs...V_float_buffer )

    Quote Originally Posted by GL_NV_float_buffer

    Additions to the WGL Specification

    First, close your eyes and pretend that a WGL specification actually existed. Maybe if we all concentrate hard enough, one will magically appear.
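
    To make the EGLConfig point above concrete, the usual flow looks roughly like this (dpy and nativeWindow are assumed to already exist; error checking omitted):

    Code :
    /* The EGLConfig chosen up front pins down both the surface format and the
     * kind of client API context you are allowed to create with it. */
    EGLint attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);

    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, nativeWindow, NULL);
    EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);
    eglMakeCurrent(dpy, surf, surf, ctx);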
