Thread: Official feedback on OpenGL 3.1 thread

  1. #51
    Junior Member Regular Contributor (Join Date: Apr 2001, Posts: 180)

    Quote Originally Posted by Rob Barris
    Spec writing can directly influence 'c', but really only has an indirect effect on 'a' and 'b'.
    True, but IMHO the effect on 'a' and 'b' shouldn't be downplayed. For instance, here's a concrete issue I've had: ATI drivers would behave in "client-side array mode" when using VBOs to compile a display list, and consequently crashed when trying to dereference 0x00000000 (i.e. offset 0 into the VBO). While I grant this isn't a common thing to do, it _is_ allowed according to ARB_vertex_buffer_object.
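
    For clarity, a minimal sketch of the pattern in question (the buffer name, list name, and vertex count are placeholders):

    // Vertex data lives in a VBO, so the array "pointer" is a byte offset, not a client address.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0); // offset 0 into the bound VBO
    glNewList(list, GL_COMPILE);                       // compiling a display list...
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);        // ...must source from the VBO, not address 0
    glEndList();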

    In any event, I'll most likely try to do some GPGPUish stuff soon, and will hopefully be able to exploit some of the new stuff like TBOs and instancing.

  2. #52
    Member Regular Contributor (Join Date: Mar 2001, Posts: 469)

    http://www.opengl.org/registry/specs...opy_buffer.txt

    ... contains the enumerants which were apparently missing from the glext.h file
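
    For reference, the tokens and entry point that spec defines (values copied from the extension document):

    #define GL_COPY_READ_BUFFER  0x8F36
    #define GL_COPY_WRITE_BUFFER 0x8F37

    void glCopyBufferSubData(GLenum readTarget, GLenum writeTarget,
                             GLintptr readOffset, GLintptr writeOffset,
                             GLsizeiptr size);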

  3. #53
    Simon Arbon, Junior Member Regular Contributor (Join Date: Aug 2007, Location: Adelaide, South Australia, Posts: 208)

    Quote Originally Posted by Rob Barris
    If you aren't in the group described above (migrating a legacy app up to 3.1 while wanting to continue using legacy func) then the story is very simple - focus on the core feature set and code to that. The spec is shorter, the coding choices are fewer. The driver will never spend any cycles wandering between fixed func and shader mode because your app won't be asking for that type of behavior any more.
    But the driver doesn't know that you won't use an old feature, so it still needs to make allowances in case you do, and still needs to load all of the extra code into memory.
    This could easily be solved by adding one bit to the attribute value for WGL_CONTEXT_FLAGS in <*attribList>:

    We already have:
    WGL_CONTEXT_DEBUG_BIT_ARB 0x0001
    WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB 0x0002

    To which we could add:
    WGL_CONTEXT_PERFORMANCE 0x0004
    which would exclude all of the removed features and would not advertise any legacy extensions.

    WGL_CONTEXT_PERFORMANCE would load only the core, streamlined driver with no legacy functions or software emulation, while its absence would load a separate DLL that implements all of the legacy functions and supports 2.1 contexts on top of the core driver.
    WGL_CONTEXT_DEBUG_BIT_ARB would load an instrumentation layer on top of the driver.
    WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB and WGL_CONTEXT_PERFORMANCE together would exclude both the removed and the deprecated functions.
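
    A sketch of what context creation would look like under this proposal; WGL_CONTEXT_PERFORMANCE is the hypothetical bit suggested above, the rest is standard WGL_ARB_create_context usage:

    #define WGL_CONTEXT_PERFORMANCE 0x0004  // hypothetical bit proposed above

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_PERFORMANCE, // core-only, no legacy paths
        0
    };
    HGLRC rc = wglCreateContextAttribsARB(hDC, NULL, attribs);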

  4. #54
    Member Regular Contributor (Join Date: Oct 2006, Posts: 353)

    What happens right now if someone enables WGL/GLX_CONTEXT_FORWARD_COMPATIBLE and tries to access a function from ARB_compatibility? Is an error generated?

    If not, then ARB_compatibility is simply a cop-out from supporting forward-compatible contexts (and isn't that what Nvidia intended all along?)
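
    One way to probe this empirically; a sketch that assumes a forward-compatible 3.1 context has already been created and that the removed entry point still resolves:

    GLenum err;
    glBegin(GL_TRIANGLES);   // removed in 3.1 core
    glEnd();
    err = glGetError();
    if (err == GL_INVALID_OPERATION)
        printf("removed call rejected, as a forward-compatible context should do\n");
    else
        printf("no error (0x%04x): compatibility behavior leaked through?\n", err);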
    [The Open Toolkit library: C# OpenGL 4.4, OpenGL ES 3.1, OpenAL 1.1 for Mono/.Net]

  5. #55
    Junior Member Regular Contributor (Join Date: Apr 2001, Posts: 180)

    If you read the presentation from when 3.0 was released, you'll see that the intended "retirement plan" for features was "Core -> Deprecated -> Extension -> Gone/vendor-specific extension".

    So they're just following their plan.

  6. #56
    Intern Newbie (Join Date: Jan 2008, Posts: 42)

    Quote Originally Posted by Simon Arbon
    But the driver doesn't know that you won't use an old feature, so it still needs to make allowances in case you do, and still needs to load all of the extra code into memory.
    Couldn't the driver reconfigure itself, loading the code for the legacy APIs only after the first call to one of them is made?
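
    That idea could look something like lazy rebinding; a sketch in which every name is hypothetical:

    typedef void (*BeginFn)(unsigned int mode);

    static void realBegin(unsigned int mode) { /* full legacy implementation */ }

    static void lazyBegin(unsigned int mode);
    static BeginFn beginPtr = lazyBegin;

    static void lazyBegin(unsigned int mode)
    {
        // First legacy call: load the legacy DLL / set up legacy state here.
        beginPtr = realBegin;   // rebind so later calls skip this stub
        beginPtr(mode);
    }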

    Regardless, I like being explicit in my code, and I would prefer what you've suggested. From an aesthetic point of view it seems cleaner, and perception is important.

  7. #57
    Intern Newbie (Join Date: Feb 2000, Location: UK, Posts: 42)

    OK, some questions and thoughts regarding the migration of code to 3.1.

    My engine is cross-platform and CgFX-based with a GL back-end. Generally it's pretty up to date and relatively clean, though there are of course <= 3.0 parts in there that I'd like to clean out now.

    Ideally I'd like the compiler to tell me what's outdated. However, if ARB_compatibility is enabled by default and, AFAIK, impossible to disable, does that mean going through my code by hand, cross-referencing everything against the new spec, or is there some other method that will give me compile-time errors?

  8. #58
    ZbuffeR, Super Moderator OpenGL Lord (Join Date: Dec 2003, Location: Grenoble - France, Posts: 5,575)

    The spec suggests using the <GL3/gl3.h> header instead of the classic gl.h header, but nobody seems to know when it will be available.
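
    If and when that header ships, it should give exactly the compile-time errors asked about above. A sketch of the idea, assuming gl3.h declares only the core 3.1 entry points:

    #include <GL3/gl3.h>   // core-only declarations, no deprecated API

    void draw(void)
    {
        glDrawArrays(GL_TRIANGLES, 0, 36);  // core: compiles fine
        // glBegin(GL_TRIANGLES);              removed: would not compile,
        // glEnd();                            since gl3.h doesn't declare it
    }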

  9. #59
    Heiko, Junior Member Regular Contributor (Join Date: Aug 2008, Location: the Netherlands, Posts: 170)

    Quote Originally Posted by ZbuffeR
    The spec suggests using the <GL3/gl3.h> header instead of the classic gl.h header, but nobody seems to know when it will be available.
    Would this cause the driver not to load the ARB_compatibility extension?

    Perhaps it will be available when the vendors release their first true GL 3.1 drivers? I can't recall them advising the use of <GL3/gl3.h> when OpenGL 3.0 was released. Perhaps this include directory is new with OpenGL 3.1?

  10. #60
    Senior Member OpenGL Guru (Join Date: Dec 2000, Location: Reutlingen, Germany, Posts: 2,042)

    The copy-buffer extension looks nice and very useful. However, I have a few questions. The main use case is obviously loading data in parallel directly into a GL buffer, but how would I do that? If I have two threads, would thread 1 (rendering) need to create and map the buffer, get the pointer to it, and pass it to thread 2, which then fills it and reports "ok", after which thread 1 issues the copy? Or, less efficiently, would threads 1 and 2 need to do context switches? Which one would work? Can thread 2 write to a buffer that was mapped by thread 1 at all?

    And another thing: did I understand correctly that GL provides us with an implementation-defined "write" (temp) buffer, so that I can actually only prepare ONE buffer at a time instead of several? It is a bit unclear to me. Why not just prepare your data in some user-defined buffer and then tell GL "copy this to that"? Why the extra READ/WRITE buffer semantics?
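
    For what it's worth, a sketch of the single-threaded path under ARB_copy_buffer; as far as I can tell, COPY_READ_BUFFER and COPY_WRITE_BUFFER are just extra binding points for ordinary user-created buffers, not separate implementation-owned buffers (buffer names and size are placeholders):

    // Bind two ordinary, user-created buffers to the copy binding points...
    glBindBuffer(GL_COPY_READ_BUFFER, srcBuffer);
    glBindBuffer(GL_COPY_WRITE_BUFFER, dstBuffer);
    // ...and copy between them without mapping or disturbing other bindings.
    glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                        0, 0, sizeInBytes);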

    Jan.
    GLIM - Immediate Mode Emulation for GL3
