Part of the Khronos Group
OpenGL.org

The Industry's Foundation for High Performance Graphics

from games to virtual reality, mobile phones to supercomputers


Thread: OpenGL 4.6 request list

  1. #1
    Intern Contributor
    Join Date
    Nov 2013
    Posts
    50

    OpenGL 4.6 request list

    Please add the following features to OpenGL 4.6

    Add OpenGL ES 3.2 context creation functionality to OpenGL 4.6 core.

    Add the extensions from OpenGL ES 3.2 to core OpenGL.
    Make OpenGL a superset of OpenGL ES again.

    Make ASTC support mandatory in core OpenGL and make S3TC optional (or, what I would like to see even more: deprecate/remove S3TC).
    Possibly add one of the following ASTC extensions:
    OES_texture_compression_astc
    https://www.khronos.org/registry/gle...ssion_astc.txt
    texture_compression_astc_hdr
    https://www.opengl.org/registry/spec...n_astc_hdr.txt
    Maybe define both a full and a portable profile for ASTC, with different texture limits, to serve the full spectrum of devices?

    Put shader draw parameters in core.
    https://www.opengl.org/registry/spec...parameters.txt

    Allow more varied names in core for low-component texture formats, especially single- and dual-component ones.
    Allow not only R but G, B and A as well.
    And allow another letter for differentiating between colour components and other miscellaneous components (maybe C for component or channel, maybe another letter).
    The rationale for this is that it makes detecting errors much easier for programmers, letting them see far more easily whether their code is using the components correctly.
    Do not mandate how the component names are used, to avoid them becoming a programming limitation instead of a more expressive way to write code.



    If you introduce an extension for async shader compilation, please put "async" somewhere in the name, and have both compilation and loading of shaders done asynchronously by the extension.
    Perhaps the name could be GL_ARB_parallel_async_shader_compile for OpenGL, also used in OpenGL ES.
    Not GL_ARB_parallel_shader_compile.
    If it provides async compilation, that is a big feature; such a feature needs to be advertised in the name.
    Unifying with how async is done in Vulkan might be a good idea.
    It seems only minor adjustments would be needed to the following extension:
    https://www.opengl.org/registry/spec...er_compile.txt
    Also have the specification provide plenty of information about how the extension interacts with a shader cache: what a shader cache is, what it does and allows. And mention shader caches a few more times in the description of the extension.
    Make sure there is good information about what async shader compilation and loading allow, especially in reducing lag spikes, increasing predictability and performance, and reducing stutter.


    Do NOT put in features from Vulkan YET.
    (The following does not apply to compatibility/interop between OpenGL and Vulkan contexts.)
    It's too early, for several reasons:
    - apparently a new Vulkan release this summer
    - the Vulkan spec churn (a new documentation release every week)
    - the resulting spec churn from that new Vulkan release
    - the need to get feedback from developers about desired feature sets
    Vulkan really is not ready yet to base individual OpenGL features on.
    Once more time has passed it will be: once the documentation becomes somewhat more stable (maybe as early as next year, 2017), once Vulkan's features have crystallized into feature sets, and once the new release of Vulkan has happened.
    After those things have happened, it will be the right time to start cross-pollinating features between the two APIs.
    Also don't put in SPIR-V when there is a new release coming up this summer.
    It makes little sense to start copying features between the two APIs yet, especially since Vulkan will generate feedback on which features developers want through the process of determining feature sets.
    Knowing which features are popular will let the spec writers at Khronos make optimal choices about which features to copy to other APIs.
    Last edited by Gedolo2; 05-23-2016 at 08:23 AM. Reason: Added async shader compilation name suggestion

  2. #2
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    5,923
    Add OpenGL ES 3.2 context creation functionality to OpenGL 4.6 core.
    OpenGL doesn't define any "context creation functionality". So it's not clear what that would mean.

    But in any case, you can use the EXT_create_context_es_profile extension to create any version of OpenGL ES contexts, where supported.

    Make OpenGL a superset of OpenGL ES again.
    No version of desktop OpenGL was ever a superset of any version of OpenGL ES. There was always code you could write in ES that would not do the same thing on desktop GL.

    Make ASTC support mandatory in core OpenGL
    That's not practical. ASTC is only supported by a very small set of hardware. Unless you want nobody to implement GL 4.6.

    ASTC is not a real thing yet.

    make S3TC optional (or, what I would like to see even more: deprecate/remove S3TC).
    First, good news: S3TC was never adopted into core OpenGL. It has always been an extension.

    FYI: RGTC isn't S3TC. It's similar to it, but it doesn't have the patent issues. Which is why RGTC is core and S3TC is not.

    Second, even if it was mandatory, why get rid of perfectly valid, functional, and useful technology? It's not like IHVs will be ripping out support for it from their texture fetch units. Not so long as applications still use it.

    Put shader draw parameters in core.
    Intel doesn't support it. It would be better to just bring in the `gl_InstanceIndex` functionality from khr_vk_glsl. That's the most important part of draw parameters that OpenGL doesn't support, and it's something we know Intel can support.

    And have both compilation and loading of shaders done asynchronously by the extension.
    What exactly does that mean? Loading shader code is the job of the application, not OpenGL. It can't make that asynchronous.

    Unifying with how async is done in Vulkan might be a good idea.
    That would be the opposite of that extension. In Vulkan, there is no asynchronous shader compilation support. What there is in Vulkan are two things:

    1. When you call `vkCreateShaderModule`, you are guaranteed that the compilation is finished (successfully or with failure) by the time it returns. Similarly, when you call `vkCreateGraphicsPipelines`, you are guaranteed that the compilation is finished (successfully or with failure) by the time it returns.

    2. Both of those calls are fully reentrant. You can call them on the same `VkDevice` from multiple threads. You can even have multiple threads all using the same pipeline cache without synchronization.

    Vulkan doesn't make shader compilation parallel or asynchronous (FYI: those words mean the same thing in this context). It simply provides you with the tools to compile shaders asynchronously.

    By contrast, parallel_shader_compile provides you with the tools to realize that the OpenGL implementation may compile shaders in parallel, and it gives you the tools to stop interfering in that process (by asking if there was an error before the compile has finished).

    It's two different models for two very different APIs. In one case, the API is asynchronous; in the other case, the API is reentrant.

    Also have the specification provide plenty of information about how the extension interacts with a shader cache: what a shader cache is, what it does and allows. And mention shader caches a few more times in the description of the extension.
    That is not how a specification works. A specification defines behavior, not how it gets implemented.

    Vulkan talks about a pipeline cache because it is an explicit object which is part of the Vulkan system. It's part of the API; it can't not talk about it.

    OpenGL has no similar construct. If the implementation uses a cache when compiling shaders, that's not something OpenGL can explain, since it does not affect the behavior of the system or its interface. It only would affect performance.

    It's an implementation detail in OpenGL.

    - apparently a new Vulkan release this summer
    From where have you heard of this summer release? Is it scheduled for SIGGRAPH?

  3. #3
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    5,923
    Quote Originally Posted by Alfonse Reinheart View Post
    That's not practical. ASTC is only supported by a very small set of hardware. Unless you want nobody to implement GL 4.6.
    To add to this, the Vulkan database shows that the only desktop hardware that supports ASTC is Intel. Even for NVIDIA, only their mobile Tegra line supports ASTC. This could be due to immature Vulkan drivers, but it does match up with the OpenGL support.

    So while ASTC may be the future, it is definitely not the present.

  4. #4
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,714
    It's also worth noting that this is what OpenGL specs used to do in the past: define a software interface with little or no consideration to how hardware can support it (or even whether hardware supports it). That approach manifestly failed; the upshot was that OpenGL implementations tended to end up with functionality that was software-emulated, but it was not queryable if that was the case, so you could find yourself rudely thrown back to software emulation and single-digit framerates. That's OK if you're in a scenario where "everything must work and performance is secondary", but that's not always the case with everybody, and those for whom it wasn't the case were poorly served by OpenGL.

    ASTC is certainly feasible if all of the hardware vendors come onboard and implement support in conjunction with a future evolution of the spec. But that should be a requirement - the spec cannot evolve in isolation.

    Far more interesting (and useful) would be to bring anisotropic filtering into core. The 20-year term from the priority date has now expired (http://www.google.com/patents/US6005582), so it should now be doable.
    Last edited by mhagain; 06-07-2016 at 04:02 PM.

  5. #5
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    5,923
    It should also be noted that other features are dealt with in the same way. Things are only added to core when a broad base of hardware can support it. In the case of OpenGL 4.x, something only becomes core if all current 4.x hardware can support it. Features of note which do not have such a broad base of support are:

    * ARB_fragment_shader_interlock: No AMD hardware support.
    * KHR_blend_equation_advanced: No AMD hardware support.
    * ARB_bindless_texture: No Intel/pre-GCN AMD hardware support.
    * ARB_sparse_texture/buffer: No Intel/pre-GCN AMD hardware support.

    What OpenGL really lacks is Vulkan's "feature" concept, which is effectively functionality that is defined in the core specification but for which support is not required. OpenGL can express a form of this by using implementation-defined limits. For example, image load/store and SSBOs are only required to be supported for fragment and compute shaders. Other stages can express support by having non-zero limits.

    But features like the above can't really be expressed as "limits". The best way OpenGL has to express such optional features is as ARB/KHR extensions. And there is nothing wrong with using an extension, either conditionally or relying on it.

  6. #6
    Junior Member Newbie
    Join Date
    Dec 2015
    Posts
    13
    Maybe the best solution to these problems would be a new OpenGL 5.0. (This has already happened in the past: OpenGL 1.x -> OpenGL 2.0, OpenGL 3.x -> OpenGL 4.0.)
    It would draw a distinction between the newer generation of GPU hardware and the older ones.
    It would include support for:
    1) ASTC compression,
    2) a standard binary format for shaders and programs,
    3) bindless textures and buffers,
    4) some kind of "GL_NV_command_list",
    5) a good implementation of GL_KHR_no_error,
    6) and perhaps somewhat broader multithreading support.
    IMHO this would push OpenGL forward; it would still be easy to learn (compared to Vulkan) and could be as efficient as Vulkan.

  7. #7
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    5,923
    It would draw a distinction between the newer generation of GPU hardware and the older ones.
    You're making an assumption that all "newer generation" hardware would be able to support all of that.

    No Intel hardware supports bindless textures, and no AMD hardware supports non-ARB bindless stuff. There's a reason why Vulkan doesn't do things the bindless way, that it uses descriptor sets rather than just arbitrary numbers you throw around. It's a better abstraction, one which can be implemented across lots of hardware while still providing the featureset required (access to arbitrary amounts of stuff in a shader).

    The only functionality missing from that is NVIDIA's passion for passing GPU memory pointers around. Note that Vulkan doesn't let you do that, despite being lower-level than OpenGL.

    Bindless is not a good hardware abstraction.

    As for a variation of NV_command_list... why? If you're willing to go through that much trouble, you may as well just use Vulkan. It'd be a much cleaner API, and you'd get more functionality out of it in the long run.

    can be as efficient as Vulkan
    No. No it can't.

  8. #8
    Junior Member Newbie
    Join Date
    Dec 2015
    Posts
    13
    I have been watching the evolution of GPU hardware for quite a long time, and my instinct tells me what the future could look like.
    1) ASTC compression -> the newest mobile GPUs have it, so I'm pretty sure desktop GPUs will have it too. It's too good (for now) not to implement.
    2) A standard binary format for shaders and programs -> SPIR-V is a good candidate for this.
    3) You are right that pointers are not perfect for that. Maybe OpenGL needs descriptor sets too. Maybe something else; that's why this is only a suggestion.
    4) Why NV_command_list? I think looking for ways of "Approaching Zero Driver Overhead in OpenGL" is a good idea. Multi Draw Indirect does not solve all problems.
    And the solution proposed by NVIDIA is worth considering. Finding the best way to efficiently pack state changes behind a clean API is, IMHO, a new goal for OpenGL.
    5) and 6)
    When I said OpenGL can be as efficient as Vulkan, I meant that I want OpenGL to be as efficient as Vulkan (of course in a single-threaded environment only).
    And if the driver is not the bottleneck, I think that is possible. So GL_KHR_no_error is needed.

  9. #9
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,714
    Quote Originally Posted by jurgus View Post
    I have been watching the evolution of GPU hardware for quite a long time, and my instinct tells me what the future could look like.
    Much of what you list here isn't actually anything to do with hardware though; what you're talking about is evolution of a software abstraction, and you're requesting to move the OpenGL software abstraction so close to Vulkan that it may as well just be Vulkan and be done with it.

    How OpenGL should evolve in a post-Vulkan world is a valid topic for discussion of course, and some may even make a case that it's more useful for OpenGL to evolve towards an even higher-level abstraction than it currently is.

  10. #10
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    5,923
    Quote Originally Posted by mhagain View Post
    How OpenGL should evolve in a post-Vulkan world is a valid topic for discussion of course, and some may even make a case that it's more useful for OpenGL to evolve towards an even higher-level abstraction than it currently is.
    OpenGL is in a strange place, abstraction-wise. Its abstraction is not a good fit for modern hardware from a performance standpoint, so it doesn't really work there. But abstracting things more branches out into the realm of scene graphs, and there are innumerable ways of designing a scene graph. OpenGL is as high-level as you can reasonably get without going there.

    The only real advantage OpenGL's abstraction has is that it strikes an interesting balance between performance and ease-of-use. It handles synchronization for you, as well as whatever gymnastics are needed in order to change framebuffers willy-nilly, and so forth. You can get reasonable performance out of OpenGL as well as access to good hardware features, but without a lot of the explicit work that Vulkan requires.

    And yet, engines like Unity, Unreal, and the like give you all kinds of power while hiding the details of APIs like Vulkan, D3D12, etc. They are easier to use than OpenGL, and they don't really lose performance. But at the same time, they do lose the generality that OpenGL provides. If you're not making a game, if it's just a graphics demo or whatever, then there's a lot that those engines do which you won't care about.
