Thread: OpenGL 4.6 request list

  1. #1 · Intern Contributor · Joined Nov 2013 · 50 posts

    OpenGL 4.6 request list

    Please add the following features to OpenGL 4.6

    Add OpenGL ES 3.2 context creation functionality to OpenGL 4.6 core.

    Add the extensions from OpenGL ES 3.2 to core OpenGL.
    Make OpenGL a superset of OpenGL ES again.

    Make ASTC support mandatory in core OpenGL and S3TC optional (or, what I would like to see even more: deprecate/remove S3TC).
    Possibly add one of the following ASTC extensions:
    OES_texture_compression_astc
    https://www.khronos.org/registry/gle...ssion_astc.txt
    texture_compression_astc_hdr
    https://www.opengl.org/registry/spec...n_astc_hdr.txt
    Maybe make a full and portable profile for ASTC with different texture limits to serve the full spectrum of devices?
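    As a minimal sketch (assuming an implementation that exposes GL_KHR_texture_compression_astc_ldr and a loader that provides the enum and prototypes; the helper name and the origin of the compressed blob are illustrative), consuming an ASTC texture would look roughly like this:

    ```cpp
    // Sketch: upload one pre-compressed ASTC 4x4 LDR level.
    // Assumes GL_KHR_texture_compression_astc_ldr is advertised and an extension
    // loader (glad/GLEW/...) provides glCompressedTexImage2D and the KHR enum.
    void uploadAstcLevel(GLuint texture, GLsizei width, GLsizei height,
                         const void* astcData, GLsizei dataSize)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        glCompressedTexImage2D(GL_TEXTURE_2D, /*level=*/0,
                               GL_COMPRESSED_RGBA_ASTC_4x4_KHR,
                               width, height, /*border=*/0,
                               dataSize, astcData);
    }
    ```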

    Put shader draw parameters in core.
    https://www.opengl.org/registry/spec...parameters.txt

    Allow more varied names in core for the components of low-component texture formats, especially single- and dual-channel formats.
    Allow not only R but G, B and A as well.
    And allow another letter for distinguishing colour components from other, miscellaneous components (maybe C for component or channel, maybe another letter).
    The rationale for this is that it makes errors much easier for programmers to detect, letting them see far more easily whether their code is using the components correctly.
    Do not mandate how the component names are used, to avoid them becoming a programming limitation instead of a more expressive way to write code.



    If you introduce an extension for async shader compilation, please put "async" somewhere in the name. And have both compilation and loading of shaders done asynchronously by the extension.
    Perhaps the name could be GL_ARB_parallel_async_shader_compile, used for OpenGL and also for OpenGL ES, rather than GL_ARB_parallel_shader_compile.
    If it provides async compilation, that is a big feature, and such a feature needs to be advertised in the name.
    Unifying with how async is done in Vulkan might be a good idea.
    It seems only minor adjustments would need to be made to the following extension:
    https://www.opengl.org/registry/spec...er_compile.txt
    Also have the specification provide plenty of information about how it interacts with a shader cache: what a shader cache is, what it does and allows, and mention shader caches a few more times in the description of the extension.
    Do make sure there is good information about what async shader compilation and loading allow, especially in reducing lag spikes: increasing predictability and performance while reducing lag and stutter.


    Do NOT put in features from Vulkan YET.
    (This does not apply to compatibility/interoperability between OpenGL and Vulkan contexts.)
    It's too early, for several reasons:
    - apparently a new Vulkan release this summer
    - the Vulkan spec churn (a new documentation release every week)
    - the resulting spec churn from the new Vulkan release this summer
    - the need to get feedback from developers about desired feature sets
    Vulkan really is not ready yet to base individual OpenGL features on.
    Once more time has passed it will be: once the documentation becomes somewhat more stable (maybe as early as next year, 2017), once Vulkan's features have crystallized into feature sets, and once the new Vulkan release has happened.
    After those things have happened, it will be the right time to start cross-pollinating features between the two APIs.
    Also, don't put in SPIR-V when there is a new release coming up this summer.
    It makes little sense to start copying features between the APIs now, especially since Vulkan will get feedback on which features developers want through the process of determining feature sets. Knowing which features are popular will let the spec writers at Khronos choose optimally which features to copy to other APIs.
    Last edited by Gedolo2; 05-23-2016 at 08:23 AM. Reason: Added async shader compilation name suggestion

  2. #2 · Senior Member OpenGL Lord · Joined May 2009 · 5,924 posts
    Quote: Add OpenGL ES 3.2 context creation functionality to OpenGL 4.6 core.
    OpenGL doesn't define any "context creation functionality". So it's not clear what that would mean.

    But in any case, you can use the EXT_create_context_es_profile extension to create any version of OpenGL ES contexts, where supported.
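    As a rough sketch (assuming a GLX stack whose glxext.h defines the EXT tokens and a glXCreateContextAttribsARB pointer obtained beforehand; the helper name is illustrative), requesting an ES 3.2 context looks like this:

    ```cpp
    // Sketch: request an OpenGL ES 3.2 context through GLX_EXT_create_context_es_profile.
    // Assumes `dpy`, `fbconfig` and the glXCreateContextAttribsARB pointer were
    // obtained elsewhere and the extension is advertised.
    #include <GL/glx.h>
    #include <GL/glxext.h>

    GLXContext createEsContext(Display* dpy, GLXFBConfig fbconfig,
                               PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB)
    {
        const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
            GLX_CONTEXT_MINOR_VERSION_ARB, 2,
            // Ask for an ES profile instead of a desktop core/compatibility profile.
            GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_ES_PROFILE_BIT_EXT,
            None
        };
        return glXCreateContextAttribsARB(dpy, fbconfig, /*shareList=*/nullptr,
                                          /*direct=*/True, attribs);
    }
    ```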

    Quote: Make OpenGL a superset of OpenGL ES again.
    No version of desktop OpenGL was ever a superset of any version of OpenGL ES. There was always code you could write in ES that would not do the same thing on desktop GL.

    Quote: Make ASTC support mandatory in core OpenGL
    That's not practical. ASTC is only supported by a very small set of hardware. Unless you want nobody to implement GL 4.6.

    ASTC is not a real thing yet.

    Quote: S3TC optional (or, what I would like to see even more: deprecate/remove S3TC).
    First, good news: S3TC was never adopted into core OpenGL. It has always been an extension.

    FYI: RGTC isn't S3TC. It's similar to it, but it doesn't have the patent issues. Which is why RGTC is core and S3TC is not.

    Second, even if it was mandatory, why get rid of perfectly valid, functional, and useful technology? It's not like IHVs will be ripping out support for it from their texture fetch units. Not so long as applications still use it.

    Quote: Put shader draw parameters in core.
    Intel doesn't support it. It would be better to just bring in the `gl_InstanceIndex` functionality from khr_vk_glsl. That's the most important part of draw parameters that OpenGL doesn't support, and it's something we know Intel can support.
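    A minimal sketch of what that would look like in desktop GLSL today, assuming GL_ARB_shader_draw_parameters is available (the variable names are illustrative):

    ```cpp
    // Sketch: approximate Vulkan-style gl_InstanceIndex in desktop GLSL.
    // Vulkan's gl_InstanceIndex already includes the draw's base instance;
    // desktop gl_InstanceID does not, so the base instance is added back here.
    const char* vertexShaderSource = R"(
    #version 450
    #extension GL_ARB_shader_draw_parameters : require
    layout(location = 0) in vec4 position;
    void main()
    {
        int instanceIndex = gl_InstanceID + gl_BaseInstanceARB;
        gl_Position = position + vec4(float(instanceIndex), 0.0, 0.0, 0.0);
    }
    )";
    ```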

    Quote: And have both compilation and loading of shaders done asynchronously by the extension.
    What exactly does that mean? Loading shader code is the job of the application, not OpenGL. It can't make that asynchronous.

    Quote: Unifying with how async is done in Vulkan might be a good idea.
    That would be the opposite of that extension. In Vulkan, there is no asynchronous shader compilation support. What there is in Vulkan are two things:

    1. When you call `vkCreateShaderModule`, you are guaranteed that the compilation is finished (successfully or with failure) by the time it returns. Similarly, when you call `vkCreateGraphicsPipelines`, you are guaranteed that the compilation is finished (successfully or with failure) by the time it returns.

    2. Both of those calls are fully reentrant. You can call them on the same `VkDevice` from multiple threads. You can even have multiple threads all using the same pipeline cache without synchronization.

    Vulkan doesn't make shader compilation parallel or asynchronous (FYI: those words mean the same thing in this context). It simply provides you with the tools to compile shaders asynchronously.
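    A minimal sketch of that model, assuming the device, pipeline cache and pipeline create infos were set up elsewhere (the function and variable names are illustrative):

    ```cpp
    // Sketch: several threads build pipelines through the same VkDevice and
    // VkPipelineCache. Each call returns only once its compilation is finished,
    // but the calls may run concurrently without any application-side locking.
    #include <vulkan/vulkan.h>
    #include <thread>
    #include <vector>

    void buildPipelinesInParallel(VkDevice device, VkPipelineCache cache,
                                  const std::vector<VkGraphicsPipelineCreateInfo>& infos,
                                  std::vector<VkPipeline>& outPipelines)
    {
        outPipelines.resize(infos.size());
        std::vector<std::thread> workers;
        for (size_t i = 0; i < infos.size(); ++i) {
            workers.emplace_back([&, i] {
                vkCreateGraphicsPipelines(device, cache, 1, &infos[i],
                                          nullptr, &outPipelines[i]);
            });
        }
        for (auto& t : workers) t.join();
    }
    ```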

    By contrast, parallel_shader_compile provides you with the tools to recognize that the OpenGL implementation may compile shaders in parallel, and it gives you the tools to stop interfering with that process (where "interfering" means asking whether a compile succeeded before it has finished, which forces the implementation to finish it immediately).

    It's two different models for two very different APIs. In one case, the API is asynchronous; in the other case, the API is reentrant.
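    A minimal sketch of the ARB_parallel_shader_compile pattern, assuming the extension is supported and a loader exposes its entry point and enum (the helper name is illustrative):

    ```cpp
    #include <vector>

    // Sketch: kick off compiles, keep working, and poll GL_COMPLETION_STATUS_ARB
    // instead of immediately asking for GL_COMPILE_STATUS (which would block
    // until the compile has finished).
    void compileWithoutStalling(const std::vector<GLuint>& shaders)
    {
        glMaxShaderCompilerThreadsARB(0xFFFFFFFF);  // let the driver pick its thread count

        for (GLuint s : shaders)
            glCompileShader(s);  // may return before compilation has finished

        for (GLuint s : shaders) {
            GLint done = GL_FALSE;
            while (done == GL_FALSE) {
                // In a real application, interleave other work instead of spinning.
                glGetShaderiv(s, GL_COMPLETION_STATUS_ARB, &done);
            }
            GLint ok = GL_FALSE;
            glGetShaderiv(s, GL_COMPILE_STATUS, &ok);  // no stall once completion is reported
        }
    }
    ```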

    Quote: Also have the specification provide plenty of information about how it interacts with a shader cache: what a shader cache is, what it does and allows, and mention shader caches a few more times in the description of the extension.
    That is not how a specification works. A specification defines behavior, not how it gets implemented.

    Vulkan talks about a pipeline cache because it is an explicit object which is part of the Vulkan system. It's part of the API; it can't not talk about it.

    OpenGL has no similar construct. If the implementation uses a cache when compiling shaders, that's not something OpenGL can explain, since it does not affect the behavior of the system or its interface. It only would affect performance.

    It's an implementation detail in OpenGL.
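    For contrast, a minimal sketch of how explicit the cache is in Vulkan (assuming a valid device and cache; the path handling and names are illustrative):

    ```cpp
    // Sketch: the pipeline cache is an API object, so its contents can be read
    // back and saved so that a later run can pre-warm compilation.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    void savePipelineCache(VkDevice device, VkPipelineCache cache, const char* path)
    {
        size_t size = 0;
        vkGetPipelineCacheData(device, cache, &size, nullptr);     // query the size
        std::vector<char> blob(size);
        vkGetPipelineCacheData(device, cache, &size, blob.data()); // fetch the data

        if (FILE* f = std::fopen(path, "wb")) {
            std::fwrite(blob.data(), 1, size, f);
            std::fclose(f);
        }
    }
    ```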

    Quote: apparently a new Vulkan release this summer
    From where have you heard of this summer release? Is it scheduled for SIGGRAPH?

  3. #3 · Senior Member OpenGL Lord · Joined May 2009 · 5,924 posts
    Quote Originally Posted by Alfonse Reinheart:
    That's not practical. ASTC is only supported by a very small set of hardware. Unless you want nobody to implement GL 4.6.
    To add to this, the Vulkan database shows that the only desktop hardware that supports ASTC is Intel. Even for NVIDIA, only their mobile Tegra line supports ASTC. This could be due to immature Vulkan drivers, but it does match up with the OpenGL support.

    So while ASTC may be the future, it is definitely not the present.

  4. #4 · Senior Member OpenGL Pro · Joined Jan 2007 · 1,714 posts
    It's also worth noting that this is what OpenGL specs used to do in the past: define a software interface with little or no consideration of how hardware can support it (or even whether hardware supports it). That approach manifestly failed; the upshot was that OpenGL implementations tended to end up with functionality that was software-emulated, and whether that was the case was not queryable, so you could find yourself rudely thrown back to software emulation and single-digit framerates. That's OK if you're in a scenario where "everything must work and performance is secondary", but that's not the case for everybody, and those for whom it wasn't were poorly served by OpenGL.

    ASTC is certainly feasible if all of the hardware vendors come onboard and implement support in conjunction with a future evolution of the spec. But that should be a requirement - the spec cannot evolve in isolation.
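    In practice that means an application should query support before relying on it; a minimal sketch (the helper name is illustrative):

    ```cpp
    #include <cstring>

    // Sketch: check the GL 3.0+ extension list before assuming a feature such as
    // GL_KHR_texture_compression_astc_ldr is really there, rather than risking a
    // silent fallback.
    bool hasExtension(const char* name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }
    // e.g. if (!hasExtension("GL_KHR_texture_compression_astc_ldr")) fall back to RGTC/BC formats
    ```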

    Far more interesting (and useful) would be to bring anisotropic filtering into core. The 20-year term from the priority date has now expired (http://www.google.com/patents/US6005582), so it should now be doable.
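    A minimal sketch of what that extension amounts to, assuming EXT_texture_filter_anisotropic is advertised (the helper name is illustrative):

    ```cpp
    // Sketch: query the maximum supported anisotropy and enable it on a texture.
    void enableAniso(GLuint texture)
    {
        GLfloat maxAniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
    }
    ```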
    Last edited by mhagain; 06-07-2016 at 04:02 PM.

  5. #5 · Senior Member OpenGL Lord · Joined May 2009 · 5,924 posts
    It should also be noted that other features are dealt with in the same way: things are only added to core when a broad base of hardware can support them. In the case of OpenGL 4.x, something only becomes core if all current 4.x hardware can support it. Features of note which do not have such a broad base of support are:

    * ARB_fragment_shader_interlock: No AMD hardware support.
    * KHR_blend_equation_advanced: No AMD hardware support.
    * ARB_bindless_texture: No Intel/pre-GCN AMD hardware support.
    * ARB_sparse_texture/buffer: No Intel/pre-GCN AMD hardware support.

    What OpenGL really lacks is Vulkan's "feature" concept, which is effectively functionality that is defined in the core specification but for which support is not required. OpenGL can express a form of this through implementation-defined limits. For example, image load/store and SSBOs are only required to be supported in fragment and compute shaders; other stages can signal support by exposing non-zero limits.

    But features like the above can't really be expressed as "limits". The best way OpenGL has to express such optional features is as ARB/KHR extensions. And there is nothing wrong with using an extension, whether conditionally or by relying on it outright.
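    A minimal sketch of the "limits as feature flags" idea, assuming a GL 4.3+ context (the helper name is illustrative):

    ```cpp
    // Sketch: SSBO support in the vertex stage is signalled by a non-zero
    // per-stage limit rather than by a separate feature flag.
    bool vertexStageHasSsbo()
    {
        GLint blocks = 0;
        glGetIntegerv(GL_MAX_VERTEX_SHADER_STORAGE_BLOCKS, &blocks);
        return blocks > 0;
    }
    ```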

  6. #6 · Junior Member Newbie · Joined Dec 2015 · 13 posts
    Maybe the best solution to these problems would be a new OpenGL 5.0. (This has already happened in the past: OpenGL 1.X -> OpenGL 2.0, OpenGL 3.X -> OpenGL 4.0.)
    It would draw a distinction between the newer generation of GPU hardware and older ones, and it would include support for:
    1) ASTC compression,
    2) a standard binary format for shaders and programs,
    3) bindless textures and buffers (see the sketch at the end of this post),
    4) some kind of "GL_NV_command_list",
    5) a good implementation of GL_KHR_no_error,
    6) and somewhat broader multithreading support.
    IMHO this would push OpenGL forward, and it could still be easy to learn (compared to Vulkan) while being as efficient as Vulkan.
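    A minimal sketch of item 3, assuming hardware that advertises GL_ARB_bindless_texture (the helper name is illustrative):

    ```cpp
    // Sketch: obtain a bindless handle and make it resident so shaders can
    // sample through it without binding the texture to a unit.
    GLuint64 makeBindlessHandle(GLuint texture)
    {
        GLuint64 handle = glGetTextureHandleARB(texture);
        glMakeTextureHandleResidentARB(handle);
        return handle;  // hand this to the shader, e.g. via a uniform or an SSBO
    }
    ```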

  7. #7 · Intern Contributor · Joined Nov 2013 · 50 posts
    Quote Originally Posted by Alfonse Reinheart:

    Intel doesn't support it. It would be better to just bring in the `gl_InstanceIndex` functionality from khr_vk_glsl. That's the most important part of draw parameters that OpenGL doesn't support, and it's something we know Intel can support.
    First and foremost, thanks for bringing feedback with your constructive criticism and insight into hardware.

    About only bringing in gl_InstanceIndex: it would be a great discrete jump in functionality for an OpenGL release if the whole shader draw parameters extension can't be added yet.

    Quote Originally Posted by Alfonse Reinheart:

    From where have you heard of this summer release? Is it scheduled for SIGGRAPH?
    The summer release rumours were mentioned in an article on phoronix.com, which also mentions a SIGGRAPH timeslot that lacks a subject description:
    New Vulkan Slides; Wondering If "OpenGL 4.6" Will Be Out This Summer
    http://www.phoronix.com/scan.php?pag...ay-2016-Slides
    Last edited by Gedolo2; 06-09-2016 at 12:05 PM.
