
Official feedback on OpenGL 4.5 thread



Khronos_webmaster
08-11-2014, 05:35 AM
The Khronos Group, an open consortium of leading hardware and software companies, today announced growing industry support for the OpenGL family of 3D standards that are advancing the visual experience for more than two billion mobile devices and PCs sold each year. OpenGL, OpenGL ES and WebGL are the world’s most widely deployed APIs that between them provide portable access to graphics and compute capabilities across multiple platforms, including Android, iOS, Linux, OS X, Windows and the Web.

OpenGL 4.5 Specification Released
Khronos publicly released the OpenGL 4.5 specification today, bringing the very latest functionality to the industry’s most advanced 3D graphics API while maintaining full backwards compatibility, enabling applications to incrementally use new features. The full specification and reference materials are available for immediate download from the OpenGL Registry. New functionality in the core OpenGL 4.5 specification includes:

Direct State Access (DSA): object accessors enable state to be queried and modified without binding objects to contexts, for increased application and middleware efficiency and flexibility (a short illustrative sketch follows at the end of this post);

Flush Control: applications can control flushing of pending commands before context switching – enabling high-performance multithreaded applications;

Robustness: providing a secure platform for applications such as WebGL browsers, including preventing a GPU reset affecting any other running applications;

OpenGL ES 3.1 API and shader compatibility: to enable the easy development and execution of the latest OpenGL ES applications on desktop systems;

DX11 emulation features: for easier porting of applications between OpenGL and Direct3D.

OpenGL Registry (https://www.opengl.org/registry/)
OpenGL 4.5 Reference Card (http://www.khronos.org/files/opengl45-quick-reference-card.pdf)
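
For readers who want to see what the DSA style looks like in practice, here is a minimal sketch (not part of the announcement itself): a buffer and a texture are created and filled purely by name, with no binds. The size, data and pixel variables are placeholders.

GLuint buf, tex;

glCreateBuffers(1, &buf);                       /* name and object created together */
glNamedBufferStorage(buf, dataSize, data, GL_DYNAMIC_STORAGE_BIT);

glCreateTextures(GL_TEXTURE_2D, 1, &tex);       /* texture is typed at creation */
glTextureStorage2D(tex, 1, GL_RGBA8, width, height);
glTextureSubImage2D(tex, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);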

thokra
08-12-2014, 08:54 AM
Direct State Access (DSA)

I can't believe it ... How awesome is that?

Closed
08-13-2014, 12:18 AM
VertexArrayElementBuffer(uint vaobj, uint buffer)

It would be much better to have an "intptr offset" as a third parameter, like in VertexArrayVertexBuffer. Yes, I understand that it is possible to offset into the element buffer using "const void *indices" and "base vertex" in the Draw* commands, but that is a very convoluted, non-native solution. For example, I have a 128 MB uber-buffer where I store all my VBs and IBs. In that case draw-call management and debugging are really difficult because I have very long offsets. Another example: I store UINT IBs, USHORT IBs and all VBs in one uber-buffer, so I can damage my brain computing offsets for indices in that buffer, where the UINT and USHORT IBs can be in random order.
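
For comparison, a rough sketch of the workaround described above, assuming the VAO's element buffer is the shared uber-buffer; vao, indexCount and the offsets are placeholders:

GLintptr ibByteOffset = 96 * 1024 * 1024;   /* where this mesh's GLuint indices start in the uber-buffer */
GLint    vbBaseVertex = 200000;             /* first vertex of this mesh in the shared vertex store */

glBindVertexArray(vao);
glDrawElementsBaseVertex(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                         (const void *)ibByteOffset, vbBaseVertex);

With a hypothetical "intptr offset" on VertexArrayElementBuffer, that byte offset could live in the VAO instead of being repeated at every draw call, which is the convenience being asked for.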

Closed
08-13-2014, 01:43 AM
Typo in GL_ARB_direct_state_access: Example 3 - Creating a vertex array object without polluting the OpenGL states

in "// Direct State Access"
Line 3147: glEnableVertexAttribArray should be glEnableVertexArrayAttrib
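
For illustration, a small sketch of the DSA-style setup that example covers, using the corrected entry point; the buffer names (vbo, ibo) and the attribute layout are placeholders:

GLuint vao;
glCreateVertexArrays(1, &vao);

glVertexArrayVertexBuffer(vao, 0, vbo, 0, 8 * sizeof(float));   /* binding index 0 */
glVertexArrayElementBuffer(vao, ibo);

glEnableVertexArrayAttrib(vao, 0);                              /* not glEnableVertexAttribArray */
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);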

mlfarrell
08-13-2014, 02:01 PM
I have a question regarding the new NG API. I know the API itself will break backwards compatibility, but will this also be the case with GLSL? Will GLSL shaders still run on the new API?

mlfarrell
08-13-2014, 02:07 PM
I was wondering whether the "next generation" API will retain any compatibility with the GLSL shading language?

Shinta
08-19-2014, 02:41 AM
Hi,

there seem to be a few bugs in the 4.5 parts of gl.xml, glcorearb.h, glext.h and the 'OpenGL 4 Reference Pages'. The 'size' argument of several functions differs (GLsizei instead of GLsizeiptr) from what the standard demands, e.g. glNamedBufferStorage, glNamedBufferData, glCopyNamedBufferSubData, glTransformFeedbackBufferRange, glClearNamedBufferSubData, glGetNamedBufferSubData, ...
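
To illustrate why the distinction matters (a hedged sketch, with placeholder flags): GLsizei is a 32-bit int, while GLsizeiptr is pointer-sized, so only the latter can describe allocations beyond 2 GiB.

GLuint big;
glCreateBuffers(1, &big);

GLsizeiptr bigSize = (GLsizeiptr)3 * 1024 * 1024 * 1024;   /* 3 GiB; does not fit in a GLsizei */
glNamedBufferStorage(big, bigSize, NULL, GL_DYNAMIC_STORAGE_BIT);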

hth,
Shinta

H. Guijt
09-17-2014, 03:57 PM
I'm rather surprised to see the complete lack of reactions under this announcement. Previous OpenGL versions were greeted by a flurry of messages, people going through the spec and pointing out likes and dislikes. This version brings the long-requested feature of DSA - and yet nobody seems to care. How come?

Chris_F
09-20-2014, 01:52 AM
I'm rather surprised to see the complete lack of reactions under this announcement. Previous OpenGL versions were greeted by a flurry of messages, people going through the spec and pointing out likes and dislikes. This version brings the long-requested feature of DSA - and yet nobody seems to care. How come?

Well, DSA was abandoned for years, and now out of nowhere it has suddenly been revived. At the exact same time it is announced that OpenGL is going to be rebuilt from the ground up. This, of course, isn't the first time we've heard that story. Frankly, I think people don't know what to expect at this point.

mhagain
09-20-2014, 04:15 AM
I'm rather surprised to see the complete lack of reactions under this announcement. Previous OpenGL versions were greeted by a flurry of messages, people going through the spec and pointing out likes and dislikes. This version brings the long-requested feature of DSA - and yet nobody seems to care. How come?

I suspect that most people just bit the bullet and used GL_EXT_direct_state_access anyway. I know that id Software did (link (https://github.com/id-Software/DOOM-3-BFG/blob/master/neo/renderer/RenderSystem_init.cpp#L373)) and Valve have a mention of it in one of their slides too. The fact that it was so widely supported made this something safe and easy enough to do.

It's also the case that many recent features either had a DSA API from the outset (sampler objects), didn't need one (vertex attrib binding), were specified in such a way that bind-to-draw has no (or minimal) interference with bind-to-edit/create (multi bind), or were already a part of DSA brought into core (the glProgramUniform calls). So full DSA had become largely unnecessary except in certain very specific cases.

Finally, we all know how long it takes both AMD and Intel to get new drivers supporting new GL_VERSIONs out the door. GL 4.5 and DSA mean nothing until we get comprehensive, widespread driver support; until then it's best suited to tech demos and private projects on a single vendor's hardware.
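
For context, a hedged sketch of the "multi bind" style mentioned above (GL 4.4's ARB_multi_bind): a whole range of texture units is (re)bound in one call, so editing never has to disturb the units one at a time. Storage setup is omitted for brevity.

GLuint textures[4], samplers[4];
glCreateTextures(GL_TEXTURE_2D, 4, textures);
glCreateSamplers(4, samplers);

glBindTextures(0, 4, textures);                 /* texture units 0..3 in one call */
glBindSamplers(0, 4, samplers);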

barthold
09-21-2014, 04:13 PM
I'm rather surprised to see the complete lack of reactions under this announcement. Previous OpenGL versions were greeted by a flurry of messages, people going through the spec and pointing out likes and dislikes. This version brings the long-requested feature of DSA - and yet nobody seems to care. How come?

That was, unfortunately, a bug in the forum. Our webmaster has fixed it now, obviously. Sorry about that!

barthold

H. Guijt
10-02-2014, 06:32 AM
Looking at the reference pages (http://www.opengl.org/sdk/docs/man4/), the following prototypes strike me as incorrect:

void glProgramUniform2ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLuint v1);

void glProgramUniform3ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLuint v2);

void glProgramUniform4ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLint v2, // surely GLuint?
GLuint v3);
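
For comparison, the declaration as the core specification and glcorearb.h give it (to the best of my reading), with every value parameter a GLuint; the 2ui and 3ui variants follow the same pattern:

void glProgramUniform4ui(GLuint program, GLint location,
                         GLuint v0, GLuint v1, GLuint v2, GLuint v3);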

Khronos_webmaster
10-02-2014, 06:43 AM
Looking at the reference pages (http://www.opengl.org/sdk/docs/man4/), the following prototypes strike me as incorrect:

void glProgramUniform2ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLuint v1);

void glProgramUniform3ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLuint v2);

void glProgramUniform4ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLint v2, // surely GLuint?
GLuint v3);

If you feel it's incorrect, please open a bug report at https://www.khronos.org/bugzilla

Thanks.

Chris_F
10-03-2014, 05:07 PM
It's clearly incorrect. I opened a report on the bugzilla about a mistake in the man pages and it has gone unnoticed for the better part of a year.

Khronos_webmaster
10-03-2014, 05:17 PM
It's clearly incorrect. I opened a report on the bugzilla about a mistake in the man pages and it has gone unnoticed for the better part of a year.

Thanks for bringing this to my attention, and I fully understand your frustration. Would you happen to have the URL of the bug? I'm forwarding these items directly to the working group.

Thanks for your extended patience.

Chris_F
10-03-2014, 06:16 PM
https://www.khronos.org/bugzilla/show_bug.cgi?id=1057

Yandersen
11-29-2014, 07:48 AM
Hello!

The 4.5 version has been out for a while now, but it seems NVIDIA is in no hurry to support it across a variety of its hardware. I have a GeForce GT 520 video card, and OpenGL 4.4 is still the highest version supported by the newest drivers (version 344.75). What bothers me most is that GL_ARB_clip_control is not supported (the other extensions brought by OpenGL 4.5 are awesome to have as well, but this particular one is the most critical for me). I know it shouldn't matter for a cross-platform library like OpenGL, but "just in case" I will mention that I am on 32-bit Windows XP. So what should I do? Wait some more (but how long, and is there any guarantee?), or buy a new video card?
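
For anyone wondering what the fuss is about, a hedged sketch of the reversed-Z setup that GL_ARB_clip_control enables (the matching projection-matrix change is not shown):

glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);   /* [0,1] clip volume instead of [-1,1] */
glClearDepth(0.0);                              /* "far" is now depth 0 */
glDepthFunc(GL_GREATER);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);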

Aleksandar
11-29-2014, 11:06 AM
This is certainly not the proper section for this kind of question. Everything driver-specific should go in the OpenGL drivers section.

OpenGL 4.5 is supported in NVIDIA beta drivers for Windows (since you are using XP) ver. 340.65, 340.76 and 340.82.
GL_ARB_clip_control was already supported in 340.65 (I didn't test the later ones). If you want to play with beta drivers, download and try one of the versions mentioned above.

The latest release drivers still support "only" OpenGL 4.4.

Yandersen
11-29-2014, 06:33 PM
Sorry for the off-topic post. Moved here (https://www.opengl.org/discussion_boards/showthread.php/185117-OpenGL-4-5-support-how-much-longer-to-wait-%21?p=1262745&viewfull=1#post1262745).

Alfonse Reinheart
12-10-2014, 03:52 PM
The Third Annual Unofficial OpenGL Feature Awards!

And you thought I'd forgotten about you ;) On to the awards!

We Did What We Said We Were Gonna Award:

ARB_direct_state_access

I'm not talking about the meat of the extension. I'm talking about the part of it that's actually new: the fact that we finally get a function that merges name creation and object creation. One of the sillier bits of OpenGL was making these separate, but that was a consequence of an even sillier bit of OpenGL: letting users decide on object names for themselves.

Long's Peak promised a new object creation paradigm. And while LP was going to make virtually every object immutable, ARB_DSA builds a new object creation system that makes name and object creation synonymous. Just like sampler objects.

Which means that we finally have what Long's Peak promised us (well, most of it). Now, isn't OpenGL 4.5 such a much cleaner API for it? With our glorious Long's Peak API, there are only two ways to do the same thing now. That's progress.

And it only took them 8 years from the initial Long's Peak announcement for us to get it ;)
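
A rough before/after of the object-creation change being praised here, with placeholder sizes; the old path only materializes the object on first bind, the new one creates it fully formed:

GLuint tex_old;
glGenTextures(1, &tex_old);
glBindTexture(GL_TEXTURE_2D, tex_old);          /* the object actually comes into being here */

GLuint tex_new;
glCreateTextures(GL_TEXTURE_2D, 1, &tex_new);   /* name and object created together, already typed */
glTextureStorage2D(tex_new, 1, GL_RGBA8, 256, 256);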

Two Little Mistakes Award:

ARB_direct_state_access

I hate to quibble about naming conventions and the like. But it truly amazes me how the ARB could debate the naming issue as extensively as they appear to have done, judging from the issues section. And with all of that debate and all of the possibilities before them, they still chose the worst possible alternative.

Couldn't you guys have just agreed to flip a coin, one naming convention or the other? Sure, some APIs would have unwieldy names. I certainly don't care much for the idea of having glNamedCompressedTexSubImage1D or glRenderbufferObjectStorageMultisample. But at least it would have been a convention; the unwieldiness of some function names would have been defensible by convention. With your way, not only do you have unwieldy names (glInvalidateNamedFramebufferSubData), you can't even justify them by citing a convention.

Also, the difference between "Named" and "Object" is exactly one character. The difference between "Named" and "exture" is exactly one character. The difference between "Named" and "Array" is zero characters.

So yeah, I don't see how the non-"Named" APIs are so much better than the "Named" ones that they had to break convention.

Also: glNamedBufferData. What were you thinking with that one? glBufferStorage made glBufferData completely obsolete. Everyone should always be creating immutable storage buffers. And you did it right for textures; you didn't allow people to use the new API to create non-immutable textures. So why did you think this was a good idea?

Tail Wagging The Dog Award:

ARB_ES3_1_Compatibility

OpenGL ES exposed hardware features, legitimate hardware features, that GL 4.4 did not. That's something of an indictment of the ARB: they somehow missed having imageAtomicExchange work on single-channel f32 images ever since 4.2, which was in fact three years ago.

Well, hindsight is always 20/20. Then again, very few people know that Prometheus, the Greek Titan of foresight, had a brother named Epimetheus (https://en.wikipedia.org/wiki/Epimetheus_%28mythology%29), the Titan of hindsight. There's a reason there aren't great tales of his exploits.

Let's Make Geometry Shaders Even More Useless Award:

ARB_cull_distance

GS's were a terrible idea. Originally envisioned as a means of tessellation, it turns out that they were terrible at that. So terrible in fact that they had to create two entirely new programmable stages and a fixed-function stage specifically to make tessellation worthwhile.

But that was OK, because GS's gave developers the ability to write arbitrary data to buffer objects via transform feedback. So you could issue "rendering commands" that didn't render, but were used to do things like frustum culling and the like. Oh, but wait: 4.3 gave us compute shaders to make that completely irrelevant. CS's can do all of that, and without the nonsensical faux rendering command.

But even that was OK. Because GS's could still do something that no other shader stage could: cull triangles. Yes, the TCS could cull patches (little-known fact: if any outer tessellation level used by the TES is zero, then the patch is culled), but it could only cull whole patches at a time. It took a GS to cull things at the triangle level.

No longer. (https://www.opengl.org/registry/specs/ARB/cull_distance.txt)

ARB, if you're going to kill off GS's, just do it already. Give us AMD_vertex_shader_layer/viewport_index (obviously with the ability to set them from the TES too). That would completely nullify any point to that worthless, poorly-named, Microsoftian addition to the rendering pipeline. Or if we can't have that, at least give us NV_geometry_shader_passthrough, so that GS's can be focused on the one useful thing they can still do.

Where's The Beef Award:

OpenGL 4.5

Did ARB_DSA really deserve a full point release all to itself? Because the features exposed in 4.5 are pretty scant. That's not to say that nothing of substance was released. I'm sure people will find uses for primitive culling, and ES 3.1 compatibility is important (particularly since it had more features than GL 4.4, and we can't have that). But very little of substance was actually standardized. Especially considering that there is plenty of functionality available, across hardware vendors, that is worth standardizing.

Like sparse_textures/buffers. And so forth.

I understand that you kinda want GL 4.x to be implementable on all D3D 11-class hardware. But why exactly is that so important? Especially now.

4.5's features compared to 4.4 were minor, bordering on non-existent. Plus, 4.4 was quite complete in terms of D3D 11.2-level features; there wasn't much we were missing. So why not simply let the "backwards compatibility extensions" (the ones without ARB suffixes) allow hardware that couldn't do sparse stuff to implement what they could, and give us a 4.5 with some real meat?

We Need More Ways To Do Things Award:
Let's Rewrite Our API And Still Leave The Original Award:

ARB_direct_state_access

Because that's what OpenGL needs more of: ways to do something we can already do.

But at least it shows that the ARB understands timing. They know that the best time to introduce a feature that renders a healthy portion of the existing API superfluous... is right before you replace it all with an entirely new API.

That's the kind of timing that has placed the OpenGL API in the marketplace position it currently enjoys ;)

barthold
12-11-2014, 08:27 AM
Alfonse, welcome back. You haven't changed much :-) We missed you.

Barthold

Gedolo2
01-19-2015, 08:01 AM
Alfonse, glad you keep up and review the new release with your report and awards.

Am I reading your criticism of Geometry Shaders right, in that it seems to say Geometry Shaders are useless?
Are you considering Geometry Shaders obsolete and useless?
Would it be fine if they were deprecated in favor of other, newer ways to achieve the same things?
(Please do mention which other, newer ways and API functions those would be, to be complete, so other people can comment and make sure no use case is missed.)

(If they are, Geometry Shaders shouldn't be in OpenGL NG and should be deprecated from OpenGL 4.x.)

Alfonse Reinheart
01-19-2015, 08:59 AM
The only use case I can come up with for GS's, one that can't be solved with either AMD_vertex_shader_layer/viewport_index (again, assuming it could work with a TES) or NV_geometry_shader_passthrough, is rendering geometry to cubemaps. That is, projecting each primitive onto the six faces of a cube.

The benefit of a GS here is GS instancing, which allows multiple instances of a GS to operate on the same input primitive. There might be a way to stick similar instancing functionality into the VS or TES, but that would really depend on the hardware.


Would it be fine if they were deprecated in favor of other, newer ways to achieve the same things?

No. Deprecation and removal does not work. And they're not going to do it again.

mhagain
01-19-2015, 09:10 AM
I've used the GS stage to generate per-triangle normals on the fly for certain data sets; that's certainly been quite useful (it's still not as fast as just including the normal in your vertex format though, but I consider that a fault of the implementation or the hardware rather than of the concept of the GS stage).

Alfonse Reinheart
01-19-2015, 10:07 AM
I've used the GS stage to generate per-triangle normals on the fly for certain data sets

I'm curious about your need to do that on the GPU. If your data set was pre-computed, then surely the normals could be as well. And if your data set was computed on the GPU, then whatever process computed it could also have given it normals, yes?

In any case, it would be possible to do this with the TES. You'd just pass outer tessellation levels of 1, which effectively causes no tessellation. Granted, you would be invoking the tessellation primitive generator only for it to do no actual work, so I don't know how it would compare performance-wise. I would guess that the GS is faster on a 1:1 basis.

Which brings up an interesting GS vs. tessellation question. Is it faster to do point sprites in the GS than it is to use the TES to do them (one-vertex-patches, "tessellated" as quads)?

Dark Photon
01-19-2015, 06:16 PM
The only use case I can come up with for GS's, one that can't be solved with either AMD_vertex_shader_layer/viewport_index (again, assuming it could work with a TES) or NV_geometry_shader_passthrough, is rendering geometry to cubemaps.

How about: Using a geometry shader with multistream output to perform geometry instance culling, LOD selection, and LOD binning (emphasis on the binning here).

aqnuep blogged about this 4 years ago, and for quite a while now this has been implementable very efficiently with a geometry shader (no CPU-GPU sync).

Alfonse Reinheart
01-19-2015, 06:55 PM
I remember that. (http://rastergrid.com/blog/2010/10/gpu-based-dynamic-geometry-lod/) I even made an oblique reference to it in my original post (https://www.opengl.org/discussion_boards/showthread.php/184618-Official-feedback-on-OpenGL-4-5-thread?p=1262928&viewfull=1#post1262928) ("used to do things like frustum culling and the like"). But then I mentioned that you can do all of that just fine in a Compute Shader. A compute shader logically fits better, because you're not rendering; you're doing arbitrary computations. You don't have to pretend that you're doing vertex rendering and capturing primitives as output.

Compute shaders don't have to fit the output into a small set of bins equal to the number of streams; they can write drawing commands directly. So the CS version has more functional advantages as well.

Plus, the same hardware that can do Compute Shader operations can do indirect rendering, so you get even more of a performance boost with that.
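
A loose sketch of that compute path, with placeholder names (cullProgram, drawProgram, cmdBuf, vao, numInstances, maxDraws): the culling CS writes DrawArraysIndirectCommand records into a buffer, and the indirect draw consumes them with no CPU round trip.

glUseProgram(cullProgram);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, cmdBuf);   /* CS writes the draw commands here */
glDispatchCompute((numInstances + 63) / 64, 1, 1);
glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

glUseProgram(drawProgram);
glBindVertexArray(vao);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBuf);
glMultiDrawArraysIndirect(GL_TRIANGLES, NULL, maxDraws, 0);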

My point isn't that GS's are useless. It's that there is very little a GS can do that other things can't do at least as well.

Ilian Dinev
01-20-2015, 01:02 PM
I like its use for this:
http://www.humus.name/index.php?page=3D&ID=87
Note how well it antialiased the alpha-tested objects, too.

Alfonse Reinheart
01-20-2015, 01:45 PM
Interesting.

I came to realize that actually, with GS passthrough or vertex_shader_layer/viewport_index, you don't even need GS's to render the same primitive to multiple layers. Though you do have to use regular instancing to do it, which means that simultaneously using instancing for its intended purpose becomes rather difficult.

Given this, the valid use cases for GS's would seem to consist of just generating per-primitive parameters (as seen in the edge distance from Humus's site).

Gedolo2
02-18-2015, 05:36 AM
The only use case I can come up with for GS's, one that can't be solved with either AMD_vertex_shader_layer/viewport_index (again, assuming it could work with a TES) or NV_geometry_shader_passthrough, is rendering geometry to cubemaps. That is, projecting each primitive onto the six faces of a cube.

The benefit of a GS here is GS instancing, which allows multiple instances of a GS to operate on the same input primitive. There might be a way to stick similar instancing functionality into the VS or TES, but that would really depend on the hardware.



No. Deprecation and removal does not work. And they're not going to do it again.

This sure sparked an interesting discussion.
Humus's use of the geometry shader is very useful.

For the record, I wasn't actually going to suggest removing geometry shaders.

michaelheilmann
06-30-2015, 04:45 AM
https://www.opengl.org/sdk/docs/man/html/glBlendFuncSeparate.xhtml states that the parameter "srcRGB" "Specifies how the red, green, and blue blending factors are computed." However, I think the correct wording should be "Specifies how the red, green, and blue source blending factors are computed."
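
For context, a small usage sketch (typical non-premultiplied alpha-over blending): the first pair of factors applies to the RGB channels, the second pair to alpha.

glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   /* srcRGB, dstRGB */
                    GL_ONE, GL_ONE_MINUS_SRC_ALPHA);        /* srcAlpha, dstAlpha */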

Khronos_webmaster
06-30-2015, 08:58 AM
Please post all bugs in our bug tracker: https://www.khronos.org/bugzilla/