
View Full Version : Functions to deprecate. GL 4.2



V-man
05-26-2011, 02:09 PM
Do these "multi" versions of functions boost performance?
glMultiDrawArrays, glMultiDrawElements, glMultiDrawElementsBaseVertex.

From tests, it seems that they don't. One person even said that he works on drivers at NVIDIA and that they just turn the calls into the non-multi versions. There doesn't seem to be a GPU advantage.

There are too many ways to draw. Get rid of
glDrawElements, glDrawElementsIndirect, glDrawElementsBaseVertex, glDrawElementsInstancedBaseVertex, glDrawElementsOneInstance, glDrawElementsInstanced,
since there is already a "Range" version of these functions.
For example:
glDrawElements = glDrawRangeElements
I think we can keep the "Range" versions and dump the others.

Alfonse Reinheart
05-26-2011, 02:30 PM
Functions to deprecate

FYI: deprecation is deprecated. It was a failure. Proposing additions to the deprecation list will serve no purpose. Time to move on.

It would be better if the ARB just ended the charade and just said that compatibility was core again. Every single OpenGL extension specification treats it that way anyway.


From tests, it seems that they don't. One person even said that he works on drivers at NVIDIA and that they just turn the calls into the non-multi versions. There doesn't seem to be a GPU advantage.

By that logic, let's ditch the "Range" functions too. After all, on ATI hardware, these are just regular glDraw functions. There doesn't seem to be a GPU advantage to them.

Also, there's no such thing as glDrawRangeElementsInstanced. So I'm not sure how you plan to have instancing in your new "cleaner" API.

Lastly, this: "glDrawElementsOneInstance" is not an OpenGL function. It is a pseudo-function used to define the behavior of the other core OpenGL functions. The spec is quite clear on this:



The command
[glDrawElementsOneInstance]
does not exist...


It was "created" to clean up the core specification due to the removal of glArrayElement and immediate mode. It makes the language describing the behavior simpler, nothing more.

Groovounet
05-27-2011, 07:44 AM
Functions to deprecate

FYI: deprecation is deprecated. It was a failure. Proposing additions to the deprecation list will serve no purpose. Time to move on.

It would be better if the ARB just ended the charade and just said that compatibility was core again. Every single OpenGL extension specification treats it that way anyway.


I believe your statement is just wrong. NVIDIA supports the compatibility profile, but I don't think it's in anyone else's business interest to do so, and I rather believe the compatibility profile has failed. For example, can you list the features that work with display lists and the ones that don't?

One issue, however, is that the whole deprecation mechanism is a mess and doesn't seem to have been designed for multiple levels of deprecation.

Alfonse Reinheart
05-27-2011, 11:59 AM
I believe your statement is just wrong. NVIDIA supports the compatibility profile, but I don't think it's in anyone else's business interest to do so, and I rather believe the compatibility profile has failed.

OK, name a single OpenGL implementation that only implements core. Or name a single application that only works on core. If compatibility is still implemented everywhere, then nothing that deprecation was intended to accomplish has been successful.

As to whether it is in their business interest to support it, I'm guessing that ATI wants a slice of NVIDIA's high-end customers, and those customers are entirely dependent on compatibility.


For example, can you list the features that work with display lists and the ones that don't?

The OpenGL specification can.

kyle_
05-27-2011, 12:47 PM
I rather believe the compatibility profile has failed. For example, can you list the features that work with display lists and the ones that don't?
What do you mean?
As far as I know, pretty much every feature introduced to core is 'backported' to compatibility, so you can legally tessellate glRectf and such.

malexander
05-27-2011, 02:42 PM
OK, name a single OpenGL implementation that only implements core. Or name a single application that only works on core. If compatibility is still implemented everywhere, then nothing that deprecation was intended to accomplish has been successful.

The upcoming version of a certain fruit-oriented OS appears to only support the core profile if you want OpenGL 3.2, otherwise you're stuck with OpenGL 2.1 without many useful 3.x extensions. I'm hoping that will change, but given this company's track record of backwards compatibility, I doubt it.

DarkGKnight
05-27-2011, 06:55 PM
Do these "multi" versions of functions boost performance?
glMultiDrawArrays

I disagree, as I use glMultiDrawArrays to stitch disjoint Triangle Strips together. It has the advantage over NV_primitive_restart in that it doesn't require an index buffer. I could use immediate mode, but that has been deprecated in the Core Profile.

kyle_
05-28-2011, 02:45 AM
It has the advantage over NV_primitive_restart in that it doesn't require an index buffer. I could use immediate mode, but that has been deprecated in the Core Profile.
V-man isn't suggesting either of those features, but rather using glDrawArrays and looping over the needed set of primitives yourself.
The point is, the GL is very likely to implement glMultiDraw* with such a loop itself, so you don't gain much from these functions.

Ilian Dinev
05-28-2011, 02:47 AM
OK, name a single OpenGL implementation that only implements core. Or name a single application that only works on core.
Heaven Benchmark almost fits that. A trace: http://pastebin.com/dJhNmD79
Except for glPushAttrib and glColorMaskIndexedEXT. It's just a benchmark, though.

DarkGKnight
05-28-2011, 03:44 AM
V-man isn't suggesting either of those features, but rather using glDrawArrays and looping over the needed set of primitives yourself.
The point is, the GL is very likely to implement glMultiDraw* with such a loop itself, so you don't gain much from these functions.
I don't care how the driver implements it; I saw a performance improvement from using glMultiDrawArrays. I could probably try to 'unroll' it the way drivers implement it, but seeing that it works fine as is, I see no need to do that.

DarkGKnight
05-28-2011, 04:59 AM
The upcoming version of a certain fruit-oriented OS appears to only support the core profile if you want OpenGL 3.2,
That isn't a bad idea. Seeing that there was much fuss when the Lean and Mean version of OpenGL did not appear with OpenGL 3, I see no problem with an implementation exposing OpenGL 2.1 for old apps and the 3.2+ Core Profile for new apps. The only issue is that, unlike Direct3D, which ships a host of documentation and samples in its SDK, most OpenGL samples and documentation online reference deprecated features. I'm sure Apple will provide the relevant documentation for Lion developers.

kRogue
05-28-2011, 08:52 AM
If I remember correctly, the EGL spec from Khronos recommended that when bringing OpenGL (not OpenGL ES) to a new platform, you bring version 3.2 (or higher) with the core profile... but I cannot find the page and quote in the spec right now... erg.

malexander
05-28-2011, 02:37 PM
That isn't a bad idea. Seeing that there was much fuss when the Lean and Mean version of OpenGL did not appear with OpenGL 3, I see no problem with an implementation exposing OpenGL 2.1 for old apps and the 3.2+ Core Profile for new apps.

I certainly don't blame them for not wanting to waste time writing compatibility-context code when writing a new GL driver. I just find it unfortunate that there is only one deprecated-features extension, GL_ARB_compatibility, rather than one for each deprecated feature (like GL_ARB_display_list, GL_ARB_draw_primitive_quad, GL_ARB_wide_line_width). When the ARB announced that deprecated features removed from the core would become extensions, that made perfect sense. However, the all-or-nothing approach of GL_ARB_compatibility makes it unlikely that a vendor will implement any of these deprecated features unless they already had legacy GL driver code. And unfortunately, some of those deprecated features were quite useful.

Alfonse Reinheart
05-28-2011, 03:21 PM
GL_ARB_wide_line_width

Those are still around. That's why I always try to make a distinction between "deprecated" and "removed". Display lists were deprecated in 3.0 and removed in 3.1. Line widths greater than 1.0 were deprecated in 3.0, and remain deprecated in 4.1.

All deprecation means is that the functionality may be removed in the future. Not that it will be removed, and certainly not that it has been removed.


And unfortunately some of those deprecated features were quite useful.

I would much rather that they find better ways to provide those useful features than relying on compatibility extensions.

malexander
05-28-2011, 04:12 PM
Those are still around. That's why I always try to make a distinction between "deprecated" and "removed". Display lists were deprecated in 3.0 and removed in 3.1. Line widths greater than 1.0 were deprecated in 3.0, and remain deprecated in 4.1.

Ha, of course one of my examples had to be the sole exception to all the 3.0 deprecated features removed in 3.1 :)


I would much rather that they find better ways to provide those useful features than relying on compatibility extensions.

Agreed, better core features to replace them would be great. Such as a geometry shader accepting a variable number of vertices (up to a pre-defined maximum, like the pre-defined maximum output vertices) so I can tessellate polygons and quads myself, rather than rely on auto-tessellation of the now obsolete quads and polygons.

Alfonse Reinheart
05-28-2011, 04:31 PM
Such as a geometry shader accepting a variable number of vertices (up to a pre-defined maximum, like the pre-defined maximum output vertices) so I can tessellate polygons and quads myself, rather than rely on auto-tessellation of the now obsolete quads and polygons.

Geometry shaders perform terribly at tessellation, unless they're on hardware that already has the infrastructure for tessellation (ie: GL 4.x hardware). In which case you should be using a tessellation shader.

And you can always use lines-adjacency to simulate a quad input, and a tri-strip for a quad's output.

malexander
05-28-2011, 05:52 PM
Geometry shaders perform terribly at tessellation, unless they're on hardware that already has the infrastructure for tessellation (ie: GL 4.x hardware). In which case you should be using a tessellation shader.
A tessellation shader requires that all patches have the same number of vertices though, doesn't it? I'd like to be able to tessellate a stream of variable-vertex, possibly concave polygons w/primitive restarts into a triangle strip for each. It's for content creation, where models-in-progress aren't always in a nice, triangle or quad-only format. Rendering a bunch of uvec2 GL_POINTs with vertex offset/length information to index a TBO containing the actual vertices is one way around it, but that seems a bit hacky and likely won't perform well.


And you can always use lines-adjacency to simulate a quad input, and a tri-strip for a quad's output.
Neat trick. A little abusive to semantics, but it'll do :)

As I appear to be derailing the original intent of this post, I'll sign off now...

aqnuep
05-30-2011, 01:38 AM
Back to the original topic, I have one argument for keeping the MultiDraw* commands.

While currently there is little sense in using the MultiDraw* commands, as they are in fact implemented (at least in most cases) with a simple loop in the driver, this may change in the future: we may also see MultiDrawIndirect-style commands that source their parameters from buffers, in the same style as the functionality already provided by ARB_draw_indirect.

While maybe this won't change the fact that the classic MultiDraw* commands are implemented with a simple CPU-side loop in the driver, the specification would be more consistent if it had both direct and indirect versions of the same drawing functions.

malexander
06-06-2011, 04:49 PM
As if on cue:

http://www.opengl.org/registry/specs/AMD/multi_draw_indirect.txt

aqnuep
06-07-2011, 01:21 AM
I told you :)

However, what I'm still missing is the ability to source the primcount parameter from a buffer object as well. That should be the next step.

glfreak
06-10-2011, 11:49 AM
The ARB wanted a streamlined API, which resulted in removing several invaluable features such as wide line drawing, line/polygon stippling, display lists... etc.

Why epic fail? Because they introduced the compatibility profile, which means the old functionality is still needed ;)

So it's clear that their main goal was to "streamline" rather than improve the API.

Improve driver quality? Give me a break, with the compatibility profile added?

Epic fail!

Good luck!

RefleX
06-13-2011, 03:31 AM
I wonder what would have happened if they had split the compatibility and core profiles completely, with separate headers and libraries. Programmers who want to migrate would just need to change headers, link against different libraries, and remove deprecated code, which would show up as errors since it's not in the new headers. They would still have to deprecate features after that, but at least they'd be starting from a clean slate. That would have been a good choice in the long run.

Open to opinions?

Alfonse Reinheart
06-13-2011, 03:46 AM
separate headers and libraries

Headers are the responsibility of the user. Most OpenGL extension loaders generate their own headers anyway, so it wouldn't help. And there would be no point in having separate libraries, since they'd just be linking to the same code either way.

RefleX
06-13-2011, 04:26 AM
And there would be no point in having separate libraries, since they'd just be linking to the same code either way.

Yes, I forgot :(

OK, what if, say, GLEW decided to let you define GLEW_CORE_ONLY, which would block off any declaration of non-core features in the headers? By this I mean the compiler would not let you compile a program using deprecated features, because it would think they don't exist. That would make moving over to core a lot easier.

V-man
06-13-2011, 04:33 PM
Isn't that what GL3W does?
http://www.opengl.org/wiki/Extension_Loading_Library

przemo_li
06-14-2011, 01:02 PM
Why do you need lines with a bigger width?

And why can't you implement the missing features as separate libraries on top of core?

If Apple implements only core 3.2, I'll be against it! It should be 4.1 core only! ;)

Really, if Apple makes this move, developers will start to create core-only apps, and that will ease the pressure on others too.

And I don't get the idea that they can't start with core but __must__ go from 2.1 to compatibility. There are huge benefits to implementing only the core profile.

The only problem with the core profile is that Intel doesn't have a single GPU capable of doing OpenGL 3.2.

glfreak
06-15-2011, 08:22 AM
Why do you need lines with a bigger width?


To render an object with thicker polygon edges/wireframe. ;)


And why can't you implement the missing features as separate libraries on top of core?


No time. That's why we use a graphics library, not a driver :)


The only problem with the core profile is that Intel doesn't have a single GPU capable of doing OpenGL 3.2.

Or even 2.1 ;)