EXT_separate_shader_objects? (...and friends)

Anyone know anything about this one?

How about {GL,GLX}_NV_copy_image?

Nothing in the registry yet.

Appeared in NVidia's 190.18 beta drivers, along with ARB_compatibility, ARB_copy_buffer, and NV_parameter_buffer_object2 (the last three were formerly only in the 180.37.04/05 GL 3.0/3.1 beta-support branch).

Glad to see GL3.1 finally advertised on the mainline, as this means UBO support.

Well, it’s pretty easy to figure out even if you don’t know the exact details.

NV_parameter_buffer_object2 is an update to NV_parameter_buffer_object that probably adds a few things to make it more useful, likely so it can be accepted as an EXT later on.

ARB_compatibility and ARB_copy_buffer we already know: the first contains a lot of the deprecated functionality, and the latter copies buffers back and forth.

EXT_separate_shader_objects is most likely an upgrade to shader objects that lets you switch, for instance, just the fragment shader without switching all of them. It’s a welcome one, too, since we are now required to use shaders for everything, and that could mean a lot of shader switching, compiling, and so on.
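If that guess is right, swapping a single stage might look something like this sketch (speculative until the spec is in the registry; glCreateShaderProgramEXT and glUseShaderProgramEXT are the entry points the NVidia flavor of the extension is said to expose, and this obviously needs a live GL context):

```c
/* Sketch only -- needs a GL context.  glCreateShaderProgramEXT
 * compiles and links a single-stage program from source in one call. */
GLuint vs = glCreateShaderProgramEXT(GL_VERTEX_SHADER,   vsSource);
GLuint fs = glCreateShaderProgramEXT(GL_FRAGMENT_SHADER, fsSource);

glUseShaderProgramEXT(GL_VERTEX_SHADER,   vs);
glUseShaderProgramEXT(GL_FRAGMENT_SHADER, fs);

/* ...later, swap only the fragment stage, vertex stage untouched: */
glUseShaderProgramEXT(GL_FRAGMENT_SHADER, otherFs);
```

No full relink of a monolithic program object on every combination, which is exactly the win being speculated about here.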

I have no idea what NV_copy_image does, but if I had to speculate, it’s the texture/framebuffer version of ARB_copy_buffer.

Thing is, NV_parameter_buffer_object is NVidia’s pre-UBO functionality for UBOs (in NV_gpu_program4 assembly shaders), and what Cg allegedly uses behind-the-scenes for uniform BUFFERs when targeting the gpu* (G80+) profiles. A ++ version would likely still target NVidia-only assembly, seeing how the ARB has (sadly) paused the EXT/ARB assembly shader path (read: no precompiled shaders :frowning: ). So prob not pre-EXT prep here.

… that is, unless the ARB is resurrecting the assembly shader path :smiley: Hey, one can hope (only 9 days 'til the OpenGL BOF). Would be really nice to nuke that long “compiling/optimizing high-level language shaders” phase in our app and move it to an off-line pre-process where it should be (or at least cache the stupid things on disk after the first run). Feeding GL pre-compiled assembly would probably bring the time down radically, though a GL-ES OES_get_program_binary would be just fine too (even better for us). Good grief, at least support compiling these suckers on a different core, though that would be the lamest solution. If you’ve got either of the first two, you’ve got the third automatically.
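For reference, the GL-ES OES_get_program_binary flow mentioned above looks roughly like this (a sketch, assuming a successfully linked program and a live context; on ES the calls carry the OES suffix):

```c
/* Sketch: cache a linked program's binary on disk, then reload it on
 * the next run instead of recompiling (OES_get_program_binary style). */
GLint length = 0;
glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH_OES, &length);

void  *blob   = malloc(length);
GLenum format = 0;
glGetProgramBinaryOES(program, length, NULL, &format, blob);

/* ...write 'format' and 'blob' to disk.  On the next run: */
glProgramBinaryOES(program, format, blob, length);
free(blob);
```

The binary format is driver-specific, so the cache has to be invalidated on driver updates, but it still turns the long compile/optimize phase into a one-time cost.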

ARB_compatibility, ARB_copy_buffer

Yeah, no questions about those.

EXT_separate_shader_objects is most likely an upgrade to the shaders that allows them to switch for instance fragment shaders without switching all of them

Yep, that’s what I was thinking. Just haven’t tripped over an extension spec out there yet, which is why I asked.

Yeah, exactly what I was thinking. Don’t we already have this stuff in there? But maybe they see potential in it, or they wouldn’t have done it; there really is motivation to reach at least EXT status, or else people can’t really use it.
As for precompiled shaders, yeah, it’s a must down the line, with all the new shaders we all have to use.

It would be interesting if EXT_separate_shader_objects chose the path of doing away with the linking stage altogether and just allowed you to call
glBindShaderObject(GL_VERTEX_SHADER, VertexShader);
glBindShaderObject(GL_FRAGMENT_SHADER, FragmentShader);
to use them. If all the “in”s and “out”s line up, it works; otherwise it doesn’t and you get nothing.

That would certainly be a motivation to do something, and it would free up the usage of the compiling function, so you could write something like:
glCompileShader(VertexShader, 1, (const char **)&VertexShaderSource, NULL);
and whenever you like just extract the binaries from VertexShader, or
glLoadBinaryShader(VertexShader, VertexShaderBinary);
if you already have them.

But this is just wild speculation.

Since nVidia loves making so many extensions, can’t they make one for binary blob shaders? I would be interested in it even if it is an NV-only extension.

Won’t that require binding-semantics (“in vec4 myVarying3 : TEX3”) be introduced for fast coupling, though?

I don’t think it requires it, but it’s certainly an option.

Definitely. So long as they don’t defer shader final link/optimize until the draw call, like NVidia used to do pre-GeForce 8.

That would certainly be a motivation to do something, and it would free up the usage of the compiling function, so you could write something like:
glCompileShader(VertexShader, 1, (const char **)&VertexShaderSource, NULL);
and whenever you like just extract the binaries from VertexShader, or
glLoadBinaryShader(VertexShader, VertexShaderBinary);
if you already have them.

But this is just wild speculation.

…and a friendly nudge to the vendors/ARB to consider heading that direction (if they aren’t already). :wink:

I second that.

Perhaps, but I’m just fine with that. Letting GL assign uniform/varying slots is a feature I don’t want.

I think many of us have engine-global vs. material-local inputs, and we’d really, really like to bind the globals just once, to the same place, for all shaders. Having to continually rebind them just because we changed shader and we don’t know if GL moved them is a waste. NVidia’s Cg and assembly profiles (including their assembly UBOs, NV_parameter_buffer_object) have it right: let the developer tell you where to put it if he wants.

It doesn’t need to be TEXCOORD3, COLOR, etc. It could be ATTR0, ATTR1, ATTR2 (for new-style attribs) or generic SLOT0, SLOT1, SLOT2 (uniforms, varyings), the latter of which the driver is free to map to whatever. But ATTR0 or SLOT0 must be the same data location for every shader, so you can depend on it cross-shader.
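For vertex attributes, at least, core GL already lets you pin the slot yourself before linking, which gives exactly that cross-shader guarantee (sketch only; the attribute names are made up, and it assumes a GL context):

```c
/* Pin the same attribute slots for every program before linking, so
 * "ATTR0"/"ATTR1" mean the same data location across all shaders
 * (glBindAttribLocation is core GL 2.0). */
glBindAttribLocation(program, 0, "position");  /* ATTR0 */
glBindAttribLocation(program, 1, "normal");    /* ATTR1 */
glLinkProgram(program);
```

The open question is getting the same guarantee for uniforms and varyings, which is where the SLOT0/SLOT1 idea comes in.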

Perhaps, but I’m just fine with that. Letting GL assign uniform/varying slots is a feature I don’t want.
Imho, without manually specified slots, the deferred “shader final link/optimize” will happen.
My example used “TEX3” instead of ATTR3 simply because of the way the Cg-style GLSL compiler needs varyings to be specified.

Edit: ah, just now noticed that you already mentioned this by “But ATTR0 or SLOT0 must be the same data location for every shader, so you can depend on it cross-shader.”

Won’t that require binding-semantics (“in vec4 myVarying3 : TEX3”) be introduced for fast coupling, though?

If they do, the ARB will never approve it. And they should not.

The Program Environment method that the ARB was working on for Longs Peak was a far better solution than direct use of shader objects.

Related link courtesy of Eosie: NVIDIA ForceWare 190.15 Brings OpenGL 3.1 Support And New Extensions!

The specs for these three extensions are now available. We officially released support for them together with the OpenGL 3.2 beta drivers. See here:

http://developer.nvidia.com/object/opengl_3_driver.html

For a complete list of NVIDIA supported extensions:

http://developer.nvidia.com/object/nvidia_opengl_specs.html

Barthold
(with my NVIDIA hat on)

Two thumbs up on the separate shader object extension. Are you looking at a format object of sorts to tie VS input with attributes, or does VAO fit the bill here?

Agree that in a perfect world the LP solution would be grand, but this is the next best thing.