
View Full Version : Longs Peak Program Objects



Flavious
02-28-2007, 04:48 PM
From the newsletter:

In OpenGL 2.1 only one program object can be in use (bound) for rendering. If the application wants to replace both the vertex and fragment stage of the rendering pipeline with its own shaders, it needs to incorporate all shaders in that single program object. This is a fine model when there are only two programmable stages, but it starts to break down when the number of programmable stages increases because the number of possible combinations of stages, and therefore the number of program objects, increases. In OpenGL Longs Peak it will be possible to bind multiple program objects to be used for rendering. Each program object can contain only the shaders that make up a single programmable stage; either the vertex, geometry or fragment stage. However, it is still possible to create a program object that contains the shaders for more than one programmable stage.

Anyone care to expand or speculate on these ideas?

I'm thinking "set of compiled shaders that can be quickly linked at runtime", but then things get a bit fuzzy ;)

Cheers

Korval
02-28-2007, 05:01 PM
Anyone care to expand or speculate on these ideas?

No need for speculation.

All it's saying is that you can have fully linked programs that don't have to include all of the stages that you intend to use. So you can have a vertex program and a fragment program, both compiled and linked, and then you decide to use both when rendering something.

Before, you would have had to create a combined vertex&fragment program, one program that contains both stages. Now, you don't.

Flavious
02-28-2007, 07:35 PM
So we'd have something like this:


...
glUseProgram(vsSet);
glUseProgram(gsSet);
glUseProgram(fsSet);
...
SpecifyWhichShadersToApply();
DrawLotsOfStuffUsingThisSet();

Each of the shader sets is considered to be a valid program in its own right, in this model (total program validation presumably deferred to the next draw command). Am I right?

Where I get fuzzy is in how one would specify the particular {vs,gs,ps} from the active set that is intended for use, and how this selection might interact with uniform (block) updates.

I realize it's a bit early for specific details, but I just can't resist asking :)

Thanks,
Cheers

k_szczech
03-01-2007, 01:53 AM
Right now we link multiple fragment, geometry and vertex shaders into one program that gets executed - so more than one type of shader goes into a program.
From now on you will link multiple fragment shaders into one fragment program, and multiple vertex shaders into one vertex program - so only shaders of the same type can be linked together into one program.
You will not link fragment and vertex shaders into one program.
You can then enable/disable each fragment/vertex/geometry program separately.


The API is different thing. Note that

glUseProgram( vp );

is OK, because we know vp is a set of vertex shaders linked into one vertex program (they're already linked).
But this:

glUseProgram( 0 );

is no longer valid, because it says to unbind (bind the default) program, but does not tell which stage it refers to (fragment? geometry?).
Something like this would be OK:

glUseProgram(GL_DEFAULT_FRAGMENT_PROGRAM);

That GL_DEFAULT_FRAGMENT_PROGRAM would be defined as an external variable being a program object. So unbinding a program is actually binding the default program of that type.
Perhaps instead we would have glUnbindProgram( vp ) or glUnbindProgram(GL_VERTEX_PROGRAM) - it doesn't make much difference to me.

I have no clue how the new API will look, but I think the idea (not the API) will be more or less what I described.

Again - don't consider this to be a confirmed fact - it's just how I understand it.

elFarto
03-01-2007, 02:05 AM
The way I understand it is that a program object may contain 1 of each of the shader types (vertex, geometry, fragment). You can then do the following:

glUseProgram(programWithOnlyAVertexShader);
glUseProgram(programWithOnlyAFragmentShader);

And it'll work just as if the vertex and fragment shader were in the same program.

You could also do the following:

glUseProgram(programWithAVertexAndFragmentShader);
//..draw stuff
glUseProgram(programWithOnlyAGeometryShader);
//..draw more stuff

The first draw would just use the vertex and fragment program, and the second draw would use all 3.

The bit I'm not sure on is if 2 currently bound programs have matching shader types. I assume the last one to be bound is the one that will be used. E.g.

glUseProgram(programWithAVertexAndFragmentShader);
glUseProgram(programWithOnlyAFragmentShader);

In this example it would use the vertex shader from the first program, and the fragment shader from the second program.

Of course they'd probably change the semantics to bind/unbind, e.g.


glBindProgram();
glUnbindProgram();

Regards
elFarto

Overmind
03-01-2007, 02:22 AM
I don't think there will be a default program.


The bit I'm not sure on is if 2 currently bound programs have matching shader types.

I think that won't be allowed. It's a bit fuzzy on that point, but as I read it, you have two choices:

- link everything into a single program and use this
- link only shaders of the same stage into a program, and combine multiple programs

Zengar
03-01-2007, 03:31 AM
IMHO:
I think you can bind multiple programs, but only one 'main()' function for a shader of each type will be allowed. So it's pretty much like the current approach; just imagine that the bound programs will be linked automatically.

k_szczech
03-01-2007, 03:44 AM
I don't think there will be a default program.

Yes, you're right. I think the semantics "unbind" and "bind default" are equivalent in most implementations, but from the API's point of view fixed functionality still exists. There will probably be something like: glUnbind( <program_type> ).

k_szczech
03-01-2007, 03:56 AM
I think you can bind multiple programs, but only one 'main()' function (...) linked automatically

You can implement that already - if you want to use a set of shaders you can create a temporary program, attach, link, and destroy the program after you're done using it. I wouldn't say that doing it multiple times every frame is good for performance.
It's better to prepare such 'sets' of shaders we intend to use at init time and pre-link them, and that leads us back to the idea of programs.

The only difference now is that a program object is not for all stages but just for one stage. You can attach them freely with no performance loss, as a vertex shader does not have to be linked with a fragment shader. It never had to. :)

Overmind
03-01-2007, 04:07 AM
but from the API's point of view fixed functionality still exists

Are you sure? I don't think so ;)

Zengar
03-01-2007, 04:23 AM
Ah, sorry, I hadn't read it thoroughly... I missed the "Each program object can contain only the shaders that make up a single programmable stage" part :-/

k_szczech
03-01-2007, 04:48 AM
Are you sure?

No I'm not. I just assume that the "lean and mean" profile won't have FF, but there will be a backward compatible profile and it will allow some of the new stuff, right? I think separate programs for every stage are not in conflict with the existing model - it's just an extension of it.
If there will be absolutely no FF, do I have to create shaders for all stages to render anything, or is there some default behavior defined for, let's say, geometry shaders? If yes, then it's a default shader/FF or whatever you wish to call it.
If no, I'm gonna have to do the homework :)

Michael Gold
03-01-2007, 07:04 AM
I'll try to clear up the confusion.

There are two classes of linkage which occur with programs:

1) Intra-stage linking occurs when multiple shader modules are combined to form a single pipeline stage.

2) Inter-stage linking occurs when multiple stages are combined to form a complete pipeline.

In GL2, both forms of linkage occur when you call LinkProgram(). You bind the program with UseProgram(), and any stages not present in the program revert to fixed function.

In Longs Peak, Intra-stage linking always occurs during program creation. If multiple stages are present in the program, Inter-stage linking also occurs. However, you have the option of creating a pipeline from separate program objects, e.g. you might have a vertex program which you wish to use with multiple fragment programs. In GL2 you are required to create a separate program object for each combination of per-stage programs. In contrast, Longs Peak allows inter-stage linking when you bind the program objects. It may look something like:

glUsePrograms(GLprogram *programs, GLint count);

All required stages (currently vertex and fragment) must be represented by the list of programs provided. For efficiency, we currently require that the set of varyings passed between the stages be an exact match.

You still have the option of linking all stages into a single program object, in which case the varyings need not exactly match, provided that the outputs from one stage are a superset of the inputs to the next stage.

At this point, there are no plans to support fixed function in Longs Peak.

Overmind
03-01-2007, 07:24 AM
is there some default behavior for, let's say, geometry shaders defined?

No default behaviour, but the geometry shader is optional. That means you don't need one, not even a default implementation ;)

k_szczech
03-01-2007, 08:43 AM
Yep. Homework it is :D
Guess I'll lay low for a while, so Michael Gold and others won't have to spend time explaining the new API to me and can focus on actually developing it :)

It's a good thing to discuss the upcoming API with the community, but people not aware of some design decisions that have already been made (like me) can produce too much noise. Most of us just don't see the whole picture, and the truth is: whatever the new OpenGL will look like, I'll be damn happy about it :D

Flavious
03-01-2007, 09:29 AM
Thanks guys, much clearer now.

So just to recap, we would bind a list of programs, each of which contains only a single vertex or fragment shader?

How would uniform updates fit into this scenario? I suppose we'd need to update the uniforms for each program (vs or fs) independently, or together once used in a list? Of course currently we have our uniforms applied to these stages together as a group, so I'm a bit curious as to how this plays out in the proposal.


glUsePrograms(GLprogram *programs, GLint count);

Now that's an API. (What was I thinking :D )

Cheers

LarsMiddendorf
03-01-2007, 11:39 AM
In contrast, Longs Peak allows inter-stage linking when you bind the program objects. It may look something like:

glUsePrograms(GLprogram *programs, GLint count);

Would it be difficult to support intra-stage linking in a similar way?

Korval
03-01-2007, 12:20 PM
Would it be difficult to support intra-stage linking in a similar way?

Why would you want to? I mean, that's basically doing what we have now.

The purpose of having independent, fully linked fragment, vertex, and geometry programs is to have most of the linking work be done and make it performant to mix-and-match linked programs without having to go through a full relinking stage.

Once you get into having two partial programs combined, you're basically back to just the compile and link step, which we already have.

We're not losing the ability to compile multiple shaders of the same type and do intrastage linking. We're simply allowing people to mix and match stages together without the performance penalty that one would have when doing so under the current GL implementation.

Flavious
03-01-2007, 12:47 PM
Yes, the more I think about this, the more I like it! We get on-the-fly program construction from stages, and it only costs us a lightweight link/validation step.
(This seems like fragment linking in DX, only better ;) )

To have a stab at my own question, post glUsePrograms seems like the right place to specify uniforms, unless there's a better way to block/group uniforms with independent stages, to possibly reduce updates somehow (maybe I just need more coffee).

Cheers

LarsMiddendorf
03-01-2007, 01:01 PM
The purpose of having independent, fully linked fragment, vertex, and geometry programs is to have most of the linking work be done and make it performant to mix-and-match linked programs without having to go through a full relinking stage.
Yes and this would be also very useful for shader objects.

Once you get into having two partial programs combined, you're basically back to just the compile and link step, which we already have.

Is this really necessary? It would be acceptable if the driver cannot perform whole-program optimization between the shader objects.

Jan
03-01-2007, 01:02 PM
I think having a default-shader would be a nice thing. It makes life easier to get started.

However, this default shader could be supplied by a glu-function (eg. gluBindDefaultShader () ...). It could be a simple shader, that just transforms vertices and passes color and texture-coordinates to a fragment shader, which then does a simple texture-fetch and modulates it with the color. I think more is not needed, but it would be nice to have such a basic shader, just to be able to get an app up and running fast. By putting this functionality into a glu-function, one would keep the core clean.

Jan.

k_szczech
03-01-2007, 01:58 PM
which then does a simple texture-fetch and modulates it with the color

I see it rather this way: gluBindSimpleShader( red, green, blue, alpha, texture ); - binds the shader and/or sets uniform values for it. Texture is optional. That would certainly shorten the learning curve for those new to OpenGL. Otherwise they would have to learn the basics of shader programming just to make a 'Hello world' application.
As for me, even if such a function existed I wouldn't use it. I just don't like linking an additional library just to use a function or two from it. But that's just me. Others might like it.

knackered
03-01-2007, 03:02 PM
The layered mode is what was originally proposed for this fixed-function emulation - a full OpenGL2.0 implementation layered on top of the new API.
How much you can mix (or if at all) between layered mode and LongsPeak is another matter.
It's like the old situation with how much you mix Performer and OpenGL...Performer being analogous to layered mode - the high level makes assumptions that it's in total control of the low level states.

k_szczech
03-01-2007, 04:31 PM
The layered mode is what was originally proposed for this fixed-function emulation - a full OpenGL2.0 implementation layered on top of the new API.

But should a beginner learn OpenGL 2.0 first and then move to Longs Peak, or start his adventure with the new object/state model?
The first option means a long learning curve with an unnecessary stage. The second option means a long learning curve because shaders are required. Unless glu helps a bit.

Jan
03-02-2007, 01:37 AM
Good point.

One could have a layered mode to implement OpenGL 2.1 on top of LP, just for convenience, so that people's old programs still compile and run.

But for beginners, i would rather have a good utility library, that provides many helper-functions (immediate mode, default shaders, easy texture creation, extension loading(?), glTransform, glRotate, camera-matrix creation, default state-objects, ...) and would thus allow to get into OpenGL fast and easy. Many of these functions might be useful for advanced users, too (i still use gluPerspective and gluBuild2DMipmaps as a fallback).

That was the idea behind glu 15 years ago, and it was a really good one.

Jan.

knackered
03-02-2007, 03:27 AM
I find it hard to think of this new API without thinking about the layered mode/utility library alongside it. If people know that there will be 2 distinct layers, one with a standardised bare minimum to communicate with the hardware, the other with a standardised bare minimum to get a prototype application up and running layered on top of the low-level API, then I'm sure we'll hear more sensible suggestions.
For me, the IHV's should decide what functionality needs exposing (they know their hardware best), and then work with the developer community to discuss how that functionality should be exposed (they're the ones who'll have to use the API).
At the moment it seems like people are trying to dictate that high-level functionality be put into the low-level core, where it does not belong.

Flavious
03-02-2007, 10:30 AM
There is an honorable mention of a new GLU like layer in the newsletter, but few clues as to what it might look like.

To be honest, I hadn't really stopped to think of not being able to dive straight into drawing in the new model. Although it does sound as if there's the possibility of having both 2.1 and LP drawables current at once (or something to that effect). This seems like the best of both worlds.

Cheers

Edit: correction.

Michael Gold
03-02-2007, 11:46 AM
Although it does sound as if there's the possibility of having both 2.1 and LP contexts current at once (or something to that effect).

They will be separate contexts. They may both be bound to the same drawable (although only one context may be current at any time within a single thread). Thus you can render parts of the scene with legacy code and other parts with new code, and it should all mix and match - you just need to bind the appropriate context when switching between GL2 and LP.

Flavious
03-02-2007, 11:55 AM
Thanks, Michael. Drawable is what I meant ;)

So to rephrase, we can bind a legacy context, render to a common drawable (shared with a LP context), then bind a LP context, then render again to the same shared drawable?

This seems like just the ticket. The only potential caveat that I can see is the cost of the context switch; but if it's this or no compatibility at all, I'll take this.

Cheers

Forest Hale
03-19-2007, 11:39 PM
Originally posted by Flavious:
How would uniform updates fit into this scenario? I suppose we'd need to update the uniforms for each program (vs or fs) independently, or together once used in a list? Of course currently we have our uniforms applied to these stages together as a group, so I'm a bit curious as to how this plays out in the proposal.

Your question is actually more complex than it first seems...

In OpenGL 2.x to update a uniform you bind the program object, and call glUniform.

With the advent of glUsePrograms, multiple programs can be bound at once - does the glUniform call affect all that are currently bound?

Or does glUniform take a program object handle for the specific one (vs, gs, fs) you want to modify? (which would require validation of the program object handle on each glUniform call, the very thing glUseProgram was designed to avoid)

Or is there a new glEditProgram to select which program the glUniform call affects?

Korval
03-20-2007, 12:08 AM
With the advent of glUsePrograms there can be multiple bound at once, does the glUniform call affect all that are currently bound?

You're not thinking in terms of the full Longs Peak experience.

Under LP, there is no "glUniform" call anymore. Well, not to a program. The equivalent call takes a buffer object that was created for the purpose of holding uniform data. So it might look like:


glUniformfv(bufferObject, "glLightDir", direction, 3);

This is just a guess, but it's gathered from several pages worth of thread discussion.

The buffer object would be created from a program object. This BO would be compatible with the program object, and would contain only names that the program object used. Attempting to set uniforms that the program didn't actually specify would produce an error.

A program can have multiple such buffer objects, with the layouts defined by the program (and probably named in some way?). Further, multiple programs with compatible buffer object layouts (all containing the same uniform definitions) can use the same buffer object. Creating such buffer objects may actually be an external process that does not require any particular program object, but that's going on old data (from discussions months ago) and may not be consistent with the current Longs Peak version.

So, when you want to render with a program, you bind the program and bind to that program its associated uniform buffer objects. You bind to certain slots in those UBOs samplers and texture images as well.

Now, as for multiple programs for different stages, it works as it expects, except that the buffer objects do not cross-connect. If you want the uniform named "glLTPMatrix" to be available in both the vertex and fragment program, you have to bind separate (or possibly the same, but you need two binds) buffer objects that contain that uniform. That's the price of having separate program objects.

If you used only one program object, then the stages would share uniform data where reasonable, i.e. where the different stages refer to the same uniform name.

Overmind
03-20-2007, 12:50 AM
Better make that:

glUniformfv(programObject, ...);

One of the things Longs Peak gets rid of is the whole "bind to edit" thing ;)

Zengar
03-20-2007, 01:31 AM
Actually I prefer Korval's suggestion. One weakness of GLSL, IMHO, is that semantically identical uniforms have to be set separately for each program. It would be so much nicer to have a unified uniform block feature...

Overmind
03-20-2007, 07:44 AM
True. Now that you mention it, setting a uniform is not really something that should belong to a program :p . It's not really editing the program, it's just state that's associated with programs (currently; probably it should not be).

This makes me wonder, will there still be individually settable uniforms? If yes, what object will this state be associated with? The program object? An extra object? Globally?

Uniform blocks in buffer objects make a lot of sense for things like lighting parameters or matrices (that is, the global environment).

But how will we set "parameter" uniforms like material properties? What about sampler uniforms? I doubt we'll be able to store texture object handles in buffer objects ;)

Korval
03-20-2007, 11:07 AM
But how will we set "parameter" uniforms like material properties?

A "material" property is just another uniform. I don't understand what is special about this.


What about sampler uniforms?
I doubt we'll be able to store texture object handles in buffer objects

Why not?

Or, more importantly, once you can store uniforms in buffer objects, why would you ever want to store them in the program objects themselves (or anywhere else)? Rare is the shader texture that is specific to the nature of the shader itself (maybe a look-up table), as opposed to the particular use of the shader (which character it is, etc). And for those, you just have a UBO specifically for those uniforms.

Forest Hale
03-20-2007, 11:32 AM
I think Overmind is right, most state is totally unrelated to the specific program object being used.

The classic example is a model using multiple materials on different meshes, some of them requiring different program objects; to update a uniform for rendering this particular instance of the model, it must be set in all the program objects referenced by the model.

Can anyone think of a good example of state that is per program object, rather than per rendered model?

The concept of a uniform buffer certainly allows the same uniform set to be supplied to multiple program objects used on a single model, which should solve this previous weakness of program objects.

Korval
03-20-2007, 11:40 AM
Can anyone think of a good example of state that is per program object, rather than per rendered model?

A lookup table.

Forest Hale
03-20-2007, 03:50 PM
Originally posted by Korval:

Can anyone think of a good example of state that is per program object, rather than per rendered model?
A lookup table.

A fine example indeed! :)

Thanks, was having no luck thinking of such a thing :)

This however means that ideally you want to have a bit of per-program state (lookup tables) AND per-model state (light source information, etc).

Overmind
03-21-2007, 04:24 AM
Why not?

Because the texture is an OpenGL object handle, and I doubt the GPU could interpret this without help from the driver.


glUniformo(bufferObject, "texture0", samplerObject);

Are you sure that's the way buffer objects are manipulated? I was under the impression buffer objects are just some arbitrary lump of memory. That's what makes storing uniforms in buffer objects so interesting.

The call you posted looks more like a manipulation of some black-box object encapsulating the uniform state. This kind of object would of course make sense in addition to normal buffer objects ;)


A "material" property is just another uniform. I don't understand what is special about this.

The difference is the granularity at which I want to update it. I have three different granularities in my engine:
- per program state (e.g. lookup tables)
- per material state (textures, colors, ...)
- per object state (lights, matrices, ...)

Korval
03-21-2007, 11:09 AM
Are you sure that's the way buffer objects are manipulated? I was under the impression buffer objects are just some arbitrary lump of memory. That's what makes storing uniforms in buffer objects so interesting.

If you're going to store uniforms in a buffer object, then this is how you have to do it. It gives the driver its required flexibility (specifically in not telling you how it's laying out the uniforms), while allowing the user to quickly switch banks of uniforms without making a bunch of calls.


- per program state (e.g. lookup tables)
- per material state (textures, colors, ...)
- per object state (lights, matrices, ...)

Then use 3 separate buffer objects. Problem solved.

Overmind
03-21-2007, 01:09 PM
If you're going to store uniforms in a buffer object, then this is how you have to do it.

What you say makes sense. I don't really like the idea of exposing memory layout details to the application.

But I don't think these "buffer objects" are meant to have transparent layout.

From http://www.opengl.org/pipeline/article/vol003_5/ :

New Uses for Buffer Objects have been defined to allow the application to use buffer objects to store shader uniforms, textures, and the output from vertex and geometry shaders.

Note that they speak of a single object type called "Buffer Object", that can be used for all these things. I assume this is the same buffer object as in VBO/PBO. I don't see how transparent layout would fit into this, especially when using multi-purpose buffer objects (e.g. using transform feedback data as uniforms).

Maybe we'll just get two separate uniform setting methods, one based on buffer objects and one with transparent layout ;)

Michael Gold
03-21-2007, 08:38 PM
Let me try to summarize how this works. Again, all tentative and subject to change.

1) Create a program object
2) Create buffer object(s) of appropriate size for the uniform partition(s).
3) Query the uniform offsets from the program object. This is a byte offset within the corresponding buffer object.
4) Load values into the appropriate locations within the buffer object(s). You can use any available technique for populating buffer objects, e.g. BufferData, MapBuffer, PBO reads, etc.

Image and texture Filter objects are attached directly to the program object, as are the buffer objects. When you bind a program, it pulls in all dependent objects in a single call.

Korval
03-22-2007, 12:25 AM
Not what I expected.

How do you get to use the same buffer object for multiple programs, if they could be using different sets of uniform partitions for the same uniform data?

Jan
03-22-2007, 05:51 AM
You need to use several uniform blocks that are equal in all the shaders where you want to use them.

Example:

shader1
-------

uniform block A
{
float
float
int
}

uniform block B
{
float
int
}


shader2
-------

uniform block C
{
int
float
vector
}

uniform block D
{
float
int
}


Now blocks B and D have an equal layout, therefore you can bind one buffer to both B and D. If shaders need different data, you put the general data into one block (B and D) and the shader-specific data into a different block (A and C).

You need to do the same thing for data that is updated at a different frequency, as mentioned above somewhere, if you want to prevent needless uploading of data.


At least, that's how i understand it.
Jan.

Michael Gold
03-22-2007, 09:36 AM
Jan's understanding is correct.

Overmind
03-22-2007, 10:23 AM
Ok, then my basic understanding of everything that has nothing to do with samplers was correct, too. That brings me back to my original question.

I still don't understand how textures fit into this. Ok, the image and filter objects are bound directly to the program object. But how are they connected to the sampler uniforms? Do I have to bind it to a uniform id (something like glBindObjecti_o(program, uniformID, image))?

Also, when binding textures to programs, how can I change all textures with a single call? I was hoping that in Longs Peak we would be able to change all material parameters (that is, uniforms and textures) with a single call.

knackered
03-22-2007, 10:58 AM
This is getting not too different from the old dx execute buffers.

Korval
03-22-2007, 11:07 AM
Do I have to bind it to a uniform id (something like glBindObjecti_o(program, uniformID, image))?

Yes.


Also, when binding textures to programs, how can I change all textures with a single call? I was hoping that in Longs Peak we would be able to change all material parameters (that is, uniforms and textures) with a single call.

That's a very good point.

More than likely, you're going to be using the same program with many image/sampler pairs.

It would be better to be able to take a program and create a "sampler block" object that you bind image/sampler pairs to. Combined with uniform buffers, it would work much like having an instance of a program. Thus you could render simply by binding the program, the uniform buffers, and the sampler block to the context, and then drawing.

The ultimate hope is to be able to fully instance programs. That is, have the program itself not really own any state, but have it associated with these other objects that apply their state to the program.

nystep
03-23-2007, 05:15 AM
I'm very interested in the new object model proposed - does anyone know when drivers supporting this will be out? :)

Korval
03-23-2007, 11:04 AM
I'm very interrested in the new object model proposed, anyone knows when drivers supporting this will be out?

I'm guessing not before there's a published spec, which won't happen until summer at the earliest.