GL3 and texture binding

How do you bind a texture?

lpBindTexture(GL_TEXTURE0, textureObject);

or something like that?

Textures are attached to programs so that binding a program pulls in all objects on which it depends. The indirection table in the context is gone.

something like

glUniform1iARB( texture_location, texture_handle );

I think.

The actual API is going to look something like:

AttachImage(program, index, image);

Where index identifies the attachment point and image is the texture image.

It’s not inconvenient for me, but it is weird.
Will that make the internals of the drivers more efficient?

I don’t think this is really about efficiency, but more about a consistent API design.

The pipeline as we know it (i.e. mostly fixed-function) is gone. You start by passing values into the vertex shader, whose output is passed along through the geometry shader and into the fragment shader. Each shader can do texture lookups (in theory). That means textures are now inputs for shaders. They are not globally set for the whole pipeline, as they are right now.

Therefore it does make sense to “bind” textures to shaders and not to “the pipeline”.

Jan.

In 2.1, binding textures isn’t just about inputting them into programs; calls like glGetTexImage require you to bind the texture first.

How will this be handled? Will it be essentially similar to what is now glMapBuffer, or what?

A map seems somewhat likely for generic GPU-CPU interactions, perhaps from an allocated block of memory tagged for efficient CPU access via CopySubBuffer and friends. At least that’s the recommended pattern in D3D10.

In 2.1, binding textures isn’t just about inputting them into programs; calls like glGetTexImage require you to bind the texture first.
That’s because GL 1.0 was a state-based system and had no concept of objects outside of display list objects. Everything that was built on OpenGL since then has built on this state-based system.

Which means that, by 2.1, we have to do incredibly stupid things.

Currently, we bind textures to a program. But we have to do it in an incredibly roundabout way. We set a sampler in the program to a particular texture index. We then bind a texture to that index.

It makes much more sense to just bind the texture to the sampler in the program.

GL 3.0 is an object-based system. To upload or download data to/from an image, you use the object itself:

“gl3ImageData(imageObject, …);”

No binding. Binding is for when you want to use an object, not modify it.

Originally posted by Korval:

Currently, we bind textures to a program. But we have to do it in an incredibly roundabout way. We set a sampler in the program to a particular texture index. We then bind a texture to that index.

This approach also has its advantages. For example, currently most of my shaders take a shadowmap. At shader creation I set the corresponding sampler to a fixed texture unit index. During rendering I bind the shadowmap for the currently rendered light to that unit and render all relevant objects without needing to bind that texture to each of many program objects.

At shader creation I set corresponding sampler to fixed index of texture unit. During rendering I bind shadowmap for currently rendered light to that unit and render all relevant objects without need to bind that texture to each from many program objects.
In what way is this an advantage?

In the object-based system, you bind the shadowmap to each object you want to use it with well before rendering. And you never need think about it again.

i agree with Komat, i think it's a bit more work in this case as well

eg
i have a 2d shadowmap on texture unit 10, thus in the shader i just use whatever depthtexture is bound to unit 10

for all lights
{
bind this lights SM to unit 10
set shader + state (if necessary)
draw all geometry
}

with the new method i'm gonna have to tell the object that i've changed the depthtexture, though i assume this will be pretty cheap

If I’m right, this is beginning to sound like a cool effect system of sorts: the driver keeps all state bound up into (immutable) program objects, which when bound automatically configure the samplers, ROP, depth/stencil and blend states along with all stage shaders. Changing program state means creating a new program object, but this is a very lightweight operation.

[edit] A correction or 2, due to review…
http://www.khronos.org/library/detail/siggraph_2007_opengl_birds_of_a_feather_bof_presentation/

http://opengl.org/img/uploads/pipeline/pipeline_004.pdf

Originally posted by Korval:
[b]In what way is this an advantage?

In the object-based system, you bind the shadowmap to each object you want to use it with well before rendering. And you never need think about it again. [/b]
I have more than one shadowmap texture. They are suballocated to individual visible lights and any of them can be used by any material shader.

Now say that I have a set of objects using some materials (each material corresponds to one or more program objects) and lit by one light, and another set of objects which use the same materials but are lit by a different light with a different shadowmap.

In OGL2 I bind the first shadowmap and render the objects from the first set, then bind the second shadowmap and render the objects from the second set. There is no need to change texture bindings within the program objects. Because of the unfortunate absence of shared GLSL uniforms, I need to upload the light parameters each time I use a program object which has not yet been used with the current setup.

In OGL3, in the same situation, I have two possibilities. I can rebind the textures in the same way I do in OGL2 with the parameter uniforms (the parameter uniforms themselves can in this case be handled by a single update of a shared buffer), or I need to create a program object for every possible combination of original program object and shadowmap texture.

Actually, many of my shaders take more than one shadowmap, and each of those shadowmaps can be any of the textures, so creating a program object for all combinations is not practical.

In OGL3, in the same situation, I have two possibilities. I can rebind the textures in the same way I do in OGL2 with the parameter uniforms (the parameter uniforms themselves can in this case be handled by a single update of a shared buffer), or I need to create a program object for every possible combination of original program object and shadowmap texture.
That makes no sense.

If you have object set A that needs shadow map A, and object set B that needs shadow map B, then you just bind the program environments used by object set A to shadow map A and the program environments used by object set B to shadow map B.

Since each object has its own program environment, I don’t see the problem.

Originally posted by zed:
[b] i agree with Komat i think its a bit more work in this case as well

eg
i have a 2d shadowmap on texture unit 10, thus in the shader i just use whatever depthtexture is bound to unit 10

for all lights
{
bind this lights SM to unit 10
set shader + state (if necessary)
draw all geometry
}

with the new method im gonna have to tell the object that ive changed the depthtexture, though i assume this will be pretty cheap [/b]
I guess it will be more efficient than GL2.
You bind a texture to a shader, then you bind the shader and render your objects.

I guess I need to keep track of what texture is bound to a shader

if (ProgramObject.TexUnit0 != mytextureObject)
{
    ProgramObject.TexUnit0 = mytextureObject;
    lpActiveImage(ProgramObject.program, 0, ProgramObject.TexUnit0);
}

I think what he’s saying is that if you have 5 possible inputs for 5 possible programs, 2.1 lets you do everything you need to with 10 objects (5 program, 5 texture), while 3.0 sounds like it would require 30 (25 program, 5 texture).

One use case for this would be any ping-pong algorithm. It would be cleaner not to have two separate programs for each direction of calculation.

Of course, if you could handle dynamic texture binding with uniforms, then you could do essentially the same thing you’d do in 2.1 using the 3.0 objects, e.g. cgGLSetTextureParameter().

How is Cg going to fit into all of this, anyway?

Originally posted by Korval:
That makes no sense.
If you have object set A that needs shadow map A, and object set B that needs shadow map B, then you just bind the program environments used by object set A to shadow map A and the program environments used by object set B to shadow map B.

The shadowmap is a property of the light, while the environment is a property of the object. The binding is only temporary, and with dynamic allocation that attempts to avoid reusing a shadowmap in the next frame, it is very likely valid only for the duration of one frame. So in each frame I would update the binding on almost all rendered environments instead of binding each shadowmap once to global state.

Originally posted by Korval:
Since each object has its own program environment, I don’t see the problem.

This is the cause of our misunderstanding. I do not use the program object as a per-engine-object thing. I use it as a per-material thing (most materials have over 50 program objects). The data (uniforms, textures, streams) are provided by the engine object; the material provides the program object (reused by all engine objects which use that material), which in turn provides the algorithm. Binding between algorithm and data is done at draw time.

With the linking speed of current program objects I had no other option. If the creation of an OGL3 program environment is very, very fast, I might change my approach. Added: in that case, however, the shadowmap issue will be even worse (in the number of rebinds necessary) because it is basically global state.

How is Cg going to fit into all of this, anyway?
Cg is nVidia’s thing. The ARB shouldn’t care one way or another; that’s for nVidia to decide.

I do not use the program object as per engine object thing.
I didn’t say “program object”. I was very clear in my wording: program environment.

A program environment object is not a program object any more than an FBO is an image. A program environment has a reference to a program object, but it also has all of the attachments to that program object.

Basically, program objects are just data. You do not attach anything to a program object. You attach the program object to a program environment, and you attach to that the various images, samplers, etc.

Furthermore, program environment object attachments are mutable (though the attachment points and their properties are not). So if you need to swap a texture, you can.

Originally posted by Korval:
[b]I didn’t say “program object”. I was very clear in my wording: program environment.

A program environment object is not a program object any more than an FBO is an image. A program environment has a reference to a program object, but it also has all of the attachments to that program object.
[/b]
The Siggraph slides mention that the program environment has an “Immutable reference to program object”, so for my purposes it is similar to the OGL2 program objects which I was referring to when I talked about program objects.