Multi-Pass Materials and Framebuffer Combination

Hey everyone!

As part of my diploma thesis in computational visualistics, I’m currently working on a library which is supposed to allow the use of pre-defined and/or runtime-assigned materials for polygonal (as of yet) geometry.

Inspired by visual editing of shading properties à la Maya or Blender, I’m trying to generalize the idea of combining a multitude of different shaders into a material which can then be used when rendering. It supports a number of readily available stock shaders, similar to the current SuperBible’s, and is supposed to be extensible with custom shaders.

So, you would define a material with one or more passes, assign each pass a shader, define which resources to bind to the shader and let the whole thing fly. This has, of course, been done before - OGRE is just one example. My implementation, however, is geared towards editing the properties of a material from within an application, be it a scene editor, shader IDE etc., and then having the library check which shaders can actually be combined into a material. This involves checking whether it is even possible to use a certain material at all and in which order its shaders can be executed (based on the state from the previous pass in the same material, state from previous passes in other materials or other custom render passes, and the resources available in the resource cache at the time of definition).

To give the application a hint on how to treat and interpret materials, shader scripts export a certain feature set defining the INs/OUTs of every shader stage, e.g. the predefined ModelViewMatrix or a 2D sampler. That’s a fair amount of bookkeeping, but it guarantees a lot of certainty when defining the contents of a scene and the way they’re to be rendered. State sorting in this case consists of identifying objects with the closest matching or equal materials to minimize state changes. So far so good.
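To make the bookkeeping idea a bit more concrete, here is a purely hypothetical sketch of what such a scheduling check could look like; none of these types exist in the library yet, they just mirror the exported feature sets:

#include <set>
#include <string>

// Hypothetical feature set exported by a shader script: what a pass
// consumes and what it leaves behind for later passes.
struct FeatureSet {
    std::set<std::string> inputs;    // e.g. "ModelViewMatrix", "shadowMap"
    std::set<std::string> outputs;   // e.g. "depthMap", "gBuffer"
};

// A pass may only be scheduled if everything it needs is already
// available from earlier passes or from the resource cache.
bool canSchedule(const FeatureSet& pass, const std::set<std::string>& available)
{
    for (const std::string& in : pass.inputs)
        if (available.find(in) == available.end())
            return false;
    return true;
}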

Here’s my problem: say we have a bunch of objects with a simple, single-pass material, which could easily be rendered to the default framebuffer or some FBO. Now suppose some objects have a multi-pass material, some using two passes and some using three or more.

My idea is to render each group of objects into a separate buffer and finally combine all render targets into the final image. Assuming that in most scenes most objects will share a lot of shading properties or be shaded identically (i.e. with the same material - different textures, relative light positions etc. will of course be involved), that does not seem to be too much overhead. But if I have a lot of differently shaded multi-pass objects which can only be grouped to a certain extent, that would leave me with a lot of FBOs to bind, render to and combine every frame.

Another problem is that when I group objects by material, render each group into a separate buffer and then try to combine the buffers, objects from buffer 0 may very well obscure objects from buffer 1, although it should be the other way around.

Is there a way of combining framebuffers while using depth comparison? Does anyone have suggestions or experience with the kind of approach described?

Thank you all for your time!

All the best

Thomas

Quickly: do a depth pass first with all objects. Then use this as a “read only” depth buffer for each of your FBO layers, with a GL_EQUAL depth test.
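A minimal sketch of what that could look like with a single depth texture shared by all group FBOs (Group, drawAllObjects() and drawGroup() are placeholders, not actual library calls):

// 1) Depth-only pre-pass with all objects into the shared depth texture.
glBindFramebuffer(GL_FRAMEBUFFER, depthPassFbo);   // only sharedDepthTex attached
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawAllObjects();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// 2) Each material group renders into its own FBO that has the same
//    depth texture attached, used read-only: writes off, GL_EQUAL test.
for (Group& group : groups)
{
    glBindFramebuffer(GL_FRAMEBUFFER, group.fbo);  // group color target + sharedDepthTex
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    glClear(GL_COLOR_BUFFER_BIT);                  // color only, depth stays intact
    drawGroup(group);
}
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);

// 3) Composite the color attachments in a final fullscreen pass; since every
//    layer only contains fragments that survived the shared depth test, the
//    wrong-occlusion problem between buffers disappears.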

On a more general scale: yes, doing a lot of multipass will be slow. But that is your choice. Why not avoid multipass altogether: just combine shaders with properly defined ins and outs into a single shader and render in one pass.

The user interface will be some graph/node-based system, right? Similar to Blender material nodes?

Thanks for your quick reply!

Quickly: do a depth pass first with all objects. Then use this as a “read only” depth buffer for each of your FBO layers, with a GL_EQUAL depth test.

Sounds good. Gonna give that a try.

Why not avoid multipass altogether: just combine shaders with properly defined ins and outs into a single shader and render in one pass.

That would probably work, but on the downside I’d have to enable the library not only to parse material and shader scripts but also to correctly assemble shader code into one big source file for each stage. I had hoped to bypass this and thus keep it very modular and simple.

Still, I will probably provide a set of stock shaders that offer combined functionality, e.g. lighting + normal + shadow + texture mapping in one shader. But the aim is to keep things flexible, let the user plug in a custom, properly defined shader wherever possible, and avoid huge shaders full of runtime switches for enabling/disabling features.

What do you think? Any suggestions on optimizing this approach?

The user interface will be some graph/node-based system, right? Similar to Blender material nodes?

I don’t know if Blender specifically borrows from the same idea, but I actually based my thoughts on the Shade Trees [Cook84] paper. Doing a depth-first traversal of such a tree results in a sequential execution order, which my concept aims to reflect. So yes, it’s basically a directed acyclic graph - a pipeline if you will.

I’ll give you a typical use case: you want your object to be lit and shadow-mapped with some light source. The library has to render a depth map for that specific light source first, so that it can say “OK, you’ve got the depth map, now you can use the shadow-mapping shader”. Another case would be deferred shading, where you’d have to render the G-buffer first and only then enable deferred lighting in the material. In the current concept it’s all sequential.
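In plain GL terms, that prerequisite boils down to a depth-only pass from the light’s point of view. A sketch (shadowTex, shadowFbo, drawCasters() and lightViewProjection are placeholder names, not library API):

// Depth texture that the shadow-mapping shader of a later pass will sample.
GLuint shadowTex = 0;
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Depth-only FBO for the light pass.
GLuint shadowFbo = 0;
glGenFramebuffers(1, &shadowFbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

// Only after this pass has run may a material pass that depends on the
// shadow map be scheduled.
glViewport(0, 0, 1024, 1024);
glClear(GL_DEPTH_BUFFER_BIT);
drawCasters(lightViewProjection);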

Thank you!

Just put each elementary piece of shader code in a function with in/out parameters. Then, when assembling the final shader, just concatenate all the function definitions and a main() calling all the functions.
Doing multipass will not be as modular, as you will only have RGBA as output. If you need more precision than 8 bits per component, floating-point rendering is costly too.
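A rough sketch of that assembly step; the ShaderChunk struct and the example chunk are made up purely for illustration:

#include <string>
#include <vector>

// Each elementary shader ("node") contributes the declarations it needs,
// one self-contained GLSL function and the call main() should make.
struct ShaderChunk {
    std::string declarations;  // uniforms / ins / outs
    std::string function;      // complete function definition
    std::string call;          // statement executed in main()
};

std::string assembleFragmentShader(const std::vector<ShaderChunk>& chunks)
{
    std::string src = "#version 330 core\nout vec4 fragColor;\n";
    for (const ShaderChunk& c : chunks) src += c.declarations;
    for (const ShaderChunk& c : chunks) src += c.function;
    src += "void main()\n{\n    fragColor = vec4(1.0);\n";
    for (const ShaderChunk& c : chunks) src += "    " + c.call + "\n";
    src += "}\n";
    return src;
}

// Example chunk: a diffuse term that would otherwise have been its own pass.
// ShaderChunk diffuse = {
//     "uniform vec3 lightDir;\nin vec3 vNormal;\n",
//     "vec3 diffuseTerm(vec3 n, vec3 l) { return vec3(max(dot(n, l), 0.0)); }\n",
//     "fragColor.rgb *= diffuseTerm(normalize(vNormal), normalize(lightDir));"
// };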

That is the idea, though it is not really a tree, so the output of an operation can be reused at different spots.
You can look here for inspiration:
http://www.blender.org/development/release-logs/blender-242/blender-material-nodes/

Also check out

http://graphics.cs.brown.edu/games/AbstractShadeTrees/abstract-shade-trees-2006.pdf

That is the idea, though it is not really a tree, so the output of an operation can be reused at different spots.
You can look here for inspiration: http://www.blender.org/development/release-logs/blender-242/blender-material-nodes/

Apparently I thought of at least two of their “Next Steps” before even knowing this page. :smiley: Looks convincing.

Just put each elementary piece of shader code in a function with in/out parameters. Then, when assembling the final shader, just concatenate all the function definitions and a main() calling all the functions.
Doing multipass will not be as modular, as you will only have RGBA as output. If you need more precision than 8 bits per component, floating-point rendering is costly too.

Combined with your idea for shader generation, this would leave me with kind of a Maya/Blender hybrid (I read that Maya generates shader code from its visual material definition), only fully programmable and GL core profile based, right?

If I read the GLSL spec correctly, name clashes aren’t a problem, as identically named and declared variables simply share storage across the compilation units of a program.

Some questions concerning the assembly: would I have to pass application-provided values into the functions? Couldn’t I simply ensure that the globals are all declared before the first function definition and then access the data in global scope?

Thank you very much!

Keep in mind this page is 5 years old; I’m not sure if the “next steps” still make sense with Blender 2.5 :slight_smile:

“Globals are bad”, right? Well, I have no strong advice on this: using only function ins/outs and letting the plumbing between app-provided data and shader code happen in the generated main() will avoid any risk of name clashing. That might also be slower, so I guess you will have to try it to know.
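For illustration, a made-up example of what a generated shader could look like when main() is the only place that touches the app-provided names:

// The elementary functions only see parameters; the generated main() does
// the plumbing, so chunks cannot clash with each other or with
// app-provided uniforms. All names here are invented for the example.
const char* generatedFragmentSource = R"GLSL(
#version 330 core
uniform vec3 direction;   // app-provided, declared once by the generator
in  vec3 vNormal;
out vec4 fragColor;

vec3 diffuseTerm(vec3 n, vec3 l) { return vec3(max(dot(n, l), 0.0)); }

void main()
{
    fragColor = vec4(diffuseTerm(normalize(vNormal), normalize(direction)), 1.0);
}
)GLSL";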

In any case, please report your findings and how this interesting project evolves :slight_smile:

tksuoran: Thanks for the hint! I just read the paper and was surprised how many intersections their system and my concept have. It’s also funny that they use RenderMonkey, since that IDE gave me the initial idea of combining a material from multiple passes, which then led me to the OpenGL boards. :slight_smile:

Still, my take uses C-style scripts, similar to OGRE’s, instead of the declarative extension in the paper.

In a shader script, a uniform would for example be defined as

uniform “direction”
{
feature : SF_VECTOR_3_F
feature : SF_NORMALIZED
precision : SF_LOWP
}

which corresponds to “uniform lowp vec3 direction (…)”, passed to the shader normalized.

or

attribute “wave_function”
{
feature : SF_FLOAT
precision : SF_HIGHP
}

which will become an “in highp float wave_function …”

The application then tries to bind the data provided to the respective location. Data is either statically defined, e.g. (1.0, 0.0, 0.0) for our uniform “direction”, or dynamically updated by referencing a unique object of the correct type in a material script.
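On the GL side, the binding step itself is just the usual calls; a sketch for the statically defined case, assuming the program is already linked:

// Bind the static value (1.0, 0.0, 0.0) to the "direction" uniform.
GLint loc = glGetUniformLocation(program, "direction");
if (loc != -1)                        // -1: inactive, optimized out or misspelled
{
    glUseProgram(program);
    glUniform3f(loc, 1.0f, 0.0f, 0.0f);
}

// A dynamically updated uniform would be refreshed the same way each frame,
// with the value pulled from the referenced scene object instead.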

In any case, please report your findings and how this interesting project evolves :slight_smile:

Will do! Thanks everyone! :slight_smile:

Thomas