MichaelK> Does geometry have to anticipate every possible shader and include all those textures?
Geometry should not come with textures. At work, geometry comes coupled with default materials, and materials have textures; at home, I keep them entirely separate and let the configuration of the engine object that needs the geometry also specify which material to use with it.
However, geometry MAY need to include every possible vertex stream. This is especially true for things like tangent spaces, fur shells, anisotropy coefficients, etc. Once you get a sufficiently advanced system, you’ll have to do run-time checking and display an error along the lines of “you configured a fur material, but there is no fur_shell sub-mesh” when a mismatch is made. If you have a material selection UI, you should simply gray out shaders that require geometry streams that aren’t available.
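Roughly, that run-time check might look something like this (the StreamId, Material and SubMesh types are made-up stand-ins for illustration, not an actual engine API):

#include <set>
#include <string>

using StreamId = std::string;              // e.g. "position", "tangent", "fur_shell"

struct Material {
    std::string        name;
    std::set<StreamId> requiredStreams;    // what the material's shaders need
};

struct SubMesh {
    std::string        name;
    std::set<StreamId> availableStreams;   // what the exporter actually wrote
};

// Returns true if the pairing is valid; otherwise fills in an error message
// the artist can act on, instead of silently rendering garbage.
bool validateBinding(const Material& m, const SubMesh& s, std::string* error) {
    for (const StreamId& id : m.requiredStreams) {
        if (!s.availableStreams.count(id)) {
            *error = "material '" + m.name + "' requires stream '" + id +
                     "' which sub-mesh '" + s.name + "' does not provide";
            return false;
        }
    }
    return true;
}

A material selection UI could run the same check against each candidate shader to decide which ones to gray out.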
knackered> jwatte, I don’t believe that storing textures in the shader is the best way of going about things
When I say “shader” I really mean “material”, which consists of references to textures as well as references to vertex and pixel programs. Shaders are applied to geometry to generate pixels on screen. Also, different pieces of the system I describe exist in at-work code and at-home code; I’m not at that perfect nirvana in either place yet (and doubt I’ll ever be).
An object can pick meshes, and pick material settings for each sub-mesh. I.e., in a configuration file it might look like this (pseudo-code):
mesh {
    file basichuman
    materials {
        hair {
            fragmentprogram hair2.fp
            diffuse redhair.dds
            anisotropy strands.tga
        }
        skin {
            color #e0d4aa
            modulate freckles.dds
            specular freckles.dds
        }
        trousers {
            bump coarse_jeans.tga
            color red_tab.dds
        }
        …
    }
}
Where the material names and default properties are specific to the mesh (but if you have good production tools, they’ll be consistently named). The actual parameter names would be parameters to the shaders. Yes, this involves actually doing linking (resolution) of materials, textures and meshes at load time.
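To make that load-time linking concrete, here’s a rough sketch of what the resolution step might do, assuming very simple MaterialConfig and MeshFile structures (all names here are illustrative, not from real code):

#include <map>
#include <stdexcept>
#include <string>

struct MaterialConfig {                         // one named block from the config file
    std::map<std::string, std::string> params;  // "diffuse" -> "redhair.dds", ...
};

struct MeshFile {
    std::map<std::string, int> subMeshByName;   // "hair" -> sub-mesh index, ...
};

void linkMaterials(const MeshFile& mesh,
                   const std::map<std::string, MaterialConfig>& configs) {
    for (const auto& [materialName, config] : configs) {
        // 1. The material name must match a sub-mesh in the mesh file.
        if (!mesh.subMeshByName.count(materialName))
            throw std::runtime_error("no sub-mesh named '" + materialName + "'");

        // 2. Each parameter is then resolved: texture names are loaded, colors
        //    parsed, and the results bound to the shader's named parameters.
        for (const auto& [paramName, value] : config.params) {
            // resolveAndBind(paramName, value);   // engine-specific
            (void)paramName; (void)value;
        }
    }
}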
Actually, at work, we build meshes by aggregating and parameterizing skinnable meshes, which gets hairy quickly when you need to make sure they all match up and LOD together reasonably and all that; nothing is ever simple.
To deal with deformable meshes, there’s a prerender step which gets called on everything that’s going to be rendered; that’s the ideal time to form your pose for this time step, etc. Then you just re-submit the geometry for each render pass that needs it.
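As a sketch of how that could be wired up (assuming a simple Renderable interface, not the actual at-work code):

#include <vector>

struct RenderPass { /* shadow, opaque, transparent, ... */ };

struct Renderable {
    virtual ~Renderable() = default;
    virtual void preRender(float dt) = 0;       // form the pose once per frame
    virtual void submit(RenderPass& pass) = 0;  // re-submit per pass, reusing the pose
};

void renderFrame(std::vector<Renderable*>& visible,
                 std::vector<RenderPass*>& passes, float dt) {
    // One pose evaluation per deformable object per frame...
    for (Renderable* r : visible) r->preRender(dt);
    // ...then the same geometry is handed to every pass that needs it.
    for (RenderPass* p : passes)
        for (Renderable* r : visible) r->submit(*p);
}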
Oh, and this setup STILL isn’t actually complete enough, as there are some bits and pieces that can’t be fully data driven, such as the “render strategy” used to make transparency render far-to-near and all that. For now, that’s all hard-coded to a few specific strategies. It’s unclear to me how to actually make it data driven, unless you consider full-out scripts to be “data” and are prepared to take that hit. I don’t, and I’m not.
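For illustration, one of those hard-coded strategies can be as small as a back-to-front sort over the transparent set (the Item struct and its distance field are made up for the example):

#include <algorithm>
#include <vector>

struct Item { float distanceToCamera; /* mesh, material, ... */ };

void transparentStrategy(std::vector<Item>& items) {
    // Sort far-to-near so alpha blending composites correctly.
    std::sort(items.begin(), items.end(),
              [](const Item& a, const Item& b) {
                  return a.distanceToCamera > b.distanceToCamera;
              });
    // for (const Item& i : items) draw(i);     // engine-specific draw call
}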
Oh, and if you have the luxury of working with good artists, they really deserve in-window fold-out property inspectors with pop-up menus, spinner wheels, sliders and drag-and-drop browsers for these things, rather than editing some text file with NOTEPAD.EXE.