Materials / Mesh data management

Hi everyone,

I'm asking myself how best to manage the data I need to draw a simple scene with lighting (Phong shaded). I'm currently using:


struct Vertex {
    vec3 Position;
    vec3 Normal;
    vec2 TexCoord;
    unsigned int MaterialIndex;
};

struct Material {
    vec4 Diffuse;
    vec4 Specular;
    float Shininess, Alpha;
    int map_DiffuseIndex, map_SpecularIndex;
    // padding etc.
};

First I collect all the models' meshes (triangulated faces) and materials. Then I put ALL vertices of all meshes into 1 vertex buffer used by 1 vertex array, and ALL materials into 1 material buffer; that material buffer is NOT part of the vertex array.
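For context, here's a minimal sketch of that setup in plain OpenGL (all buffer/variable names are made up, and it assumes the Vertex and Material structs from above plus std::vector<Vertex> vertices and std::vector<Material> materials). One detail worth noting: an integer attribute like MaterialIndex has to be set up with glVertexAttribIPointer, not glVertexAttribPointer:

GLuint vao, vbo, materialUbo;

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// ALL vertices of all meshes go into this one buffer.
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
             vertices.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(0); // Position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, Position));
glEnableVertexAttribArray(1); // Normal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, Normal));
glEnableVertexAttribArray(2); // TexCoord
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, TexCoord));
glEnableVertexAttribArray(3); // MaterialIndex: integer attribute!
glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, sizeof(Vertex),
                       (void*)offsetof(Vertex, MaterialIndex));

// ALL materials go into this one buffer; it is NOT part of the VAO,
// it is simply bound as a uniform buffer at binding point 1.
glGenBuffers(1, &materialUbo);
glBindBuffer(GL_UNIFORM_BUFFER, materialUbo);
glBufferData(GL_UNIFORM_BUFFER, materials.size() * sizeof(Material),
             materials.data(), GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 1, materialUbo);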

The vertex shader looks like this:


layout (location = 0) in vec3 in_position;
layout (location = 1) in vec3 in_normal;
layout (location = 2) in vec2 in_texcoord;
layout (location = 3) in uint in_materialindex;

flat out uint materialindex; // to fragment shader
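Filled in, the whole vertex shader could look roughly like this (a sketch; the matrix uniform names are assumed):

#version 420 core

layout (location = 0) in vec3 in_position;
layout (location = 1) in vec3 in_normal;
layout (location = 2) in vec2 in_texcoord;
layout (location = 3) in uint in_materialindex;

flat out uint materialindex; // flat: not interpolated across the triangle
out vec3 normal;
out vec2 texcoord;

uniform mat4 Model;          // assumed uniform names
uniform mat4 ViewProjection;

void main() {
    gl_Position   = ViewProjection * Model * vec4(in_position, 1.0);
    normal        = mat3(Model) * in_normal; // fine for rigid transforms
    texcoord      = in_texcoord;
    materialindex = in_materialindex;
}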

The fragment shader looks like this:


struct Material {
    vec4 Diffuse;
    vec4 Specular;
    float Shininess, Alpha;
    int map_DiffuseIndex, map_SpecularIndex;
};

layout (std140, binding = 1) uniform MaterialBlock {
    Material materials[256];
};

In the fragment shader I access the material array backed by a uniform buffer, using the passed flat uint materialindex.
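So the per-fragment lookup itself is just one indexed read, something like:

flat in uint materialindex; // from the vertex shader

void main() {
    Material m = materials[materialindex];
    // ... Phong shading with m.Diffuse, m.Specular, m.Shininess ...
}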

That works perfectly, but I'm asking myself how to best use meshes with different (!!!) materials.
Let's say I want to render the Earth and the Moon: I have 1 mesh (a sphere), but I want to apply 2 different materials (one with the Earth texture, later one with the Moon texture). The problem is that with that vertex layout I have to upload the same mesh data twice, just with 2 different "materialIndex" attributes.

What's a practical solution for that problem? Should I move the "uint materialindex" into a separate buffer? Should I make all the material attributes "vertex attributes" (and avoid the uniform block for materials completely)? How would you approach that task (rendering a sphere with [many] different materials)?

Thanks for all your ideas!

In my opinion, what you are doing is not bad, considering you have a simple scene and many materials.

As long as materials vary linearly across the vertices of a face, your way of doing it, sending them through uniforms, or sending them through vertex attributes are all fine. The latter, however, will certainly use far more memory than the other two, though it is potentially faster.
The other approach is to use textures. This can be faster still and, depending on the kind of textures, can use less memory than attributes. One main advantage is that it lets materials vary (even non-linearly) at a finer granularity than faces/vertices. However, it requires you to have/create material textures. The other drawback is that you'll have only 256 different values for each component (r, g, b); doing it with uniforms, buffers, or attributes definitely allows more precision for each attribute value. So textures let you give more detail, but each detail is less precise.

One side note: if you keep using your method, you might be interested in storing the shininess and alpha as the w components of your materials' diffuse and specular colours; if I understood this correctly, alpha replaces those values anyway.
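In other words, the struct could shrink to something like this (just a sketch of that suggestion):

struct Material {
    vec4 Diffuse;  // rgb = diffuse colour,  a = Alpha
    vec4 Specular; // rgb = specular colour, a = Shininess
    int map_DiffuseIndex, map_SpecularIndex;
    // still 8 bytes of padding under std140 (struct size rounds
    // up to a multiple of 16), but two floats fewer than before
};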

To conclude, this depends on what you want to achieve.

Thanks for your reply.

Sure, I'll do those kinds of optimisations later, but for now I want a good concept/basis to start with.

Basically I want to simulate the solar system: 1 intense point light source in the middle (the sun, shaded without light sources, only ambient = vec3(1)), then the planets/moons, each animated (not on the GPU!), then a "skybox". When that's done, I'll clear the depth buffer and use the image as a "background" for a space game; at least that's my idea for now.
By the way, all meshes will be rendered "instanced".

General mesh types (like a sphere) are relatively common, so reusing the mesh data with varying materials seems to make sense.

Another idea to realise it would be (as I said) to move the "uint materialindex" out of the vertex struct into a separate buffer; I could then add a "uniform uint materialoffset = 0" to the fragment shader as a way to shift the array index from which the materials are fetched (in the uniform buffer). See the sketch below.
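The fragment-shader side of that idea would be tiny, something like (a sketch):

layout (std140, binding = 1) uniform MaterialBlock {
    Material materials[256];
};

flat in uint materialindex;       // per-vertex, stays part of the mesh data
uniform uint materialoffset = 0u; // set before each draw to shift the lookup

// ...
Material m = materials[materialoffset + materialindex];

The catch: since materialoffset is a plain uniform, every distinct offset still needs its own draw call (with a glUniform1ui in between), which works against rendering everything in one call.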

The standard solution for that specific problem is instancing. The material index attribute would be per-instance; all other attributes would be per-vertex. Except that you'd probably just do away with the material index attribute and the uniform materials array altogether and make the material parameters themselves instanced attributes (this should be more efficient, as it eliminates a dependent array access).

However, instancing is largely all-or-nothing. All instances have the same number of vertices, the same topology, and the same values for non-instanced attributes, so they’re identical in every way except for the values of instanced attributes.
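A sketch of the first variant (keeping the index; names made up): the material index moves out of the Vertex struct into its own small buffer holding one uint per instance, with an attribute divisor of 1, and the draw becomes instanced:

// One material index per instance, e.g. { earthMaterial, moonMaterial }.
GLuint instanceVbo;
glGenBuffers(1, &instanceVbo);
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, instanceMaterials.size() * sizeof(GLuint),
             instanceMaterials.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(3);             // same attribute location as before
glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 0, (void*)0);
glVertexAttribDivisor(3, 1);              // advance once per instance

// One draw call renders every sphere; each instance picks its own material.
glDrawElementsInstanced(GL_TRIANGLES, sphereIndexCount, GL_UNSIGNED_INT,
                        0, instanceCount);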

If I understood you well, you're not concerned with performance; your main aim is certainly realism. In any case you'll have a low fill rate, with almost no work for the z-test (the backs of the planets will be rejected by the polygon winding), and the chance of an alignment with many planets fully visible on screen is practically nil. Planets don't have a lot of material variation, except the Earth from some points of view. As long as you stay reasonably far from the planets you won't have issues; things generally start to become harder when you want to get closer and closer to a planet, to finally do a 'fly-by' close to its surface. From my own experience, we were able to achieve such things at 50 fps on a GeForce 580/680, without using instancing, without caring much about state changes, and while calculating almost everything between 2 frames, including mesh generation and texturing (we were using OpenSceneGraph 1.x, which had no support for these high-end things).

Seen that way, and in my humble opinion, any solution will do the job as long as you can express your materials (this includes textures) easily. And yours looks good.

Both: I want to be as efficient as possible, and the scene's "realism" shouldn't depend on how everything gets rendered.
In other words, "Phong shading" or Blinn-Phong (which I have to learn first :)) provides enough realism; something like "physically based" rendering isn't even being considered ^^

What I plan to do is a scene split into 2 parts: an "environment" (background, sun, planets, moons) and "everything else" I can interact with, like spaceships and other stuff.

That's maybe something I'd do later, but with "tessellation" + height texture maps etc.

What if I have a spaceship with 20 different materials? With that solution I'd have to put 20 entries of "per-instance data" into the array buffer to render 1 spaceship, and the scene will likely have many more. Should I use 2 different VAOs, 1 for the astronomical objects (each with just 1 material, mesh = sphere) and 1 for the models (each with many different materials)?
Or is there an easier way to render models with many different materials (up to 50)?

The reason I used that uniform buffer with an array index as a vertex attribute was to be able to render the whole mesh (or ALL instances of that mesh/model) with 1 draw call.

Another problem with that solution: what if I have 2 different models, 1 with 10 materials and the other with 11? The "dilemma" then would be either changing the VAO's attribute divisors from 10 to 11 (and back again) each frame, or using another VAO :expressionless:

You’d probably keep the material index attribute, which would be instanced. All other attributes would be non-instanced.

The material array would either become 2D, indexed by the instance ID and the instance-local material index, or you’d have a 1D array of materials and a 2D array mapping the instance ID and instance-local material index to a global material index. The latter would be preferable if instances typically share some of their materials.
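A fragment-shader sketch of that second variant (all names and sizes are made up; the instance ID has to be handed down from the vertex shader, e.g. via flat out uint instanceid = uint(gl_InstanceID);):

const uint MATERIALS_PER_INSTANCE = 16u; // assumed upper bound

flat in uint instanceid;    // from gl_InstanceID in the vertex shader
flat in uint materialindex; // per-vertex, instance-local material index

layout (std140, binding = 1) uniform MaterialBlock {
    Material materials[256]; // unique materials, shared between instances
};
layout (std140, binding = 2) uniform MaterialMapBlock {
    // flattened [instance][local index] -> global material index
    // (beware: std140 pads each uint array element to 16 bytes,
    // so uvec4 packing or an SSBO may be preferable in practice)
    uint materialmap[256];
};

// ...
Material m = materials[materialmap[instanceid * MATERIALS_PER_INSTANCE
                                   + materialindex]];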