Question on glDrawElements

Breadboarding an idea on animation where parts of the character are being actively moved by user input. Not into the actual coding yet.

When sending a glDrawElements command with a range of vertices, will the ViewProjectionMatrix value at the beginning of the range stay the same throughout the range, or will it change if the matrix is recalculated concurrently during the render?

If it changes, it looks like I’ll need to do extra work to sync the matrix-change calculations to the drawing.

[QUOTE=Goofus43;1288853]…idea on animation where parts of the character are being actively moved by user input. …

When sending a glDrawElements command with a range of vertices, will the ViewProjectionMatrix value at the beginning of the range stay the same throughout the range, or will it change if the matrix is recalculated concurrently during the render?[/QUOTE]

With traditional specification of transform matrices, they’re often constant (same at beginning and at the end) during a single draw call. However…

If you want to change these transforms per group of vertices within a draw call or even per vertex, you can! Do whatever your needs require.

Case-in-point: character skeletal animation, where you have a “palette” of pre-computed skinning matrices, one per joint (which you upload to the GPU). Then in the vertex shader, you dynamically look up into that palette of transforms, blend them together, and dynamically compute a unique MODELVIEW transform for each vertex in the draw call. This allows you to have a static set of vertices in each draw call (with a static set of vertex attributes), while still animating any/all parts of the animated character at-will.
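A minimal GLSL sketch of what that vertex shader might look like — the names (u_JointPalette, a_JointIndices, a_JointWeights, MAX_JOINTS) and the 4-influence limit are illustrative assumptions, not a fixed API:

```glsl
#version 330 core

// Static per-vertex attributes, uploaded once
layout(location = 0) in vec3  a_Position;     // bind-pose position
layout(location = 1) in ivec4 a_JointIndices; // up to 4 influencing joints
layout(location = 2) in vec4  a_JointWeights; // blend weights, should sum to 1

// Palette of pre-computed skinning matrices, re-uploaded each frame
const int MAX_JOINTS = 64;
uniform mat4 u_JointPalette[MAX_JOINTS];
uniform mat4 u_ModelViewProj; // constant for the whole draw call

void main()
{
    // Blend the influencing joints' transforms by weight to get
    // this vertex's unique skinning matrix
    mat4 skin = a_JointWeights.x * u_JointPalette[a_JointIndices.x]
              + a_JointWeights.y * u_JointPalette[a_JointIndices.y]
              + a_JointWeights.z * u_JointPalette[a_JointIndices.z]
              + a_JointWeights.w * u_JointPalette[a_JointIndices.w];

    gl_Position = u_ModelViewProj * (skin * vec4(a_Position, 1.0));
}
```

The vertex attributes never change between frames; only the contents of u_JointPalette do.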

[QUOTE=Dark Photon;1288862]With traditional specification of transform matrices, they’re often constant (same at beginning and at the end) during a single draw call. However…

If you want to change these transforms per group of vertices within a draw call or even per vertex, you can! Do whatever your needs require.

Case-in-point: character skeletal animation, where you have a “palette” of pre-computed skinning matrices, one per joint (which you upload to the GPU). Then in the vertex shader, you dynamically look up into that palette of transforms, blend them together, and dynamically compute a unique MODELVIEW transform for each vertex in the draw call. This allows you to have a static set of vertices in each draw call (with a static set of vertex attributes), while still animating any/all parts of the animated character at-will.[/QUOTE]

Many thanks.

This is exactly what I’m planning. If I understand your answer correctly, you’re saying not to pass in MODELVIEW but to pull one out of an array depending on the specific part of the model. I was thinking more of sending separate glDrawElements calls for each segment and selecting the needed transform to pass in prior to each call. I imagine your method would be superior, as setup of the GPU transfers would be minimized.

Now I just have to figure out how to determine the indices of the vertices where the changes must be made. Guess I’ll go browsing the GLSL language specs.

New at OpenGL so this may be weird.

If I create a vertex shader with an input mat4 attribute variable, can I ignore the common matrix passed in and use this as a per-vertex modelview to be dynamically altered between frames? Would this take up too much GPU memory?

[QUOTE=Goofus43;1288864]
If I create a vertex shader with an input mat4 attribute variable, can I ignore the common matrix passed in and use this as a per-vertex modelview to be dynamically altered between frames? Would this take up too much GPU memory?[/QUOTE]
Maybe, depending upon the number of vertices. Also, it would use 4 attribute slots, as a mat4 attribute is effectively 4 vec4 attributes.

Typically, there are constraints upon the transformation which mean that a mat4 is an unnecessary generalisation. E.g. if a transformation is composed only of rotations and translations, you can represent it as a quaternion and translation, which is only 7 elements. Or a rotation vector and translation, which is only 6.
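For example, applying a rotation-plus-translation stored that way can be done directly, without ever expanding it to a mat4. A sketch (the function name is illustrative; q is assumed to be a unit quaternion stored as xyz = vector part, w = scalar part):

```glsl
// Transform point p by a rotation stored as a unit quaternion q plus a
// translation t -- 7 floats instead of a 16-float mat4.
// Uses the identity: rotate(q, p) = p + 2*cross(q.xyz, cross(q.xyz, p) + q.w*p)
vec3 transformPoint(vec4 q, vec3 t, vec3 p)
{
    vec3 rotated = p + 2.0 * cross(q.xyz, cross(q.xyz, p) + q.w * p);
    return rotated + t;
}
```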

If each transformation is typically used for multiple vertices, then you can pass the set of transformations as a uniform array and then only need to specify an array index per vertex. This would also reduce the memory bandwidth involved in updating the transformations.
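A sketch of that uniform-array approach in GLSL, for the simple rigid case where each vertex follows exactly one segment (names like u_SegmentTransform and a_SegmentIndex are assumptions for illustration):

```glsl
#version 330 core

layout(location = 0) in vec3 a_Position;
layout(location = 1) in int  a_SegmentIndex; // which segment owns this vertex

const int MAX_SEGMENTS = 32;
uniform mat4 u_SegmentTransform[MAX_SEGMENTS]; // small array, updated per frame
uniform mat4 u_ViewProjection;

void main()
{
    // Rigid (unblended) case: one transform per segment, selected per vertex
    gl_Position = u_ViewProjection
                * u_SegmentTransform[a_SegmentIndex]
                * vec4(a_Position, 1.0);
}
```

Only the small transform array needs re-uploading each frame; the per-vertex data is a single integer index.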

That’s one option, but the one I was thinking of is this:

[ol]
[li]pass down a constant MODELVIEW and MVP for the entire character, and then [/li][li]have the animation matrices (aka joint skinning matrices) transform the model from the bind pose to the current pose (both in OBJECT-SPACE). [/li][/ol]
The animation matrices are OBJECT-SPACE -to- OBJECT-SPACE transforms, so once you apply them to animate the mesh, you’re still in OBJECT-SPACE. Then you apply the MODELVIEW (and MVP) for the entire character to take the animated mesh to EYE-SPACE and to CLIP-SPACE, for lighting and rasterization, respectively.
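The ordering above can be sketched as follows — here u_Skin stands in for the blended result of the joint-palette lookup, and the uniform names are illustrative:

```glsl
#version 330 core

layout(location = 0) in vec3 a_Position; // bind pose, object space

uniform mat4 u_Skin;          // object-space -> object-space joint transform
uniform mat4 u_ModelView;     // whole-character object -> eye
uniform mat4 u_ModelViewProj; // whole-character object -> clip

out vec3 v_EyePos;            // eye-space position, for lighting

void main()
{
    vec4 animated = u_Skin * vec4(a_Position, 1.0); // still object space
    v_EyePos    = vec3(u_ModelView * animated);     // eye space (lighting)
    gl_Position = u_ModelViewProj * animated;       // clip space (raster)
}
```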

I was thinking more of sending separate glDrawElements calls for each segment and selecting the needed transform to pass in prior to each call. I imagine your method would be superior, as setup of the GPU transfers would be minimized.

Your way is definitely an option. But yes, you’re right: vertex skinning tends to be more efficient as it allows for fewer state changes and larger batches. In fact, you can batch up animated characters into large groups of hundreds or thousands, all animating with a unique pose, all driven by the same static set of vertex attributes and the same palette of joint skinning matrices.

Now I just have to figure out how to determine the indices of the vertices where the changes must be made. Guess I’ll go browsing the GLSL language specs.

You might check out some skeletal character models. They already have these indices baked into the vertex attributes for the vertices. They also have the animation transforms needed to animate these vertices.

In a full character animation pipeline, the artists/modelers designing the characters in 3D modeling packages have tools to easily designate indices/influences for each vertex in the mesh. This is then exported/published into datafiles that can be read by the realtime renderer to implement vertex skinning.

[QUOTE=GClements;1288865]

If each transformation is typically used for multiple vertices, then you can pass the set of transformations as a uniform array and then only need to specify an array index per vertex. This would also reduce the memory bandwidth involved in updating the transformations.[/QUOTE]

Basically I thought of this approach last night after posting the question, since each new matrix would apply to every vertex of a skinning segment. Likely the best bet, as I see there are functions to do this.

[QUOTE=Goofus43;1288864]New at OpenGL so this may be weird.

If I create a vertex shader with an input mat4 attribute variable, can I ignore the common matrix passed in and use this as a per-vertex modelview to be dynamically altered between frames? Would this take up too much GPU memory?[/QUOTE]

With this kind of setup, the amount of memory used is likely to be the least of your bottlenecks. Far worse will be compute (recomputing all of these matrices each frame will take a significant amount of time) and transfer of the updated matrices to the GPU (in terms of both bandwidth and management of pipeline stalls).