Yes. That’s by far the best way to go. For basic single-animation-track skeletal playback, you can animate thousands of skeletally animated characters this way, much, much more cheaply than you can do it on the CPU.
The reason I switched to modern OpenGL and started thinking about this is that my comp has 32 MB of VRAM, and using vertex arrays with the fixed pipeline was not working. I was getting some weird behavior (vertices appeared to stick in place, then pop back randomly).
So I switched to VBOs and just went down the rabbit hole.
I can relate. VBOs take some reading and experience to learn to use efficiently. If you have questions on that, I’d suggest starting a separate thread rather than mixing it in here. I and others will be happy to help with tips and suggestions.
So: a vertex shader can receive attributes and uniforms. But if I were to do skinning in a vertex shader, I would need to pass the transforms as uniforms, per bone associated with a vertex, every frame. The reason I was considering vertex shaders was to reduce the CPU->GPU overhead, but it still appears to be there anyway…
For basic skeletal animation (basic, single animation track playback), you don’t need to upload anything per frame except a tiny uniform that defines the current time and which animation track the character is on. You let the GPU handle everything else (specifically, in a vertex shader you write).
You can pre-upload all your bind-pose skeletal meshes to a VBO, and pre-upload all your skinning transforms (as matrices, quaternion-translation pairs, or, better, dual quaternions) to a texture. Then come render time, in the vertex shader you just sample the appropriate joint transforms from adjacent keyframes based on the current time (if you want to support keyframe interpolation), blend them based on joint weights (if you support smooth skinning), compute the aggregate transform, and then use that to transform the vertex position. Conceptually very simple (but with skeletal, there’s lots of fun in the details!)
If I am not uploading the pre-posed verts, then I am uploading the pose transforms every frame; the overhead still exists, no?
For this basic single-animation-track playback, you are pre-uploading the pre-posed (bind pose) verts and pre-uploading the full skinning transforms (for all tracks/keyframes/joints). So that overhead doesn’t exist.
However, when you get further along and want to support more complex animation (feathering, blending, IK, etc.), then for characters that require it you may decide to compute your skinning transforms on the fly on the CPU and upload them per frame to the GPU. Of course, that’s a little more expensive; it’ll probably reduce the number of characters you can animate at the same frame rate by an order of magnitude or two. You’re still skinning on the GPU, but you’re now computing pose transforms dynamically on the CPU (whereas before, the per-joint pose transforms were all precomputed and pre-uploaded, requiring no per-frame compute cost on the CPU). Alternatively, you might enhance your implementation to generate these dynamic pose transforms on the GPU as well.
As it stands now: I have quat & trans pairs…
Sounds like you’re on the right track. But do read up on dual quaternions! Your joints will thank you:
The first is a good CG intro to DQ. The second is the “meaty” stuff (shader code, technical papers, etc.)
…that I use to transform verts and send those verts to the GPU every frame.
Is there a cleaner, faster, more efficient way to do this that makes more use of the GPU?
Yep! And we’re just scratching the surface here!
(By the way, none of this requires a compute shader, and all the complexity that brings)
Just ask if you have more questions!