Vertex Skinning

I’d like to do skinning with the EXT_vertex_weighting extension. My question is… since I have to split the model into many display lists (groups of vertices that share two matrices; there’s a rough sketch of what I mean after the two questions below):

  • is it accelerated on GF1/GF2 and Radeon? And will it work through display lists? I don’t want to use vertex programs (well, actually I can’t :_( )

  • is it faster than doing this myself in software? (the models are 500/600 tris with ~12 bones per model)
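This is roughly what I mean by one group sharing two matrices. Just a sketch, assuming the glVertexWeightfEXT entry point has already been fetched with wglGetProcAddress, and with made-up names for my vertex arrays and bone matrices:

    /* one vertex group influenced by exactly two bones; this block could
       also be compiled into a display list per group */
    glEnable(GL_VERTEX_WEIGHTING_EXT);

    glMatrixMode(GL_MODELVIEW0_EXT);       /* the same unit as GL_MODELVIEW */
    glLoadMatrixf(viewTimesBone0);         /* view * bone matrices, column-major */
    glMatrixMode(GL_MODELVIEW1_EXT);
    glLoadMatrixf(viewTimesBone1);

    glBegin(GL_TRIANGLES);
    for (int i = 0; i < groupVertexCount; ++i) {
        glVertexWeightfEXT(groupWeight[i]);  /* w -> modelview0, 1-w -> modelview1 */
        glNormal3fv(groupNormal[i]);
        glVertex3fv(groupPosition[i]);
    }
    glEnd();

    glDisable(GL_VERTEX_WEIGHTING_EXT);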

well… thanks!

_> Royconejo.


GL_EXT_vertex_weighting is not supported on Radeons; they support the more general GL_ARB_vertex_blend, though.

Any hardware skinning that doesn’t support matrix palettes is a dead end.
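For comparison, this is roughly what a paletted fixed-function path looks like on cards that expose GL_ARB_matrix_palette on top of GL_ARB_vertex_blend. A sketch only, with hypothetical array names, and assuming the ARB entry points have been fetched at startup:

    /* load the bone matrices into the palette */
    glMatrixMode(GL_MATRIX_PALETTE_ARB);
    for (int b = 0; b < boneCount; ++b) {
        glCurrentPaletteMatrixARB(b);        /* select palette slot b */
        glLoadMatrixf(viewTimesBone[b]);     /* view * bone transform */
    }

    glEnable(GL_MATRIX_PALETTE_ARB);

    /* two influences per vertex in this sketch */
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_WEIGHT_ARRAY_ARB);
    glEnableClientState(GL_MATRIX_INDEX_ARRAY_ARB);

    glVertexPointer(3, GL_FLOAT, 0, positions);
    glWeightPointerARB(2, GL_FLOAT, 0, weights);
    glMatrixIndexPointerARB(2, GL_UNSIGNED_BYTE, 0, matrixIndices);

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);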

Taking “hardware” vertex weighting out of an existing product and rewriting it with SIMD sped up rendering of polygonal characters on the same card by about 3x in some work I did a year ago, AND resulted in characters that looked better, because the matrix-count limitation is per vertex, not per triangle. The problem is that the state setup and switching necessary to do skinning with the available extensions dwarfs the actual drawing time; on top of that, the GF2 we were targeting only runs at half speed when blending matrices.
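The CPU version is just a tight loop over the vertices; the SIMD build vectorizes the same math. A plain-C sketch with made-up structure names:

    /* Software skinning, two weighted bones per vertex. bonePalette[] holds
       3x4 row-major transforms; the two weights are assumed to sum to 1. */
    typedef struct { float m[3][4]; } Mat34;

    typedef struct {
        float pos[3];
        unsigned char bone[2];
        float weight[2];
    } SkinVertex;

    static void skin_vertices(const Mat34 *bonePalette,
                              const SkinVertex *src, float (*dst)[3], int count)
    {
        for (int i = 0; i < count; ++i) {
            float out[3] = { 0.0f, 0.0f, 0.0f };
            for (int k = 0; k < 2; ++k) {
                const Mat34 *m = &bonePalette[src[i].bone[k]];
                float w = src[i].weight[k];
                for (int r = 0; r < 3; ++r) {
                    out[r] += w * (m->m[r][0] * src[i].pos[0] +
                                   m->m[r][1] * src[i].pos[1] +
                                   m->m[r][2] * src[i].pos[2] +
                                   m->m[r][3]);
                }
            }
            dst[i][0] = out[0]; dst[i][1] = out[1]; dst[i][2] = out[2];
        }
    }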

If you can afford a higher target platform, I suggest looking into ATI vertex shaders and nVIDIA vertex programs, which let you write your own paletted matrix skinning code that is much more likely to run well. The drawback is that you can’t fit an entire humanoid skeleton in the available constant registers, so you still get to split your mesh into a few pieces, with some switch overhead between each piece.
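Splitting can be as dumb as a greedy pass over the triangles: keep adding triangles to the current piece until its combined bone set would blow the palette budget, then start a new piece. A sketch, with a hypothetical budget of 20 matrices and the usual two influences per vertex (so a triangle touches at most six bones):

    #define PALETTE_BUDGET 20   /* assumption: matrices one piece's constants can hold */

    typedef struct { unsigned char bone[6]; int boneCount; } TriBones;

    static int chunk_has(const unsigned char *used, int count, unsigned char b)
    {
        for (int i = 0; i < count; ++i)
            if (used[i] == b) return 1;
        return 0;
    }

    /* Returns the number of pieces; chunkOfTri[t] gets the piece index of
       triangle t. Works because boneCount <= 6 <= PALETTE_BUDGET, so any
       single triangle always fits in a fresh piece. */
    static int split_by_palette(const TriBones *tris, int triCount, int *chunkOfTri)
    {
        unsigned char used[PALETTE_BUDGET];
        int usedCount = 0, chunk = 0;

        for (int t = 0; t < triCount; ++t) {
            int added = 0, k;
            for (k = 0; k < tris[t].boneCount; ++k)
                if (!chunk_has(used, usedCount, tris[t].bone[k])) ++added;

            if (usedCount + added > PALETTE_BUDGET) {  /* close piece, start a new one */
                ++chunk;
                usedCount = 0;
            }
            for (k = 0; k < tris[t].boneCount; ++k)
                if (!chunk_has(used, usedCount, tris[t].bone[k]))
                    used[usedCount++] = tris[t].bone[k];

            chunkOfTri[t] = chunk;
        }
        return chunk + 1;
    }

Sorting the triangles so that neighbours share bones before running something like this keeps the number of pieces, and thus the switch overhead, low.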

Originally posted by jwatte:
If you can afford a higher target platform, I suggest looking into ATI vertex shaders and nVIDIA vertex programs, which let you write your own paletted matrix skinning code that is much more likely to run well.

Yes… but I also have to deal with more ‘general’ hardware (GF1/GF2, Radeon). I’ve done a lot of tests and research, and when it comes to using some of the new features (vertex programs or pixel shaders) I realize there is no middle ground: either they are used fully (targeting a GF3/Radeon 8500) or they’re not used at all. Some time ago I was trying to use “per-pixel lighting”, for example, but in the end it’s really expensive to do on older hardware (it eats a lot of CPU or video resources), and while it looked fine in demos, it’s not feasible for a real game. And the implementation of those effects is completely different on new hardware (GF3) if you want it to run really fast…

I’m gonna do it in software…

Thanks for your replies

_> Royconejo.

If you want to look good on the high end, you’ll have to implement a flexible state-setup system (a.k.a. “shaders”) and scale back based on the capabilities of the card.

After all, the pipeline is still:

Art tool -> model format & shading parameters -> state setup -> throw vertices at the card

You can usually slice out the parts of “shading parameters” and “state setup” that the current card doesn’t support, and thus degrade with some semblance of grace.
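Concretely, the state-setup side can be as simple as a list of passes, each tagged with the caps bits it needs; passes the card can’t do just get dropped. A sketch with hypothetical caps flags and callback names:

    /* Capability-gated state setup: unsupported passes are skipped, so the
       look degrades instead of breaking outright. */
    enum {
        CAP_VERTEX_PROGRAM = 1 << 0,
        CAP_CUBE_MAP       = 1 << 1,
        CAP_DOT3_BUMP      = 1 << 2
    };

    typedef struct {
        unsigned requiredCaps;             /* caps this pass cannot do without */
        void (*apply)(const void *params); /* pushes GL state for the pass */
    } Pass;

    typedef struct {
        const Pass *passes;
        int passCount;
    } Shader;

    static void draw_with_shader(const Shader *s, unsigned cardCaps,
                                 const void *params, void (*drawGeometry)(void))
    {
        for (int i = 0; i < s->passCount; ++i) {
            if ((s->passes[i].requiredCaps & cardCaps) != s->passes[i].requiredCaps)
                continue;                  /* degrade: drop the unsupported pass */
            s->passes[i].apply(params);
            drawGeometry();                /* throw vertices at the card */
        }
    }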

Of course, if your artists then decide to impart some important gameplay information through runtime modulation of your per-pixel bump map, you’re sort-of screwed, so make sure they get to vet their designs on the bottom end hardware as well as the top of the line.