3D vertex animation thru MPEG-like playback, anyone?

I have these old-timey models, most similar to Quake models: the animation is stop-motion/tweening on a per-vertex basis. They are based on one of the kinds of animation the original PlayStation could do.
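To be concrete, here is roughly the data layout I mean; this is a sketch from memory of the Quake-style approach, with my own made-up names, not the actual file format:

    // Rough sketch of the layout (names are mine): every frame stores a
    // full snapshot of the vertex positions, and playback blends between
    // adjacent frames.
    #include <cstdint>
    #include <vector>

    struct KeyFrame {
        std::vector<float> positions;   // x,y,z per vertex, same count every frame
    };

    struct VertexAnimatedModel {
        uint32_t              vertexCount;
        std::vector<KeyFrame> frames;   // one full vertex snapshot per frame
    };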

I am working on restoring this turn-of-the-century program. I don’t have access to the source code, but hacking around the imported API calls is not too difficult.

This program decompresses these animations on the fly, probably without ever expanding the entire animation into memory. The whole process is really CPU-intensive, I reckon, not to mention that it spams the vertex data to video memory every frame.

My thinking was that if I could take over drawing the animations for the program, the best thing I could do would be to load multiple keyframes into video memory and tween between them on the GPU (see the shader sketch below). The models are chopped up piecemeal, and the animations along with them.
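Something like this is what I have in mind for the tweening part; a minimal sketch assuming OpenGL 3.3, with each keyframe resident in its own vertex buffer (all names are mine, buffer setup omitted):

    // Two keyframes bound as two position attributes; a uniform picks
    // the blend point between them, so the CPU never touches vertices.
    static const char* kTweenVS = R"(
        #version 330 core
        layout(location = 0) in vec3 posFrameA;   // keyframe N
        layout(location = 1) in vec3 posFrameB;   // keyframe N+1
        uniform float u_tween;                    // 0..1 between the frames
        uniform mat4  u_mvp;
        void main() {
            vec3 p = mix(posFrameA, posFrameB, u_tween);
            gl_Position = u_mvp * vec4(p, 1.0);
        }
    )";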

PROBLEM STATED

But I was wondering how completely impractical it would be to render the animations to a movie. Nothing fancy that would allow mipmapping or anything (unless there is a good library for that), but just so the animations could be compressed over time (and maybe a little over space), and to get them into a format that might be more amenable to modern ways of doing things.
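On the sampling side, the “movie” idea would look something like this; a sketch assuming one texel per vertex per frame, so each row of the texture is one frame of the movie (names are mine):

    // Positions come from a texture instead of a vertex buffer; texelFetch
    // avoids filtering, and advancing the animation is just bumping u_frame.
    static const char* kMovieVS = R"(
        #version 330 core
        uniform sampler2D u_animTex;   // width = vertex count, height = frame count
        uniform int   u_frame;         // current frame of the "movie"
        uniform mat4  u_mvp;
        void main() {
            vec3 p = texelFetch(u_animTex, ivec2(gl_VertexID, u_frame), 0).xyz;
            gl_Position = u_mvp * vec4(p, 1.0);
        }
    )";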

A) I am wondering whether compression is a big win for movies beyond saving space (I reckon bandwidth must be a big deal too, but what if everything is in video memory already?), and whether hardware knows enough to decode movies straight to texture memory.

B) Is this kind of thing being practiced or looked into as we speak? I expect that if I web-search “3D movies” tomorrow, I am likely to turn up a lot of hoopla about “3D” movies in movie/home theatres. But I intend to do at least a little research into this, honest.

Anyway, does it sound like something worth giving a shot?

If it worked out even half well, it would be a pretty robust way to animate things, I think. There would need to be multiple movies playing out to textures at once. I’ve implemented movie playback through codec -> render target before, but I think that would be slower than what I have in mind. I’m thinking the compressed movies would need to be loaded entirely into video memory, so that the hardware would, at minimum, be doing the decompression.
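One way to get exactly that (whole animation resident in video memory, decompressed by the hardware as it samples) would be a GPU block-compressed float format rather than a real video codec; a sketch assuming OpenGL 4.2+ and data already encoded offline to BC6H/BPTC, which is 3-channel half-float and so suits offsets (function and parameter names are mine):

    // Upload a whole animation texture in a block-compressed float format.
    // The texture unit decodes blocks on the fly, so after this one upload
    // there is no per-frame transfer at all. The slow part (encoding to
    // BC6H) happens offline.
    #include <GL/glcorearb.h>  // or whatever GL loader you use

    GLuint uploadCompressedAnim(const void* bc6hData, GLsizei byteSize,
                                GLsizei vertexCount, GLsizei frameCount) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGB_BPTC_SIGNED_FLOAT,
                               vertexCount, frameCount, 0, byteSize, bc6hData);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }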

Getting the most out of the hardware isn’t important here. Not a big fan of stretching things too thin.

PS: Just to be clear, the concept is to render vertex offsets (maybe lighting normals too) to an animated texture. Better than 8 bits per channel would be required one way or another.
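For the precision part, one cheap option past 8 bits (short of going full float) is 16-bit quantization with a per-clip scale/bias that gets undone in the shader; a sketch with my own names, assuming something like a GL_RGB16 texture:

    // Map an offset component in [minVal, maxVal] to a 16-bit integer;
    // the vertex shader reverses it: offset = mix(minVal, maxVal, texel).
    #include <algorithm>
    #include <cstdint>

    uint16_t quantize16(float v, float minVal, float maxVal) {
        float t = (v - minVal) / (maxVal - minVal);   // normalize to 0..1
        t = std::min(1.0f, std::max(0.0f, t));        // clamp stragglers
        return static_cast<uint16_t>(t * 65535.0f + 0.5f);
    }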