Vertex programs - About keyframe interpolation

All right, this should probably be posted in the advanced forum, but even if that one is flooded with messages, I feel it should be left to people who are really serious about GL. I am not, so I try to post there as little as possible.

The point of this message is to get some impressions about vertex programs when used for keyframe interpolation. We all know that computing the in-between frames on the GPU is fast - probably faster than doing it on the CPU, I think, even if I cannot be sure (I still have to experiment with that).

Now, the problem is that VPs are one-way only. You put data in (say, pos0, pos1, weight) and you get nothing back. Well, you get something, but you don't know what, since the calculations are done on the GPU. I heard about a future render_to_vertex_array thing that may solve this problem, but let's go on.
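
Just to make the setup concrete, here is roughly what I have in mind - a minimal, untested sketch, where the attribute and parameter bindings (pos1 in generic attribute 1, the weight in program environment parameter 0) are only my own assumption:

```c
/* Hypothetical ARB vertex program: blends two keyframe positions.
   pos0 arrives as the regular vertex position, pos1 as generic
   attribute 1, and the blend weight in program.env[0].x. */
static const char *keyframeVP =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "PARAM weight = program.env[0];\n"
    "ATTRIB pos0  = vertex.position;\n"
    "ATTRIB pos1  = vertex.attrib[1];\n"
    "TEMP blended;\n"
    "SUB blended, pos1, pos0;\n"              /* pos1 - pos0            */
    "MAD blended, blended, weight.x, pos0;\n" /* pos0 + w*(pos1 - pos0) */
    "DP4 result.position.x, mvp[0], blended;\n"
    "DP4 result.position.y, mvp[1], blended;\n"
    "DP4 result.position.z, mvp[2], blended;\n"
    "DP4 result.position.w, mvp[3], blended;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";
```

The blended position only exists inside the program; nothing here writes it anywhere I can read it back from.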

Now, say I want to do collision detection on a model which is interpolated on the GPU. The only way to do this really accurately is to re-blend the frame on the CPU. Since that will happen a lot of the time, the vertex program is almost useless in this case - computing on the CPU and then uploading the result should be faster. Of course, I could just do the CD on the keyframes only, but it would not be as accurate. I am referring to per-triangle collision detection; I'm quite sure UT2k3 has it.
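
For clarity, this is the kind of CPU re-blend I mean - just a sketch of my own, reproducing on the CPU what the vertex program above does, so the result can be fed to whatever per-triangle test you already have:

```c
typedef struct { float x, y, z; } Vec3;

/* Linear blend of two keyframe positions, same math as the VP. */
static Vec3 lerp3(Vec3 a, Vec3 b, float w)
{
    Vec3 r;
    r.x = a.x + w * (b.x - a.x);
    r.y = a.y + w * (b.y - a.y);
    r.z = a.z + w * (b.z - a.z);
    return r;
}

/* Re-blend one triangle on the CPU. frame0/frame1 are the two keyframes,
   idx holds the triangle's three vertex indices, weight is the same
   blend factor that was sent to the vertex program. */
static void blend_triangle(const Vec3 *frame0, const Vec3 *frame1,
                           const unsigned idx[3], float weight, Vec3 out[3])
{
    int i;
    for (i = 0; i < 3; ++i)
        out[i] = lerp3(frame0[idx[i]], frame1[idx[i]], weight);
}
```

Doing this for every triangle that might collide, every frame, is exactly the duplicated work I'm worried about.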

This kind of problem is worst in the case of accurate CD, but it still applies to other things.

Another example: simple silhouette determination - computing a triangle normal and checking whether it faces the camera or not. In some cases, the interpolated triangle may NOT be facing the camera even when the keyframe triangles do. This is not really critical, since shadow volumes are not essential to interactivity; they just make the image better.
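
The facing test itself is trivial - something like this (reusing the Vec3 type from the snippet above) - but again it only works if I have the blended vertices on the CPU:

```c
static Vec3 sub3(Vec3 a, Vec3 b)
{
    Vec3 r;
    r.x = a.x - b.x;  r.y = a.y - b.y;  r.z = a.z - b.z;
    return r;
}

static float dot3(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 cross3(Vec3 a, Vec3 b)
{
    Vec3 r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}

/* Nonzero if triangle (v0,v1,v2) faces the eye point. With GPU
   interpolation, v0..v2 would first have to be re-blended on the CPU. */
static int faces_eye(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye)
{
    Vec3 n = cross3(sub3(v1, v0), sub3(v2, v0));
    return dot3(n, sub3(eye, v0)) > 0.0f;
}
```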

However, the keyframe interpolation thing worries me. I would like to hear what other people have to say about this.
Thank you in advance.

Originally posted by Obli:
Now, say I want to do collision detection on a model which is interpolated on the GPU. The only way to do this really accurately is to re-blend the frame on the CPU. Since that will happen a lot of the time, the vertex program is almost useless in this case - computing on the CPU and then uploading the result should be faster.

That's right.
One of the first things to know about vertex programs is that if you need feedback about what the vertex program computes, you have to do it yourself.

Originally posted by Obli:
Of course, I could just do the CD on the keyframes only, but it would not be as accurate. I am referring to per-triangle collision detection; I'm quite sure UT2k3 has it.

It is highly recommended NOT to test CD on a per-triangle basis. It's damn slow and doesn't always give the best information. I'm not sure what CD is performed by UT2K3, but they rock for sure.
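
For instance, the usual coarser alternative is a bounding-volume test, which doesn't depend on the per-vertex blending at all - a rough sketch (names and types here are just illustrative):

```c
typedef struct { float x, y, z; } Point3;
typedef struct { Point3 center; float radius; } Sphere;

/* Sphere-vs-sphere overlap: a cheap broad test. Make the radius
   conservative enough to cover both keyframes and nothing ever has
   to be read back from the vertex program. */
static int spheres_overlap(Sphere a, Sphere b)
{
    float dx = a.center.x - b.center.x;
    float dy = a.center.y - b.center.y;
    float dz = a.center.z - b.center.z;
    float r  = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
```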

Originally posted by Obli:
Another example: simple silhouette determination - computing a triangle normal and checking whether it faces the camera or not. In some cases, the interpolated triangle may NOT be facing the camera even when the keyframe triangles do.

So what is the problem? Backface culling is performed after the vertex program, so there's no difference whether a vertex program is enabled or disabled.

Thanks for your reply Vincoof.

I was not really sure about the example concerning shadow volumes - it looks like I would simply have been better off keeping it to myself.

About the per-triangle CD… I actually think it can be done. You may not need to check every triangle for CD every frame, and if it is too slow you can always fall back to the old CD. It was just a supposition, however.

I heard of a thing called render-to-vertex-array. Let's hope it allows the results of vertex programs to be read back (I don't think so, however). Does anyone know something about this feature? I heard of it while reading some papers from this year's GDC.

Kind of an aside to the topic but still kind of relevant:

With render-to-vertex-array, would it be possible to have one floating-point texture containing the xyz positions of some geometry, a second floating-point texture containing a normal for each vertex specified in the first texture, and then a floating-point displacement map (generated by some noise)? We could then blend these three textures together using a fragment program that places the result in a texture (or a vertex buffer in our case), and then draw the frame with this updated vertex buffer. On the next pass we just repeat, with the texture we have just computed as the new vertex positions. We could use these textures to store keyframes as well… Would this be cool, or have I been breathing too much paint fumes again? What do you guys think?
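
In case it helps, here is the per-vertex math I imagine that fragment program doing, written on the CPU just to show the blend (everything here - names, layout - is purely illustrative, since no such extension exists yet):

```c
typedef struct { float x, y, z; } Float3;

/* pos and nrm stand for the two floating-point textures (positions and
   normals), disp for the noise displacement map, and out for the
   texture / vertex buffer that would be rendered to. One element per
   vertex: displace each position along its normal. */
static void displace_vertices(const Float3 *pos, const Float3 *nrm,
                              const float *disp, Float3 *out, int count)
{
    int i;
    for (i = 0; i < count; ++i) {
        out[i].x = pos[i].x + disp[i] * nrm[i].x;
        out[i].y = pos[i].y + disp[i] * nrm[i].y;
        out[i].z = pos[i].z + disp[i] * nrm[i].z;
    }
}
```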

I actually think I have to read the GDC papers on this subject again. Anyway, render-to-VA is a work in progress… We'll have to wait and see how it works.

Render-to-VA is a feature that is being worked on, but it won't be available for some time (maybe some years).

I'm not sure the render-to-VA feature will allow texture lookups. It would be a strong break in the OpenGL pipeline. I wonder what will be done. Hmmm.