View Full Version : Hardware Transformations vs Software Transformations

11-30-2004, 10:22 PM
I'm writing some collision routines where I need my actual vertex data to be transformed. I assumed I'd have to write my own Rotation, Translation, and Scaling routines to manipulate my data properly, but I was wondering if there's a way to retrieve the vertex data after my glRotate/glScale/glTranslate calls. Wishful thinking?

11-30-2004, 11:11 PM
You won't be able to get the transformed vertices back in an easy fashion (the only workaround is abusing pixel buffers, AFAIK).

Normally in a collision system you would want to approximate most of your geometry with simple shapes, i.e. boxes, spheres, capsules. You then just transform those, which is much faster and easier to check than real per-poly tests.

There might be just a very few objects that really need per-poly tests (terrain, for example). So most objects still benefit from hardware T&L: you render the "nice" model but use the simplified geometry for collision.

12-01-2004, 03:01 AM
Have a look at glFeedbackBuffer.

12-01-2004, 03:45 AM
You don't transform all vertices into one common space to do collision; you simply transform the vertices of the object with the fewest vertices into the coordinate system of the one with the most. Therefore, if you're testing a cube against a line segment, you'd just transform the line segment into the cube's coordinate system and leave the cube's vertices alone before doing your collision tests.

12-04-2004, 10:34 AM
Thanks for the help guys.

Zbuffer: glFeedbackBuffer. I think that'll do the trick. My only concern is that the feedback occurs after polygon culling. That might be an issue, but I guess I'll play with it and find out.

12-05-2004, 03:43 AM
Please don't use the feedback buffer for this; it will be terribly slow for everything except the most trivial cases. In general there's no need to transform all vertices for collision detection. Do what knackered said, or, if you do need to collide multiple rotating meshes, use some kind of spatial hierarchy so you only need to transform vertices of the potentially intersecting parts. Even then, you only need to transform the vertices of one mesh into the coordinate frame of the other object.