Applying a transform to every vertex in a scene

Hi all,
I’m working on a particle physics event display that uses OpenGL. I’d like to implement a viewing transformation such that every vertex is moved relative to the world coordinates according to a given algorithm. I’ve already written the algorithm I need: it will take a given x,y,z and transform them to their new position. I just don’t know how to do this to all vertices in a scene using OpenGL.

Apologies in advance if this is well explained somewhere else easy to find - I wasn’t entirely sure what to search for online

Thanks

I think the answer really depends upon how you render your scene - one big object (lots of polys - single draw call) or many objects each with their own draw calls.
In OpenGL, objects are positioned in the scene by constructing a MODELVIEW matrix, which is used to transform the vertices of any given model (object). The VIEW part of the MODELVIEW matrix is just the camera, and in an ideal world the MODEL part is an identity matrix. The effect of this is that the model is drawn into the scene in a way that is viewable by the camera.
To move the object, you just alter the MODEL part of the current MODELVIEW matrix and voila: all the vertices of the model are transformed into the new position.
In the simplest of cases, all you have to do is:

Setup the camera with gluLookAt…

glPushMatrix();         // save the current OpenGL MODELVIEW state
glTranslatef(x, y, z);  // move object to new position in world space
drawMyObject();
glPopMatrix();          // restore the OpenGL MODELVIEW matrix

The effect of the glTranslatef command is to build a translation matrix from x, y, z and multiply it into the current MODELVIEW matrix. This is why the current state must be saved and restored between drawing multiple objects.

There are many tutorials on this; the OpenGL Programming Guide (the Red Book) and web tutorials like www.NeHe.gamedev.net or www.Lighthouse3D.com are all excellent.

The assumption I’m making is that your object(s) is being transformed as a whole by x, y, z, as opposed to parts of your object being transformed by different values. The latter would be much harder than just moving the entire object along a vector and would mean recalculating each and every vertex position for the affected regions of the object. Just how you break an arbitrary mesh into smaller pieces for manipulation like this is unknown to me.
Hope I helped (a bit).

Thanks. In fact, I am trying to alter the rendered position of each vertex depending on its distance from the origin of the world coordinates - I’ve already been able to do scaling and translations.

The actual transform I want to do is as follows:
double a = 1e-3;                 // some constant
double rho = sqrt(x*x + y*y);
rho = rho / (1.0 + a*rho);
z = z / (1.0 + a*z);
// Transform rho back into x and y

This would have to be applied to each and every vertex. Luckily, since this is a physics application, none of the geometry needs to be too complicated: I’m only rendering lines, points, cylinders and boxes. There should be no need to break up the mesh.
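For what it’s worth, here is one way the “transform rho back into x and y” step could be written out, as a minimal sketch: since the direction in the x–y plane doesn’t change, scaling x and y by the ratio rho_new/rho is enough. The function name `warpVertex` is just for illustration.

```cpp
#include <cmath>

// Illustrative helper (name is hypothetical): applies the warp
//   rho -> rho / (1 + a*rho),  z -> z / (1 + a*z)
// to one vertex, in place. Scaling x and y by rho_new/rho
// = 1/(1 + a*rho) moves the vertex radially without changing its
// direction in the x-y plane; when rho == 0 that factor is 1.0,
// so no special case is needed.
void warpVertex(double& x, double& y, double& z)
{
    const double a = 1e-3;                   // some constant
    const double rho = std::sqrt(x*x + y*y);
    const double s = 1.0 / (1.0 + a*rho);    // = rho_new / rho
    x *= s;                                  // "transform rho back into x and y"
    y *= s;
    z /= (1.0 + a*z);
}
```

You would call this on every vertex before handing the data to OpenGL.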

Thanks again for the quick reply, and the useful references. I’m new to all this,as I may have made clear…

In this case, you have two options, depending on the hardware you have access to, the performance you expect and how much learning you’re willing to do:

  • simply transform the vertices yourself before submitting them to OpenGL for rendering (probably by updating the Vertex Buffer Object containing the coordinates)

  • use a Vertex Shader to apply the transformation; this lets the GPU compute the transform, and will probably be faster than doing it on the CPU
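For the second option, a minimal vertex shader sketch might look like the following. This is only an illustration, assuming legacy GLSL 1.20 built-ins (gl_Vertex, gl_ModelViewProjectionMatrix) to match the fixed-function style used earlier in the thread; the uniform name `a` is my own choice.

```glsl
// Sketch only: applies rho -> rho/(1 + a*rho) and z -> z/(1 + a*z)
// to each vertex on the GPU, before the usual modelview-projection.
uniform float a;  // the constant from the transform, e.g. 1e-3

void main()
{
    vec4 v = gl_Vertex;
    float rho = length(v.xy);   // sqrt(x*x + y*y)
    v.xy /= (1.0 + a * rho);    // scales rho to rho/(1 + a*rho)
    v.z  /= (1.0 + a * v.z);
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
```

The uniform would be set once from the application side with glGetUniformLocation and glUniform1f after linking the shader program.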