Moving meshes relative to each other

Hi everyone. What I'd like to know is the best way to move objects relative to each other. At the moment I have a transform class that can move every vertex when multiplied in a vertex shader. As I understand it, I basically have two choices when it comes to moving meshes relative to each other.

  1. Change the positions that I read from a file and call glBufferData() every time I do so.

  2. Give each mesh its own vertex attribute array and, in the vertex shader, multiply each one by a different transform uniform.

Please tell me which one I should use, or whether I should do something else entirely.

Which one sounds more efficient? Looping over every vertex position and applying an arbitrary transformation to it on the CPU, then uploading that data to the GPU for processing, every single frame?

Or using a vertex shader, which is GPU hardware that is specifically designed to loop over every vertex position and apply an arbitrary transformation to it on the GPU, so that the vertex data for a particular mesh remains static?
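By contrast, the vertex shader version of the same transform is a few lines of GLSL. This is a minimal sketch; the attribute location and uniform name are illustrative:

```glsl
#version 330 core
layout(location = 0) in vec3 position;

// Per-mesh transform, set from the CPU (e.g. with glUniformMatrix4fv)
// before each draw call; the vertex buffer itself never changes.
uniform mat4 transform;

void main() {
    gl_Position = transform * vec4(position, 1.0);
}
```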

I’m pretty sure that changing vertex formats, buffer bindings, and uniform values is faster than uploading potentially megabytes of vertex data every frame.

Streaming data to a buffer is something you do when your transformation process is so complex that a vertex shader can’t do it efficiently (or when committed memory storage is an issue, as with ROAM and similar algorithms). Multiplying a position by a constant matrix isn’t that.

Thanks, now it seems obvious. I can just send a different transform matrix to the shader uniform before I draw each mesh.

Or even better, create a buffer that contains the transforms for all of your meshes and select the appropriate one in the shader. Update as needed with the highest performing method you have available on your target platform (ideally a persistently mapped buffer).
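A sketch of that idea, assuming GL 4.3+ and a shader storage buffer; the block name, binding point, and meshIndex uniform are all placeholders (with multi-draw you could use gl_DrawID instead of a uniform):

```glsl
#version 430 core
layout(location = 0) in vec3 position;

// One buffer holding a transform per mesh; update it once per frame
// with whatever upload path is fastest on your target platform.
layout(std430, binding = 0) buffer Transforms {
    mat4 model[];
};
uniform int meshIndex; // which mesh this draw call belongs to

void main() {
    gl_Position = model[meshIndex] * vec4(position, 1.0);
}
```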

If you’re learning OpenGL, why not learn the proper high-performance path from the get-go? You probably already have (or can get) all the matrices you need beforehand anyway.
Retrofitting AZDO principles into a classic render loop might prove harder in the long run.

L.E.: There’s a reason I’d like everyone to use 4.x features as much as possible: to create a large enough user base that IHVs fix their damn drivers. AMD, for example, is actively looking into reports on devgurus.amd.com, even from indies and hobbyists, and I’m sure nVidia is as well. Whenever I see someone posting “What’s wrong with my code: glBegin(…” I sigh.

Because that requires way too much effort for a beginner. He’s got enough stuff to learn without also learning about UBOs, uniform blocks, and the fastest ways to update buffer objects.

A noble goal to be sure.

True, I wasn’t expecting him to actually start doing all of that immediately (hence the “Or even better”), just spreading the word. Even a simple “uniform mat4 transforms[MAX_WHATEVER]; uniform int idx;” with transforms updated per frame/per program and idx per frame/per program/per mesh would simplify things in the future. He did mention “VertexAttribArray” and “glBufferData” so I assumed he got the basic gist of buffers.
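Spelled out, that simple version might look like the sketch below; MAX_WHATEVER (64 here) is an arbitrary placeholder bounded by your uniform budget:

```glsl
#version 330 core
layout(location = 0) in vec3 position;

uniform mat4 transforms[64]; // updated per frame / per program
uniform int idx;             // set per mesh, before each draw

void main() {
    gl_Position = transforms[idx] * vec4(position, 1.0);
}
```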