What benefit do I get in a rigid body particle system if I use the modern OpenGL pipeline?

I am simulating a rigid body dynamics system: a collection of particles bouncing around inside a containing geometry such as a cube. The particle positions are computed on the CPU, and I then use OpenGL to display them. There are two ways to use OpenGL. One is the old immediate mode (fixed-function pipeline), in which I transfer the positions to the GPU every time I finish computing them. The other is the modern approach, which stores the array of particle positions on the GPU so that I only need to transfer parameters such as the camera angle. I tried the modern approach because I like using new versions, but I cannot see the benefit: I still have to transfer the computed positions to the GPU each frame, so I cannot convince myself (or other people) that the newer OpenGL pipeline is the better choice. Can you help me with this question? That is, what benefit do I get from the modern OpenGL pipeline, instead of the old immediate mode, when simulating a rigid (or soft) body particle system whose particle positions are computed on the CPU? Thank you.

If you’re updating all the data from the CPU each frame, then there isn’t any inherent advantage to separating upload and rendering.

But that’s not a particularly typical case; in most graphically intensive programs, some of the data remains constant between frames, so you avoid uploading the same data repeatedly.
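As a minimal sketch of that split (names like containerVerts, particlePositions, and the counts are placeholders): the container mesh goes up once, and only the particle positions travel each frame.

    GLuint containerVBO, particleVBO;
    glGenBuffers(1, &containerVBO);
    glBindBuffer(GL_ARRAY_BUFFER, containerVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(containerVerts), containerVerts,
                 GL_STATIC_DRAW);            // uploaded once, stays on the GPU

    glGenBuffers(1, &particleVBO);
    glBindBuffer(GL_ARRAY_BUFFER, particleVBO);
    glBufferData(GL_ARRAY_BUFFER, numParticles * 3 * sizeof(float), nullptr,
                 GL_DYNAMIC_DRAW);           // allocate storage once, fill later

    // per frame: only the particle data is transferred
    glBindBuffer(GL_ARRAY_BUFFER, particleVBO);
    glBufferSubData(GL_ARRAY_BUFFER, 0, numParticles * 3 * sizeof(float),
                    particlePositions);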

Thank you for the reply. This is for my homework; I want to take advantage of modern OpenGL to speed up my program a little. It seems that at least the containing geometry remains unchanged on the GPU, so I can apply the new approach to it. If that is all, CUDA might be a promising way to squeeze more computational power out of my machine, but that would mean a lot of code changes…

In OpenGL 4.3+, you can use a compute shader. In earlier versions, you can achieve the same ends by (ab)using a vertex shader (with transform feedback) or a fragment shader (with a framebuffer texture).

GLSL doesn’t have all of the features of CUDA, but it isn’t specific to NVIDIA and doesn’t require any additional libraries.
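A rough sketch of the GL 4.3 compute route, assuming a simple Particle layout, binding point, and work-group size (all choices made for this example, not fixed by the API):

    const char* computeSrc = R"(
    #version 430
    layout(local_size_x = 128) in;

    struct Particle { vec4 position; vec4 velocity; };  // vec4 for std430 alignment

    layout(std430, binding = 0) buffer Particles { Particle particles[]; };
    uniform float frametime;

    void main() {
        uint i = gl_GlobalInvocationID.x;
        if (i >= uint(particles.length())) return;      // guard the last work group
        particles[i].position += frametime * particles[i].velocity;
    }
    )";

    // compile and link into 'program' as usual, then each frame:
    glUseProgram(program);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, particleSSBO);
    glDispatchCompute((numParticles + 127) / 128, 1, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);     // before anything reads the buffer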

Try simulating 1 million particles on the CPU and uploading the results, then try simulating the particles on the GPU: you’ll see that the GPU does the same work much faster. Besides that, you don’t need to upload 1 million × sizeof(Particle) bytes to the GPU each frame; the particle data simply stays there once you’ve uploaded the initial state. The “disadvantage” is that you don’t have direct access to it from the CPU; you’d have to download the particle buffer data first (in case you need that).
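If you ever do need the positions back on the CPU, the download might look like this (particleSSBO and numParticles are placeholder names; a readback stalls the pipeline, so do it sparingly):

    std::vector<float> positions(numParticles * 4);     // assuming vec4 positions
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, particleSSBO);
    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,
                       positions.size() * sizeof(float), positions.data());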

To do the physics, you can capture the world’s geometry (each triangle) into transform feedback buffers, bind that buffer later as a uniform or shader storage buffer, and do the physics in the compute shader (which simulates each particle).

Here’s an example:
https://sites.google.com/site/john87connor/compute-shader/tu
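As a rough sketch of the capture step (captureProgram, triangleBuffer, and numWorldVertices are placeholder names; the vertex shader is assumed to declare a worldPos output that was registered with glTransformFeedbackVaryings before linking):

    // capture pass: run the world geometry through the vertex shader and
    // record the transformed triangles instead of rasterizing them
    glEnable(GL_RASTERIZER_DISCARD);                  // no pixels needed here
    glUseProgram(captureProgram);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, triangleBuffer);
    glBeginTransformFeedback(GL_TRIANGLES);
    glDrawArrays(GL_TRIANGLES, 0, numWorldVertices);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    // physics pass: the same buffer, now visible to the compute shader
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, triangleBuffer);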

The collision detection would look something like this:

bool IntersectionTriangleLine(...) { /* true if segment AB crosses the triangle */ }

// each compute shader invocation simulates one particle:
vec3 A = particle.position;
vec3 B = particle.position + frametime * particle.velocity;

for each triangle in captured_world_triangles
{
    if (IntersectionTriangleLine(A, B, triangle))
    {
        // reflect the velocity about the triangle's normal, e.g.
        // particle.velocity = reflect(particle.velocity, triangle.normal);
    }
}

particle.position = B;

Even with your specific use case, newer OpenGL versions (and when I say “newer” I mean going back as far as OpenGL 1.5, because, as I like to repeatedly stress, buffer objects and shaders are not new features: they have been available for over 15 years!) can allow you to separate the upload from the drawing, potentially giving you better pipelining.

With immediate mode, upload and drawing are a one-step operation.

With buffers you can upload at the start of a frame, go away and do some other useful work (like maybe calculating the next frame’s data), then come back and draw after that work is done.
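A sketch of that frame structure (SimulateNextFrame is a hypothetical CPU physics step, and the swap call is platform specific):

    while (running)
    {
        // 1. upload the positions computed during the previous iteration
        //    (glBufferSubData copies the data before returning, so the
        //    array can be reused immediately)
        glBindBuffer(GL_ARRAY_BUFFER, particleVBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0, numParticles * 3 * sizeof(float),
                        particlePositions);

        // 2. useful CPU work overlaps with the upload
        SimulateNextFrame(particlePositions);

        // 3. draw with the data uploaded in step 1
        glDrawArrays(GL_POINTS, 0, numParticles);
        SwapBuffers();
    }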