Placing thousands of models on a sphere

I have a very large number of small models, each in its own display list.
I need to place them on a spherical surface when in view… basically a terrain.

I have already broken down the way the models are drawn into dynamic display list groups. This means I get a more even load on the CPU as groups are initialized in stages, and re-drawn more quickly once in a parent display list.

My question is: at the moment I am calculating their initial position on the CPU, with fixed-pipeline calls to translate and rotate them. Not the best solution, I think, and I am trying to get creative…

There is no way around their being constantly initialized and destroyed the way they are, as they are independent objects and their locations are based on a procedural algorithm over a massive landscape.

But I was wondering if it would be faster to simply draw the objects using one glTranslate (forget the rotation) and then handle the orientation of the model in a vertex shader… (Their orientation is to the landscape's surface normal + a variance, btw.)

If anyone has an opinion to share I am very interested…

Beware that trees (and buildings) grow vertically, not along the surface normal.
Use VBOs instead of display lists; they are more flexible and more future-proof (see appendix E in the GL 3 spec).
You should do both the translate and the rotate in the vertex shader: just pass uniforms defining, for each model, the planet center, height, and lat/long coordinates.

Thanks for that. Yeah… I should have said the sphere’s normal.

Good point on VBOs. I am prototyping with display lists but will swap the class out so that it’s handled by VBOs in the end.

Deprecation of the GL matrix functions does not mean that using matrices is a bad idea. But emulating that functionality in a vertex shader with many instructions would be inefficient…

It’s highly recommended to write your own replacement for the matrix stack, and pass a gl_ModelViewProjectionMatrix equivalent to the shader.

Can you elaborate on what you mean in that last sentence…

My main aim here is to make the translations and rotations I must do on each object as efficient as possible… Basically pipeline and parallelize it as much as possible, and also cut down on traffic between the CPU and GPU. That last bit should not be at the expense of the GPU though.

My app is already shader limited, and I have a fair amount of spare time on the CPU, so I am concerned that if I push all the maths into the shader I may actually see no speed benefit as the GPU is already quite busy…

To simplify my question…

Is pushing the rotations and transformations into the vertex shader by passing uniforms going to speed things up, or simply shift work from the CPU to the GPU?

Do I get any advantage from multiple pipelines in the GPU over the bottleneck of doing glRotate / glTranslate per object on the CPU side?

I already have my own maths library rolled btw.

http://www.opengl.org/registry/specs/ARB/instanced_arrays.txt or http://www.opengl.org/registry/specs/ARB/draw_instanced.txt or the GL 3.0 core versions. Rendering thousands of similar objects with different transformations is exactly what instancing is good for.

That looks funky! I had not heard of that before. Thanks. :slight_smile:

<s>Is there a RangeElements equivalent?</s>
I am a moron. Ignore that. :slight_smile:

I have played with quite a lot of variations of drawing these objects today.

I’ve settled on a set of VBOs, using glDrawElementsInstancedEXT (thanks ScottManDeath) to draw these models. With some clever caching, that has taken a big load off the CPU and also lowered the amount of traffic I have to send to the GPU, mainly by cutting out lots of individual glDrawArrays calls.

I think basically that I am hitting hardware limits on the GPU now.

However, at the moment I am sending a big array of matrices to the shader.
Each one describes a rotation and translation from the center of the sphere to the surface.

It works, and it’s pretty fast… but it still seems a bit inefficient.

I can’t see an easy or efficient way to just use height and lat / long and do the maths in the shader… Would you care to elaborate, or show me an example anywhere on the net…

I’ve had a look at quite a few examples of ‘skinning’ objects using shaders, and people still seem to be doing the maths with the GL matrix commands and uploading sets of matrices…

As oc2k1 pointed out, my idea would consume GPU processing power inefficiently: transformations would be calculated separately for each vertex, whereas for static geometry the instanced method is better.

Thanks for confirming that.

Sorry if this is a bit off topic, but how do you pass that much uniform data (I mean the matrices) to the shader? Do you use bindable uniforms, textures, or something else?

More precisely, what are the ways of sending an array of per-instance data so I can use gl_InstanceID to access the data corresponding to the current instance? And what are the advantages/disadvantages of each method?