I wanted to ask if anyone had any opinions on the best way to render lots of sprites using vertex buffer objects.
Each sprite has several parameters like:
- position (updated often)
- texture2darray frame number (for animated sprites, updated a lot)
- rotation (updated less frequently, probably never used for some)
- scale (updated rarely, probably almost never used for most)
I’m trying to figure out the best way to send these attributes to the GPU so I can render as many sprites as possible, as fast as possible. The first optimization, of course, is to build the quad in the geometry shader, so each sprite only requires a single vertex. Still, the big question is: what is the best way to pass the rest of the sprite attributes to the GPU?
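For reference, here's a minimal sketch of the point-to-quad expansion I mean. It assumes the vertex shader passes through a clip-space center and a per-sprite half-size; the variable names are illustrative, not from any particular engine:

```glsl
// Geometry shader: expand each point into a screen-aligned quad.
#version 150

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in vec2 vHalfSize[];   // per-sprite half extents from the vertex shader
out vec2 gTexCoord;

void main() {
    vec4 center = gl_in[0].gl_Position;
    vec2 hs = vHalfSize[0];

    // Emit the four corners as a triangle strip.
    gl_Position = center + vec4(-hs.x, -hs.y, 0.0, 0.0);
    gTexCoord = vec2(0.0, 0.0);
    EmitVertex();

    gl_Position = center + vec4( hs.x, -hs.y, 0.0, 0.0);
    gTexCoord = vec2(1.0, 0.0);
    EmitVertex();

    gl_Position = center + vec4(-hs.x,  hs.y, 0.0, 0.0);
    gTexCoord = vec2(0.0, 1.0);
    EmitVertex();

    gl_Position = center + vec4( hs.x,  hs.y, 0.0, 0.0);
    gTexCoord = vec2(1.0, 1.0);
    EmitVertex();

    EndPrimitive();
}
```

Rotation and scale could be folded in here too, by rotating/scaling the corner offsets before adding them to the center.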
I had a couple of ideas:
- Treat each sprite the same way you’d treat a traditional mesh. In this case all the parameters would be uniforms (like a transformation matrix) and the vertex buffer would not really do anything. This seems like it would require too many API calls, since you’d be issuing at least one call to push the uniform data plus a draw call for every single sprite.
- Use a large dynamic vertex buffer for all of the sprites and pack everything into vertex attributes. Update it once per frame, then sort and batch the draw calls by shader/texture state change.
Is it better to use one large VBO or to break everything into a set of smaller ones? If the answer is to break it up, how do you determine when to do this?
To deal with VBO locks when you need to write to them, would it make sense to use a double-buffering approach? Maybe have two VBOs and alternate which one gets written each frame? Or a single VBO that’s twice as large, where you always write to the opposite half?
The vertex-attribute approach seems like it could be somewhat wasteful, though. If 80% of my sprites have a rotation of 0, then it doesn’t really make much sense to store that value for every sprite in a vertex attribute.
- Use a combination of a vertex buffer and something else: perhaps a texture buffer object and/or uniforms for attributes like rotation and scaling, which change much less often.
One idea would be to load the texture buffer object each frame with only the rarer attributes (like scaling, and maybe other esoteric things like tinting by a color), if they are used at all, then set a vertex attribute that indexes into it and have a conditional in the shader check whether this index is >= 0.
For example, if I’m rendering 2000 sprites but only 5 of them are scaled, then the texture would hold those 5 scale values and each of those sprites’ vertex attributes would index into the texture where its scale data is located. The rest of the sprites’ index attribute would be something like -1, indicating that they have no special attributes to look up.
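In shader terms, the lookup I'm imagining would be something like this vertex-shader sketch, where `uExtraData` is a samplerBuffer bound to the TBO and the attribute names are just placeholders:

```glsl
// Vertex shader: common per-sprite attributes plus an index into a
// texture buffer holding the rare parameters. -1 means "no extra data".
#version 150

in vec2 aPosition;
in float aFrame;        // texture array layer for animation
in float aExtraIndex;   // texel index of this sprite's extra data, or -1

uniform samplerBuffer uExtraData;  // e.g. xy = scale

out float vFrame;
out vec2 vScale;

void main() {
    vScale = vec2(1.0);            // default: unscaled
    int idx = int(aExtraIndex);
    if (idx >= 0) {
        vScale = texelFetch(uExtraData, idx).xy;
    }
    vFrame = aFrame;
    gl_Position = vec4(aPosition, 0.0, 1.0);
}
```

One caveat with this pattern: on some hardware the branch buys you little, since both sides may effectively execute anyway, but the texelFetch itself is cheap.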
Of course, if only 5 things out of 2000 have a special parameter, they could just be given separate draw calls with that parameter specified by a uniform.
Basically, I have a bunch of parameters. Some of them will change almost every frame across a lot of sprites; some will change less frequently. Between vertex attributes, uniforms, and texture buffer objects (or normal textures), what’s the best way to get this data onto the GPU and into my shader?