Looking for advice on a model format

I’m trying to write a nice binary mesh/model format, which I’ll be able to read from a file quickly and easily. I have Assimp working, so I can import from other formats in order to save in my own, but I need some advice on how to organize everything.

One thing I want to avoid is a node/tree based file. I want everything in arrays, to make file reading as fast as possible. File I/O will likely end up being the main bottleneck in my game engine, so I want to make sure it’s as fast as possible. If I’m wrong, and somehow node-based files are faster, please let me know.

So far I’m storing all of the vertices in a single packed array - no separate arrays for separate components; everything is interleaved together. My initial idea was to have a single index array and call glDrawRangeElements once per sub-mesh range (switching materials in between, for instance), but I read that glDrawRangeElements is quite a lot slower than plain glDrawElements, so I was considering putting each sub-mesh in its own index buffer.

Question #1) Store sub-meshes together in a single index buffer, calling glDrawRangeElements, or in separate buffers, calling glDrawElements?
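For reference, here’s a sketch of the two options as I understand them, assuming one shared vertex buffer (the SubMesh fields are placeholder names, and I’m using GLEW for the GL headers):

[code]
#include <GL/glew.h>

struct SubMesh {
    GLuint  indexBuffer;  // only used by approach B
    GLsizei indexCount;
    GLsizei firstIndex;   // offset into the shared index buffer (approach A)
    GLuint  minVertex;    // lowest vertex index this sub-mesh references
    GLuint  maxVertex;    // highest vertex index this sub-mesh references
};

// Approach A: one shared index buffer, one draw per sub-mesh range.
void drawShared(const SubMesh& sm)
{
    glDrawRangeElements(GL_TRIANGLES, sm.minVertex, sm.maxVertex,
                        sm.indexCount, GL_UNSIGNED_SHORT,
                        (const void*)(sm.firstIndex * sizeof(GLushort)));
}

// Approach B: each sub-mesh owns its index buffer.
void drawSeparate(const SubMesh& sm)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, sm.indexBuffer);
    glDrawElements(GL_TRIANGLES, sm.indexCount, GL_UNSIGNED_SHORT, 0);
}
[/code]

Either way the vertex buffer stays shared; only the index storage differs.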

So far, for the vertex itself, I have 2 separate structures, each a multiple of 32 bytes (I’ve read that it makes a big difference when transferring buffers to the GPU). The first, a minimalist vertex, contains 3 floats for its position (x, y, z), 2 floats for texture coordinates (u, v), and 3 floats for the normal (x, y, z), a total of exactly 32 bytes. The second, which will probably be used most often for static geometry, contains everything in the first, plus 3 floats for a tangent (x, y, z), 3 floats for a bitangent (x, y, z), and then 8 bytes of padding, for a total of 64 bytes. The only advantage of the second over the first is that it supports tangent-space normal mapping, and frankly, I don’t enjoy having 8 bytes of padding. Is there something I could be using those 8 bytes for? Right now, it’s wasted memory…

Question #2) Do the first 2 vertex formats look good? What could I use those 8 extra bytes for?
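In code, the two layouts look like this (a sketch; field names are just for illustration):

[code]
#include <cstdint>

struct VertexBasic {              // 32 bytes
    float px, py, pz;             // position
    float u, v;                   // texture coordinates
    float nx, ny, nz;             // normal
};
static_assert(sizeof(VertexBasic) == 32, "expected 32 bytes");

struct VertexTangent {            // 64 bytes
    float px, py, pz;             // position
    float u, v;                   // texture coordinates
    float nx, ny, nz;             // normal
    float tx, ty, tz;             // tangent
    float bx, by, bz;             // bitangent
    uint8_t pad[8];               // the 8 spare bytes in question
};
static_assert(sizeof(VertexTangent) == 64, "expected 64 bytes");
[/code]

I suppose a second UV set (2 floats), or 4 byte-sized bone indices plus 4 byte-sized weights, would fit those 8 bytes exactly, but I’m not sure I’d ever use them in this particular format.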

Next, I need to be able to support the following features:

  • Transform animations (rotation, translation, and scale of an entire sub-mesh over a period of time)
  • Skinning (hardware skeletal animation, with animations in separate files)
  • Morph targets (both as an animation on its own, and to be used alongside skeletal animation)

And, frankly, I have no idea how to implement any of those features, but I want to prepare my model files to be able to support them.

For transformation animation, the best idea I can think of is to store a “timeline” along with each sub-mesh. The timeline would be an array where each element is a keyframe containing a matrix and a time, and the game would interpolate between one matrix and the next based on the current time.
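In rough code, something like this (names are placeholders; apparently interpolating raw matrices component-wise distorts rotations, so separate translation/rotation/scale keys with a slerp’d rotation might be safer than a matrix per key):

[code]
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

struct Keyframe {
    float time;
    Vec3  translation;
    Quat  rotation;   // interpolate with slerp/nlerp, not per component
    Vec3  scale;
};

// Given a time-sorted, non-empty key array, find the two keys bracketing
// t and the blend factor between them. Building the final transform
// (lerp for T/S, slerp for R) is left to the math library.
void sample(const std::vector<Keyframe>& keys, float t,
            std::size_t& i0, std::size_t& i1, float& blend)
{
    auto it = std::upper_bound(keys.begin(), keys.end(), t,
        [](float v, const Keyframe& k) { return v < k.time; });
    if (it == keys.begin()) { i0 = i1 = 0; blend = 0.0f; return; }
    if (it == keys.end())   { i0 = i1 = keys.size() - 1; blend = 0.0f; return; }
    i1 = std::size_t(it - keys.begin());
    i0 = i1 - 1;
    blend = (t - keys[i0].time) / (keys[i1].time - keys[i0].time);
}
[/code]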

As for skinning… Frankly, I’m not sure where to start. I’ve done some Google searching, but most of the information I’ve found has been scattered and/or incomplete. Can somebody share a link to a comprehensive skinning tutorial in GLSL? I want to do it in the vertex shader, but I want to maintain compatibility with OpenGL 2.0 and 2.1, so I’d rather not use integer attributes (which are only available in GL 3+).
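To show where I’m at, this is the rough shape I’m imagining, pieced together from scattered reading (completely unverified, and all the attribute/uniform names are mine): bone indices travel as ordinary float attributes, so no GL 3+ integer attributes are needed, and the bone matrices go in a uniform array.

[code]
// GLSL 1.10 vertex shader, stored as a C++ raw string.
static const char* kSkinVS = R"(
#version 110
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec4 aBoneIndices;  // floats, cast to int per bone
attribute vec4 aBoneWeights;  // should sum to 1.0

uniform mat4 uBones[30];      // keep within GL_MAX_VERTEX_UNIFORM_COMPONENTS
uniform mat4 uModelViewProj;

varying vec3 vNormal;

void main()
{
    mat4 skin =
        uBones[int(aBoneIndices.x)] * aBoneWeights.x +
        uBones[int(aBoneIndices.y)] * aBoneWeights.y +
        uBones[int(aBoneIndices.z)] * aBoneWeights.z +
        uBones[int(aBoneIndices.w)] * aBoneWeights.w;
    gl_Position = uModelViewProj * (skin * vec4(aPosition, 1.0));
    vNormal = (skin * vec4(aNormal, 0.0)).xyz; // ok if bones don't scale
}
)";
[/code]

On the C side I assume the indices/weights would go up with glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, stride, offset) and the palette with glUniformMatrix4fv. If that’s roughly right, the file format just needs to store 4 indices and 4 weights per vertex.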

For morphing… My best guess is to store each morph target as its own vertex buffer, and then send both copies of each vertex to the vertex shader, which will interpolate between them. How can I do that? I can’t bind two vertex buffers at once, can I?
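Actually, writing it out, maybe I’ve half answered my own question: you can’t bind two buffers to GL_ARRAY_BUFFER simultaneously, but if I understand glVertexAttribPointer correctly, each attribute captures whichever buffer is bound at the moment the pointer is specified, so two attributes can source from two VBOs. Something like this (locations and names made up):

[code]
#include <GL/glew.h>

// weight = 0 shows the base mesh, weight = 1 the morph target.
void bindMorphPair(GLuint baseVBO, GLuint targetVBO,
                   GLuint locBasePos, GLuint locMorphPos,
                   GLint locMorphWeight, float weight)
{
    // Each glVertexAttribPointer call records the buffer currently bound
    // to GL_ARRAY_BUFFER, so the two attributes read from different VBOs.
    glBindBuffer(GL_ARRAY_BUFFER, baseVBO);
    glEnableVertexAttribArray(locBasePos);
    glVertexAttribPointer(locBasePos, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glBindBuffer(GL_ARRAY_BUFFER, targetVBO);
    glEnableVertexAttribArray(locMorphPos);
    glVertexAttribPointer(locMorphPos, 3, GL_FLOAT, GL_FALSE, 0, 0);

    // In the vertex shader: vec3 p = mix(aBasePos, aMorphPos, uMorphWeight);
    glUniform1f(locMorphWeight, weight);
}
[/code]

Is that the right approach, or is there a better one?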

Any advice that you could share would be much appreciated!

There is no one-size-fits-all answer, I’m afraid.

I’m trying to write a nice binary mesh/model format, which I’ll be able to read from a file quickly and easily.

You’ll probably end up having to write your own binary file format.
I wrote a utility to load my assets (.OBJ, .MD5) and convert them to my own format. I found this was the best compromise to ensure that each of my engine’s models supported a common set of features - such as the same ability to handle materials (shaders and textures).

Do the first 2 vertex formats look good?
In theory, yes. In practice it wholly depends upon your source art assets. What happens to your lovely model format when the model you want to load has 3 sets of texture coordinates?
I ended up writing a flexible vertex stream class which allows me to have any number of vertex attributes - thus I support whatever the model wants. I don’t pack the vertex arrays (other than 4-byte alignment). Unless your engine is vertex-throughput limited, I don’t think this is an area you need to worry about too much these days, as you are more likely to be fill-rate limited instead.
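Roughly, the container looks like this (a from-memory sketch, not my actual code; the names are invented for this post):

[code]
#include <string>
#include <vector>

struct VertexStream {
    std::string name;        // e.g. "position", "normal", "texcoord0"
    int components;          // floats per vertex: 2, 3 or 4
    std::vector<float> data; // tightly packed, naturally 4-byte aligned
};

struct MeshData {
    std::vector<VertexStream> streams;   // any subset may be present

    const VertexStream* find(const std::string& n) const {
        for (const VertexStream& s : streams)
            if (s.name == n) return &s;
        return nullptr;      // attribute absent; the renderer adapts
    }
};
[/code]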

Transform animations

I don’t even pretend to understand all that’s involved here. I ported Doom 3’s .MD5 model format and was glad when it was all working. I suggest you start there too, as there are examples of this on the net. To me, MD5 files are complex: multi-mesh, multi-frame, and many animation files to contend with. On top of that, you need good material handling too.

What kind of effects might I need more than one set of texture coordinates for? (Texture coordinates that I need to supply to the shader, rather than computing within the shader)

I’m not saying that you would need more than one - just that some model editors do support it, so there must be a reason.
All I’m saying is that my vertex stream class considers all vertex attributes to be optional (apart from vertex position), so any number could be present in the asset.
My engine’s model class is smart enough to detect the presence of tangents, texture coordinates and so on and do the right thing w.r.t. shaders, vertex attributes, uniforms and materials.

That’s understandable, and I see why that would be advantageous, but my engine doesn’t really need that flexibility. I use Assimp to handle imports, so I can import from a pile of different formats, such as .obj, .3ds, and even Collada .dae, and handle the data in the same manner no matter what format it comes from. I might need to make new tools down the road, but for now, this is enough.

Might I need it for light/shadow mapping? I’m not sure how that works, but it wouldn’t surprise me if it needed to use special uv coordinates…

Environment mapping can be calculated in-shader, right?

Shadowing is most likely to use the shadow mapping algorithm. As this is an image-space technique, it’s 100% independent of the geometry, so no model-format considerations affect it.
Stencil volume extrusion shadowing, however, does use an extra vertex component in the calculations. It’s alleged that Doom 3 uses this method. It produces a hard edge, and requires a change to the model vertex arrays.
Environment mapping is a texture-space effect and again unrelated to geometry. This can be added as a flag in your model’s material properties to indicate that a material has some kind of environmental reflectance. How you implement that is up to you (hint: cube mapping)
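The core of the cube-mapping approach is only a few lines of fragment shader, something like this (a sketch; the names are made up, and it assumes the vertex shader passes a world-space normal and view direction):

[code]
// GLSL 1.10 fragment shader, stored as a C++ raw string.
static const char* kEnvFS = R"(
#version 110
uniform samplerCube uEnvMap;
varying vec3 vNormal;    // world-space normal
varying vec3 vViewDir;   // surface-to-eye direction, world space

void main()
{
    vec3 r = reflect(-normalize(vViewDir), normalize(vNormal));
    gl_FragColor = textureCube(uEnvMap, r);
}
)";
[/code]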

Hmm, well, I was considering using shadow volumes, but I haven’t made a final decision. Performance is the biggest concern for me, and with shadow maps I can just use a lower-resolution map and blur it, and it’ll still look decent (mostly).

I am a wee bit concerned about light maps, however. I want to be able to handle some kind of radiosity/global illumination down the road, whether that’s with light maps or a crazy amount of point lights in a deferred renderer. Would I need to supply UV coordinates for light mapping?

Huh? Where did you read that?
glDrawRangeElements has been created as an optimization over glDrawElements.
Either you get better performance or you get the same performance (depending on drivers and GPU)

[QUOTE=V-man;1237074]Huh? Where did you read that?
glDrawRangeElements has been created as an optimization over glDrawElements.
Either you get better performance or you get the same performance (depending on drivers and GPU)[/QUOTE]
It was a slideshow from GDC by ATI, about performance considerations with OpenGL. Can’t remember the year or the link, unfortunately…

That same slideshow is where I learned to pack vertices in multiples of 32 bytes, btw.

I think you’ve got that the wrong way round - from http://developer.amd.com/media/gpu_assets/PerformanceTuning.pdf :

Prefer glDrawRangeElements over glDrawElements

Has anyone encountered a performance benefit from using a separate VBO for positions only (e.g. for a depth pass or shadow mapping pass)?

[QUOTE=Dan Bartlett;1237084]I think you’ve got that the wrong way round - from http://developer.amd.com/media/gpu_assets/PerformanceTuning.pdf :
Prefer glDrawRangeElements over glDrawElements[/QUOTE]

Well darn it, now I’m confused. In that context it’s giving examples of what NOT to do, so I don’t know which one they’re recommending ._.

I had a similar question at http://www.gamedev.net/topic/623448-blender-assimp-opengl-and-animations/, maybe it can help you.

Please report any interesting conclusions you come to.

I think I’ll go through that Collada tutorial that was linked; it seems like a great way to make myself more familiar with how animations are represented. I do like the Collada format a lot, and it does support some things that Assimp won’t import, such as physics data. I haven’t quite decided between Bullet and PhysX yet, but they both support Collada physics, so that’s a huge plus right there. XML reading is kinda scary, though… Oh well, let’s give it a try.