Thread: Looking for advice on a model format


  1. #1
    Intern Contributor
    Join Date
    May 2011
    Posts
    65

    Question Looking for advice on a model format

    I'm trying to write a nice binary mesh/model format, which I'll be able to read from a file quickly and easily. I have Assimp working, so I can import from other formats in order to save in my own, but I need some advice on how to organize everything.

    One thing I want to avoid is a node/tree-based file. I want everything in flat arrays so that file reading is as fast as possible; file I/O will likely end up being the main bottleneck in my game engine. If I'm wrong, and node-based files are somehow faster, please let me know.

    So far I'm storing all of the vertices in a single packed (interleaved) array: no separate arrays for separate components, everything is together. My initial idea was then to have a single index array and call glDrawRangeElements once per sub-mesh (when I need to change materials, for instance), but I read that glDrawRangeElements is quite a lot slower than plain glDrawElements, so I was considering putting each sub-mesh in its own index buffer.

    Question #1) Store sub-meshes together in a single index buffer, calling glDrawRangeElements, or in separate buffers, calling glDrawElements?
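    To make the question concrete, the two options I'm weighing look roughly like this (SubMesh, bindMaterial and indexBufferFor are just my working names, not real code yet):

    #include <vector>
    #include <GL/glew.h>   // or whatever loader exposes GL 1.2+ entry points

    struct SubMesh {
        GLuint firstIndex;    // offset into the shared index array
        GLsizei indexCount;
        GLuint minVertex;     // smallest vertex index the sub-mesh references
        GLuint maxVertex;     // largest vertex index the sub-mesh references
        GLuint materialId;
    };

    void bindMaterial(GLuint materialId);            // placeholder
    GLuint indexBufferFor(const SubMesh &subMesh);   // placeholder

    // Option A: one shared index buffer, one glDrawRangeElements per sub-mesh.
    void drawWithSharedIndexBuffer(const std::vector<SubMesh> &subMeshes)
    {
        for (const SubMesh &sm : subMeshes) {
            bindMaterial(sm.materialId);
            glDrawRangeElements(GL_TRIANGLES, sm.minVertex, sm.maxVertex,
                                sm.indexCount, GL_UNSIGNED_INT,
                                (const void *)(sm.firstIndex * sizeof(GLuint)));
        }
    }

    // Option B: a separate index buffer per sub-mesh, plain glDrawElements.
    void drawWithSeparateIndexBuffers(const std::vector<SubMesh> &subMeshes)
    {
        for (const SubMesh &sm : subMeshes) {
            bindMaterial(sm.materialId);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferFor(sm));
            glDrawElements(GL_TRIANGLES, sm.indexCount, GL_UNSIGNED_INT, 0);
        }
    }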

    So far, for the vertex itself, I have two separate structures, each one a multiple of 32 bytes (I've read that this makes a big difference when transferring buffers to the GPU). The first, a minimalist vertex, contains 3 floats for its position (x, y, z), 2 floats for texture coordinates (u, v), and 3 floats for the normal (x, y, z): a total of exactly 32 bytes. The second, which will probably be used most often for static geometry, contains everything in the first, plus 3 floats for a tangent (x, y, z), 3 floats for a bitangent (x, y, z), and then 8 bytes of padding, for a total of 64 bytes. The only advantage of the second over the first is that it supports tangent-space normal mapping, and frankly, I don't enjoy having 8 bytes of padding. Is there something I could be using those 8 bytes for? Right now it's wasted memory...

    Question #2) Do the first 2 vertex formats look good? What could I use those 8 extra bytes for?
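    For concreteness, here is roughly what those two layouts look like as structs (member names are just illustrative):

    struct VertexBasic {            // 32 bytes
        float position[3];          // x, y, z
        float texCoord[2];          // u, v
        float normal[3];            // x, y, z
    };
    static_assert(sizeof(VertexBasic) == 32, "VertexBasic should pack to 32 bytes");

    struct VertexTangent {          // 64 bytes
        float position[3];
        float texCoord[2];
        float normal[3];
        float tangent[3];
        float bitangent[3];
        float padding[2];           // the 8 wasted bytes in question
    };
    static_assert(sizeof(VertexTangent) == 64, "VertexTangent should pack to 64 bytes");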

    Next, I need to be able to support the following features:

    - Transform animations (rotation, translation, and scale of an entire sub-mesh over a period of time)
    - Skinning (hardware skeletal animation, with animations in separate files)
    - Morph targets (both as an animation on its own, and used alongside skeletal animation)

    And, frankly, I have no idea how to implement any of those features, but I want to prepare my model files so they can support them.

    For transformation animation, the best idea I can think of is to store a "timeline" along with each sub-mesh. The timeline would be an array where each element is a keyframe containing a matrix and a time, and the game would interpolate between one matrix and the next based on the current time.
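    Something like this per sub-mesh is what I'm picturing (names are just placeholders, and I know a straight matrix lerp isn't really correct for rotations):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Keyframe {
        float time;            // seconds from the start of the animation
        float transform[16];   // column-major 4x4 matrix
    };

    // Find the two keyframes bracketing 'now' and blend between them.
    void sampleTimeline(const std::vector<Keyframe> &timeline, float now,
                        float out[16])
    {
        if (timeline.empty())
            return;
        std::size_t i = 0;
        while (i + 1 < timeline.size() && timeline[i + 1].time <= now)
            ++i;
        const Keyframe &a = timeline[i];
        const Keyframe &b = timeline[std::min(i + 1, timeline.size() - 1)];
        float span = b.time - a.time;
        float t = (span > 0.0f) ? (now - a.time) / span : 0.0f;
        t = std::max(0.0f, std::min(1.0f, t));
        for (int k = 0; k < 16; ++k)
            out[k] = a.transform[k] + t * (b.transform[k] - a.transform[k]);
    }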

    As for skinning... Frankly, I'm not sure where to start. I've done some Google searching, but most of the information I've found has been scattered and/or incomplete. Can somebody share a link to a comprehensive skinning tutorial in GLSL? I want to do it in the vertex shader, but I want to maintain compatibility with OpenGL 2.0 and 2.1, so I'd rather not use integer attributes (which are only available in GL 3+).
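    My rough guess at what the vertex shader side would look like, using plain float attributes so it stays GL 2.x friendly (the uniform/attribute names are made up, and the palette size would depend on GL_MAX_VERTEX_UNIFORM_COMPONENTS). Is this on the right track?

    // Bone indices and weights stored as 4 floats each per vertex.
    const char *skinningVS = R"glsl(
        #version 120
        uniform mat4 u_mvp;
        uniform mat4 u_bones[30];      // bone palette for this draw call
        attribute vec3 a_position;
        attribute vec4 a_boneIndices;  // indices stored as floats
        attribute vec4 a_boneWeights;  // should sum to 1.0
        void main()
        {
            vec4 p = vec4(a_position, 1.0);
            vec4 skinned =
                  (u_bones[int(a_boneIndices.x)] * p) * a_boneWeights.x
                + (u_bones[int(a_boneIndices.y)] * p) * a_boneWeights.y
                + (u_bones[int(a_boneIndices.z)] * p) * a_boneWeights.z
                + (u_bones[int(a_boneIndices.w)] * p) * a_boneWeights.w;
            // (normals would need the same treatment, skipped here)
            gl_Position = u_mvp * skinned;
        }
    )glsl";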

    For morphing... My best guess is to store each morph target as its own vertex buffer, and then feed multiple positions per vertex to the vertex shader, which will interpolate between them. How can I do that? I can't bind two vertex buffers at once, can I?
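    Or maybe I can, if each target just feeds a different generic attribute? This is the kind of thing I was picturing (buffer names and attribute locations are made up):

    #include <GL/glew.h>   // or any loader that exposes GL 2.0 entry points

    // Each morph target lives in its own VBO; each one feeds a different
    // generic vertex attribute, and the shader blends them.
    void bindMorphPair(GLuint vboBasePose, GLuint vboMorphTarget)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vboBasePose);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(0);

        glBindBuffer(GL_ARRAY_BUFFER, vboMorphTarget);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(1);
    }

    // Vertex shader side (GLSL 1.20):
    //   attribute vec3 a_positionBase;
    //   attribute vec3 a_positionTarget;
    //   uniform float u_morphWeight;
    //   ...
    //   vec3 p = mix(a_positionBase, a_positionTarget, u_morphWeight);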

    Any advice that you could share would be much appreciated!
    Last edited by WIld Sage; 05-02-2012 at 12:05 AM.

  2. #2
    Senior Member OpenGL Pro BionicBytes's Avatar
    Join Date
    Mar 2009
    Location
    UK, London
    Posts
    1,170
    There is no one-size-fits-all answer, I'm afraid.
    Quote Originally Posted by WIld Sage View Post
    I'm trying to write a nice binary mesh/model format, which I'll be able to read from a file quickly and easily.
    You'll probably end up having to write your own binary file format.
    I wrote a utility to load my assets (.OBJ, .MD5) and convert them to my own format. I found this was the best compromise to ensure that each of my engine's models supported a common set of features, such as the same ability to handle materials (shaders and textures).
    Quote Originally Posted by WIld Sage View Post
    Do the first 2 vertex formats look good?
    In theory, yes. In practice it wholly depends on your source art assets. What happens to your lovely model format when the model you want to load has 3 sets of texture coordinates?
    I ended up writing a flexible vertex stream class which allows me to have any number of vertex attributes, so I support whatever the model provides. I don't pack the vertex arrays (other than 4-byte alignment). Unless your engine is vertex-throughput limited, I don't think this is an area you need to worry about too much these days, as you are more likely to be fill-rate limited instead.
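    Conceptually the stream description boils down to something like this (just an illustration of the idea, not my actual class):

    #include <string>
    #include <vector>
    #include <GL/gl.h>

    struct VertexAttribute {
        std::string name;    // e.g. "position", "texcoord0", "tangent"
        GLint components;    // 2, 3 or 4
        GLenum type;         // GL_FLOAT etc.
        GLsizei offset;      // byte offset within the stream
    };

    struct VertexStream {
        std::vector<VertexAttribute> attributes;  // whatever the asset provides
        GLsizei stride;                           // 4-byte aligned
        GLuint vbo;
    };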
    Quote Originally Posted by WIld Sage View Post
    Transform animations
    I don't even pretend to understand all that's involved here. I ported Doom 3's .MD5 model format and was glad when it was all working. I suggest you start there too, as there are examples of this on the net. To me, MD5 files are complex: multi-mesh, multi-frame, and many animation files to contend with. On top of that, you need good material handling too.

  3. #3
    Intern Contributor
    Join Date
    May 2011
    Posts
    65
    What kind of effects might I need more than one set of texture coordinates for? (Texture coordinates that I need to supply to the shader, rather than computing within the shader)

  4. #4
    Senior Member OpenGL Pro BionicBytes's Avatar
    Join Date
    Mar 2009
    Location
    UK, London
    Posts
    1,170
    I'm not saying that you would need more than one, but some model editors do support it, so there must be a reason.
    All I'm saying is that my vertex stream class considers all vertex attributes to be optional (apart from the vertex position), so any number could be present in the asset.
    My engine's model class is smart enough to detect the presence of tangents, texture coordinates and so on, and do the right thing with respect to shaders, vertex attributes, uniforms and materials.

  5. #5
    Intern Contributor
    Join Date
    May 2011
    Posts
    65
    That's understandable, and I see why that would be advantageous, but my engine doesn't really need that flexibility. I use Assimp to handle imports, so I can import from a pile of different formats, such as .obj, .3ds, and even Collada .dae, and handle the data in a similar manner no matter what format it comes from. I might need to make new tools down the road, but for now, this is enough.

    Might I need it for light/shadow mapping? I'm not sure how that works, but it wouldn't surprise me if it needed special UV coordinates...

    Environment mapping can be calculated in-shader, right?

  6. #6
    Senior Member OpenGL Pro BionicBytes's Avatar
    Join Date
    Mar 2009
    Location
    UK, London
    Posts
    1,170
    Shadowing is most likely to use the shadow mapping algorithm. As this is an image-space technique, it's 100% independent of the geometry, so no model-format considerations affect it.
    Stencil shadow volume extrusion, however, does use an extra vertex component in the calculations. It's alleged that Doom 3 uses this method. It produces a hard edge and requires a change to the model vertex arrays.
    Environment mapping is a texture-space effect and again unrelated to geometry. This can be added as a flag in your model's material properties to indicate that a material has some kind of environmental reflectance. How you implement that is up to you (hint: cube mapping).
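    The shader side of a basic cube-map reflection is only a few lines, something like this (illustrative only, the variable names are mine):

    // Fragment shader for a simple cube-map reflection (GLSL 1.20), used
    // only when the material's environment-reflectance flag is set.
    const char *envReflectFS =
        "#version 120\n"
        "uniform samplerCube u_envMap;\n"
        "varying vec3 v_normal;    // world-space normal\n"
        "varying vec3 v_viewDir;   // surface-to-eye direction, world space\n"
        "void main() {\n"
        "    vec3 r = reflect(-normalize(v_viewDir), normalize(v_normal));\n"
        "    gl_FragColor = textureCube(u_envMap, r);\n"
        "}\n";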

  7. #7
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264
    Quote Originally Posted by WIld Sage View Post
    but I read that using glDrawRangeElements is quite a lot slower than simply glDrawElements
    Huh? Where did you read that?
    glDrawRangeElements was created as an optimization over glDrawElements.
    Either you get better performance or you get the same performance (depending on drivers and GPU).
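    The only difference is the extra start/end pair, which tells the driver the range of vertex indices the draw will touch (the variable names here are just for illustration):

    // Same draw both ways; start/end bound the vertex indices actually used.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indexOffset);
    glDrawRangeElements(GL_TRIANGLES, minVertexIndex, maxVertexIndex,
                        indexCount, GL_UNSIGNED_INT, indexOffset);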
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

  8. #8
    Intern Contributor
    Join Date
    May 2011
    Posts
    65
    Quote Originally Posted by V-man View Post
    Huh? Where did you read that?
    glDrawRangeElements has been created as an optimization over glDrawElements.
    Either you get better performance or you get the same performance (depending on drivers and GPU)
    It was a slideshow from GDC by ATI, about performance considerations with OpenGL. Can't remember the year or the link, unfortunately...

    That same slideshow is where I learned to pack vertices in multiples of 32 bytes, btw.
    Last edited by WIld Sage; 05-02-2012 at 04:27 AM.

  9. #9
    Member Regular Contributor
    Join Date
    Aug 2008
    Posts
    456
    I think you've got that the wrong way round. From http://developer.amd.com/media/gpu_a...anceTuning.pdf:
    Prefer glDrawRangeElements over glDrawElements

  10. #10
    Junior Member Newbie
    Join Date
    Sep 2011
    Location
    Florence, Italy
    Posts
    14
    Has anyone seen a performance benefit from using a separate VBO for positions only (e.g. for a depth pass or shadow-mapping pass)?
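    I.e. something like this for the depth-only pass, instead of sourcing positions from the full interleaved buffer (positionOnlyVbo is just for illustration):

    // Depth / shadow-map pass: bind only a tightly packed position VBO.
    glBindBuffer(GL_ARRAY_BUFFER, positionOnlyVbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);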
