Organizing data and technologies

Hello,
since my application grew to some 300 KB of source, I decided it's time to revise everything and use the 'proper approach' rather than 'whatever happens to work'. After doing some digging I found out two things:
-Only VBOs should be used for rendering
-Fixed pipeline shouldn’t be used anymore
Those two little things generally invalidate most of my application, so to avoid something similar in the near future, I decided to consult you before making any major changes. How should the data be organized, and how should the rendering happen?

I figure I should have something like this:

-World class with a list of opaque objects, semi-transparent object, etc.
-Object class containing a list of triangles. It should have a method to create a VBO, an attribute defining whether the buffer is Static, Stream or Dynamic, a method to modify parts of the VBO, and rotation, translation and scaling attributes.
-Triangle class containing an array of 3 vertices
-Vertex class containing the usual vertex data (coordinates, color, texture coordinates)

Should Triangles' coordinates be relative to the Object's (0,0,0), or should they be unit-sized, with each Triangle also having rotation, translation and scaling attributes? Same question for Vertices.

Since fixed-function vertex processing shouldn't be used, that means I have to write vertex shaders to translate, rotate and scale my VBOs, correct?

I think I’ll leave texturing for after I get geometry going, but I’ll ask anyway - fragment shaders along with FBOs are the way to go, right?

And last, but not least, Double Buffering is done by rendering frames to one of two FBOs while the other is displayed on screen, right?

Thanks in advance.

“-World class with a list of opaque objects, semi-transparent object, etc.”
sounds ok. you want to separate those, and additionally sort the alpha objects by depth (back to front) so blending comes out correct.

“-Object class containing a list of triangles. Should contain a method to create a VBO, and an attribute defining whether it should be Static, Stream or Dynamic,a method to modify parts of a VBO, rotation, translation vector and scaling attributes.”
ok, but you should also look into some simple scene-graph structure to model the relationships between objects (parent/child).

“-Triangle class containing an array of 3 vertices”
why not, but it should contain indices INTO the vertex buffer rather than the vertices themselves. you can look up any vertex with an integer index (a, b, c).

“-Vertex class containing usual vertex data(coordinates, color, texture coordinates)”
not really a vertex class — rather an interleaved buffer which holds all vertex data together (packed).

“Should Triangles coordinates be relative to Objects (0,0,0), or should they be unit sized, and triangles should also have rotation, translation and scaling attributes? Same question for Vertices.”
triangle coordinates should be whatever the artist intended — triangles are part of the whole object. they are always defined in the vertex/index buffers, and when the buffer is drawn the GPU rotates/scales/translates the vertices by the modelview matrix.

“Since fixed-function vertex processing shouldn’t be used, that means I have to write vertex shaders to translate,rotate and scale my VBOs, correct?”
Yes.
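A minimal GLSL vertex shader for that could look like the following sketch (the uniform and attribute names are made up here; it assumes the application uploads one combined modelview-projection matrix per object):

```glsl
// Set from the application each frame, e.g. projection * view * model.
uniform mat4 u_modelViewProjection;

attribute vec3 a_position;  // from the interleaved VBO
attribute vec4 a_color;

varying vec4 v_color;

void main()
{
    // Replaces fixed-function ftransform(): one mat4 * vec4 multiply.
    gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    v_color = a_color;
}
```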

“fragment shaders along with FBOs are the way to go, right?”
FBOs are not essential. Maybe if you plan on doing post-processing — but even then you can copy the backbuffer to a texture and do the post-processing on that.

“And last, but not least, Double Buffering is done rendering frames to one of two FBOs, and rendering the other one on screen, right?”
not an FBO but the backbuffer: you render into the back buffer while the front buffer is displayed, then swap them.

Thanks for your reply;)

So I could do something like making the Triangle list an Object list instead, and have Triangle derive from Object. Is that what you meant?

You mean I shouldn’t keep my vertices in RAM at all, and instead, put them directly in VRAM using VBOs, and only keep their indices?

Probably a silly question, but how am I supposed to do it without the MatrixMode, Translate, Rotate and Scale functions?

“Probably a silly question, but how am I supposed to do it without the MatrixMode, Translate, Rotate and Scale functions?”
Get nvMath from the nVidia OpenGL SDK9. You simply add it to your project, and you’re ready to start doing the transformations yourself. The transformation in the vertex shader is just a multiplication of a mat4 and a vec4.

A model in a game consists of one or several meshes. Each mesh has its material, texture, VBO, and shader uniforms (material properties, defined by the artist). A material consists of vtx/frag shaders, passes (optional), predefined (global) uniform values (i.e. light positions), and predefined textures (i.e. an env-cubemap).

  1. Opaque: sort by material, then by texture.
  2. Alpha-test transparency: again, like opaque.
  3. Translucency: sort by mesh, or bucket-sort triangles (there’s no perfect solution anyway).