Best way to load vertex data from different arrays into vertex buffer

Hi

I'm pretty new to OpenGL and have just started trying to understand the new pipeline in the past week or so. I had written a program to load models and texture them and whatnot using the fixed pipeline, and I'm trying to adapt my model loading code to best suit the new pipeline.
I've watched all of the SIGGRAPH 2013 intro video
/watch?v=T8gjVbn8VBk

and I've also been reading and taking pages of notes on several tutorials. However, there are a couple of things I could really use some help with.

I'm trying to figure out how best to load data into a vertex buffer given the model loading functions I wrote when I was using the fixed pipeline. I load my data from OBJ files and store it in a class like this:

	
    GLfloat **normals;        // one xyz triple per "vn" line in the file
    GLfloat **verts;          // one xyz triple per "v" line
    GLfloat **UVcoordinates;  // one uv pair per "vt" line
    GLfloat ***UVs;           // per-face pointers into UVcoordinates
    GLfloat ***faceNormals;   // per-face pointers into normals
    GLfloat ***faces;         // per-face pointers into verts

The normals, verts, and UVcoordinates arrays are straight-up sets of coordinates, and the faces, faceNormals, and UVs arrays are pointers to the elements in the other arrays, but ordered in terms of faces rather than whatever order they happened to be loaded from the OBJ file. Originally I wrote it this way so I wouldn't have an extra int index for every set of coordinates.

So I have two main questions.

1. With the use of index buffers in mind, how should I format the UV, normal, and vertex data in a VBO? There aren't going to be the same number of UV coordinates or normal vectors as there are vertices, so how am I supposed to associate all that information?

2. When loading data into a vertex buffer, what would be the best way to attach all this data together? Do I have to have all my vertex, normal, and UV coordinate data in one big array when I call glBufferData(), or could I use glBufferSubData() to load in the data from the separate arrays as I have them now?

The best way to handle this is to create a data structure that defines a vertex with all its components (position, normal, UV), then load it with the data.

It looks like you are trying to load from an OBJ file, so a unique vertex is defined by a set of pos/normal/uv. That means a single position can map to several vertices.

Now map the faces' vertices to these generated vertices. This gives you the index buffer.
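A rough sketch of what that could look like, assuming you have already parsed the file into flat coordinate arrays plus one pos/uv/normal index triple per face corner (all the names here are mine, and OBJ's 1-based indices are assumed to have been converted to 0-based already):

    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    // One combined vertex: each unique pos/normal/uv combination gets one entry.
    struct Vertex {
        float pos[3];
        float normal[3];
        float uv[2];
    };

    // One parsed face corner from an OBJ "f v/vt/vn" entry (0-based indices).
    struct FaceCorner { int posIdx, uvIdx, normIdx; };

    void buildBuffers(const std::vector<float>& positions,     // 3 floats per position
                      const std::vector<float>& normals,       // 3 floats per normal
                      const std::vector<float>& uvs,           // 2 floats per UV pair
                      const std::vector<FaceCorner>& corners,  // 3 per triangle
                      std::vector<Vertex>& outVertices,
                      std::vector<std::uint32_t>& outIndices)
    {
        // Remember which index triples we've seen, so corners that share the
        // same pos/normal/uv also share the same generated vertex.
        std::map<std::tuple<int, int, int>, std::uint32_t> cache;

        for (const FaceCorner& c : corners) {
            auto key = std::make_tuple(c.posIdx, c.uvIdx, c.normIdx);
            auto it  = cache.find(key);
            if (it == cache.end()) {
                Vertex v;
                for (int i = 0; i < 3; ++i) v.pos[i]    = positions[3 * c.posIdx  + i];
                for (int i = 0; i < 3; ++i) v.normal[i] = normals[3 * c.normIdx + i];
                for (int i = 0; i < 2; ++i) v.uv[i]     = uvs[2 * c.uvIdx + i];
                it = cache.emplace(key, (std::uint32_t)outVertices.size()).first;
                outVertices.push_back(v);
            }
            outIndices.push_back(it->second); // this corner's slot in the index buffer
        }
    }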

A few things to note.

The index buffer is a 1-to-1 mapping to the vertex buffer, so if you keep pos/normal/uv in separate buffers you must have the same number of entries in each buffer.
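On your second question: you don't have to build one big interleaved array, but a struct-per-vertex buffer is the simplest to manage. A sketch of the upload, reusing the Vertex struct from the snippet above (the function name, buffer handles, and attribute locations 0/1/2 are assumptions about your setup):

    #include <GL/glew.h>   // or whichever GL loader you already use
    #include <cstddef>     // offsetof
    #include <cstdint>
    #include <vector>

    void uploadMesh(GLuint vbo, GLuint ibo,
                    const std::vector<Vertex>& vertices,
                    const std::vector<std::uint32_t>& indices)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(Vertex),
                     vertices.data(), GL_STATIC_DRAW);

        // One interleaved buffer: stride is the struct size,
        // offsets come from the member layout.
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (const void*)offsetof(Vertex, pos));
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (const void*)offsetof(Vertex, normal));
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (const void*)offsetof(Vertex, uv));

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                     indices.size() * sizeof(std::uint32_t),
                     indices.data(), GL_STATIC_DRAW);
    }

If you'd rather keep the three separate arrays, you can instead allocate the buffer once with glBufferData(target, totalSize, nullptr, usage) and copy each array in with glBufferSubData at increasing offsets; the glVertexAttribPointer offsets then point at the start of each block rather than into a struct.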

It is not normal to keep both a face normal and a vertex (averaged) normal, but you can if you want. If you want to display using face normals, you will need a unique vertex at each corner of the triangle, or use a geometry shader to calculate the normal.

There is a lot of sample code for loading OBJs on the net.

I've already written the code to load the OBJ, and I have a program that successfully loads all that info, displays and textures the model, and adds some basic lighting. It uses the old fixed pipeline, though, and I want to use shaders.
My program is set up to look at a text file, load whatever models are named in that file, and then load a corresponding texture.

But wow… I'm surprised that there would be that kind of data redundancy. Oh well. At some point I want to learn some OpenCL. Maybe then I'll figure out how to give data to the GPU the way I want to.

So I've been thinking and researching about this since I read your post earlier. And I was thinking: what if I had all my models formatted in a way that they could be expressed as one continuous triangle strip? Then I could put one normal and three UV coordinates in every vertex, and I would only need one normal stored for every face, as well as only one set of UV coordinates.

Is there any algorithm for expressing models as one triangle strip?

There is no general algorithm for this. Some models can be represented this way, usually by having some zero-area triangles to provide links.

Would you mind elaborating on what you mean by links? Links between what, exactly?
Is this approach an advantageous one?

Thanks for your replies, by the way; they've really helped point me in a good direction on figuring this stuff out.

Is this approach an advantageous one?
No - you can generally get much more out of using a vertex buffer and an index buffer.

Would you mind elaborating on what you mean by links?
If you could make your mesh from 2 strips, you can convert it to 1 strip by adding a triangle to connect the 2, but it has to be done in such a way that it has zero area, i.e. 2 of its vertices have the same coordinates.
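For example, with an index buffer the link looks like this (the vertex numbers are made up; without indices you duplicate the vertices themselves in the same pattern):

    // Two 4-vertex strips, {0,1,2,3} and {4,5,6,7}, drawn as one strip.
    // Repeating 3 and 4 yields triangles with two identical corners,
    // i.e. zero area, so nothing visible is rasterised.
    GLuint linked[] = { 0, 1, 2, 3,   // first strip
                        3, 4,         // degenerate "link" triangles
                        4, 5, 6, 7 }; // second strip
    // drawn with: glDrawElements(GL_TRIANGLE_STRIP, 10, GL_UNSIGNED_INT, 0);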

[QUOTE=tonyo_au;1257857]No - you can generally get much more out of using a vertex buffer and an index buffer.
If you could make your mesh from 2 strips, you can convert it to 1 strip by adding a triangle to connect the 2, but it has to be done in such a way that it has zero area, i.e. 2 of its vertices have the same coordinates.[/QUOTE]

Indexing should always be preferable to adding zero-area tris.

Why is this, though? As I currently understand it, with indexing I'll have to create extra instances of a vertex so that I can attach some UV info and/or face normal data for each face. So this means to fully texture a cube I will need 24 vertices in memory vs. just 12 (with 4 verts for zero-area tris).

So I think there must be something I'm missing here, because if the above statement is correct, then it implies that texturing that uses UV coordinates (or any data that's in terms of faces, for that matter) defeats the whole purpose of indexing.

There are a couple of things to consider here. If you are looking at a cube with face normals, your best option is triangles without an index buffer. It will render with one call, whereas strips will need several. But that is a very specific case.

If you are looking at meshes in general, the index cost is saved when a vertex is shared and you can still render in one call, whereas if you use strips you are guaranteed to need extra render calls, slowing down the render time and probably costing you as much buffer space.

[QUOTE=shalnon;1257876]Why is this, though? As I currently understand it, with indexing I'll have to create extra instances of a vertex so that I can attach some UV info and/or face normal data for each face. So this means to fully texture a cube I will need 24 vertices in memory vs. just 12 (with 4 verts for zero-area tris).

So I think there must be something I'm missing here, because if the above statement is correct, then it implies that texturing that uses UV coordinates (or any data that's in terms of faces, for that matter) defeats the whole purpose of indexing.[/QUOTE]

If you're adding a zero-area tri to join two strips, and you're not using indices, you need to add either 4 or 5 extra vertices. A vertex is much bigger than an index: if your vertex size is 32 bytes and your index size is 4 bytes, you can do the calculations from there.
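To make that concrete with those (assumed) sizes: joining two strips without indices costs 4 or 5 extra 32-byte vertices, i.e. 128 to 160 bytes, whereas with an index buffer the same join is only a couple of extra 4-byte indices, i.e. about 8 bytes.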

You’re also fixated on saving memory as a primary goal. This is actually quite often not the most important thing - in fact memory savings can sometimes cause lower performance. If you’ve ever had to deal with GL_RGB versus GL_RGBA you’ll be aware of this already.

Reducing the number of vertices in memory is not the “whole purpose of indexing”, as you put it. Indexing has two other purposes: (1) reducing vertex shader overhead by caching recently transformed vertices, and (2) reducing the number of draw calls by allowing you to concatenate multiple primitives (including different primitive types). Both of these will actually get you far more performance than the memory savings you seem more concerned about.
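For an idea of what (2) looks like in practice: if you convert everything to indexed GL_TRIANGLES, strips, fans, and so on can all be appended into one index buffer and drawn with a single glDrawElements call. A rough sketch of the conversion (the helper names are mine):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Append a triangle strip to a GL_TRIANGLES index list, flipping the
    // winding of every second triangle so all faces keep the same orientation.
    void appendStrip(const std::vector<std::uint32_t>& strip,
                     std::vector<std::uint32_t>& out)
    {
        for (std::size_t i = 0; i + 2 < strip.size(); ++i) {
            if (i % 2 == 0) { out.push_back(strip[i]);     out.push_back(strip[i + 1]); }
            else            { out.push_back(strip[i + 1]); out.push_back(strip[i]);     }
            out.push_back(strip[i + 2]);
        }
    }

    // Append a triangle fan the same way: every triangle shares the first vertex.
    void appendFan(const std::vector<std::uint32_t>& fan,
                   std::vector<std::uint32_t>& out)
    {
        for (std::size_t i = 1; i + 1 < fan.size(); ++i) {
            out.push_back(fan[0]);
            out.push_back(fan[i]);
            out.push_back(fan[i + 1]);
        }
    }

Everything appended this way renders in one glDrawElements(GL_TRIANGLES, ...) call. (GL 3.1's primitive restart index is another way to chain strips through a single indexed call.)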

[QUOTE=tonyo_au;1257879]There are a couple of things to consider here. If you are looking at a cube with face normals, your best option is triangles without an index buffer. It will render with one call, whereas strips will need several. But that is a very specific case.

If you are looking at meshes in general, the index cost is saved when a vertex is shared and you can still render in one call, whereas if you use strips you are guaranteed to need extra render calls, slowing down the render time and probably costing you as much buffer space.[/QUOTE]

I think there must be something kind of fundamental I'm getting confused about. So let's say, hypothetically, I do want to use face normals. As I understand what you said, to be able to store a normal for a face I would need to copy every vertex of that face so that each vertex normal could be set to the same value as the face normal. Wouldn't this mean that OpenGL is processing a larger quantity of vertices? Memory savings aside, these would be different vertices in different parts of memory with different indexes (if I was using indexes). Wouldn't this create more overhead? (Not sure if my understanding of the word overhead is right, so forgive me if I'm wrong.) If strips were used, then wouldn't OpenGL only have to pull one new vertex out of memory each time it wanted to render a new face, and have to process less data overall while still rendering pretty much the same image?

This is the thing I'm not getting yet. And it also extends to texture coordinates, since texture coordinates are not in terms of vertices.

I would think that adding 2 zero-area tris would be more efficient than creating an additional 16 vertices when copying vertices so that each face has its own unique 3 vertices.
Wouldn't all those vertices have unique indices and create more overhead?

If you are drawing with face normals, each side of your cube will require a minimum of 4 vertices no matter what layout you use - 6 if you use triangles, giving 36 in all; 4 with indices and 4 with strips, giving 24 in all for these options.

If you wish to draw with a single call (highly desirable for performance), indices will cost the additional space of 36 indices - 36 × 4 = 144 bytes - and this space does not increase if you add colour, normals, or texture coordinates to the vertices.

If you try to render with a single call using strips, you will need an additional vertex per face (and maybe more to link in the top and bottom, since you cannot use the standard method of rendering the cube as a strip).
This is a cost of 6 × 3 × 4 = 72 bytes for the coordinates, plus the cost of 6 × 2 × 4 = 48 bytes for the UVs and 6 × 3 × 4 = 72 bytes for the normals - a total of 192 bytes.
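Putting the two totals side by side as a quick sanity check (my restatement of the arithmetic above, assuming 4-byte floats and indices):

    constexpr int indexedExtra = 36 * 4;              // 36 indices           = 144 bytes
    constexpr int stripExtra   = 6 * (3 + 2 + 3) * 4; // 6 verts of 8 floats  = 192 bytes
    static_assert(indexedExtra < stripExtra, "indexing is cheaper for this cube");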