Vertex array trouble

I’ve decided to move my drawing routines from standard immediate-mode calls to vertex arrays (once I get this working properly I’ll move on to vertex buffer objects). Basically, I have my structure for vertices, normals, and texture coordinates, and I have the model loaded and stored in the various arrays for each type.

When I enable GL_VERTEX_ARRAY and declare the vertex pointer for the vertex data, then attempt to draw the array with glDrawElements, nothing is rendered. My question is: based on the structures I have, what is the proper way to specify the data for glVertexPointer and glDrawElements?

// defines a single vertex for the model
struct sVertex3d 
{
	sVertex3d();
	float vertex[3];
};

struct sFace
{
	sFace();
	vector<int> vertlist;	// the vertices that make up this face
	vector<int> normlist;	// the list of normals that make up this face
	vector<int> texcoordlist;	// the list of texture coordinates that make up this face
	string material;	// which material does this face use
};

sVertex3d *pvertices;
sFace *pfaces;  
glColor3f(1.0, 1.0, 1.0);
if(!pfaces[n].vertlist.empty())
{
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_FLOAT, sizeof(sVertex3d), pvertices[0].vertex);
			
	for(i = 1; i <= pfaces[n].vertlist.size(); i++)
	{	
		if(pfaces[n].vertlist.size() == 3)
			glDrawElements(GL_TRIANGLES, pfaces[n].vertlist.size(), GL_INT, (void*)pfaces[n].vertlist[0]);
		else if(pfaces[n].vertlist.size() > 3)
			glDrawElements(GL_POLYGON, pfaces[n].vertlist.size(), GL_INT, &pfaces[n].vertlist[0]);
				//glVertex3f(pvertices[pfaces[n].vertlist[i-1]].x, pvertices[pfaces[n].vertlist[i-1]].y, pvertices[pfaces[n].vertlist[i-1]].z);
	}
} 

Wow. There’s a great deal going wrong there.

First, you need to change how your vertex data is handled. You cannot have a separate set of indices for normals, texture coordinates, and positions. Your structs should look like:

struct VertexNT
{
  float position[3];
  int padding1;
  float normal[3];
  int padding2;
  float textureCoord[2];
  int padding3[2];
};

VertexNT vertexList[NUM_VERTICES];

unsigned int faceList[NUM_FACES * 3];	// three indices per triangle face

faceList indexes into vertexList. Each group of 3 elements in faceList corresponds to a single triangle face, made up of the vertex attributes stored in vertexList.
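If your loader currently produces a separate index list per attribute (as in your sFace), one simple, if wasteful, way to build the unified list is to emit one VertexNT per triangle corner. A rough sketch, assuming the faces are already triangulated, and where pnormals and ptexcoords are stand-ins for wherever your normal and texture-coordinate data actually live (they aren’t names from your code):

// Sketch: flatten separate per-attribute index lists into unified
// vertices. One vertex is emitted per triangle corner, duplicates and
// all, so NUM_VERTICES here is NUM_FACES * 3. Needs <cstring> for memcpy.
// Assumes pnormals is float[][3] and ptexcoords is float[][2].
unsigned int next = 0;
for(int f = 0; f < NUM_FACES; f++)
{
	for(int v = 0; v < 3; v++)
	{
		VertexNT &vert = vertexList[next];
		memcpy(vert.position,     pvertices[pfaces[f].vertlist[v]].vertex, 3 * sizeof(float));
		memcpy(vert.normal,       pnormals[pfaces[f].normlist[v]],         3 * sizeof(float));
		memcpy(vert.textureCoord, ptexcoords[pfaces[f].texcoordlist[v]],   2 * sizeof(float));
		faceList[f * 3 + v] = next++;
	}
}

A real loader would merge identical (position, normal, texcoord) combinations rather than duplicating them, but this gets you to a working unified format.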

Aside: the padding elements in VertexNT are there for performance reasons; graphics cards like data to sit on appropriate alignment boundaries. Generally that means 16 bytes, though if you’re using smaller data types (shorts or bytes) the hardware can handle that as well.
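If you want to sanity-check the layout, a compile-time assertion does the trick (assuming 4-byte floats and ints; static_assert needs a C++11 compiler, so use a typedef-based trick on anything older):

// 12 + 4, 12 + 4 and 8 + 8 bytes: three 16-byte groups.
static_assert(sizeof(VertexNT) == 48, "VertexNT should be three 16-byte blocks");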

To draw it, you simply do:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glVertexPointer(3, GL_FLOAT, sizeof(VertexNT), &vertexList[0].position[0]);
glNormalPointer(GL_FLOAT, sizeof(VertexNT), &vertexList[0].normal[0]);
glTexCoordPointer(2, GL_FLOAT, sizeof(VertexNT), &vertexList[0].textureCoord[0]);

// The count is the number of indices, not faces, and the index type
// must be unsigned; GL_INT is not a valid type for glDrawElements.
glDrawElements(GL_TRIANGLES, NUM_FACES * 3, GL_UNSIGNED_INT, faceList);

You should never use glDrawElements to draw a single triangle at a time, which is what you were doing here.

I just wanted to point out that what Korval probably intended to say is that the relative order of all attributes needs to be the same; there is nothing preventing you from keeping them in separate arrays, like:

vertices[4711]; normals[4711]; colors[4711];

and so on, as long as you specify the pointers and strides before you send the indices in a draw call to actually reference the attributes.
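A minimal sketch of that non-interleaved setup, with a stride of 0 meaning tightly packed (numIndices and indices are placeholders for your own index data):

// Separate, tightly packed attribute arrays sharing one index order.
float vertices[4711][3];
float normals[4711][3];
float colors[4711][3];

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glColorPointer(3, GL_FLOAT, 0, colors);

// Index i pulls vertices[i], normals[i] and colors[i] together.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, indices);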

That said, alignment, locality of reference, and other efficiency factors can make it more practical to keep all attributes interleaved in an array of structs as presented, but it isn’t a requirement. Should one of the attributes change frequently, it could be more efficient to pull it out of the struct and update it as a single separate array, but now I’m drifting away from the topic. :)

Korval is, however, 100% correct in pointing out the inefficiency in your code. I myself can only think of one or two ways to draw that would be even less efficient.