my vertex arrays code! what is wrong here?

12-12-2000, 02:26 AM
hi.. I asked about vertex arrays a day ago, and now I can see some polys on the screen, but it's all messed up! I'm putting my main vertex array code here.. can anyone help me find what's wrong? I guess it's my indices calculation, but right now I have no idea how to fix it! Anyone have an idea?
Thanks in advance!

here's the code:

float *VertexArray;
unsigned int *IndexArray;
int mapWidth = 256;
int mapHeight = 256;

void setup()
{
    // vertex array: size of map * 3 (3 elements make a vertex: x, y, z)
    VertexArray = new float[mapWidth * mapHeight * 3];

    // index array: size of map * 4 (I'm drawing quads, so it's 4 indices to the
    // 4 vertices that make a quad, right?)
    IndexArray = new unsigned int[mapWidth * mapHeight * 4];
}

/* generate the terrain's vertices & indices: take each pixel, calculate the
   height at that point, and save it in VertexArray */

void generate_landscape()
{
    // generate the terrain's vertices
    int index_array = 0;
    for(int tileZ = 0; tileZ < mapHeight; tileZ++)
    {
        for(int tileX = 0; tileX < mapWidth; tileX++)
        {
            VertexArray[index_array+0] = tileX;
            VertexArray[index_array+1] = GetHeight(tileX, tileZ);
            VertexArray[index_array+2] = tileZ;
            index_array += 3;   // 3 floats per vertex
        }
    }

    // calculate the vertex indices so I can use glDrawElements to draw the quads
    int index_indices = 0;
    for(int tileZ = 0; tileZ < mapHeight; tileZ++)
    {
        for(int tileX = 0; tileX < mapWidth; tileX++)
        {
            IndexArray[index_indices+0] = (tileZ * mapWidth) + tileX;
            IndexArray[index_indices+1] = ((tileZ+1) * mapWidth) + tileX;
            IndexArray[index_indices+2] = ((tileZ+1) * mapWidth) + (tileX+1);
            IndexArray[index_indices+3] = (tileZ * mapWidth) + (tileX+1);
            index_indices += 4;  // 4 indices per quad
        }
    }
}


/* render landscape using vertex arrays & glDrawElements() */

void render_landscape()
{
    // enable vertex arrays

    // feed OpenGL the color, the vertex array & the index array
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glVertexPointer(3, GL_FLOAT, 0, Landscape.VertexArray);
    glDrawElements(GL_QUADS, Landscape.mapWidth * Landscape.mapHeight * 4,
                   GL_UNSIGNED_INT, Landscape.IndexArray);

    // disable
}
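One likely culprit in the code above: a grid of mapWidth x mapHeight vertices only contains (mapWidth-1) x (mapHeight-1) quads, so the index loops have to stop one row and one column early; otherwise the (tileZ+1) and (tileX+1) terms index past the end of VertexArray on the last row/column, and the glDrawElements count is too large as well. A minimal sketch of the corrected index generation and draw call (using the variable names from the post; the enable/disable placeholders are filled in with the standard client-state calls):

    // only (mapWidth-1) * (mapHeight-1) quads exist in the grid
    IndexArray = new unsigned int[(mapWidth-1) * (mapHeight-1) * 4];

    int i = 0;
    for(int tileZ = 0; tileZ < mapHeight-1; tileZ++)      // stop before the last row
    {
        for(int tileX = 0; tileX < mapWidth-1; tileX++)   // stop before the last column
        {
            IndexArray[i+0] = (tileZ * mapWidth) + tileX;
            IndexArray[i+1] = ((tileZ+1) * mapWidth) + tileX;
            IndexArray[i+2] = ((tileZ+1) * mapWidth) + (tileX+1);
            IndexArray[i+3] = (tileZ * mapWidth) + (tileX+1);
            i += 4;
        }
    }

    // the draw call's element count has to match the smaller index array
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, VertexArray);
    glDrawElements(GL_QUADS, (mapWidth-1) * (mapHeight-1) * 4,
                   GL_UNSIGNED_INT, IndexArray);
    glDisableClientState(GL_VERTEX_ARRAY);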

12-12-2000, 08:54 AM
The 3rd parameter of glVertexPointer should be 12, I believe, not 0. It's 12 bytes from the first X coord to the second in the array (3 x 4-byte floats).


12-12-2000, 10:06 AM
Originally posted by Nutty:
The 3rd parameter of glVertexPointer should be 12, I believe, not 0. It's 12 bytes from the first X coord to the second in the array (3 x 4-byte floats).


No, that is not true - the original value of 0 was right.


12-12-2000, 11:27 AM
Just a thought, but could it be that the coords for the vertices are not stored properly? I ran into this problem once where faces were rendering, but incorrectly. I discovered that the vertex array was actually storing the first coord of each vertex in a block at the start of the file, so instead of x1,y1,z1,x2,y2... I had x1,x2,x3,....
As I said, though, it's just a thought.
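To illustrate that layout problem: glVertexPointer(3, GL_FLOAT, 0, ...) expects the components interleaved per vertex (x,y,z,x,y,z,...), so data loaded in separate per-component blocks has to be re-interleaved before drawing. A hypothetical helper (all names made up):

    void interleave(const float *xs, const float *ys, const float *zs,
                    float *out, int vertexCount)
    {
        // rebuild the x,y,z,x,y,z,... order that glVertexPointer expects
        for(int i = 0; i < vertexCount; i++)
        {
            out[i*3 + 0] = xs[i];
            out[i*3 + 1] = ys[i];
            out[i*3 + 2] = zs[i];
        }
    }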

12-12-2000, 11:46 PM
CS, I am a bit puzzled by this. Reading the Red Book seems to indicate that it should be zero. However I have an application which uses the following structure

typedef struct {
    hlGVector3 v;  // The vertex position.
    hlGVector3 n;  // The vertex normal.
    hlGTVector t;  // The vertex texture coords.
};

And I dynamically allocate an array of them.
If the size of the actual vertex data is ignored, and it's only the size in bytes between them, then surely I would use 20 for my stride (12 bytes for the normal (3 floats) and 8 bytes for the texture coords (2 floats)). This does not work. I have to use 32. (It definitely works with 32, as used by glNormalPointer and glTexCoordPointer.)

32 bytes is the stride between the first vertex's X coord and the 2nd vertex's X coord. So surely using the same principle on an array of floats should yield the value of 12??

Or am I being stupid here and overlooking something?.. :)

Or is it a bug in my driver? I'm using Detonator 6.47 on a GeForce 256.
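For what it's worth, stride is measured from the start of one vertex's attribute to the start of the next vertex's same attribute, not the gap between attributes, which is why 32 (the full struct size) works and 20 doesn't. Sketched with plain float fields standing in for the hlG* types (sizes assumed: 3 + 3 + 2 floats = 32 bytes):

    typedef struct {
        float v[3];   // position       (12 bytes)
        float n[3];   // normal         (12 bytes)
        float t[2];   // texture coords  (8 bytes)
    } Vertex;         // 32 bytes total

    int vertexCount = 1024;   // hypothetical mesh size
    Vertex *verts = new Vertex[vertexCount];
    // ... fill in verts ...

    // each stride is sizeof(Vertex): start-of-attribute to start-of-attribute
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].v);
    glNormalPointer(GL_FLOAT, sizeof(Vertex), verts[0].n);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), verts[0].t);

By the same principle, bare 3-float vertices sit 12 bytes apart start-to-start, and since a stride of 0 just means "tightly packed", 0 and 12 end up equivalent for a plain float array.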


12-13-2000, 06:46 AM
Hmmm.. I've got a flat undulating mesh being rendered with glDrawElements, using glVertexPointer and glColorPointer.

Both the vertices and the colors are arrays of 3-float-component Vertex structures, 12 bytes in size.

I get no difference in visuals or performance when either 12 or 0 is used for the stride on both pointer functions.

Someone please tell me what is going on here!


04-05-2001, 12:35 PM
This is an oldie, but the reason you see no difference with 12 bytes or 0 is that using 0 tells OpenGL to assume the vertices are tightly packed, which they are in your particular case.

If you were to use

float x, y, z, pad;

then you would need to use a stride of 16.

BTW, a note for performance: align to a 16-byte boundary by adding padding to your vertex data.
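A short sketch of both cases (the array size is made up):

    #define VERTEX_COUNT 1024   // hypothetical mesh size

    // tightly packed: 3 floats per vertex, nothing in between, so a stride
    // of 0 and a stride of 12 (3 * sizeof(float)) mean exactly the same thing
    float packed[VERTEX_COUNT * 3];
    glVertexPointer(3, GL_FLOAT, 0, packed);    // or 12 - identical result

    // padded to 16 bytes per vertex: the stride must now be 16, because 0
    // only ever means "tightly packed", never "work it out for me"
    typedef struct { float x, y, z, pad; } PaddedVertex;
    PaddedVertex padded[VERTEX_COUNT];
    glVertexPointer(3, GL_FLOAT, sizeof(PaddedVertex), padded);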

04-05-2001, 12:42 PM
How about a good old-fashioned glEnableClientState(GL_VERTEX_ARRAY)?

04-05-2001, 01:23 PM
Man oh man, this thread is old....

You reckon a structure of 16 bytes will render faster than if I use 12-byte structures?


I'll have to test it.