strange problem with vertex arrays

Hi. I really need some help with my vertex arrays. Everything seems to work fine, I have multitexturing running fine and so on…
But as soon as I try to draw anything above the 20th (or something like that) row, the program crashes! I can't even draw a single element up there or the program crashes… And I have no idea why! I have tried writing my vertex and index arrays to a file and they all seem okay. The map is 256 wide and 256 high. I have the arrays as follows:

float g_vertex[MAP_SIZE*MAP_SIZE][3];
float g_texture0[MAP_SIZE*MAP_SIZE][2];
float g_texture1[MAP_SIZE*MAP_SIZE][2];
GLuint g_index[MAP_SIZE*MAP_SIZE*2];

Here is the code for setting up the arrays:

int index = 0;
int currentVertex = 0;
const float divv = 256;
const float divvd = 4;
int z = 0;
int x = 0;

for (z = 0; z < MAP_SIZE; z++)
{
    for (x = 0; x < MAP_SIZE; x++)
    {
        // working on the vertices
        g_vertex[x + z*MAP_SIZE][0] = x;
        g_vertex[x + z*MAP_SIZE][1] = Height(pHeightMap, x, z);
        g_vertex[x + z*MAP_SIZE][2] = z;
        // texture number one (the big one)
        g_texture0[x + z*MAP_SIZE][0] = x / divv;
        g_texture0[x + z*MAP_SIZE][1] = z / divv;
        // texture number two (detail)
        g_texture1[x + z*MAP_SIZE][0] = x / divvd;
        g_texture1[x + z*MAP_SIZE][1] = z / divvd;
    }
}

for (z = 0; z < (MAP_SIZE - 1); z++)
{
    for (x = 0; x < MAP_SIZE; x++)
    {
        currentVertex = z * MAP_SIZE + x;
        g_index[index++] = currentVertex + MAP_SIZE;
        g_index[index++] = currentVertex;
    }
}

And here is the code for drawing…

for (int row = 0; row <= MAP_SIZE - 1; row++)
{
    glDrawElements(GL_TRIANGLE_STRIP, MAP_SIZE * 2, GL_UNSIGNED_INT, &g_index[row*(MAP_SIZE * 2)]);
}

And this will cause the program to crash, and it makes no difference whether the arrays are locked or not… And everything works fine if I only draw the first 20 rows, so I think I'm managing the arrays correctly…
Is there anyone with a good answer to my problem?? I'm tearing my hair out here!

Henri,

I don't see any code to assign your C++ pointers to the arrays in OpenGL.

Try using this:

glVertexPointer and glTexCoordPointer
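
Something along these lines, to give a concrete sketch (assuming the global arrays above, and that the ARB multitexture client-state entry points are already loaded, since multitexturing is in use):

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, g_vertex);

glClientActiveTextureARB(GL_TEXTURE0_ARB);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, g_texture0);

glClientActiveTextureARB(GL_TEXTURE1_ARB);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, g_texture1);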

John.

I'm doing that. And it all runs fine with my multitexturing, as long as row is 20 or less. As soon as I try to draw anything above that, the program crashes! So glVertexPointer and that stuff is all set up correctly.

So why doesn't it work?!

You know what? I think you're misusing your arrays…

If you use MSDEV, I suggest you add a watch on &g_index[row*(MAP_SIZE * 2)]
with row > 20 and see whether you get a valid result.
By a valid result I mean it doesn't say the expression cannot be evaluated (i.e. it's not in unallocated memory), and that the indices are in the expected range.

If it crashes it simply means that it is accessing an unauthorized area of memory.
(Giving a brief look at your code I don't see anything wrong, but that's the only thing I know of that makes a vertex array call crash…)

Just my 0.02 euros.

Joel.

update:

It seems that if I use a map which is 64 in height and width, the arrays (index array) work perfectly, drawing all the way to the edge.

But that's too small, and I don't want to use 16 maps just to get back to my original 256*256 map, as I was hoping to be able to enlarge the map further once I got some extra speed and LOD working.

So my question is: why can't I use an array bigger than 64*64 without getting these strange drawing problems with my arrays (index array)???

And one more question: when I send the indices to glDrawElements, where do they get stored, in AGP memory?

Joel: I'm using MSDEV but I'm not sure how to use the watch thing, could you give me some help…

Originally posted by Henri:
[b]update:

Joel: I'm using MSDEV but I'm not sure how to use the watch thing, could you give me some help…[/b]

Just go to the View menu -> Debug Windows -> Watch (Alt + 3)
and you get the watch window (surprising that you get what you want with a MS product ).
&g_index[row*(MAP_SIZE * 2)] will give you the address in memory where that row's indices start.
g_index[row*(MAP_SIZE * 2) + indexYouWant] will give you the value at the beginning of that row with an offset of “indexYouWant”.

Just so you know what you're looking at: the watch window has a left column titled Name and a right column titled Value.

To me it sounds like you define MAP_SIZE with a certain size but maybe you use a different hard-coded value somewhere else in your code.

Make sure the problem really comes from an ill-allocated (or ill-filled) array and then try to find out why and where.
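
If you'd rather check it in code than in the watch window, a quick sanity check like this would do the same thing (my own sketch using the names from your post; CheckRowIndices is just a made-up helper):

#include <assert.h>

// check every index that one row's glDrawElements call would read
void CheckRowIndices(int row)
{
    for (int i = 0; i < MAP_SIZE * 2; i++)
    {
        GLuint idx = g_index[row * (MAP_SIZE * 2) + i];
        assert(idx < MAP_SIZE * MAP_SIZE);   // an out-of-range index will crash the driver
    }
}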

Joel.

Hi,
I think I had the same problem as you a few days ago.
You use a two-dimensional array (float[mapsize][3]). This might be your problem. I did this with my texture array and all I got was a mess. Maybe your program crashes because you do this with your vertex array, and because the memory is not laid out the right way your “polys” become concave -> crash.

Allocate only a one-dimensional array. This is a bit harder to do, but it might be your problem. If you use a two-dimensional array you never really know how C++ handles the data and whether it is laid out the way OpenGL wants it.
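
For example, something like this, just a sketch of the idea reusing the names from your code (BuildVertices1D is a made-up helper, not something from your program):

float *g_vertex1d = NULL;

void BuildVertices1D(void)
{
    g_vertex1d = new float[MAP_SIZE * MAP_SIZE * 3];
    for (int z = 0; z < MAP_SIZE; z++)
    {
        for (int x = 0; x < MAP_SIZE; x++)
        {
            int base = (x + z * MAP_SIZE) * 3;      // start of this vertex
            g_vertex1d[base + 0] = (float)x;
            g_vertex1d[base + 1] = Height(pHeightMap, x, z);
            g_vertex1d[base + 2] = (float)z;
        }
    }
    glVertexPointer(3, GL_FLOAT, 0, g_vertex1d);    // same call as before
}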

Hope that helps.
Jan.

>>>Allocate only a one-dimensional array. This is a bit harder to do, but it might be your problem. If you use a two-dimensional array you never really know how C++ handles the data and whether it is laid out the way OpenGL wants it.
<<<

Nah, a 2-dimensional array declared like that is still one linear block of memory. The compiler just calculates the offset with a multiplication and an add.

You can cast the pointer this way:

float *pointer = &array[0][0];

and you should see your numbers following each other:
pointer[0], pointer[1], pointer[2], …
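
For example, a small standalone test like this (hypothetical, nothing to do with Henri's actual code) shows the elements sitting back to back:

#include <stdio.h>

int main(void)
{
    float array[4][3];
    float *pointer = &array[0][0];

    // print the address of every element both ways:
    // they line up, because the 2D array is one contiguous block
    for (int i = 0; i < 4 * 3; i++)
        printf("pointer[%d] at %p   array[%d][%d] at %p\n",
               i, (void *)&pointer[i], i / 3, i % 3, (void *)&array[i / 3][i % 3]);
    return 0;
}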

I don’t know if some compilers work in a different way, but this should be the behavior.

V-man

Well, if you don't believe me, try it yourself. I had a two-dimensional array (allocated correctly) for my texture coordinates. All I saw was a brown mess, because C++ handled the array differently from what OpenGL expected, so the coordinates were incorrect. Then I changed it to a one-dimensional array and now everything works fine.

>>>I don't know if some compilers work in a different way, but this should be the behavior.<<<

You don't know, but you think it works this way. I thought exactly the same way, and I was wrong.
I just want to help you, so give it a try.

Jan.

Come on. What is more likely? That you had a bug in your code and made a mistake, or that the tens of thousands of people using multi-dimensional arrays every day are all doing it wrong, yet see no problem?

Y.

I have tested the one-dimensional array, and I get the same problem. Drawing with a small height map (64*64) it all works fine, but if I use 256*256 or larger, the program crashes when I try to draw anything beyond the first 2k triangles or so… (and the same thing happens on my second computer…)

So right now I'm back to my old triangle strips…

I have tested with the code from the book OpenGL Game Programming, which I used as the basis for my own code, and the same thing happens there when I make the height map 256*256…

The only example I have seen working with a height map bigger than 256*256 was a demo that built the vertex and tex coord arrays on the fly for two rows at a time, then rendered, then built two new rows… Is this the only way to go if I want a landscape with more than 256*256 vertices (except for splitting the data up into smaller arrays), and if so, WHY?
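
Something roughly like this is how I imagine that demo worked (my own guess at the idea, not its actual code; only the base texture is shown):

float stripVertex[MAP_SIZE * 2][3];
float stripTex[MAP_SIZE * 2][2];

for (int z = 0; z < MAP_SIZE - 1; z++)
{
    for (int x = 0; x < MAP_SIZE; x++)
    {
        // interleave one vertex from row z+1 and one from row z,
        // in the same order the index array used
        int v = x * 2;
        stripVertex[v][0] = (float)x;
        stripVertex[v][1] = Height(pHeightMap, x, z + 1);
        stripVertex[v][2] = (float)(z + 1);
        stripVertex[v + 1][0] = (float)x;
        stripVertex[v + 1][1] = Height(pHeightMap, x, z);
        stripVertex[v + 1][2] = (float)z;
        stripTex[v][0]     = x / 256.0f;
        stripTex[v][1]     = (z + 1) / 256.0f;
        stripTex[v + 1][0] = x / 256.0f;
        stripTex[v + 1][1] = z / 256.0f;
    }
    glVertexPointer(3, GL_FLOAT, 0, stripVertex);
    glTexCoordPointer(2, GL_FLOAT, 0, stripTex);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, MAP_SIZE * 2);  // no index array needed per strip
}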

Stupid question, but your arrays are global ones, aren't they? If not…

Here you create a (MAP_SIZE-1)*MAP_SIZE*2 array of indices:

for (z = 0; z < (MAP_SIZE - 1); z++)
{
    for (x = 0; x < MAP_SIZE; x++)
    {
        currentVertex = z * MAP_SIZE + x;
        g_index[index++] = currentVertex + MAP_SIZE;
        g_index[index++] = currentVertex;
    }
}

Here you send MAP_SIZE*MAP_SIZE*2 indices:

for (int row = 0; row <= MAP_SIZE - 1; row++)
{
    glDrawElements(GL_TRIANGLE_STRIP, MAP_SIZE * 2, GL_UNSIGNED_INT, &g_index[row*(MAP_SIZE * 2)]);
}

I didn’t understand your code very well, so it just might be the way you organize your indices.

[This message has been edited by t0y (edited 08-01-2002).]

Originally posted by Jan2000:
[b]Well, if you don't believe me, try it yourself. I had a two-dimensional array (allocated correctly) for my texture coordinates. All I saw was a brown mess, because C++ handled the array differently from what OpenGL expected, so the coordinates were incorrect. Then I changed it to a one-dimensional array and now everything works fine.

You don't know, but you think it works this way. I thought exactly the same way, and I was wrong.
I just want to help you, so give it a try.

Jan.[/b]

I can't say what your problem was, but I have checked how arrays get allocated (on my compilers anyway). It's supposed to be linear. A lot of people use 2-dimensional arrays for their matrix code as well, you know.

I use one-dimensional arrays for my textures, but use 10-dimensional arrays if you want; as long as you provide the right info to GL, it will take your texture.

V-man

You have fencepost errors in how you create and use your arrays. As someone else pointed out, you're filling in (MAP_SIZE-1)*MAP_SIZE*2 indices, but you're reading MAP_SIZE*MAP_SIZE*2 indices (because the draw loop uses <= MAP_SIZE - 1, which runs MAP_SIZE times). This will send random uninitialized index values into OpenGL.
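
One way to line the two loops up (a sketch, not a drop-in from the original code): only MAP_SIZE - 1 strips were ever built, so only draw that many.

for (int row = 0; row < MAP_SIZE - 1; row++)
{
    glDrawElements(GL_TRIANGLE_STRIP, MAP_SIZE * 2, GL_UNSIGNED_INT,
                   &g_index[row * (MAP_SIZE * 2)]);
}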