OpenGL vertex arrays in C#

I’m a somewhat new C# programmer who’s just come across OpenGL and is trying to write a program to display and edit 3D terrain. I’m using SharpGL as an OpenGL wrapper; so far I’ve succeeded in following the more basic tutorials (glBegin/glEnd, gl.TRIANGLESTRIP, basic lighting, camera, etc.) but I’m having trouble when it comes to understanding vertex arrays. Part of this, I’m sure, is my having difficulty translating C++ to C#, but I’m also unclear on how glVertexPointer actually works; and the OpenGL wiki article, as well as Google searches for “Vertex Array tutorial”, haven’t cleared it up for me. If anyone could answer some basic questions for me, I would be very grateful.

As I understand it, it’s
glVertexPointer(# of components to each vertex, GL_FLOAT, something called a stride, array with vertex data);

I’m not sure what a GL_FLOAT is; Google searches suggest it’s just a float, and that it has a special name due to some peculiarity of C++. Or maybe it’s an array of floats?

The stride I don’t understand very well at all. Tutorials say it’s only used when vertex data is not tightly packed; what does “tightly packed” mean and how does stride relate to the vertex data? Does it have to do with array indices?

Even the last bit gives me trouble. :frowning: Fortunately my question is a lot simpler: what is the best way to store that vertex data? In a multi-dimensional array with a syntax like vertices[128,3] for 128 vertices with 3 coords each, or a jagged array (an array-of-arrays) like vertices[][], or just as a single array, vertices[], with the vertices’ xyz coordinates just strung out one after the other, like {x1, y1, z1, x2, y2, z2}?

Do you have to have every single vertex in your 3D world contained in one large, messy array, or can you have multiple glVertexPointers and only use glDrawElements with specific ones?

I apologize for the number of questions I have, and how basic they probably are. I’ll keep trying to figure it out, but if anyone can help me with all this, I would be very thankful.

On a slightly related note, the reference pages for OpenGL aren’t working for me; they just open a new tab which never loads completely. I’m using IE9, so according to the preface they ought to be compatible. Is there something I’m missing, or is there some other place where I might access them?

There are two very different ways to use OpenGL. The old, deprecated way, from before OpenGL 3, is the fixed function pipeline. It is easier to use and understand: you call glBegin(), define a number of vertices with glVertex()/glNormal()/glTexCoord(), and end it all with glEnd(). Using glVertexPointer() is a more efficient way of doing the same thing.
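In case it helps, here is a minimal sketch in plain C (not SharpGL) of the same triangle drawn both ways; the coordinates are just an example, and no buffer object is bound, so the pointer passed to glVertexPointer() is a real client-side address:

/* Immediate mode: one call per vertex */
glBegin(GL_TRIANGLES);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();

/* Client-side vertex array: one call describes all vertices at once */
GLfloat verts[] = { 0.0f, 0.0f, 0.0f,
                    1.0f, 0.0f, 0.0f,
                    0.0f, 1.0f, 0.0f };
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts); /* 3 floats per vertex, tightly packed */
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);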

But that is the old way. If you search for tutorials, you will find a lot that use this old style. The new way (since 2008!) is to define all your vertex data in buffers that you transfer to the GPU, and then write your own shader. You use glBufferData() (for example) to transfer data to the GPU, and glVertexAttribPointer() to define the layout of this data. This way you supply all the information about each vertex at the same time (coordinates, normals, texture coordinates, and anything else you like). But it means you have to write your own shader, which is the program that executes on the GPU and interprets your data. (It was possible to use your own shader before OpenGL 3, too.)
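A rough sketch of that newer style, in C (shader creation is omitted, and attribute location 0 is an assumption that has to match your vertex shader):

GLfloat verts[] = { 0.0f, 0.0f, 0.0f,
                    1.0f, 0.0f, 0.0f,
                    0.0f, 1.0f, 0.0f };
GLuint vbo;

/* Upload the vertex data to a buffer on the GPU */
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* Describe the layout: attribute 0 is 3 floats per vertex, tightly packed */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

/* With your own shader program bound, draw the triangle */
glDrawArrays(GL_TRIANGLES, 0, 3);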

Now a little about the stride. Suppose you have, in total, 32 bytes of data for each vertex (the vertex attribute data) in a buffer. If that buffer is tightly packed, which means there are no extra bytes between one record and the next, then you can set the stride either to 0 or to 32. But if your data is, for example, 31 bytes, it may be padded out to a 32 byte boundary, with 1 byte as filler. Then you have to give the stride as 32, even though you only have 31 bytes of actual data for each vertex. When programming in C/C++, it is easy to have full control of, and insight into, how this kind of data is laid out in memory. I don’t know how to use OpenGL from C#, so that part you will have to find elsewhere. Or use a language better adapted for OpenGL.
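To make the stride concrete, here is a sketch in C of an interleaved vertex record; the struct is just an example layout, the point being that the stride is simply the size of one whole record:

typedef struct {
    GLfloat position[3]; /* 12 bytes */
    GLfloat normal[3];   /* 12 bytes */
    GLfloat texcoord[2]; /*  8 bytes, so 32 bytes per vertex in total */
} Vertex;

Vertex vertices[128];

/* The stride is the distance from one position to the next: sizeof(Vertex) = 32.
   Each pointer starts at a different offset but steps by the same stride. */
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), vertices[0].position);
glNormalPointer(GL_FLOAT, sizeof(Vertex), vertices[0].normal);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), vertices[0].texcoord);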

For a nice tutorial, that also gives good information about 3D programming, see http://www.arcsynthesis.org/gltut/.

Try the Category:Core API Reference page on the OpenGL Wiki instead, but note that it does not list deprecated functions.

GL_FLOAT means a 32 bit float. The other options are GL_SHORT (a 16 bit signed integer), GL_INT (a 32 bit signed integer), and GL_DOUBLE (a 64 bit float).
It isn’t a peculiarity of C++. It is how computers work on planet Earth, whether it is a PC or a Mac or a BlueBerry mephone. On planet Xenus, GL_FLOAT is a 103 bit float.

Stride is “how far away is it to the next vertex?”. It is measured in bytes.

Let’s assume that you are using floats. Each float is 4 bytes.
In memory, you have laid out your vertices like this:
x y z x y z x y z…etc

Vertex 0 is at address 0.
Vertex 1 is at address 3*4 = 12 bytes.
Vertex 2 is at address 3*4*2 = 24 bytes.
Vertex 3 is at address 3*4*3 = 36 bytes.
Vertex 4 is at address 3*4*4 = 48 bytes.

The difference between one vertex and the next is always 12 bytes away. Therefore, the stride is 12.
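In a call, that tightly packed layout can be given either as a stride of 0 or as the explicit 12; the array contents here are just an example:

GLfloat verts[] = { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f };

/* 0 means "tightly packed": OpenGL works out the step from the size and type */
glVertexPointer(3, GL_FLOAT, 0, verts);

/* Equivalent explicit stride: 3 floats * 4 bytes = 12 bytes per vertex */
glVertexPointer(3, GL_FLOAT, 12, verts);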

For storing the vertex data, I suggest a single array, with the xyz coordinates strung out one after the other.

No, you don’t have to put every vertex into one large array. Use separate arrays; you can point glVertexPointer() at a different one before each glDrawElements() call.

No problem. That’s what the forums are for.

Thanks! With your help, I’ve managed to get buffers working, and glDrawArrays works properly.

However, glDrawElements is proving more difficult. In particular it’s the last parameter it takes that is causing me trouble. As I understand it, it’s a pointer to the start of the index array data in memory. The tutorial Kopelrativ linked me to tells me that indices must be unsigned, either ushorts, ubytes, or uints. However, in the same tutorial, glDrawElements has a fourth parameter of just 0:
glDrawElements(GL_TRIANGLES, ARRAY_COUNT(indexData), GL_UNSIGNED_SHORT, 0);
That doesn’t work for me; IntelliSense flags glDrawElements as having one or more invalid arguments. So how does the tutorial get away with just putting 0 if glDrawElements is looking for something like IntPtr(indices[0])?

When I try
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, new IntPtr(0));
I get a white window followed by a popup saying that vshost32.exe has stopped working.

For that matter, how come it’s looking for IntPtr and not UIntPtr, if index arrays can only be unsigned?

The fourth argument to glDrawElements() used to be a pointer to the indices vector, but that is no longer the case (since OpenGL 3). The indices are now transferred to a special buffer on the GPU, allocated and bound with GL_ELEMENT_ARRAY_BUFFER. The indices argument is instead used as a byte offset into this buffer. If you want to draw all indices, the value should be 0. Not very logical perhaps, especially as the documentation still refers to the argument as a pointer.

Use glBufferData() to initialize the content of this buffer. Something like this (for C programming):

GLuint IndexVBOID;

glGenBuffers(1, &IndexVBOID); /* create the buffer object */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID); /* bind it as an index buffer */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, SizeInBytes, ptrIndexData, GL_STATIC_DRAW); /* upload the index data */
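With that buffer bound, the last argument to glDrawElements() is then a byte offset rather than a real pointer; something like this, where IndexCount is a placeholder for the number of indices:

glDrawElements(GL_TRIANGLES, IndexCount, GL_UNSIGNED_SHORT, (void*)0); /* 0 = start at the first index in the IBO */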

The VAO will remember this Index Buffer Object (IBO). Notice that the buffer objects themselves are no different whether they hold vertices or indices; the difference lies in how they are bound.

If IBOs are bound with GL_ELEMENT_ARRAY_BUFFER, are VBOs still bound with GL_ARRAY_BUFFER or do they also use GL_ELEMENT_ARRAY_BUFFER?

The VBO is still bound with GL_ARRAY_BUFFER. That means you need two buffers when using glDrawElements(). One for the vertex data (GL_ARRAY_BUFFER) and one for the indices (GL_ELEMENT_ARRAY_BUFFER).

All this setup is stored in the VAO, and activated with the call to glBindVertexArray().
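Putting it together, a rough C sketch of the whole setup; vertexData, indexData and indexCount are placeholders, and GL_UNSIGNED_SHORT assumes 16 bit indices:

GLuint vao, vbo, ibo;

glGenVertexArrays(1, &vao);
glBindVertexArray(vao); /* the VAO records the bindings made below */

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo); /* vertex data */
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo); /* index data */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexData), indexData, GL_STATIC_DRAW);

/* Later, to draw: */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0);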

No, it has been that way since GL 1.5.
GL 1.5 introduced vertex buffer objects (VBOs). Therefore, if you have a VBO bound, the pointers passed to glVertexPointer() and the rest are zero-based offsets into the buffer.
Likewise, if you have an IBO bound, the pointer passed to glDrawElements() (or glDrawRangeElements()) is a zero-based offset.

However, I think the OP is trying to get plain old vertex arrays to work (no VBO or IBO bound), which means the pointer should be a real address (your index array).
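In other words, the same draw call reads its last argument in two different ways depending on what is bound; a minimal sketch, with a made-up index array:

GLushort indices[] = { 0, 1, 2 };

/* Plain old vertex arrays, no IBO bound: pass a real client-side address */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices);

/* IBO bound: the same argument becomes a byte offset into that buffer */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (void*)0);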

I think he is trying to learn the basics. He isn’t in “GL 3.3 land” or “GL 4.1” land.

Oh, I’m certainly not trying to stick myself in way over my head. Thanks to Kopelrativ’s pointers, though, I got glDrawElements to work with an IBO and VBO. My goal is to render a terrain with 4,000 to 70,000 vertices. I assumed traditional glVertex commands would be far too slow, and that vertex arrays would help me render it more efficiently. If VBOs and IBOs can do the job even faster, all the better.

Thanks, everyone, for all your help! :slight_smile:

Ah, for the record, why would I need a pointer at all to get vertex arrays to work, if they don’t use buffers?

Using glDrawElements() with a client-side pointer has the disadvantage that the index data is transferred to the GPU again every time you draw. That is a waste of effort if the indices haven’t changed from one draw to the next. And if they have changed, it is not much effort to update them with glBufferData(). So why would anyone use glDrawElements() with a pointer? I think that is part of the OpenGL legacy, from when there was no other way.

One note of interest regarding glDrawElements(): If the same index is used more than once, which can be the case of triangles sharing a vertex attribute, there is a cache in the GPU that is used instead of calling the fragment shader again.

What Kopelrativ means by “waste of effort” is that the driver uploads your indices to video memory, then fires a command so that the GPU uses them to render. That is why VBO/IBO were invented: to avoid that copy operation.

Exactly. There is a page in the Wiki that talks about this shit and how to improve performance.

That has been part of GL since version 1.1. We call them plain old vertex arrays.

That’s the way it is whether you use plain old vertex arrays or VBO/IBO.

No, it is not the fragment shader. It is for the vertex shader.