glDrawElements

Hi to everyone!

I’m having some trouble with the glDrawElements function, since I’ve found different explanations of how it works. Basically, I don’t understand why code like this doesn’t work:


GLuint array[3] = {0, 1, 2};
glDrawElements(GL_POINTS, 3, GL_UNSIGNED_INT, (GLvoid *) array);

Though, everything is fine if I change it to:


glDrawArrays(GL_POINTS, 0, 3);

Why is it so? In OpenGL 2 it worked, as far as I remember.

This won’t work in modern OpenGL core contexts because the element indices must be sourced from a buffer object bound to GL_ELEMENT_ARRAY_BUFFER.
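For example, something like the following should work in a core context (a sketch only; it assumes a current GL context with a VAO already bound, since the element array binding is recorded in the VAO):

```c
GLuint array[3] = {0, 1, 2};

/* Upload the indices to a buffer object bound to GL_ELEMENT_ARRAY_BUFFER. */
GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(array), array, GL_STATIC_DRAW);

/* The last argument is now a byte offset into the bound buffer,
   not a pointer to client memory. */
glDrawElements(GL_POINTS, 3, GL_UNSIGNED_INT, (GLvoid *) 0);
```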

Hi, I’m not the one who asked, but you’re an “OpenGL Pro”, so you must know something. I never used that parameter; I used to pass nullptr and do the same as mhagain said. What I never understood is why, when they deprecate something, leftovers remain. Like the matrices; they’re some kind of ghost in a way…

Just create an OpenGL compatibility profile context of any version, and it should work fine. That’s the default type of context created unless you tell OpenGL which kind you want.
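With GLUT, for instance, the profile is chosen via the display-mode flags (a sketch; GLUT_3_2_CORE_PROFILE is an Apple GLUT extension flag, so availability depends on your GLUT implementation):

```c
/* Legacy/compatibility-style context (the default): */
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);

/* Core profile context, where client-memory glDrawElements stops working: */
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_3_2_CORE_PROFILE);
```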

Well, the answer is in your own sentence: something being deprecated does not mean it’s removed. What you might be interested in is a GL header per version. Slightly different: you might also be interested in the header glcorearb.h, where only core functions and enums are available.

Thanks for your replies.

Is it good to mix different versions of OpenGL?

It’s fine. OpenGL isn’t like Direct3D where they erase the board and start over for each new version (only game developers will put up with being jerked around like that). OpenGL versions build incrementally on each other. That way, older OpenGL programs will often still compile and work just fine on a newer GL driver.

The main reason why you might choose to stop using functionality exposed by an older version in favor of functionality exposed by a newer version is that in some cases the latter provides better performance (classic example: switching from “immediate mode” to vertex arrays).

Because each version of OpenGL (aside from 1.0) is incrementally built on top of the prior version, by definition you’re always mixing OpenGL versions.

For example: glDrawElements is GL 1.1, glBindBuffer is GL 1.5, glEnableVertexAttribArray is GL 2.0, so put them together and you’re mixing versions.
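Put concretely, a perfectly ordinary draw call mixes all three (a sketch; `vbo`, `count` and `indices` here are hypothetical names assumed to be set up elsewhere):

```c
/* GL 1.5: buffer objects */
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* GL 2.0: generic vertex attributes */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid *) 0);

/* GL 1.1: indexed drawing */
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, indices);
```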

[QUOTE=mhagain;1289200]Because each version of OpenGL (aside from 1.0) is incrementally built on top of the prior version, by definition you’re always mixing OpenGL versions.

For example: glDrawElements is GL 1.1, glBindBuffer is GL 1.5, glEnableVertexAttribArray is GL 2.0, so put them together and you’re mixing versions.[/QUOTE]

Then I don’t understand why it doesn’t work now, as I said at the beginning of the thread?

Because you’ve probably created a core GL 3.x or higher context.

This is where we need to go back to the discussion on deprecation earlier in the thread.

If you create a core context, then some (but not all) functionality from older GL versions will (assuming your driver is conformant) not work. The main constraints are that you must source all vertex and index/element data from buffer objects, you must use shaders rather than the fixed pipeline, and the old matrix stack is unavailable.

If you create a compatibility context, then all of the old functionality is available and code that used to work under GL 2 will continue to work.

The hint in your OP is:

In OpenGL 2 it worked, as far as I remember

This suggests to me that you’re using a more recent version of OpenGL, but so far you have not actually confirmed this, so it would be really nice if you’d help us to help you. At least begin by telling us which version of OpenGL you’re using, how you create your context, and whether it’s core or compatibility.

In OpenGL 3+ core profile, the last argument to glDrawElements() is an offset into the buffer currently bound to GL_ELEMENT_ARRAY_BUFFER (which, unlike other buffer bindings, is stored in the current VAO). It is an error to call glDrawElements() if no buffer is bound to that target.

In earlier versions of OpenGL, and in the compatibility profile, if no buffer is bound to that target then the last argument is treated as a pointer to client memory.

A similar principle applies to most other buffer bindings: OpenGL 3+ core profile requires a buffer to be bound, while the compatibility profile sources data from client memory if no buffer is bound. If you’re planning on transitioning legacy code to OpenGL 3+ core profile, check the descriptions of the various targets in the glBindBuffer() documentation. Most of the functions which used to take a pointer to “bulk” data now require that data to be copied to a buffer first. The main exception is that texture data can still be sourced directly from client memory, even in the core profile; you don’t have to use GL_PIXEL_UNPACK_BUFFER.
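To illustrate that exception (a sketch; `tex` is a hypothetical texture name assumed to have been generated earlier, and a current core-profile context is assumed):

```c
/* Core profile: vertex and element data must come from buffer objects,
   but texture data may still be read directly from client memory. */
GLubyte pixels[4] = {255, 0, 0, 255};   /* one red RGBA texel */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```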

I think GClements gave a nice explanation of the ‘DrawElements’ functionality. I don’t know what version of OpenGL you’re using, but I’ll assume that when you tried the code in your initial post, you were doing it in 3.3 core or above.

I’ve only been learning 3.3 core, so I’ll write about it from that perspective; maybe it’ll help. As GClements said, the last argument of glDrawElements in 3.3 core is an offset to the location of the first element you want to send for drawing in the currently bound element array buffer object (that binding is part of VAO state, which is why you bind a VAO before calling a drawing command like glDrawElements in your program). Here’s what section 2.8.3 of the 3.3 core spec says (DrawElementsOneInstance is equivalent to DrawElements; it’s written as DrawElementsOneInstance for spec-writing purposes):

The command ‘void DrawElementsOneInstance( enum mode, sizei count, enum type, const void *indices )’ … constructs a sequence of geometric primitives by successively transferring the count elements whose indices are stored in the currently bound element array buffer (see section 2.9.7) at the offset defined by --indices-- to the GL.

(I added the dashes to indicate that this means the “indices” argument. Same for the later quote I mention.)

So in 3.3 core, that last argument is like an index argument.


I’ve never tried OpenGL 2 specifically, but I took a look at the 2.1 spec. It looks like the last argument to the DrawElements function actually wants an array of indices, unlike 3.3 core, which just wants a single index. Here’s what it says in 2.8 of the 2.1 spec:

The command ‘void DrawElements( enum mode, sizei count, enum type, void *indices );’ constructs a sequence of geometric primitives using the count elements whose indices are stored in --indices--.

Interestingly, it looks like OpenGL 2 wants that argument to be an array. As GClements puts it, in this case the function may treat that last argument as a pointer to client memory (which makes sense, given that you would pass an array defined in your program [which resides in what’s called the client], and passing an array in C/C++ technically passes a pointer to its first element [C/C++ element in this sense, not OpenGL element]). I’m not really sure what GClements means by having “no buffer bound to that target”; I’ve never tried OpenGL 2. In 3.3 core, I’ve found I must bind a VAO whose state includes a reference to a bound element array buffer object. But nonetheless, I can see from the spec that there’s an indication of passing an array in OpenGL 2.


So yeah, looking at the specs, there seems to be a difference in DrawElements functionality between OpenGL 2 and OpenGL 3.3 core, but this may be because of the idea of having a core profile in OpenGL 3 (OpenGL 2 doesn’t have the notion of core or compatibility profiles). I glanced at the OpenGL 3.3 compatibility spec; it looks like it mentions the “array” thing as in the GL 2 spec. I guess in general, the 3.3 compatibility profile allows you to either pass an array or an index depending on whether a certain buffer object is bound (again, not sure what specifically; guessing that in 3.3 it must involve whether a VAO is bound before the glDrawElements call). And the core profile, in attempting to be more modern, doesn’t allow that “array” passing stuff.


Overall, what I’m seeing from the specs is that in OpenGL 2, for DrawElements, you pass an array, while in 3.3 core, you must pass an index. So I can see how, if you tried to pass an array like you did in 3.3 core, you wouldn’t get the same results as in OpenGL 2. In fact, since passing an array is like passing a pointer, 3.3 core would probably just be interested in the first element of the array you passed and not care about the rest.

And in 3.3 compatibility, I think you’re able to do both: either pass an array or pass an index, depending on certain conditions. Hope that helps with clarification.

Actually in 2.x you can do either.

If a buffer object is bound to the ELEMENT_ARRAY_BUFFER binding point the last argument is interpreted as an offset into that buffer object’s data store.

If a buffer object is not bound then it’s an array of elements in client memory.
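The two cases look like this side by side (a sketch; `ebo` is a hypothetical buffer object assumed to have been filled with index data earlier, and a 2.x or compatibility context is assumed):

```c
GLuint indices[3] = {0, 1, 2};

/* No buffer bound: the last argument is a pointer to client memory. */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDrawElements(GL_POINTS, 3, GL_UNSIGNED_INT, indices);

/* Buffer bound: the last argument becomes a byte offset into its data store. */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glDrawElements(GL_POINTS, 3, GL_UNSIGNED_INT, (GLvoid *) 0);
```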

It’s incorrect to say that buffer objects are a modern OpenGL feature. They’ve actually been around since GL 1.5 and are available to even earlier versions via extensions. What is a modern OpenGL feature is making them mandatory.

All versions want an array of indices. The choice is between an array of indices stored in client memory and an array of indices stored in a buffer object.

(to mhagain) Oh ok, thanks for the correction. I never did any OpenGL beyond 3.3 core. That’s cool how buffer objects were around even back in the OpenGL 1 days.

(to GClements) That’s a better way of saying it, thanks. It’s nice to see the consistency of requiring an array of elements between OpenGL 2 and 3.

Thank you all for your replies.

I’m using a Mac, so I don’t have much choice. At first I wrote my application using GLUT and OpenGL without any extensions, that is, version 2.x, I presume. Then I rewrote everything with GLUT_3_2_CORE_PROFILE, after which the OpenGL and GLSL versions became 4.1 and some parts stopped working, like I said at the beginning. I also used glWindowPos2i and glutBitmapCharacter for some simple text printing, but now that isn’t working either, even if I use glWindowPos2iARB.

[QUOTE=richman.feynard;1289272]
I also used glWindowPos2i and glutBitmapCharacter for some simple text print, but for now it’s not working, even if I use glWindowPos2iARB.[/QUOTE]
glutBitmapCharacter() uses glBitmap(), which isn’t available in 3+ core profile. Likewise for glRasterPos() and glWindowPos(); there isn’t much point when all of the functions which use the raster position have been removed.

You’ll need to use textured triangles instead.

GLUT’s utility functions (text and geometric shapes) won’t work in a core profile, although its window and event management still works. Similarly, much of GLU won’t work in a core profile.