Question about indexArray types

Table 2.4 on p. 24 of the 1.4 spec lists, among others, ‘int’ as a type for index arrays. On p. 26 of the same document it states that drawElements requires indices of type ubyte, ushort or uint.

Can the index arrays be used for something I’m not aware of, or is this simply a minor glitch in the documentation?

J.O.

That table is for source data types in the gl*Pointer() calls, not index types. All the index types for glDrawElements() are unsigned.

Aha…! So maybe I don’t need to define an index array with IndexPointer and enable the client state INDEX_ARRAY after all, then…?! Maybe there’s something here I’ve misunderstood all along…

What exactly are you trying to do?

If you have vertex data, say, then you give the GL a pointer to it using glVertexPointer(…), then call glDrawElements(…) with your indices. It all depends on what you want to do and how your data are organized.

Hi, thanks for the interest… I have a pool of vertices (hence glVertexPointer and NormalPointer to set up the corresponding arrays).

Then I have a list of integer triples which are cursors into these lists, defining a set of triangles.

I thought I should call IndexPointer with this latter array, enable the three client states for vertex, normal and index arrays, and finally issue a call to DrawElements with mode TRIANGLES, a count of the number of triangles times 3 (the total number of vertices involved in the set of triangles), my “cursor type” (now unsigned, previously just int), and finally the index array itself (the same pointer I passed to the IndexPointer call…)

Now you’ve got me realizing that I don’t need the index array for this… Must look up what it’s for, then… :slight_smile:

My next problem is that even though I include glext.h (from Nvidia) it doesn’t declare glBindBufferARB as expected. Sigh.

Your triangle setup seems reasonable to me.

As for the new problem, you can get all the latest headers and extension specs here
http://oss.sgi.com/projects/ogl-sample/registry/

Thanks for the link. I was attempting to use the ARB_vertex_buffer_object extension, but it seems it isn’t supported by Nvidia after all. (Even though they say so on their website. Maybe it’s only for the Windows drivers, not Linux, I don’t know…)

Ahh… I just needed to wrap the prototypes in an extern “C” {…} and then it worked…

Just pumping out GL_TRIANGLES has been fast enough for a long time, but now I needed some more. Going from GL_TRIANGLES at 16 fps, to classic vertex arrays at 23 fps, to VBOs at 111 fps is not bad! A sevenfold increase…
:slight_smile:

A sevenfold increase is a bargain at any price. :slight_smile: