OpenGL interface improvement

Hi everyone,

While trying to polish my API-independent renderer, I was thinking it would be great to be able to attach user data (a pointer) to each OpenGL “object” (textures, VBOs, PBOs, …).

<dream mode on>
GLvoid glUserDataPointerARB(GLuint id, GLvoid* userData);
</dream mode off>

any comment?

It’s just as easy for the user to do this themselves as it is for the API to do it. Indeed, I would suggest that an “API independent renderer” hide details like the nature of its objects (whether they are GLints or what have you) and so forth. As such, your texture object type can store “user data” just as easily as GL can.
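
Something like this, for illustration (a minimal sketch; the Texture type and function names here are made up, not from any real renderer):

#include <GL/gl.h>

/* Hypothetical wrapper type used by the abstract renderer: the GL name and
   the user data live side by side, so no GL extension is needed. */
typedef struct Texture {
    GLuint id;        /* GL texture object name */
    void  *userData;  /* whatever the engine wants to attach */
} Texture;

/* Does what the dreamed-of glUserDataPointerARB would do, but on the
   wrapper instead of on the GL object itself. */
void textureSetUserData(Texture *tex, void *userData)
{
    tex->userData = userData;
}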

Sure, it’s easy to do it myself.
I’ve just seen a lot of low-level APIs with user data pointers and was wondering why not in OpenGL :wink:
Also, I find that user data pointers are a very nice way to help with API wrapping.

cheers :slight_smile:

I believe the need to use a user data pointer comes from implementing the GL path, not from wanting to expose GLint in the abstract API.

Note that the names of objects don’t have to be generated by GenXxx. You can use whatever non-0 value you want, and the GL implementation will cope. This means that you can use the actual user pointer as the GL ID for buffers, textures, etc.
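
Roughly like this (a sketch only; it assumes a pointer fits in a GLuint, which is questioned below):

#include <stdint.h>
#include <GL/gl.h>

typedef struct Texture {
    void *userData;
    /* ...the rest of the renderer's per-texture state... */
} Texture;

void bindTexture(Texture *tex)
{
    /* The object's own address doubles as its GL name.  glBindTexture
       creates the texture object on first use even though the name was
       never returned by glGenTextures.  A valid pointer is non-NULL, so
       on a 32-bit system the name is guaranteed to be non-zero. */
    GLuint name = (GLuint)(uintptr_t)tex;
    glBindTexture(GL_TEXTURE_2D, name);
}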

Originally posted by jwatte:
This means that you can use the actual user pointer as the GL ID for buffers, textures, etc.
I don’t think that’s portable. IDs are of type GLuint, which isn’t guaranteed to be big enough to hold a pointer value. (There’s a GLintptr type which does provide this guarantee, but AFAICS nothing uses it.) I’d imagine this could be more than a theoretical quibble as we transition to 64-bit code.

GLintptr is used for vertex buffer objects to represent either a pointer to a vertex array or an offset in a VBO.
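
For example (a sketch, assuming GL 1.5 entry points are available): glBufferSubData takes its offset as a GLintptr, while the old gl*Pointer calls reinterpret their pointer argument as an offset once a buffer is bound.

#include <GL/gl.h>

/* Classic idiom: with a VBO bound, the "pointer" argument of the
   gl*Pointer calls is really a byte offset into the buffer. */
#define BUFFER_OFFSET(bytes) ((const GLvoid *)(bytes))

void uploadAndPoint(GLuint vbo, const GLfloat *verts, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* GLintptr appears here as the destination offset of the update. */
    glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)0, size, verts);

    /* Same buffer used as the vertex array source, starting at offset 0. */
    glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));
}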

MikeC is, however, correct. GLint is defined to be 32 bits, but ‘void *’ is 64 bits on several interesting platforms. :wink:
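
If someone did want to rely on the pointer-as-name trick anyway, a compile-time guard along these lines (a C sketch, nothing official) would at least catch the platforms where it cannot work:

#include <GL/gl.h>

/* Fails to compile wherever a pointer does not fit in a GLuint
   (e.g. LP64 Unix or Win64), because the array size goes negative. */
typedef char assert_pointer_fits_in_GLuint
    [sizeof(void *) <= sizeof(GLuint) ? 1 : -1];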

Is GLint defined to be 32-bit? I thought the purpose of the generic term int was that it could be any size?
If not, why didn’t they use bit widths (int16, int64) instead of short, long, etc.?
Bloody semantics, eh?

Yes. GLint, GLuint, GLenum, and GLfloat are all 32 bits. In fact, since the GLX protocol transmits them all as 32 bits, they can only ever be 32 bits.

The specification only requires a minimum length, and the minimum length for a GLint is 32 bits. It can, of course, be longer, but not shorter.

Actually, that brings up a good point: Microsoft’s unfortunate decision to make “long” 32 bits in their x86-64 model. The LP64 model is so nice: no need for language extensions, new types, or anything like that. Blech.

And, yes, my suggestion of using the virtual address of an object as the identifier would run a very small chance of causing a namespace collision on a 64-bit machine, unless you took other measures, like allocating all of your GL object representations out of one big sequential array (of static size), which clearly isn’t the right solution for everyone.
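
A minimal sketch of that fixed-size-array workaround (names and sizes invented for illustration): the pool index, rather than the address, becomes the GL name, so it always fits in a GLuint regardless of pointer size.

#include <GL/gl.h>

#define MAX_TEXTURES 4096

typedef struct Texture {
    void *userData;
    int   inUse;
} Texture;

/* Fixed-size pool: every texture wrapper lives in this array, so its
   index can serve as the GL texture name. */
static Texture texturePool[MAX_TEXTURES];

Texture *textureAlloc(void)
{
    int i;
    for (i = 0; i < MAX_TEXTURES; ++i) {
        if (!texturePool[i].inUse) {
            texturePool[i].inUse = 1;
            return &texturePool[i];
        }
    }
    return 0; /* pool exhausted */
}

GLuint textureName(const Texture *tex)
{
    /* index + 1 keeps the name non-zero, since 0 is not a valid GL name */
    return (GLuint)(tex - texturePool) + 1;
}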