Using glVertexPointer with VAOs

I am seized by the absurd desire to use the deprecated glVertexPointer with VAOs. I don't know if this is possible; if not, I will stop wasting time on this.
Here we go then…
Let us say I have an array va consisting of vertices. Normally I would write glVertexPointer(blah blah…) and send an indices array to glDrawElements. Now, if I introduce buffer objects, the spec says that the pointer is interpreted as a byte offset from the start of the buffer object. So suppose I generate a buffer object, bind it to GL_ARRAY_BUFFER, and follow that with glBufferData(GL_ARRAY_BUFFER, sizeof(va), va, GL_STATIC_DRAW). What should go in the place of the const void* argument to glVertexPointer: va, 0, or reinterpret_cast<void*>(0)? Observe that my vertex array exactly fills my buffer object.

I don’t use any VAOs at all, except the one you have to bind to be standard-conformant since, I think, 3.3 core. (Well, it works without one on NVIDIA, but their implementation is about as standards-conformant as Internet Explorer.)

Are you trying to get some new stuff into a very old application without rewriting too much of the code base? Because I can’t think of any other reason to do this.

You get glVertexAttribPointer with GL 2.0, and VAOs (GL_ARB_vertex_array_object) are core in GL 3.0.

No, I am not trying to get new stuff into a very old application. I jumped into OpenGL 4.5 without any prior knowledge of graphics and got horribly confused by all those binding points, buffer objects, and texture objects. A friend of mine suggested that I study it from OpenGL 1.1 onwards to better appreciate why things are the way they are. He is of the opinion that the present state of the API has more to do with historical reasons and backward-compatibility issues. Anyway, I got past all that fixed-function pipeline and old stuff.

Btw, I got it working with reinterpret_cast<void*>(0), though I remember Bjarne Stroustrup saying somewhere that it’s the crudest and nastiest type of conversion :wink:

Anyway, I got past all that fixed-function pipeline and old stuff.

glVertexPointer is “fixed-function pipeline and old stuff”. While you can use it with shaders, it is intended to represent the position in a rendering operation, rather than an arbitrary “means whatever my shader says it means” attribute as in generic shader operations.

Btw, I got it working with reinterpret_cast<void*>(0), though I remember Bjarne Stroustrup saying somewhere that it’s the crudest and nastiest type of conversion

Yes, and OpenGL is the crudest and nastiest type of API. Better get used to it :wink:

Did somebody say crude and nasty?

struct myVertex {
    float Position[3];
    float Normal[3];
    float TexCoord[2];
};

myVertex *v = NULL;

glBindBuffer (GL_ARRAY_BUFFER, buf);

glVertexPointer (3, GL_FLOAT, sizeof (myVertex), v->Position);
glNormalPointer (GL_FLOAT, sizeof (myVertex), v->Normal);
glTexCoordPointer (2, GL_FLOAT, sizeof (myVertex), v->TexCoord);

Well… if you are an OpenGL historian then go for it, but otherwise I would stick to a minimum version of OpenGL, depending on what hardware you want to run it on.

[QUOTE=mhagain;1280714]Did somebody say crude and nasty?

struct myVertex {
    float Position[3];
    float Normal[3];
    float TexCoord[2];
};

myVertex *v = NULL;

glBindBuffer (GL_ARRAY_BUFFER, buf);

glVertexPointer (3, GL_FLOAT, sizeof (myVertex), v->Position);
glNormalPointer (GL_FLOAT, sizeof (myVertex), v->Normal);
glTexCoordPointer (2, GL_FLOAT, sizeof (myVertex), v->TexCoord);

[/QUOTE]

That doesn’t work. C provides the offsetof macro (in &lt;stddef.h&gt;) for precisely this reason.

It actually does work. Try it.

It will probably work, at least on any architecture which supports OpenGL. But the behaviour is at best implementation-defined and possibly undefined. Both C (since C89, via &lt;stddef.h&gt;) and C++ provide offsetof, which should be used. Failing that, a more reliable approach would be to use


(char*)v->Position - (char*)v

This will work even if pointers aren’t simple integers holding zero-based byte addresses, provided that the implementation of -> doesn’t trap when applied to a null pointer.