Different glBindBuffer calls for one glDrawElements

Hello,

I have a problem: my program crashes on one piece of hardware but works fine on another. The crashing hardware is a notebook with an NVIDIA GeForce 8400M GS.

I thought it would be easier to create one buffer per data type: one buffer with GLfloat (for the colour, normals, uv-coordinates, vertices, …) and one with GLboolean (for the edge flags). As an alternative I could store everything in the same buffer, but then I would have to use the old C functions with malloc, and it is not that easy to get index access to an element in the buffer ([]-operator) because the data types have different sizes (sizeof(GLboolean) != sizeof(GLfloat)). Therefore I stored each data type in a separate buffer.
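For context, the two buffers are created roughly like this (simplified sketch; vertexCount, floatData and edgeFlagData stand in for my real variables):

GLuint vboID_Flt = 0, vboID_Bool = 0;
glGenBuffersARB(1, &vboID_Flt);
glGenBuffersARB(1, &vboID_Bool);

// all GLfloat attributes (uv, colour, normal, vertex) interleaved, sizeRowVbo floats per vertex
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Flt);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertexCount * sizeRowVbo * sizeof(GLfloat), floatData, GL_STATIC_DRAW_ARB);

// one GLboolean edge flag per vertex
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Bool);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertexCount * sizeof(GLboolean), edgeFlagData, GL_STATIC_DRAW_ARB);

glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);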

My program looks something like this:

...
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Flt);

unsigned int sizeRowVbo = m_drawData.getSizeRowVbo();
unsigned int sizeRowVboByte = sizeRowVbo * sizeof(GLfloat);

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, sizeRowVboByte, offsetUv);

glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, sizeRowVboByte, offsetColor);

glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeRowVboByte, offsetNormal);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeRowVboByte, offsetVertex);

glBindBufferARB(GL_ARRAY_BUFFER_ARB, NULL);
 
glEnableClientState(GL_EDGE_FLAG_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Bool);
glEdgeFlagPointer(sizeof(char), 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, NULL);
...

The problem is that it crashes on the GeForce 8400M when I unbind the buffer with the floats and bind the buffer with the edge flags to set the edge-flag pointer with glEdgeFlagPointer.

Is it possible to do it like this, or do I have to build just one buffer for each glDrawElements(…)?

Thank you.

A few small things wrong with your code which may or may not be contributing to this:

- You should be using the GLboolean data type for your edge-flag array.
- To unbind, you should be passing 0 as the buffer object instead of NULL.
- If data in an array is tightly packed I normally prefer to specify a stride of 0.

Otherwise, yes, you can specify multiple VBOs; the currently active VBO affects subsequent gl*Pointer calls, not the actual draw calls (which may be affected by index buffers instead).
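For example, the edge-flag part could look something like this (untested sketch based on your posted names):

glEnableClientState(GL_EDGE_FLAG_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Bool);   // buffer filled with GLboolean edge flags
glEdgeFlagPointer(0, 0);                            // stride 0 = tightly packed, byte offset 0 into the VBO
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);            // unbind with 0, not NULL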

Thank you for this answer! However, this is not the problem, or maybe not the only problem.

If you are trying to render after those calls, you should not call glBindBufferARB(GL_ARRAY_BUFFER_ARB, NULL); because that unbinds the buffer object.

To unbind, you should be passing 0 as the buffer object instead of NULL.

There is no difference between NULL and 0 in C/C++.

The problem is that it crashes on the GeForce 8400M when I unbind the buffer with the floats and bind the buffer with the edge flags to set the edge-flag pointer with glEdgeFlagPointer.

Sounds like a driver bug. My guess is that they didn’t test edge flags usage with buffer objects. Or at least, not thoroughly enough.

Theoretically, there is a difference in C. NULL is defined as “(void*)0” in C but simply as 0 in C++.

Just playing the clever guy here :smiley:

I know there’s no difference, but it still smacks of relying on an implementation detail rather than something definite, and the spec and documentation consistently refer to “a non-zero buffer object”, so make it 0; do things in accordance with the spec and documentation.

Thank you, but there is no general problem with the glEdgeFlagPointer function on this driver. If I do not swap the buffer and instead set the edge-flag pointer to a position in the float buffer, it runs OK. I did this for testing; it is logically wrong (of course), but it runs without a crash. I think my solution will be to make the float VBO one column bigger and store the edge flag in that extra column. In this case I lose 3 bytes per edge flag (sizeof(float) = 4, sizeof(bool) = 1), but it seems that this solution will also run on the GeForce 8400M GS. Another advantage is that I only have to create two buffers for each object (float for the vertices, colors etc. and unsigned int for the elements) and not three (float, bool and unsigned int).
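The pointer setup would then look roughly like this (untested sketch; offsetEdgeFlag is just a placeholder for the byte offset of the extra slot within a row, and sizeRowVboByte is the widened row size):

glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Flt);

glEnableClientState(GL_EDGE_FLAG_ARRAY);
// the GLboolean edge flag sits in the first byte of the extra 4-byte slot of each row
glEdgeFlagPointer(sizeRowVboByte, (const GLboolean*)offsetEdgeFlag);

glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);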

Edit:
As an alternative I could build the VBO data with malloc(…) and allocate just one byte for the edge flag, but I am not that familiar with the old C functions.
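Something like this, I guess (rough, untested sketch; floatData, edgeFlagData and vertexCount are placeholders again):

// pack sizeRowVbo floats plus one GLboolean edge flag into each row of one raw buffer
size_t rowBytes = sizeRowVbo * sizeof(GLfloat) + sizeof(GLboolean);
unsigned char* raw = (unsigned char*)malloc(vertexCount * rowBytes);

for (unsigned int i = 0; i < vertexCount; ++i)
{
    unsigned char* row = raw + i * rowBytes;
    memcpy(row, &floatData[i * sizeRowVbo], sizeRowVbo * sizeof(GLfloat)); // uv, colour, normal, vertex, ...
    row[sizeRowVbo * sizeof(GLfloat)] = edgeFlagData[i];                   // the single edge-flag byte
}

glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID_Flt);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertexCount * rowBytes, raw, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
free(raw);

The float rows would then no longer be 4-byte aligned, though, which is probably not ideal.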