Hey guys, newcomer here… please forgive me if I don’t post my stuff the way you’d like to see it.
Anyway, I’m working on a 3D engine and I’m responsible for the Linux and OpenGL portions. I’m implementing vertex and index buffers at the moment and I’ve run into a problem that has me stumped. Here’s what I do to set up the buffers:
Index Buffer:
GLExtensions::glGenBuffersARB(1, &m_buffer);
// make active
GLExtensions::glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, m_buffer);
// allocate m_size bytes; the index data is uploaded later
GLenum usage = (isDynamic() ? GL_DYNAMIC_DRAW_ARB : GL_STATIC_DRAW_ARB);
GLExtensions::glBufferDataARB(
    GL_ELEMENT_ARRAY_BUFFER_ARB,
    m_size,
    NULL,    // no data yet, just reserve the storage
    usage
);
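The actual index data goes in afterwards; roughly like this (a simplified sketch — pIndices is just a placeholder for the caller’s data):
// sketch of the upload -- pIndices is a placeholder, error checking omitted
GLExtensions::glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, m_buffer);
GLExtensions::glBufferSubDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0, m_size, pIndices);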
Vertex Buffer:
GLExtensions::glGenBuffersARB(1, &m_buffer);
// make active
GLExtensions::glBindBufferARB(GL_ARRAY_BUFFER_ARB, m_buffer);
// allocate m_size bytes; the vertex data is uploaded later
GLenum usage = (isDynamic() ? GL_DYNAMIC_DRAW_ARB : GL_STATIC_DRAW_ARB);
GLExtensions::glBufferDataARB(
    GL_ARRAY_BUFFER_ARB,
    m_size,
    NULL,    // no data yet, just reserve the storage
    usage
);
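Same idea for the vertex data; for the dynamic case the fill looks roughly like this (again a simplified sketch — pVertexData is a placeholder):
// sketch of the fill path -- pVertexData is a placeholder
GLExtensions::glBindBufferARB(GL_ARRAY_BUFFER_ARB, m_buffer);
void* pDest = GLExtensions::glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
// (should check pDest for NULL here)
memcpy(pDest, pVertexData, m_size);
GLExtensions::glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);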
Activation:
// position?
if (pos > 0) {
    int oset = m_pStreamSetDefinition->getElementOffset(index, IStreamDefinition::POSITION);
    glVertexPointer(
        3,
        GL_FLOAT,
        vertexSize,
        BUFFER_OFFSET(oset)
    );
    glEnableClientState(GL_VERTEX_ARRAY);
} else {
    glDisableClientState(GL_VERTEX_ARRAY);
}
// normal?
if (normal > 0) {
    int oset = m_pStreamSetDefinition->getElementOffset(index, IStreamDefinition::NORMAL);
    glNormalPointer(
        GL_FLOAT,
        vertexSize,
        BUFFER_OFFSET(oset)
    );
    glEnableClientState(GL_NORMAL_ARRAY);
} else {
    glDisableClientState(GL_NORMAL_ARRAY);
}
// texcoords?
if (texcoords > 0) {
    int oset = m_pStreamSetDefinition->getElementOffset(index, IStreamDefinition::TEXTURE0);
    glTexCoordPointer(
        2,
        GL_FLOAT,
        vertexSize,
        BUFFER_OFFSET(oset)
    );
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
} else {
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
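In case the definition matters, BUFFER_OFFSET is just the usual byte-offset-to-pointer macro:
// the usual definition -- included in case it matters
#define BUFFER_OFFSET(i) ((char*)NULL + (i))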
Rendering:
int indexCount = (triangleCount * 3);
GLExtensions::glDrawRangeElements(
    GL_TRIANGLES,
    firstIndex,
    firstIndex + indexCount,
    indexCount,
    GL_UNSIGNED_SHORT,
    NULL    // byte offset 0 into the bound index buffer
);
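As I understand the VBO extension, the index buffer has to be bound as GL_ELEMENT_ARRAY_BUFFER_ARB at draw time so that the NULL argument is read as a byte offset into it rather than as a client-memory pointer; so the draw path should look something like this (m_indexBuffer is a stand-in name):
// the index VBO must be the bound element array buffer here
GLExtensions::glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, m_indexBuffer);
GLExtensions::glDrawRangeElements(GL_TRIANGLES, firstIndex, firstIndex + indexCount, indexCount, GL_UNSIGNED_SHORT, NULL);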
Ok, so I’m running Linux (kernel 2.6) with the latest stable NVIDIA drivers and a 6800 OC. The same engine code runs on Windows with OpenGL, but on an ATI card. When I render in immediate mode (glBegin()…glEnd()) everything works fine. With the buffer path I get a segfault in what looks like the NVIDIA driver, in a symbol named “_nv001100gl()”. Four iterations of the main loop render everything correctly; on the 5th iteration I get the segfault in that symbol. I’ve verified that the values passed to glDrawRangeElements are correct (they are the same every time, and they are the values I expect).
Thanks in advance for any help!