Trouble with glDrawElements and AccessViolation



qcrist
09-01-2012, 10:43 AM
I am working on an OpenGL project and I have been running into access violation errors. I managed to isolate the error to the code below, but I do not understand why it fails. Any help would be appreciated.

BTW: I know that glGenBuffers and glDeleteBuffers are used in a dumb way. The code below just replicates the error.


#include <GL/glew.h>
#include <GL/glut.h>

void draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glEnableClientState(GL_VERTEX_ARRAY);

    float v[] = {
        0, 0, 0,
        1, 1, 1,
        2, 2, 2,
    };
    unsigned char i[] = {
        0, 1, 2
    };

    /* buffers deliberately created and destroyed every frame;
       this just replicates the error */
    unsigned int a;
    unsigned int b;
    glGenBuffers(1, &a);
    glGenBuffers(1, &b);
    glBindBuffer(GL_ARRAY_BUFFER, a);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b);
    glBufferData(GL_ARRAY_BUFFER, 36, v, GL_STATIC_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 3, i, GL_STATIC_DRAW);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, 0);
    glDeleteBuffers(1, &a);
    glDeleteBuffers(1, &b);

    /* immediate-mode lines drawn after the buffered geometry */
    glBegin(GL_LINES);
    glVertex3f(0, 0, 0);
    glVertex3f(0, 0, 0);
    glEnd();

    glutSwapBuffers();
}

int window;
int width, height;

void changeSize(int w, int h)
{
    if (h == 0)
        h = 1;
    double ratio = 1.0 * w / h;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0, 0, w, h);
    gluPerspective(45, ratio, .01f, 1000);
    glMatrixMode(GL_MODELVIEW);

    width = w;
    height = h;
}

void initWindow()
{
    glEnable(GL_CULL_FACE);
    glDepthFunc(GL_LEQUAL);
    glEnable(GL_DEPTH_TEST);
    glClearColor(0.1f, 0.1f, 0.1f, 1);
    glEnableClientState(GL_VERTEX_ARRAY);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(800, 600);
    window = glutCreateWindow("OpenGl");
    glewInit();
    initWindow();

    glutIdleFunc(draw);
    glutDisplayFunc(draw);
    glutReshapeFunc(changeSize);
    glutMainLoop();
    return 0;
}


Thanks,
Qcrist

Dan Bartlett
09-01-2012, 02:07 PM
The OpenGL code itself doesn't seem to be doing anything invalid that could cause an AV.

Is glewInit returning GLEW_OK, and are the function pointers for OpenGL > 1.1 set? If not, have you tried installing graphics drivers from your GPU maker? Which line in particular is causing the error?
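
For reference, checking both takes only a few lines - a rough sketch (assumes <stdio.h> is included; goes right after glutCreateWindow in main):

GLenum err = glewInit();
if (err != GLEW_OK)
{
    /* glewGetErrorString describes why initialisation failed */
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
    return 1;
}
if (!GLEW_VERSION_1_5)
{
    /* buffer objects (glGenBuffers etc.) entered core in OpenGL 1.5 */
    fprintf(stderr, "OpenGL 1.5 not available\n");
    return 1;
}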

V-man
09-01-2012, 03:05 PM
It might have something to do with your calls to glDeleteBuffers. Try not destroying your buffers.
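
Something along these lines (just a sketch - the buffers are created and filled once at startup and only bound in draw(); vbo and ibo are hypothetical globals):

/* sketch: create + fill the buffers once, e.g. from initWindow() */
GLuint vbo, ibo;

void createBuffers(void)
{
    float v[] = { 0,0,0,  1,1,1,  2,2,2 };
    unsigned char i[] = { 0, 1, 2 };

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(v), v, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(i), i, GL_STATIC_DRAW);
}

/* then in draw(): bind and render only, no gen/delete */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, 0);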

qcrist
09-01-2012, 07:07 PM
Is glewInit returning GLEW_OK, and are the function pointers for OpenGL > 1.1 set?


I believe so. The code crashes after rendering a couple of times, not the first time.



If not, have you tried installing graphics drivers from your GPU maker? Which line in particular is causing the error?
Yes, ATI Mobility Radeon HD 5870



It might have something to do with your calls to glDeleteBuffers. Try not destroying your buffers.

Nope, didn't fix it :[

mhagain
09-01-2012, 07:20 PM
It might have something to do with your calls to glDeleteBuffers. Try not destroying your buffers.

I don't think that matters here - the OP has already acknowledged it and indicated that this is just a repro case. It would be worth trying anyway just to confirm.

An AV with this kind of code generally happens when an array that should not be used is left enabled and the draw call overflows it, but that's obviously not the case here either. It's also worth trying some or all of the following:


- Switch the index type from GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT or GL_UNSIGNED_INT, as these formats are more likely to be supported in hardware.
- Use sizeof rather than hard-coded sizes in your glBufferData calls.
- Reorder your glBindBuffer and glBufferData calls so that they work on the same buffer target sequentially; i.e. glBindBuffer (GL_ARRAY_BUFFER, ...), glBufferData (GL_ARRAY_BUFFER, ...), then the same for GL_ELEMENT_ARRAY_BUFFER.
- Add a glBindBuffer (..., 0) for each of GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER before your glBegin call.
- Likewise add a glDisableClientState (GL_VERTEX_ARRAY) in the same place.

None of these are actually required by the GL spec, true, but in the absence of any other obvious issues (such as those suggested by Dan Bartlett) the possibility of a misbehaving driver must be considered. A sketch with all of them applied is below.
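
Roughly like this (just a sketch, untested - same data as your repro case, with all of the above changes applied):

/* sketch: the buffer portion of draw() with the suggestions applied */
float v[] = { 0,0,0,  1,1,1,  2,2,2 };
unsigned short i[] = { 0, 1, 2 };   /* 16-bit indices instead of 8-bit */
GLuint a, b;

glGenBuffers(1, &a);
glBindBuffer(GL_ARRAY_BUFFER, a);
glBufferData(GL_ARRAY_BUFFER, sizeof(v), v, GL_STATIC_DRAW);

glGenBuffers(1, &b);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(i), i, GL_STATIC_DRAW);

glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);

/* unbind and disable before any immediate-mode calls */
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);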

Dan Bartlett
09-02-2012, 03:01 AM
I managed to reproduce this problem on my ATi Mobility Radeon HD5650 too.

It crashes the second time it reaches glutSwapBuffers, unless I either:
(1) Switch from GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT or GL_UNSIGNED_INT, or
(2) Remove the immediate-mode rendering from draw().

Definitely seems like a driver issue. Updating drivers had no effect; I got the crash with both the 12.6 and 12.8 Catalyst drivers.

qcrist
09-02-2012, 01:40 PM
- Use sizeof rather than hard-coded sizes in your glBufferData calls.
- Reorder your glBindBuffer and glBufferData calls so that they work on the same buffer target sequentially; i.e. glBindBuffer (GL_ARRAY_BUFFER, ...), glBufferData (GL_ARRAY_BUFFER, ...), then the same for GL_ELEMENT_ARRAY_BUFFER.
- Add a glBindBuffer (..., 0) for each of GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER before your glBegin call.
- Likewise add a glDisableClientState (GL_VERTEX_ARRAY) in the same place.


Didn't work :[




Switch the index type from GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT or GL_UNSIGNED_INT, as these formats are more likely to be supported in hardware.


Works :D, but wastes space when I only have 36 vertices :[


I managed to reproduce this problem on my ATi Mobility Radeon HD5650 too.

It crashes the second time it reaches glutSwapBuffers, unless I either:
(1) Switch from GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT or GL_UNSIGNED_INT, or
(2) Remove the immediate-mode rendering from draw().

Definitely seems like a driver issue. Updating drivers had no effect; I got the crash with both the 12.6 and 12.8 Catalyst drivers.
Cool, I thought I was buffering the data wrong or something... :]

mhagain
09-02-2012, 02:43 PM
Works :D, but wastes space when I only have 36 vertices :[
Not worth worrying about. Do the calculations - the amount of memory you would save is measured in bytes: GL_UNSIGNED_SHORT costs one extra byte per index over GL_UNSIGNED_BYTE, so even a few dozen indices only waste a few dozen bytes. Trade that off against (a) something that you know will always be accelerated by hardware and will therefore be faster, and (b) something that works. Which one wins?