glDrawElements access violation on AMD card, weird behavior

hey, yesterday i finally got my AMD card for testing (an HD 6670), and what a piece of crap it is… i’m talking mostly about the drivers. i can’t debug; almost nothing works, yet there are no OpenGL errors. the best hint i can get is that gDebugger occasionally catches access violation errors.

configuration: Win 7 x64, Catalyst 13.1, Gigabyte HD 6670, context created with WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB and WGL_CONTEXT_CORE_PROFILE_BIT_ARB.
my application works flawlessly on my main nVidia card, and gDebugger doesn’t catch any errors there. all deprecated stuff has been replaced.

example of code that doesn’t work on AMD:

//init quad

    GLfloat quadVertices[] = {
        -1.0f, -1.0f, 0.0f, 0.0f, 0.0f,  //bottom left corner
        -1.0f,  1.0f, 0.0f, 0.0f, 1.0f,  //top left corner
         1.0f,  1.0f, 0.0f, 1.0f, 1.0f,  //top right corner
         1.0f, -1.0f, 0.0f, 1.0f, 0.0f}; //bottom right corner

    GLubyte quadIndices[] = {2, 1, 0,    //first triangle (top right - top left - bottom left)
                             3, 2, 0};   //second triangle (bottom right - top right - bottom left)

    glGenVertexArrays(1, &quadVertexArrayObjectId);
    glBindVertexArray(quadVertexArrayObjectId);

    glGenBuffers(1, &quadVertexBufferId);
    glBindBuffer(GL_ARRAY_BUFFER, quadVertexBufferId); //bind the vertex buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 20, &quadVertices[0], GL_STATIC_DRAW);

    glGenBuffers(1, &quadIndexBufferId);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIndexBufferId); //bind the index buffer
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLubyte) * 6, &quadIndices[0], GL_STATIC_DRAW);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, BUFFER_OFFSET(0));  //position
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, BUFFER_OFFSET(12)); //tex coord
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);

    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

//render sprite for occlusion query:

    glBindVertexArray(quadVertexArrayObjectId);
    SShader[3].applyProgram(8);

    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, occQueryObject);
    modelViewMatrix = currentViewMatrix * modelMatrix;
    billBoard();
    modelViewMatrix = glm::scale(modelViewMatrix, glm::vec3(occluderScale * 3.0f, occluderScale * 3.0f, 1.0f));

    glUniformMatrix4fv(SShader[shaderId].shaderSet[programId].uniform_modelViewMatrix, 1, 0, glm::value_ptr(modelViewMatrix));
    glUniformMatrix4fv(SShader[shaderId].shaderSet[programId].uniform_projectionMatrix, 1, 0, glm::value_ptr(currentProjectionMatrix));

    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0); //access violation with gDebugger
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);

there are several cases like that in my code. i’ve checked the contents of the framebuffer; this code draws correctly without gDebugger. but conditional rendering doesn’t work, neither for these sprites nor for normal objects, which don’t produce any errors at that stage and also draw correctly into the occlusion query buffer and the gbuffer.

also, AMD doesn’t seem to like this:

    glBindFramebuffer(GL_FRAMEBUFFER, lightBufferId);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gBufferId);
    glBlitFramebuffer(0, 0, gBufferWidth, gBufferHeight, 0, 0, screenw, screenh, GL_DEPTH_BUFFER_BIT, GL_NEAREST);

although it doesn’t explain why. just another access violation with gDebugger, and nothing works in normal mode.
maybe i’m doing something embarrassing?

ok, my first self-answer:

i guess you shouldn’t use GL_UNSIGNED_BYTE for indices. it looks like a driver bug to me, but now it works.
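for reference, a minimal sketch of the change, under the assumption that widening the index type is all that matters (variable names match my init code above):

```c
/* 16-bit indices instead of 8-bit ones; same quad as before */
GLushort quadIndices[] = {2, 1, 0,   /* first triangle */
                          3, 2, 0};  /* second triangle */

glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);

/* the type argument must match what is actually in the buffer,
   otherwise the driver reads the indices as garbage */
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
```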

second:
glBlitFramebuffer works if i change:

glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, buffersArray[f].defWidth, buffersArray[f].defHeight);

to

glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, buffersArray[f].defWidth, buffersArray[f].defHeight);

in my framebuffer class initialization code for the attached depth buffer. is that normal? seems like more buggy behavior to me.
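for what it’s worth, glBlitFramebuffer requires the source and destination depth formats to match when you blit GL_DEPTH_BUFFER_BIT, so giving both FBOs the same internal format may be what actually fixed it. a rough sketch of the packed depth-stencil setup (depthRenderbufferId, width and height are placeholder names, not my real members):

```c
/* hypothetical names; a packed depth+stencil renderbuffer for an FBO */
GLuint depthRenderbufferId;
glGenRenderbuffers(1, &depthRenderbufferId);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbufferId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

/* a packed format attaches to both depth and stencil in one call */
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depthRenderbufferId);
```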

and the glRenderbufferStorage spec here says:

internalformat
Specifies the color-renderable, depth-renderable, or stencil-renderable format of the renderbuffer. Must be one of the following symbolic constants: GL_RGBA4, GL_RGB565, GL_RGB5_A1, GL_DEPTH_COMPONENT16, or GL_STENCIL_INDEX8.

what?

BTW, now it is access-violating on my SwapBuffers(hdc); call. trying to find the reason. conditional render still isn’t working, and i don’t even know what to post. i’ve already posted the example of sprite rendering to the occlusion query. in the main rendering passes i just do:

glBeginConditionalRender(occQueryObject, GL_QUERY_NO_WAIT);
//draw
glEndConditionalRender();

and the occlusion query FBO is just a normal FBO with a depth attachment and glColorMask(0, 0, 0, 0); rendering to it works fine.
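to clarify, the overall pattern looks roughly like this (core-profile query names shown here instead of the ARB suffixes i actually use):

```c
/* query pass: draw the occluder proxy into the depth-only query FBO */
glBeginQuery(GL_SAMPLES_PASSED, occQueryObject);
/* ... draw the proxy sprite ... */
glEndQuery(GL_SAMPLES_PASSED);

/* main pass: let the GPU skip the draw if zero samples passed;
   GL_QUERY_NO_WAIT means "draw anyway if the result isn't ready yet" */
glBeginConditionalRender(occQueryObject, GL_QUERY_NO_WAIT);
/* ... draw the real object ... */
glEndConditionalRender();
```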

Have you re-bound the VAO before drawing? Otherwise the glDrawElements call will be trying to pull indices from client memory, rather than from the quadIndexBufferId buffer object.

i already fixed the glDrawElements issues (presumably). yes, i do include a call to glBindVertexArray(quadVertexArrayObjectId); but i shouldn’t have used bytes for indices.

fixed the 1st post.

but 2 problems remain: conditional rendering isn’t working, and the SwapBuffers call fails (detected only with gDebugger; GetLastError() says “0”).

and spec https://www.khronos.org/opengles/sdk...ferStorage.xml here says:

Maybe you should look up the [i]OpenGL[/i] documentation, not OpenGL ES. You’ll get a much larger list of formats.

yeah, i’ve just noticed i looked at the wrong spec. but google gives it the 1st position if you search for glRenderbufferStorage, and the title just says “glRenderbufferStorage - Khronos Group”, with no sign of OpenGL ES except the link itself.

could the SwapBuffers access violation just be gDebugger malfunctioning? it doesn’t show any problems if i test the application normally.
as for conditional rendering, i’ve created a topic in the drivers section.

you shouldn’t use GL_UNSIGNED_BYTE for indices

Thanks for the hint. I am having similar problems with this version of the driver. All my code works fine on nVidia, but I am getting heaps of problems in code that worked fine on an older version of the AMD driver.

My current problem is that glDrawElementsBaseVertex reports an incomplete framebuffer error, even though glCheckFramebufferStatus returns GL_FRAMEBUFFER_COMPLETE when I build the framebuffer.

i had to put
glDrawBuffer(GL_NONE);
in the initialization of my depth-only framebuffers; it requires disabling the draw buffer if you don’t have a color attachment. but that is for the case of GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER. if it just says incomplete attachment, then i can’t say without an actual example.
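a minimal sketch of such a depth-only FBO setup, assuming a depth texture attachment (fboId and depthTextureId are placeholder names):

```c
/* hypothetical depth-only FBO; no color attachment at all */
GLuint fboId;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTextureId, 0);

/* tell GL there is nothing to draw into or read from */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the incomplete framebuffer here */
}
```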

imagine my shock: this driver is the one i started testing my application on AMD cards with. i’m exhausted.

http://www.opengl.org/wiki/Framebuffer_Object_Examples#Depth_only

yes, i know it should be like that. it was just another possible hint. nvidia doesn’t complain about it, so many people are unaware of it or forget.