OpenGL.org

Thread: AMD conditional rendering broken

  1. #1
     Nowhere-01 (Member, Regular Contributor)
     Join Date: Feb 2011
     Location: Novosibirsk
     Posts: 251

    AMD conditional rendering broken

    So conditional render always passes if I use GL_QUERY_NO_WAIT, or stalls the GPU driver if I use GL_QUERY_WAIT as the parameter. I couldn't find much information about this issue, except for: http://www.g-truc.net/post-0300.html
    In my case it acts differently. I have an HD6670 with Catalyst 13.1, Core Profile Forward-Compatible Context 9.12.0.0.

    Example of the rendering code (simplified):
    //render to occlusion query
    Code :
    void renderToOcclusion()
        {
            if(!isEnabled || !isInFrustum || !isDiscardable) {
                return;
            }
            glBeginQueryARB(GL_SAMPLES_PASSED_ARB, occQuery);
            modelViewMatrix = currentViewMatrix * modelMatrix;
     
            //RENDER
            lodAvailable = modelStorage[modelId].lodAvailable;
            glBindVertexArray(modelStorage[modelId].data[lodAvailable].vertexArrayObject);
     
            for(unsigned s = 0, off = 0; s < modelStorage[modelId].data[lodAvailable].numSurfaces; s++)
            {
                SShader[3].applyProgram();
     
                glUniformMatrix4fv(SShader[shaderId].shaderSet[programId].uniform_modelViewMatrix, 1, 0, glm::value_ptr(modelViewMatrix));
                glUniformMatrix4fv(SShader[shaderId].shaderSet[programId].uniform_projectionMatrix, 1, 0, glm::value_ptr(currentProjectionMatrix));
     
                glDrawElements(GL_TRIANGLES, modelStorage[modelId].data[lodAvailable].numIndices[s], GL_UNSIGNED_SHORT, BUFFER_OFFSET(off));
                off += modelStorage[modelId].data[lodAvailable].numIndices[s] * sizeof(short);
            }
     
            glEndQueryARB(GL_SAMPLES_PASSED_ARB);
     
            //debug:
            unsigned numSamples = 0;
            unsigned occQueryAvailable = 0;
            while(!occQueryAvailable) {
                glGetQueryObjectuiv(occQuery,GL_QUERY_RESULT_AVAILABLE, &occQueryAvailable); 
            }
            glGetQueryObjectuiv(occQuery, GL_QUERY_RESULT, &numSamples);
            LOG << numSamples << endl;   //!THAT OUTPUTS CORRECT NUMBER OF SAMPLES!
        }
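Side note on the debug loop above: spinning on GL_QUERY_RESULT_AVAILABLE without flushing the command stream is not guaranteed to terminate, because the queued commands (including glEndQueryARB) may never be submitted to the GPU while you busy-wait. A sketch of a safer variant of the same polling code, assuming only that a glFlush is inserted before the loop:

```cpp
// Safer debug polling: flush once so the queued commands (including
// glEndQueryARB) actually reach the GPU, then spin on availability.
unsigned numSamples = 0;
unsigned occQueryAvailable = 0;
glFlush(); // without this, the result may never become available
while(!occQueryAvailable) {
    glGetQueryObjectuiv(occQuery, GL_QUERY_RESULT_AVAILABLE, &occQueryAvailable);
}
glGetQueryObjectuiv(occQuery, GL_QUERY_RESULT, &numSamples);
```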

    //render:
    Code :
    if(isEnabled && isInFrustum)
            {
                if(isDiscardable)
                    glBeginConditionalRender(occQuery, GL_QUERY_NO_WAIT);
     
                modelViewMatrix = defaultViewMatrix * modelMatrix;
                normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelViewMatrix)));
     
                //RENDER
                drawGeometry();
                if(isDiscardable)
                    glEndConditionalRender();
            }

    As described above, this results in all objects always being rendered with GL_QUERY_NO_WAIT, and in a freeze with GL_QUERY_WAIT. Yet glGetQueryObjectuiv(occQuery, GL_QUERY_RESULT, &numSamples) returns correct values, and if I use its results manually instead of conditional render, I get correct occlusion. But the old-style occlusion query is such a pain in the ass to synchronize that I just don't want to use it anymore. I did expect AMD to still have minor problems with their OpenGL implementation, but this is ridiculous. I'm in a debugging nightmare.
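    For what it's worth, the manual fallback doesn't have to stall: poll the query once per frame and keep using the previous frame's visibility while the new result isn't in yet (the occlusion is then one frame stale, but nothing blocks). A sketch of that bookkeeping, with the GL reads abstracted into parameters — resultAvailable and samplesPassed are assumptions standing in for glGetQueryObjectuiv with GL_QUERY_RESULT_AVAILABLE and GL_QUERY_RESULT:

```cpp
// Hedged sketch: per-object occlusion state that reuses the previous
// frame's query result when the current one isn't ready yet.
struct OcclusionState {
    bool lastVisible = true;   // assume visible until a result says otherwise
    bool queryPending = false; // set to true after issuing the query
};

// resultAvailable / samplesPassed stand in for glGetQueryObjectuiv(occQuery,
// GL_QUERY_RESULT_AVAILABLE, ...) and glGetQueryObjectuiv(occQuery,
// GL_QUERY_RESULT, ...) respectively.
bool shouldDraw(OcclusionState& s, bool resultAvailable, unsigned samplesPassed) {
    if (s.queryPending && resultAvailable) {
        s.lastVisible = samplesPassed > 0; // consume the finished query
        s.queryPending = false;
    }
    return s.lastVisible; // possibly one frame stale, but never stalls
}
```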
    Last edited by Nowhere-01; 02-13-2013 at 10:38 AM.
