Mesa 10.3.0 driver with ATI graphics card and OpenGL 3.3 (driver bug?)

Hi!

I’m trying to use modern OpenGL with the open-source drivers on Linux.

But I have a crash when I try to generate a VAO. (This is after creating the window and the OpenGL context; the context itself is created correctly, because glGetString reports version 3.3 for both OpenGL and the shading language.)


GLuint vertexArrayID;
glCheck(glGenVertexArrays(1, &vertexArrayID));

My graphics card is an ATI Mobility Radeon HD 5470 (512 MB) and my operating system is Ubuntu 14.04 (64-bit).

Unfortunately I can’t use the proprietary drivers, because they didn’t work with Ubuntu’s graphical interface (I got a black screen instead of the login screen).

PS: I’ve set glewExperimental to GL_TRUE before initializing GLEW, but that didn’t solve the problem.

Can you double-check that you’ve initialized GLEW after you’ve created the context? Apologies if this seems obvious to you, but it is a common error. Also - are you checking the return value from your glewInit call?
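For reference, a minimal sketch of that order, assuming the context has already been created and made current (the function and variable names here are illustrative, not taken from your code):


#include <GL/glew.h>
#include <cstdio>

// Call this only AFTER the GL context exists and has been made current;
// glewInit() resolves the GL entry points through the current context, so
// calling it earlier leaves every function pointer NULL.
bool initGlew()
{
    glewExperimental = GL_TRUE; // needed for core-profile contexts with older GLEW
    GLenum status = glewInit();
    if (status != GLEW_OK)
    {
        std::fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(status));
        return false;
    }
    // glewInit() can leave a spurious GL_INVALID_ENUM in the error queue on
    // core profiles (it queries GL_EXTENSIONS via glGetString); clear it so
    // later glGetError() checks report only your own errors.
    glGetError();
    return true;
}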

It was GLEW that wasn’t initialized: C++ simply didn’t want to enter the function that initializes GLEW, so I typed all of the function’s code into my window’s constructor, and now it works. :slight_smile:

Very strange: I have no compilation errors, yet the function was never called…

Hmm… I think the driver doesn’t support OpenGL 3.3 very well.

OpenGL returns an error when I generate the VAO.

And the compilation of the shaders fails:


const std::string vertexShader =
            "#version 330 
"
            "layout(location = 0) in vec3 vertex_position;"
            "layout(location = 1) in vec4 vertex_color;"
            "layout(location = 2) in vec2 vertex_texCoords0;"
            "layout(location = 3) in vec3 vertex_normal;"
            "layout(location = 10) in mat4 mvp;"
            "out vec2 texCoords;"
            "out vec4 color;"
            "void main () {"
                "gl_Position = mvp * vec4(vertex_position, 1.0);"
                "texCoords = vertex_texCoords0;"
                "color = vertex_color;"
            "}";

It tells me that there is an unexpected ; at line 2. (…)

And I get a lot of invalid enum errors from other OpenGL functions (some glDisable and glEnable calls, glLoadMatrix, etc.).

But up to OpenGL 3.0 everything works fine…

[QUOTE=Lolilolight;1260486]Hmm… I think the driver doesn’t support OpenGL 3.3 very well.

OpenGL returns an error when I generate the VAO, and shader compilation fails: it tells me that there is an unexpected ; at line 2. (…)[/QUOTE]
If you add a \n at the end of each C-src line, you would be able to tell where it complains.
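For example, the same shader restated with an explicit \n terminating each source line, so the GLSL compiler’s reported line numbers become meaningful:


const std::string vertexShader =
            "#version 330\n"
            "layout(location = 0) in vec3 vertex_position;\n"
            "layout(location = 1) in vec4 vertex_color;\n"
            "layout(location = 2) in vec2 vertex_texCoords0;\n"
            "layout(location = 3) in vec3 vertex_normal;\n"
            "layout(location = 10) in mat4 mvp;\n"
            "out vec2 texCoords;\n"
            "out vec4 color;\n"
            "void main () {\n"
            "    gl_Position = mvp * vec4(vertex_position, 1.0);\n"
            "    texCoords = vertex_texCoords0;\n"
            "    color = vertex_color;\n"
            "}\n";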

And I get a lot of invalid enum errors from other OpenGL functions (some glDisable and glEnable calls, glLoadMatrix, etc.).

But up to OpenGL 3.0 everything works fine…

For OpenGL 3.1, 3.2, and 3.3, Mesa only supports the -CORE- profile. So enumerations and functions not found in the core profile, for example glLoadMatrix, do NOT work, and that is NOT a bug.
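In other words, in a core profile the fixed-function matrix stack has to be replaced by handing the matrix to the shader yourself, through a uniform (or an attribute, as the shader earlier in this thread does). A minimal sketch of the uniform route, assuming a linked program object prog and a column-major float[16] array mvpData (both hypothetical names):


// Core-profile replacement for glLoadMatrix: upload the matrix as a uniform.
// 'prog' and 'mvpData' are assumed to exist already; the shader would declare
// "uniform mat4 mvp;" instead of the attribute used earlier in this thread.
glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "mvp");
glUniformMatrix4fv(loc, 1, GL_FALSE, mvpData);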

Hmm… I have a core profile; if I try to create a compatibility profile, it fails to create the OpenGL context.

And I load my shader from memory, so I need the \n; otherwise the text collapses like this (#version xxxlayout(location=0) etc…) and fails to compile. With GLSL 1.30 this code compiles perfectly:


const std::string vertexShader =
           "#version 130\n"
           "void main () {"
                "gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"
                "gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;"
                "gl_FrontColor = gl_Color;"
           "}";

So I think this is a bug.
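If the compiler supports C++11, a raw string literal keeps the embedded newlines without quoting each line, which sidesteps the collapsing problem entirely; a minimal sketch of the same shader:


// C++11 raw string literal: the embedded newlines survive as-is,
// so no per-line "\n" quoting is needed.
const std::string vertexShader = R"(#version 130
void main () {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}
)";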

The glLoadMatrix function generates an error, so I think it’s creating a core profile, but why…

I’ve tried this:


int attributes[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, static_cast<int>(m_settings.majorVersion),
    GLX_CONTEXT_MINOR_VERSION_ARB, static_cast<int>(m_settings.minorVersion),
    GLX_CONTEXT_FLAGS_ARB        , GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB , GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0, 0
};
m_context = glXCreateContextAttribsARB(m_display, configs[0], toShare, true, attributes);

I don’t get OpenGL errors anymore, but the shader still fails to compile. I’ll try to load it from a file to see…

No: even if I put the source code into a separate file, the version 330 shaders don’t compile.

Haha!
It seems that X11 doesn’t want to create a CORE profile for me; I get OpenGL errors when I call OpenGL functions.


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <string>
#include <iostream>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glew.h>
#include <GL/gl.h>
#include <GL/glx.h>
#define GLX_CONTEXT_MAJOR_VERSION_ARB       0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB       0x2092
#define glCheck(call) ((call), glCheckError(__FILE__, __LINE__))
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);

// Helper to check for extension string presence.  Adapted from:
//   http://www.opengl.org/resources/features/OGLextensions/
static bool isExtensionSupported(const char *extList, const char *extension)
{
  const char *start;
  const char *where, *terminator;

  /* Extension names should not have spaces. */
  where = strchr(extension, ' ');
  if (where || *extension == '\0')
    return false;

  /* It takes a bit of care to be fool-proof about parsing the
     OpenGL extensions string. Don't be fooled by sub-strings,
     etc. */
  for (start=extList;;) {
    where = strstr(start, extension);

    if (!where)
      break;

    terminator = where + strlen(extension);

    if ( where == start || *(where - 1) == ' ' )
      if ( *terminator == ' ' || *terminator == '\0' )
        return true;

    start = terminator;
  }

  return false;
}

static bool ctxErrorOccurred = false;
static int ctxErrorHandler( Display *dpy, XErrorEvent *ev )
{
    ctxErrorOccurred = true;
    return 0;
}
void glCheckError(const char* file, unsigned int line)
{
    // Get the last error
    GLenum errorCode = glGetError();
    if (errorCode != GL_NO_ERROR)
    {
        std::string fileString(file);
        std::string error = "unknown error";
        std::string description  = "no description";

        // Decode the error code
        switch (errorCode)
        {
            case GL_INVALID_ENUM :
            {
                error = "GL_INVALID_ENUM";
                description = "an unacceptable value has been specified for an enumerated argument";
                break;
            }

            case GL_INVALID_VALUE :
            {
                error = "GL_INVALID_VALUE";
                description = "a numeric argument is out of range";
                break;
            }

            case GL_INVALID_OPERATION :
            {
                error = "GL_INVALID_OPERATION";
                description = "the specified operation is not allowed in the current state";
                break;
            }

            case GL_STACK_OVERFLOW :
            {
                error = "GL_STACK_OVERFLOW";
                description = "this command would cause a stack overflow";
                break;
            }

            case GL_STACK_UNDERFLOW :
            {
                error = "GL_STACK_UNDERFLOW";
                description = "this command would cause a stack underflow";
                break;
            }

            case GL_OUT_OF_MEMORY :
            {
                error = "GL_OUT_OF_MEMORY";
                description = "there is not enough memory left to execute the command";
                break;
            }

            case GL_INVALID_FRAMEBUFFER_OPERATION_EXT :
            {
                error = "GL_INVALID_FRAMEBUFFER_OPERATION_EXT";
                description = "the object bound to FRAMEBUFFER_BINDING_EXT is not \"framebuffer complete\"";
                break;
            }
        }

        // Log the error
        std::cerr << "An internal OpenGL call failed in "
              << fileString.substr(fileString.find_last_of("\\/") + 1) << " (" << line << ") : "
              << error << ", " << description
              << std::endl;
    }
}
int main(int argc, char* argv[])
{
  Display *display = XOpenDisplay(NULL);

  if (!display)
  {
    printf("Failed to open X display
");
    exit(1);
  }

  // Get a matching FB config
  static int visual_attribs[] =
    {
      GLX_X_RENDERABLE    , True,
      GLX_DRAWABLE_TYPE   , GLX_WINDOW_BIT,
      GLX_RENDER_TYPE     , GLX_RGBA_BIT,
      GLX_X_VISUAL_TYPE   , GLX_TRUE_COLOR,
      GLX_RED_SIZE        , 8,
      GLX_GREEN_SIZE      , 8,
      GLX_BLUE_SIZE       , 8,
      GLX_ALPHA_SIZE      , 8,
      GLX_DEPTH_SIZE      , 24,
      GLX_STENCIL_SIZE    , 8,
      GLX_DOUBLEBUFFER    , True,
      GLX_SAMPLE_BUFFERS  , 1,
      GLX_SAMPLES         , 4,
      None
    };

  int glx_major, glx_minor;

  // FBConfigs were added in GLX version 1.3.
  if ( !glXQueryVersion( display, &glx_major, &glx_minor ) ||
       ( ( glx_major == 1 ) && ( glx_minor < 3 ) ) || ( glx_major < 1 ) )
  {
    printf("Invalid GLX version");
    exit(1);
  }

  printf( "Getting matching framebuffer configs
" );
  int fbcount;
  GLXFBConfig* fbc = glXChooseFBConfig(display, DefaultScreen(display), visual_attribs, &fbcount);
  if (!fbc)
  {
    printf( "Failed to retrieve a framebuffer config
" );
    exit(1);
  }
  printf( "Found %d matching FB configs.
", fbcount );

  // Pick the FB config/visual with the most samples per pixel
  printf( "Getting XVisualInfos
" );
  int best_fbc = -1, worst_fbc = -1, best_num_samp = -1, worst_num_samp = 999;

  int i;
  for (i=0; i<fbcount; ++i)
  {
    XVisualInfo *vi = glXGetVisualFromFBConfig( display, fbc[i] );
    if ( vi )
    {
      int samp_buf, samples;
      glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLE_BUFFERS, &samp_buf );
      glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLES       , &samples  );

      printf( "  Matching fbconfig %d, visual ID 0x%2x: SAMPLE_BUFFERS = %d,"
              " SAMPLES = %d
",
              i, vi -> visualid, samp_buf, samples );

      if ( best_fbc < 0 || ( samp_buf && samples > best_num_samp ) )
        best_fbc = i, best_num_samp = samples;
      if ( worst_fbc < 0 || !samp_buf || samples < worst_num_samp )
        worst_fbc = i, worst_num_samp = samples;
    }
    XFree( vi );
  }

  GLXFBConfig bestFbc = fbc[ best_fbc ];

  // Be sure to free the FBConfig list allocated by glXChooseFBConfig()
  XFree( fbc );

  // Get a visual
  XVisualInfo *vi = glXGetVisualFromFBConfig( display, bestFbc );
  printf( "Chosen visual ID = 0x%x
", vi->visualid );

  printf( "Creating colormap
" );
  XSetWindowAttributes swa;
  Colormap cmap;
  swa.colormap = cmap = XCreateColormap( display,
                                         RootWindow( display, vi->screen ),
                                         vi->visual, AllocNone );
  swa.background_pixmap = None ;
  swa.border_pixel      = 0;
  swa.event_mask        = StructureNotifyMask;

  printf( "Creating window
" );
  Window win = XCreateWindow( display, RootWindow( display, vi->screen ),
                              0, 0, 100, 100, 0, vi->depth, InputOutput,
                              vi->visual,
                              CWBorderPixel|CWColormap|CWEventMask, &swa );
  if ( !win )
  {
    printf( "Failed to create window.
" );
    exit(1);
  }

  // Done with the visual info data
  XFree( vi );

  XStoreName( display, win, "GL 3.0 Window" );

  printf( "Mapping window
" );
  XMapWindow( display, win );

  // Get the default screen's GLX extension list
  const char *glxExts = glXQueryExtensionsString( display,
                                                  DefaultScreen( display ) );

  // NOTE: It is not necessary to create or make current to a context before
  // calling glXGetProcAddressARB
  glXCreateContextAttribsARBProc glXCreateContextAttribsARB = 0;
  glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc)
           glXGetProcAddressARB( (const GLubyte *) "glXCreateContextAttribsARB" );

  GLXContext ctx = 0;

  // Install an X error handler so the application won't exit if GL 3.0
  // context allocation fails.
  //
  // Note this error handler is global.  All display connections in all threads
  // of a process use the same error handler, so be sure to guard against other
  // threads issuing X commands while this code is running.
  ctxErrorOccurred = false;
  int (*oldHandler)(Display*, XErrorEvent*) =
      XSetErrorHandler(&ctxErrorHandler);

  // Check for the GLX_ARB_create_context extension string and the function.
  // If either is not present, use GLX 1.3 context creation method.
  if ( !isExtensionSupported( glxExts, "GLX_ARB_create_context" ) ||
       !glXCreateContextAttribsARB )
  {
    printf( "glXCreateContextAttribsARB() not found"
            " ... using old-style GLX context
" );
    ctx = glXCreateNewContext( display, bestFbc, GLX_RGBA_TYPE, 0, True );
  }

  // If it does, try to get a GL 3.0 context!
  else
  {
    int context_attribs[] =
      {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 3,
        //GLX_CONTEXT_FLAGS_ARB        , GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
      };

    printf( "Creating context
" );
    ctx = glXCreateContextAttribsARB( display, bestFbc, 0,
                                      True, context_attribs );

    // Sync to ensure any errors generated are processed.
    XSync( display, False );
    if ( !ctxErrorOccurred && ctx )
      printf( "Created GL 3.0 context
" );
    else
    {
      // Couldn't create GL 3.0 context.  Fall back to old-style 2.x context.
      // When a context version below 3.0 is requested, implementations will
      // return the newest context version compatible with OpenGL versions less
      // than version 3.0.
      // GLX_CONTEXT_MAJOR_VERSION_ARB = 1
      context_attribs[1] = 1;
      // GLX_CONTEXT_MINOR_VERSION_ARB = 0
      context_attribs[3] = 0;

      ctxErrorOccurred = false;

      printf( "Failed to create GL 3.0 context"
              " ... using old-style GLX context
" );
      ctx = glXCreateContextAttribsARB( display, bestFbc, 0,
                                        True, context_attribs );
    }
  }

  // Sync to ensure any errors generated are processed.
  XSync( display, False );

  // Restore the original error handler
  XSetErrorHandler( oldHandler );

  if ( ctxErrorOccurred || !ctx )
  {
    printf( "Failed to create an OpenGL context
" );
    exit(1);
  }

  // Verifying that context is a direct context
  if ( ! glXIsDirect ( display, ctx ) )
  {
    printf( "Indirect GLX rendering context obtained
" );
  }
  else
  {
    printf( "Direct GLX rendering context obtained
" );
  }

  printf( "Making context current
" );
  glXMakeCurrent( display, win, ctx );
  const unsigned char* glslversion = glGetString(GL_SHADING_LANGUAGE_VERSION);
  printf("%s
", glslversion);
  const unsigned char* openglversion = glGetString(GL_VERSION);
  printf("%s
", openglversion);
  GLuint vertexArray;
  glewExperimental = GL_TRUE;
  GLenum status = glewInit();
  if (status == GLEW_OK) {
    printf("Glew initilized!
");
  } else {
    printf("Failed to initialise glew : %s
", glewGetErrorString(status));
  }
  glCheck(glGenVertexArrays(1, &vertexArray));
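  // Note: glLoadIdentity() below is fixed-function; in a genuine core profile
  // it is expected to fail with GL_INVALID_OPERATION, so an error here actually
  // confirms the core profile rather than contradicting it.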
  glCheck(glLoadIdentity());
  glClearColor( 0, 0.5, 1, 1 );
  glClear( GL_COLOR_BUFFER_BIT );
  glXSwapBuffers ( display, win );

  sleep( 1 );

  glClearColor ( 1, 0.5, 0, 1 );
  glClear ( GL_COLOR_BUFFER_BIT );
  glXSwapBuffers ( display, win );

  sleep( 1 );
  glDeleteVertexArrays(1, &vertexArray);
  glXMakeCurrent( display, 0, 0 );  // release the context before destroying it
  glXDestroyContext( display, ctx );


  XDestroyWindow( display, win );
  XFreeColormap( display, cmap );
  XCloseDisplay( display );
  return 0;
}


Does anyone know how to force X11 to create a core profile and not a compatibility profile?

This query returns 1, which corresponds to the core profile:


GLint profile;
glGetIntegerv(GLX_CONTEXT_PROFILE_MASK_ARB, &profile);
std::cout << "profile : " << profile;

https://www.opengl.org/registry/specs/ARB/glx_create_context.txt
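The returned value is a bitmask, so testing the bits explicitly is slightly more robust; a minimal sketch using the GL-side enum (GL_CONTEXT_PROFILE_MASK happens to share the value 0x9126 with the GLX token from the spec above, which is why the query with GLX_CONTEXT_PROFILE_MASK_ARB works at all):


// Requires a current GL 3.2+ context; assumes <iostream> is included.
GLint mask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask);
if (mask & GL_CONTEXT_CORE_PROFILE_BIT)
    std::cout << "core profile\n";
else if (mask & GL_CONTEXT_COMPATIBILITY_PROFILE_BIT)
    std::cout << "compatibility profile\n";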

So now there’s no other possibility left; the bug probably comes from the driver.

O_O I’ve tried the source code of the first triangle from the tutorial, and even though OpenGL returns some errors, it’s displaying something, so…
