When will an open-source driver support OpenGL 4.0 and higher?

Hi!

The Mesa driver is the only driver that works (without getting a black screen when I try to log on).

But this driver only supports OpenGL 3.0, not version 4.0 and higher (I don't know what the latest version of OpenGL is) like the proprietary drivers do.

I tried the latest version of Mesa supporting OpenGL 3.3, but I get this error at runtime:

X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 156 (GLX)
Minor opcode of failed request: ()
Serial number of failed request: 77
Current serial number in input stream: 76

[QUOTE=Lolilolight;1264836]The Mesa driver is the only driver that works… I tried the latest version of Mesa supporting OpenGL 3.3, but I get this error at runtime: BadMatch (invalid parameter attributes)[/QUOTE]

What is the hardware? What version of Mesa? What is the distribution?

With Mesa, GL versions 3.1 and higher are only available in a core profile context, not in a compatibility profile. The highest GL version Mesa exposes in a core profile is 3.3 (with some 4.x features available via extensions); the highest version in a compatibility profile is 3.0.
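
That is also why a BadMatch can show up at context creation: asking GLX for a 3.3 context without explicitly selecting the core profile can fail on Mesa. As a minimal sketch (assuming a display and a GLXFBConfig have already been chosen, and that glXCreateContextAttribsARB has been loaded via glXGetProcAddressARB):

// Request an OpenGL 3.3 *core* profile context via GLX_ARB_create_context.
const int contextAttribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 3,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};
GLXContext context = glXCreateContextAttribsARB(display, fbConfig, 0, True, contextAttribs);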

Here is the version of Mesa that I have:

OpenGL version string: 3.0 Mesa 10.1.3

I don't know if it's possible to use per-pixel linked lists with Mesa yet (or anything to read/write the pixels of a texture simultaneously from the fragment shader with image load/store).

Maybe I'll have to wait for the release of Vulkan…, get a newer graphics card, or write a driver myself to get better FPS with more complex scenes.

OK, now I have this version:

OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD CEDAR
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.6.0-devel (git-31dc63d 2015-03-28 trusty-oibaf-ppa)
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:

Where can I find the extensions?
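
In a core profile the old space-separated GL_EXTENSIONS string is gone, so the list has to be queried entry by entry. A minimal sketch, assuming a current core context:

// Enumerate the extensions one by one with glGetStringi (GL 3.0+).
GLint extensionCount = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &extensionCount);
for (GLint i = 0; i < extensionCount; ++i) {
    std::cout << (const char*) glGetStringi(GL_EXTENSIONS, (GLuint) i) << std::endl;
}

From the command line, glxinfo prints the same list.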

But this driver doesn't seem to work very well.

Here I want to multiply the vertex color by the texture’s color :


int main(int argc, char* argv[])
{
    RenderWindow window (sf::VideoMode (800, 600), "Test", sf::Style::Default, sf::ContextSettings(0, 0, 4, 3, 3));
    const std::string vertexShader =
    "#version 330 core\n"
    "layout (location = 0) in vec4 position;"
    "layout (location = 1) in vec4 color;"
    "layout (location = 2) in vec2 uv;"
    "layout (location = 3) in vec3 normals;"
    "uniform mat4 projMatrix;"
    "uniform mat4 viewMatrix;"
    "uniform mat4 modelMatrix;"
    "uniform mat4 texMatrix;"
    "out vec4 f_color;"
    "out vec2 f_uv;"
    "void main() {"
    "   gl_Position = modelMatrix * viewMatrix * projMatrix * position;"
    "   f_uv = (texMatrix * vec4(uv.xy, 0, 0)).xy;"
    "   f_color = color;"
    "}";
    const std::string fragmentShader =
    "#version 330 core\n"
    "uniform sampler2D texSampler;"
    "in vec2 f_uv;"
    "in vec4 f_color;"
    "out vec4 fragColor;" // gl_FragColor is not available in core profile GLSL
    "void main() {"
    "   vec4 texel = texture (texSampler, f_uv);"
    "   fragColor = f_color * texel;"
    "}";
    Texture tex;
    tex.loadFromFile("tilesets/herbe.png");
    Tile tile (&tex, Vec3f(0, 0, 0), Vec3f(100, 50, 0), sf::IntRect(0, 0, 100, 50));
    Shader simpleShader;
    simpleShader.loadFromMemory(vertexShader, fragmentShader);
    Matrix4f projMatrix = window.getView().getProjMatrix().getMatrix().transpose();
    Matrix4f viewMatrix = window.getView().getViewMatrix().getMatrix().transpose();
    Matrix4f modelMatrix = tile.getTransform().getMatrix();
    Matrix4f texMatrix = tex.getTextureMatrix().transpose();
    simpleShader.setParameter("projMatrix", projMatrix);
    simpleShader.setParameter("viewMatrix", viewMatrix);
    simpleShader.setParameter("modelMatrix", modelMatrix);
    simpleShader.setParameter("texMatrix", texMatrix);
    simpleShader.setParameter("texSampler", Shader::CurrentTexture);
    RenderStates states;
    states.shader = &simpleShader;
    while (window.isOpen()) {
        window.clear();
        window.draw(tile, states);
        window.display();
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed) {
                window.close();
            }
        }
    }
    return 0;
}

I use a texture matrix because SFML uses one. (I had to change the SFML code to support modern OpenGL versions.)
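
For context, SFML's texture matrix essentially rescales pixel texture coordinates into the normalized [0, 1] range (plus a vertical flip when the texture is stored upside down). A rough sketch of what such a matrix looks like, using hypothetical Matrix4f accessors purely for illustration:

// Hypothetical sketch: a texture matrix that rescales pixel UVs to [0, 1].
Matrix4f texMatrix = Matrix4f::identity();
texMatrix(0, 0) = 1.f / textureWidth;   // scale u by 1 / width
texMatrix(1, 1) = 1.f / textureHeight;  // scale v by 1 / height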

So I've just modified the draw function, putting everything into VBOs.


glCheck(glBindBuffer(GL_ARRAY_BUFFER, states.vertexBufferId));
glCheck(glEnableVertexAttribArray(0));
glCheck(glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) 0));
glCheck(glEnableVertexAttribArray(1));
glCheck(glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (GLvoid*) 12));
glCheck(glEnableVertexAttribArray(2));
glCheck(glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) 16));
glCheck(glBindBuffer(GL_ARRAY_BUFFER, states.normalBufferId));
glCheck(glEnableVertexAttribArray(3));
glCheck(glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vector3f), (GLvoid*) 0));
// Note: GL_QUADS is not available in a core profile context.
static const GLenum modes[] = {GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_TRIANGLES,
                               GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS};
GLenum mode = modes[type];
if (indexes == nullptr) {
    glCheck(glDrawArrays(mode, 0, vertexCount));
} else {
    glCheck(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, states.indexBufferId));
    glCheck(glDrawElements(mode, indexesCount, GL_UNSIGNED_INT, 0));
}
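
The upload side isn't shown here; the buffers are created and filled with the usual pattern (a sketch, with hypothetical vertices / vertexCount variables):

// Create the VBO once, then (re)upload the vertex array into it.
glCheck(glGenBuffers(1, &states.vertexBufferId));
glCheck(glBindBuffer(GL_ARRAY_BUFFER, states.vertexBufferId));
glCheck(glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_DYNAMIC_DRAW));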

If I've understood correctly, color components are normalized before they reach the fragment shader, aren't they? (They are between 0 and 255 in my VBO.)
Or should I normalize them myself?

If I just pass the texture color or the vertex color alone it works, but if I multiply the two colors I don't get the textured result, just white. :confused:

I've found the solution, sorry. I just had to change this line to pass GL_TRUE, otherwise OpenGL doesn't normalize the color components (with GL_TRUE, unsigned bytes in 0–255 are mapped to floats in [0, 1]):


glCheck(glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (GLvoid*) 12));

Now I just need to find extensions exposing the OpenGL 4.x features I need to optimize my game; that should improve rendering performance.

Ah, image load/store isn't supported yet:

http://cgit.freedesktop.org/mesa/mesa/tree/docs/GL3.txt

So I guess I have to wait.

It should also be noted that image load/store isn’t really about “optimizing” anything. Sure, there are ways you could use it in some instances to make certain things faster. But it’s much more about doing things you couldn’t do otherwise (like order-independent transparency).

[QUOTE]But it's much more about doing things you couldn't do otherwise (like order-independent transparency).[/QUOTE]

That's exactly what I need it for. (All my tiles are semi-transparent.)

At the moment I draw every face one by one and apply the blending in the fragment shader, but it's too slow.

There isn't a good solution that I know of before OpenGL 4.2. :confused:
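
For reference, this is roughly what the GL 4.2 technique looks like: each fragment is appended to a per-pixel linked list using an atomic counter plus image load/store, and a second pass sorts and blends each list back to front. A minimal sketch of the build pass in GLSL (buffer layouts, bindings and names are illustrative, not from any particular engine):

#version 420 core

layout (binding = 0, offset = 0) uniform atomic_uint nextNode;      // allocates node indices
layout (binding = 0, r32ui) uniform coherent uimage2D headPointers; // head of each pixel's list
layout (binding = 1, rgba32ui) uniform coherent uimageBuffer nodes; // {color, depth, next, unused}

in vec4 f_color;

void main() {
    // Allocate a node for this fragment (a real implementation would
    // also check the index against the node buffer's capacity).
    uint nodeIndex = atomicCounterIncrement(nextNode);
    // Make it the new head of this pixel's list.
    uint previousHead = imageAtomicExchange(headPointers, ivec2(gl_FragCoord.xy), nodeIndex);
    // Store the fragment's color, depth and the link to the previous head.
    imageStore(nodes, int(nodeIndex),
               uvec4(packUnorm4x8(f_color), floatBitsToUint(gl_FragCoord.z), previousHead, 0u));
}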
