Part of the Khronos Group
OpenGL.org


Thread: Problem with uniform blocks

  1. #1
    Junior Member Newbie
    Join Date
    Jun 2012
    Posts
    3

    Problem with uniform blocks

    Hi all.

    I have a weird problem with uniform blocks:
    This is my test app
    Code :
    // Triangle_opengl_3_1
    // A cross platform version of
    // http://www.opengl.org/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_%28C%2B%2B/Win%29
    // with some code from http://www.lighthouse3d.com/opengl/glsl/index.php?oglexample1
    // and from the book OpenGL Shading Language 3rd Edition, p215-216
    // Daniel Livingstone, October 2010
     
    #include <GL/glew.h>
    #define FREEGLUT_STATIC
    #include <GL/freeglut.h>
    #include <iostream>
    #include <fstream>
    #include <string>
     
    using namespace std;
     
     
    // loadFile - loads the text file named fname into a char* buffer
    // allocates memory - so the caller needs to delete [] it after use
    // size of file returned in fSize
    const char* loadFile(const char *fname, GLint &fSize)
    {
        // file read based on example in cplusplus.com tutorial
        ifstream file (fname, ios::in|ios::binary|ios::ate);
        if (!file.is_open())
        {
            cout << "Unable to open file " << fname << endl;
            exit(1);
        }
        fSize = (GLint) file.tellg();
        char *memblock = new char [fSize + 1];
        file.seekg (0, ios::beg);
        file.read (memblock, fSize);
        file.close();
        memblock[fSize] = '\0'; // null-terminate: glShaderSource gets an explicit length, but this keeps the buffer safe to print
        cout << "file " << fname << " loaded" << endl;
        return memblock;
    }
     
    // printShaderInfoLog
    // From OpenGL Shading Language 3rd Edition, p215-216
    // Display (hopefully) useful error messages if shader fails to compile
    void printShaderInfoLog(GLuint shader)
    {
        int infoLogLen = 0;
        int charsWritten = 0;
        GLchar *infoLog;
     
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen);
     
        // should additionally check for OpenGL errors here
     
        if (infoLogLen > 0)
        {
            infoLog = new GLchar[infoLogLen];
            // error check for fail to allocate memory omitted
            glGetShaderInfoLog(shader,infoLogLen, &charsWritten, infoLog);
            cout << "InfoLog:" << endl << infoLog;
            delete [] infoLog;
        }
    }
     
    void reshape(int w, int h)
    {
        glViewport(0,0,(GLsizei)w,(GLsizei)h);
    }
     
    int main (int argc, char* argv[])
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
        glutInitWindowSize(600,600);
        glutCreateWindow("Triangle Test");
        GLenum err = glewInit();
        if (GLEW_OK != err)
        {
            /* Problem: glewInit failed, something is seriously wrong. */
            cout << "glewInit failed, aborting." << endl;
            exit (1);
        }
        cout << "Status: Using GLEW " << glewGetString(GLEW_VERSION) << endl;
        cout << "OpenGL version " << glGetString(GL_VERSION) << " supported" << endl;
     
        GLint len;
        const char *s = loadFile("sh", len);
        GLuint fragmentShaderId = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShaderId, 1, &s, &len);
        glCompileShader(fragmentShaderId);
        delete [] s; // the source has been copied into the shader object
 
        GLuint programId = glCreateProgram();
        glAttachShader(programId, fragmentShaderId);
        glLinkProgram(programId);
     
        printShaderInfoLog(fragmentShaderId);
        GLint linked;
        glGetProgramiv(programId, GL_LINK_STATUS, &linked);
        if (linked == GL_FALSE) {
            // if a link error occured ...
            GLint logLength;
            glGetProgramiv(programId, GL_INFO_LOG_LENGTH, &logLength);
            if (logLength > 0) {
                GLsizei length;
                char *log = new char[logLength];
                glGetProgramInfoLog(programId, GLsizei(logLength), &length, log);
                cout<<log<<endl;
                delete[] log;
            }
            glDeleteProgram(programId);
            programId = 0;
            throw exception();
        } else {
            cout << "Linked\n";
        }
        printf("%d\n",glGetError());
     
        GLfloat pixels[4];
     
        glClear(GL_COLOR_BUFFER_BIT);
     
        glUseProgram(programId);
        glBegin(GL_QUADS);
        glVertex2i(-1, -1);
        glVertex2i(1, -1);
        glVertex2i(1, 1);
        glVertex2i(-1, 1);
        glEnd();
     
        glReadPixels(0, 0, 1, 1,  GL_RGBA,  GL_FLOAT, pixels);
        printf("\n %f %f %f %f\n",pixels[0],pixels[1],pixels[2],pixels[3]); // index 3, not 4: pixels[4] reads out of bounds
     
        return 0;
    }
    This is my fragment shader (written in the external file 'sh'):
    Code :
    #version 330
     
    uniform b { float u; };
    layout(location=0) out vec4 color;
    void main() { color = vec4(0.1, 0.45, 0.5, 0.0); }

    And this is the output I get from the app:
    Code :
    Status: Using GLEW 1.7.0
    OpenGL version 3.3.11631 Compatibility Profile Context supported
    file sh loaded
    InfoLog:
    Fragment shader was successfully compiled to run on hardware.
    Linked
    0
     
     0.000000 0.000000 0.000000 0.000000

    So the shader compiles and links fine, glGetError() returns 0, but glReadPixels gives me an empty (0,0,0,0) pixel instead of the value set in the shader.
    But if I comment out the "uniform b { float u; };" line in the shader, it all works as expected and the pixel is (0.098039, 0.450980, 0.501961, 0.000000), or whatever value I put in the shader.
    I'm really clueless; I hope you can help me.

    I have a Radeon HD 4850, on Linux and catalyst 12.4. This is the output of my glxinfo.
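As a side note on the loader itself: hand-rolled file readers that return a raw char* are easy to get subtly wrong (null-termination, ownership, matching delete[]). A minimal sketch of the same idea using std::string, which sidesteps both issues (the function name is my own, not from the thread):

```cpp
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Read an entire text file into a std::string.
// std::string owns the memory and c_str() is always null-terminated,
// so there is no manual new[]/delete[] bookkeeping.
std::string loadFileToString(const std::string &fname)
{
    std::ifstream file(fname, std::ios::in | std::ios::binary);
    if (!file)
        throw std::runtime_error("Unable to open file " + fname);
    std::ostringstream contents;
    contents << file.rdbuf();   // stream the whole file into the buffer
    return contents.str();
}
```

glShaderSource can then be fed the result via src.c_str() with an explicit length of src.size().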

  2. #2
    Junior Member Regular Contributor
    Join Date
    Aug 2009
    Location
    Poland
    Posts
    111
    I just ran your example (unmodified) and got the following output:
    Code :
    Status: Using GLEW 1.7.0
    OpenGL version 3.3.0 NVIDIA 302.17 supported
    file sh loaded
    InfoLog:
    Linked
    0
     
     0.098039 0.450980 0.498039 0.000000

    Edit:
    I ran it again on a laptop with an AMD card (Radeon HD 5650), with the following result:
    Code :
    Status: Using GLEW 1.7.0
    OpenGL version 4.2.11631 Compatibility Profile Context supported
    file sh loaded
    InfoLog:
    Fragment shader was successfully compiled to run on hardware.
    Linked
    0
     
     0.000000 0.000000 0.000000 0.000000

    Edit 2:
    I modified your program (added the simplest pass-through vertex shader) and it worked on AMD:
    Code :
    Status: Using GLEW 1.7.0
    OpenGL version 4.2.11631 Compatibility Profile Context supported
    file sh loaded
    file vertex.glsl.vs loaded
    InfoLog:
    Fragment shader was successfully compiled to run on hardware.
    InfoLog:
    Vertex shader was successfully compiled to run on hardware.
    Linked
    0
     
     0.098039 0.450980 0.501961 0.000000

    You must specify both vertex and fragment shaders in a core profile context (you gave only a fragment shader).
    I don't know if this is also the case in a compatibility profile.

    We also do not know the version of the GL context
    requested by the program (glew should default to the highest available, but I'm not sure).
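The thread doesn't include the vertex.glsl.vs that was added, but a minimal pass-through vertex shader that works with the immediate-mode glVertex2i calls in the test program could look like this (a sketch, assuming a compatibility profile context, which is what the OP's driver reports):

```glsl
#version 330 compatibility

// Minimal pass-through vertex shader.
// In a compatibility profile the built-in gl_Vertex still receives
// the positions submitted via glVertex2i, so we just forward them.
void main()
{
    gl_Position = gl_Vertex;
}
```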

    btw. why did you use a layout qualifier on the fragment shader output?
    Last edited by kowal; 06-27-2012 at 11:28 AM.

  3. #3
    Junior Member Newbie
    Join Date
    Jun 2012
    Posts
    3
    You're right, it was that. Thanks a lot!
    It's strange, though, that the vertex shader wasn't necessary without the uniform block.

    btw. why did you use layout qualifier on fragment shader output?
    I took that shader from an already-written piece of code, and it was there.

  4. #4
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Quote Originally Posted by kowal
    You must specify vertex and fragment shaders in core profile context (You gave only fragment).

    I read that and it perplexed me. As far as I read the spec, that's not correct. First, it is certainly incorrect that this depends on the kind of profile the context was created with. Second, the spec doesn't seem to say that a program object actually needs to have any shader objects attached to it, and it doesn't state that linking would fail in such a case. What it does say, however, is that if no valid object code for a stage exists, the program object will not be considered active for the stage it is missing a shader object for. It goes on to state that with a missing object for the VERTEX_SHADER stage (and similarly for the FRAGMENT_SHADER stage), behavior with that program will be undefined:

    Quote Originally Posted by The Spec
    When the program object currently in use for the vertex stage includes a vertex shader, its vertex shader is considered active and is used to process vertices. If the current vertex stage program object has no vertex shader, or no program object is current for the vertex stage, the results of vertex shader execution are undefined.
    I know that vendors tend to simply output black as the fragment color if there is no fragment shader in a program object (definitely the case for AMD and NVIDIA, at least on Linux; probably on Windows too, since most of the driver code is shared). However, you don't need a fragment shader to transform your vertices correctly. One use case where you don't need any fragment shader is simply generating depth values: you can use a program which contains only a vertex shader that correctly transforms your vertices. No fragment shading needed.

    We do not know version of GL context requested by the program (glew should default to highest available, but I'm not sure).
    GLEW is in no way responsible for context creation. In fact, you can't even use GLEW without a valid GL context in the first place.

  5. #5
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    A vertex shader is a must when using the core profile; a fragment shader is optional. So for shadow rendering and/or transform feedback, you need only a vertex shader.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  6. #6
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    A vertex shader is a must when using core profile.
    But where is the part of the spec that clearly states something like

    Quote Originally Posted by Hypothetical Spec Statement
    A program object needs to have an executable shader object for the vertex stage. Shader objects for other stages are optional.
    ? And why should it be profile-dependent? I assume that if the program isn't active for the vertex stage, then vertex processing is handled by fixed-function (or emulated fixed-function) when using a compatibility profile. But that doesn't change the tenor of the core spec. Just because there is a sort of fallback to compensate for inactive shader stages in the compatibility profile doesn't mean that program objects are handled differently with respect to linkage and validity. What it does say is that behavior is undefined, which leaves defining what happens up to you guys (@aqnuep). Right?

    If I missed something in the spec please give me a hint.
    Last edited by thokra; 06-28-2012 at 08:28 AM. Reason: Add some clarification.

  7. #7
    Advanced Member Frequent Contributor arekkusu's Avatar
    Join Date
    Nov 2003
    Posts
    781
    Quote Originally Posted by thokra View Post
    If I missed something in the spec please give me a hint.
    It is poorly specified, but it is in the Core Profile spec. See these parts:

    (Programs)
    If UseProgram is called with program set to 0, then the current rendering state refers to an invalid program object, and the results of vertex and fragment shader execution are undefined.


    (Program Validation)
    Undefined behavior results if the program object in use has no fragment shader unless transform feedback is enabled, in which case only a vertex shader is required.


    (Appendix E.2.2 Removed Features)
    * ... fixed-function vertex processing ... A vertex shader must be defined in order to draw primitives.
    * Fixed-function fragment processing ...


    On the other hand, in Compatibility Profile, I'd expect your example to work.

  8. #8
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    (Programs) If UseProgram is called with program set to 0, then the current rendering state refers to an invalid program object, and the results of vertex and fragment shader execution are undefined.
    But my concerns aren't about setting the current program to 0. It's about using a program object which has a non-zero handle and no shader objects attached, and whether that is permissible (as opposed to meaningful or wise) or not.

    (Program Validation) Undefined behavior results if the program object in use has no fragment shader unless transform feedback is enabled, in which case only a vertex shader is required.
    The thing is that the term "undefined behavior" doesn't mean it's illegal. Does it imply that validation will fail and the program will not execute?

    A vertex shader must be defined in order to draw primitives.
    So it was there and was later softened from "absolutely necessary" to "undefined behavior"?

    On the other hand, in Compatibility Profile, I'd expect your example to work.
    Yeah, me too. Which is not surprising: replacing vertex and fragment processing simply isn't necessary there, since fixed-function (or an emulation thereof) is guaranteed to be available to take over. At least, that's what I strongly assume.

    I'd have thought that something this crucial would be defined as precisely and unambiguously as possible.

  9. #9
    Intern Contributor
    Join Date
    Jul 2006
    Posts
    72
    Quote Originally Posted by thokra View Post
    The thing is that the term "undefined behavior" doesn't mean that it's illegal. Does it imply that validation will fail and the program will not execute?
    "Undefined behavior" means the behavior is undefined! Expect it to burn your house down, etc. ("undefined behavior" is a superset of all imaginable and unimaginable behaviors). "Illegal" means it is not allowed, and for such things the spec usually also defines what happens if one tries to do the "illegal" thing anyway.

    Quote Originally Posted by thokra View Post
    I'd think that something as crucial would be defined as precisely and as unambiguous as possible.
    "Undefined behavior" is very precise and unambiguous, imho: the behavior is undefined, so don't do that.

    ----
    Not to imply that the spec is flawless.
    Last edited by tanzanite; 06-28-2012 at 08:51 PM.

  10. #10
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    I guess I'll just check it out on all hardware available to me and see how the actual implementations deal with it.
