Part of the Khronos Group
OpenGL.org

The Industry's Foundation for High Performance Graphics

from games to virtual reality, mobile phones to supercomputers

Results 1 to 7 of 7

Thread: How to get Uniform Block Buffers to work correctly

Threaded View

Previous Post Previous Post   Next Post Next Post
  1. #1
    Intern Newbie
    Join Date
    Oct 2016
    Posts
    46

    How to get Uniform Block Buffers to work correctly

    I dont know what i am doing wrong, according to all guides ive read and my understanding of how uniform buffers work, this should work.
    I am trying to allocate a Uniform Block buffer which is supposed to hold an array of structs. This buffer will hold all Materials i use in my 3D world and i want to access it with indices.

    This is the data that i want to store and use in my shader
    Code :
        //In C++
        struct Material {
        			float shininess = 0.0f;
        			float specularReflection = 1.0f;
        			float diffuseReflection = 1.0f;
        			float opacity = 1.0f;
        	};
     
        vector<Material> allMaterials;
    this is my uniform block in the shader

    Code :
        //GLSL
        #define MAX_MATERIAL_COUNT 32
     
        struct Material{
        	float shininess;
        	float specularReflection;
        	float diffuseReflection;
        	float opacity;
        };
     
        layout(std140) uniform MaterialBuffer{
        	Material materials[MAX_MATERIAL_COUNT];
        };
     
        void main(){
        ...
        //this is how i access the buffer right now
        float diffuseReflection = materials[material_index].diffuseReflection;
        ...
        }

    I have created the buffer "materialBuffer" with glGenBuffers() and bind it once, like this. I am using the same ShaderProgram i am rendering with.

    Code :
     //MATERIAL BUFFER
     
        	int binding_index = 1;
        	ShaderProgram::use("deferredShader_lStage");
        	int block_index = glGetUniformBlockIndex(ShaderProgram::currentProgram->ID, "MaterialBuffer");
        	glUniformBlockBinding(ShaderProgram::currentProgram->ID, block_index, binding_index);
        	ShaderProgram::unuse();
     
        	glBindBufferRange(GL_UNIFORM_BUFFER, binding_index, materialBuffer, 0, sizeof(allMaterials));
        	glNamedBufferData(materialBuffer, sizeof(allMaterials), &allMaterials, GL_STATIC_DRAW);
        	glMapNamedBuffer(materialBuffer, GL_READ_ONLY);

    And finally i render like this:



    Code :
     //Prepare default framebuffer
                glBindFramebuffer(GL_FRAMEBUFFER, 0);
            	...
            	glBindVertexArray(SCREEN_VAO.ID);
            //use ShaderProgram
                ShaderProgram::initiate("deferredShader_lStage")
     
            //do i need to bind the materialBuffer?
            	glBindBuffer(GL_UNIFORM_BUFFER, materialBuffer);
     
            //bind buffer textures which are being sampled in this shader, part of deferred shading
            	glActiveTexture(GL_TEXTURE0);
            	glBindTexture(GL_TEXTURE_2D, positionBufferTexture);
            	....
           //the texture which stores the material indices per pixel
            	glActiveTexture(GL_TEXTURE4);
            	glBindTexture(GL_TEXTURE_2D, materialBufferTexture);
            //draw screen-sized quad
            	glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
     
            	glBindVertexArray(0);

    I am sorry for this silly ask for plain debugging, but i dont know what else to do. I am not getting any error messages from gluGetErrorString() and according to many sources this should work... and if someone could tell me that the error is not in this code, i could find it much easier.
    Last edited by stimulate; 02-12-2017 at 07:00 AM.

Tags for this Thread

Posting Permissions

  • You may not post new threads
  • You may not post replies
  • You may not post attachments
  • You may not edit your posts
  •