How to get Uniform Block Buffers to work correctly

I don't know what I am doing wrong; according to all the guides I've read and my understanding of how uniform buffers work, this should work.
I am trying to allocate a uniform block buffer which is supposed to hold an array of structs. This buffer will hold all materials I use in my 3D world, and I want to access it with indices.

This is the data that I want to store and use in my shader:


    //In C++
    struct Material {
        float shininess = 0.0f;
        float specularReflection = 1.0f;
        float diffuseReflection = 1.0f;
        float opacity = 1.0f;
    };

    std::vector<Material> allMaterials;

This is my uniform block in the shader:

    //GLSL
    #define MAX_MATERIAL_COUNT 32

    struct Material{
        float shininess;
        float specularReflection;
        float diffuseReflection;
        float opacity;
    };

    layout(std140) uniform MaterialBuffer{
        Material materials[MAX_MATERIAL_COUNT];
    };

    void main(){
        ...
        //this is how I access the buffer right now
        float diffuseReflection = materials[material_index].diffuseReflection;
        ...
    }

I have created the buffer “materialBuffer” with glGenBuffers() and bind it once, like this. I am using the same ShaderProgram that I am rendering with.

    //MATERIAL BUFFER
    int binding_index = 1;
    ShaderProgram::use("deferredShader_lStage");
    int block_index = glGetUniformBlockIndex(ShaderProgram::currentProgram->ID, "MaterialBuffer");
    glUniformBlockBinding(ShaderProgram::currentProgram->ID, block_index, binding_index);
    ShaderProgram::unuse();

    glBindBufferRange(GL_UNIFORM_BUFFER, binding_index, materialBuffer, 0, sizeof(allMaterials));
    glNamedBufferData(materialBuffer, sizeof(allMaterials), &allMaterials, GL_STATIC_DRAW);
    glMapNamedBuffer(materialBuffer, GL_READ_ONLY);

And finally I render like this:

    //Prepare default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    ...
    glBindVertexArray(SCREEN_VAO.ID);
    //use ShaderProgram
    ShaderProgram::initiate("deferredShader_lStage");

    //do I need to bind the materialBuffer?
    glBindBuffer(GL_UNIFORM_BUFFER, materialBuffer);

    //bind buffer textures which are being sampled in this shader, part of deferred shading
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, positionBufferTexture);
    ...
    //the texture which stores the material indices per pixel
    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, materialBufferTexture);
    //draw screen-sized quad
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

    glBindVertexArray(0);

I am sorry for this plain request for debugging help, but I don't know what else to do. I am not getting any error messages from glGetError() (printed via gluErrorString()), and according to many sources this should work… If someone could tell me that the error is not in this code, I could narrow it down much more easily.

Regarding your material struct: diffuse and specular reflection are usually described per color channel:

    vec3 diffuseReflection;
    vec3 specularReflection;

But keep in mind that vec3 should be avoided in GLSL when using the standard memory layout qualifier (std140).

You can easily set the uniform block binding in the shader code, for example:

    layout (std140, binding = 4) uniform MY_BLOCK_DATA {
        Material materials[…];
    };

If you use glBindBufferRange(…), your offset has to be a multiple of an implementation-defined value, GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.

You can bind the whole buffer by using glBindBufferBase(…) instead.
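
For example (a sketch reusing binding_index and materialBuffer from your code, assuming the buffer already contains the material data):

    GLint ubo_align = 0;
    glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &ubo_align); //often 256

    //option A: bind a sub-range; the offset must be a multiple of ubo_align (0 always is)
    glBindBufferRange(GL_UNIFORM_BUFFER, binding_index, materialBuffer,
                      0, sizeof(Material) * allMaterials.size());

    //option B: bind the whole buffer, no offset alignment to worry about
    glBindBufferBase(GL_UNIFORM_BUFFER, binding_index, materialBuffer);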

Once you map a buffer by calling glMapNamedBuffer(…) or glMapBuffer(…), you have to unmap it before you can use it again for drawing etc.
(I think “persistent mapping” is an exception to that.)
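
For example, with the DSA functions you are already using, a map/write/unmap sequence could look like this (a sketch, not your code):

    //write the material data through the mapped pointer, then unmap
    void* ptr = glMapNamedBuffer(materialBuffer, GL_WRITE_ONLY);
    memcpy(ptr, allMaterials.data(), sizeof(Material) * allMaterials.size());
    glUnmapNamedBuffer(materialBuffer); //must happen before the buffer is used for drawing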

Besides that, did you check for GL errors?
https://www.khronos.org/opengl/wiki/OpenGL_Error

So your code is running but you don’t get the results that you expect?

What kind of values do you get in the shader? Try a very simple uniform buffer first, like an array of ints, and check the very first element of that array.
Also, try to name your UBO instance in your shader:


    layout(std140) uniform MaterialBuffer{
        int array_of_ints[MAX_MATERIAL_COUNT];
    } MyMatBuffer;
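
With an instance name the shader then reads MyMatBuffer.array_of_ints[0]. The C++ side of such a test could be as small as this (a sketch; testBuffer is a hypothetical name, and the DSA calls and binding index 1 match your material-buffer code):

    GLuint testBuffer = 0;
    glCreateBuffers(1, &testBuffer);
    //std140 gives the elements of an int array a 16-byte stride, so a
    //32-element block occupies 32 * 16 bytes; element 0 sits at offset 0
    int values[32 * 4] = { 42 };
    glNamedBufferData(testBuffer, sizeof(values), values, GL_STATIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 1, testBuffer);
    //the shader should now see MyMatBuffer.array_of_ints[0] == 42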

I have had similar problems, I think I can help you if we narrow the problem down.

Sorry for not answering for so long, but it turns out the problem kind of fixed itself(?), or I fixed it without knowing. I just kept trying things out, played around here and there, and at one point I wanted to give up and changed everything back to how (I thought) it was, and all of a sudden everything worked perfectly. I still don't know what made the difference. And yes, I am checking for OpenGL errors, but I have not been getting any.

But it seems to have been the way I allocate the buffer in OpenGL, because this is now different from what I posted here before:


    //...get uniform block index and bind it to uniform buffer binding array index 0...
    glBindBuffer(GL_UNIFORM_BUFFER, materialBuffer);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, materialBuffer);
    glNamedBufferStorage(materialBuffer, sizeof(Material) * allMaterials.size(), &allMaterials[0], GL_MAP_READ_BIT);

But I still have not grasped the whole concept behind uniform buffers, which now gives me trouble with my light uniform array. I understand that the context has an array of binding points for GL_UNIFORM_BUFFER, GL_MAX_UNIFORM_BUFFER_BINDINGS in size, and that these bindings are used to link GLSL uniform blocks to buffers in OpenGL.
So apparently this is all there is to it: I upload data to a buffer, link that buffer to a binding index (glBindBufferBase), link a GLSL uniform block to that same binding point, and then I have access to the data in the buffer from that uniform block.
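
If I understand it correctly, the minimal version of that chain would be something like this (a sketch with hypothetical names; dataSize, dataPtr and binding point 0 stand in for whatever the application uses):

    //1. upload the data to a buffer
    GLuint ubo = 0;
    glCreateBuffers(1, &ubo);
    glNamedBufferData(ubo, dataSize, dataPtr, GL_STATIC_DRAW);

    //2. attach the buffer to one of the context's binding points
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

    //3. attach the shader's uniform block to the same binding point
    GLuint block_index = glGetUniformBlockIndex(program, "MyBlock");
    glUniformBlockBinding(program, block_index, 0);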

This is how far I can get everything to work. But then of course there is the data layout as well, and this is what I have problems with for my light data. To get rid of any kind of padding, and to manage the different types of lights (directional, point, spot) in a more performance-friendly way, I have decided to create an array of floats which holds the light data for every kind of light source in my application. Then I have an array of Light structs which contain indices into that float array, and if I want to get the color of a light I just say vec3(lightData[light.colorIndex], lightData[light.colorIndex + 1], lightData[light.colorIndex + 2]); in the shader.

Because there should be no padding… (array of floats… right?)… I thought this would work perfectly, but apparently there is still something I don't know about. I am getting data into the lightData buffer, but it is not laid out correctly. When I access the lightData array with hardcoded indices, some values are correct, others are 0, and others seem to be undefined, huge or negative garbage values.

What is new here (compared to the material buffer) is that this buffer needs to be streamed to, and this might be where the data corruption is happening.
So this is how I do the lights right now:

OpenGL:


    //Here I allocate the buffer space and link the buffers and the uniform blocks to their binding points.
    void initOpenGLBuffer(){
        //...
        ShaderProgram::use("deferredShader_lStage");

        int block_index = glGetUniformBlockIndex(ShaderProgram::currentProgram->ID, "LightDataBuffer");
        glUniformBlockBinding(ShaderProgram::currentProgram->ID, block_index, 1);

        block_index = glGetUniformBlockIndex(ShaderProgram::currentProgram->ID, "LightIndexBuffer");
        glUniformBlockBinding(ShaderProgram::currentProgram->ID, block_index, 2);

        ShaderProgram::unuse();

        //Lightbuffers
        glBindBuffer(GL_UNIFORM_BUFFER, lightDataBuffer);
        glBindBufferBase(GL_UNIFORM_BUFFER, 1, lightDataBuffer);
        glNamedBufferData(lightDataBuffer, sizeof(float)*11*App::MAX_LIGHTS_COUNT, nullptr, GL_DYNAMIC_DRAW);

        glBindBuffer(GL_UNIFORM_BUFFER, lightIndexBuffer);
        glBindBufferBase(GL_UNIFORM_BUFFER, 2, lightIndexBuffer);
        glNamedBufferData(lightIndexBuffer, sizeof(unsigned int)*3*App::MAX_LIGHTS_COUNT, nullptr, GL_DYNAMIC_DRAW);
    }

// I call this every frame when all my light data has been updated, to upload the new data to the buffers. For this I invalidate the old buffer (which should stay around for later use, according to the concept of orphaning) and then refill the entire buffer.
    void uploadLightData(){
        glInvalidateBufferData(lightDataBuffer);
        glInvalidateBufferData(lightIndexBuffer);

        glBindBuffer(GL_UNIFORM_BUFFER, lightDataBuffer);

        //one light can contain at most 11 floats in my implementation
        glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(float) * 11 * App::MAX_LIGHTS_COUNT, GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
        glBufferData(GL_UNIFORM_BUFFER, sizeof(float) * 11 * App::MAX_LIGHTS_COUNT, &allLightData[0], GL_DYNAMIC_DRAW);

        //unmapping the buffer throws the OpenGL error string "invalid operation"
        //glUnmapBuffer(GL_UNIFORM_BUFFER);
        checkOpenGLErrors("OpenGL::uploadLightData()1:");

        //the index array should be ignored right now, because I am still hardcoding the indices in the shader
        glBindBuffer(GL_UNIFORM_BUFFER, lightIndexBuffer);
        glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(unsigned int) * 3 * App::MAX_LIGHTS_COUNT, GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
        glBufferData(GL_UNIFORM_BUFFER, sizeof(unsigned int) * 3 * App::MAX_LIGHTS_COUNT, &lightIndices[0], GL_DYNAMIC_DRAW);

        //unmapping the buffer throws the OpenGL error string "invalid operation"
        //glUnmapBuffer(GL_UNIFORM_BUFFER);
        glBindBuffer(GL_UNIFORM_BUFFER, 0);
        checkOpenGLErrors("OpenGL::uploadLightData()2:");
    }
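
As far as I understand it, the usual orphan-and-refill pattern writes through the pointer returned by glMapBufferRange and unmaps before drawing; a sketch of what I think it should look like (bufferSize standing for sizeof(float) * 11 * App::MAX_LIGHTS_COUNT):

    glBindBuffer(GL_UNIFORM_BUFFER, lightDataBuffer);
    //orphan the old storage, then map the fresh allocation for writing
    glBufferData(GL_UNIFORM_BUFFER, bufferSize, nullptr, GL_DYNAMIC_DRAW);
    void* ptr = glMapBufferRange(GL_UNIFORM_BUFFER, 0, bufferSize,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    memcpy(ptr, allLightData.data(), bufferSize);
    glUnmapBuffer(GL_UNIFORM_BUFFER); //unmap before the buffer is used for drawing
    glBindBuffer(GL_UNIFORM_BUFFER, 0);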

And in GLSL I have declared the uniform blocks like this:


    //not in use yet
    struct Light{
        uint baseIndex;     //index to 4 floats; color and brightness
        uint positionIndex; //index to 3 floats
        uint frustumIndex;  //index to 4 floats; frustum direction and angle
    };


    //notice the missing layout(std140); I have tried it with it, but no success.
    //I actually don't want to use it, because I am thinking that with an array
    //of only floats I don't need any automatic layout.
    uniform LightDataBuffer{
        float lightData[11*MAX_LIGHT_COUNT];
    };

    uniform LightIndexBuffer{
        Light lights[MAX_LIGHT_COUNT];
    };

    void main(){
        //color for the first light
        vec3 lightColor = vec3(lightData[0], lightData[1], lightData[2]);

        ...
    }

I guess I will just screw around some more and hope that the problem fixes itself again. I am betting it's just one teeny-tiny command I am missing that changes everything…

I'd really recommend you read the OpenGL 4.5 core specification, section 7.6.2.2 “Standard Uniform Block Layout”.

If you declare a struct in GLSL, like “Material” or “SpotLight” or whatever, you have to make it 16-byte aligned to be able to use it in arrays.

For example, in GLSL you have:

    struct Material {
        vec4 Ka;
        vec4 Kd;
        vec4 Ks;
        float Ns;
    };

The corresponding struct in C++ has to look like this:

    struct Material {
        vec4 Ka;
        vec4 Kd;
        vec4 Ks;
        float Ns;
        float padding[3];
    };

That's because the GLSL struct's Ns member only consumes 4 bytes; to make the whole struct 16-byte aligned, we have to add “float padding[3];” at the end, so that the C++ struct consumes a multiple of 16 bytes.
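
A cheap way to catch a size mismatch at compile time is a static_assert on the C++ struct (a sketch, assuming glm provides vec4 on the C++ side):

    #include <glm/glm.hpp>

    struct Material {
        glm::vec4 Ka;
        glm::vec4 Kd;
        glm::vec4 Ks;
        float Ns;
        float padding[3]; //pad Ns up to the next 16-byte boundary
    };

    //std140 rounds the size of a struct used in an array up to a multiple of 16 bytes
    static_assert(sizeof(Material) % 16 == 0, "Material does not match the std140 array stride");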

Another example of the same thing:

    //GLSL
    struct Material {
        vec4 Ka;
        vec4 Kd;
        vec4 Ks;
        float Ns;
    };

    //In C++
    struct Material {
        vec3 Ka;
        float padding1;
        vec3 Kd;
        float padding2;
        vec3 Ks;
        float padding3;
        float Ns;
        float padding4[3];
    };

Again, that's the same layout as the previous example, with the exception that you can now treat Ka / Kd / Ks as vec3's on the C++ side, as you should.
You might try to also declare those variables as vec3's in GLSL: that most likely won't work, because many drivers don't implement the spec correctly, which is what you can read in the wiki:
https://www.khronos.org/opengl/wiki/Interface_Block_(GLSL)#Memory_layout

The last thing: don't declare your array of structs too large; there is a memory limit for a uniform block, GL_MAX_UNIFORM_BLOCK_SIZE (in bytes):
https://www.khronos.org/opengl/wiki/Uniform_Buffer_Object#Limitations

It would make sense to first query that value and then use it in your shader's source code before you compile the shaders:

    GLint max_uniform_block_size = 0;
    glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &max_uniform_block_size);
    unsigned int max_materials = max_uniform_block_size / sizeof(Material);
    // then use "max_materials" as array size ...
    shader.setsource(blabla ..., max_materials);

Thanks for the useful advice, but this should not be the problem I have with my light data, right? Also, I am trying to avoid padding as much as possible, because it is of course pure memory waste.
Because I am sending the actual data for all my lights as one single vector<float> from C++, the data should line up perfectly with the GLSL array “float lightData[MAX_LIGHT_COUNT*MAX_FLOATS_PER_LIGHT];”, right?

I am not using any structs yet; basically I am just trying to transfer an array of floats from C++ to GLSL, which always has a size that is a multiple of 4 bytes.

I just observed that the data I read in the shader turns out different when the C++ vector's capacity is different.
Before, I uploaded an array of 6 floats into a buffer (with glBufferData) which was expecting 110 floats to be filled. I always assumed that at least the initialized data would be what it is supposed to be, and that the remaining capacity might contain garbage, but apparently OpenGL performs some kind of interpolation or something.
If I use glBufferSubData with an offset of 0 and the size of my vector, after I have initialized an empty buffer of the maximum size, the results are different again, but still not what I expect.

Does anyone have an explanation for this?