View Full Version : Strange behaviour when using SSBO

03-13-2016, 01:04 PM
I've got a rather large (18x18x80x45) array of GLuints that I'm passing to a compute shader via an SSBO. When I change the dimensions to 18x18x80x1, it works perfectly. However, as I increase the final index there is a noticeable delay when dispatching the compute shader - I haven't even bothered waiting out the delay for the full-size 45-element final dimension. Surely passing an array of ~4.5 MB from CPU to GPU shouldn't take several minutes? Even setting both the number of work groups and the local group size to 1 doesn't change anything.

What's even stranger: if I compile my program with the final index as 5, for example, the first compute dispatch takes a few seconds - but only the first time. After that, even after closing and re-running the application, there is no delay. Only changing the final index and recompiling changes the delay.

Also, I am using GL_ARB_compute_variable_group_size.

Can anyone shed some light onto this puzzling matter?


auto pixels = new GLuint[18][18][80][45];

for (int i = 0; i < 18; ++i)
    for (int j = 0; j < 18; ++j)
        for (int k = 0; k < 80; ++k)
            for (int l = 0; l < 45; ++l)
                pixels[i][j][k][l] = 0;

GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo); // the buffer must be bound before glBufferData
// sizeof(pixels) would be the size of a pointer here, so pass the full byte count:
glBufferData(GL_SHADER_STORAGE_BUFFER, 18 * 18 * 80 * 45 * sizeof(GLuint), pixels, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
glDispatchComputeGroupSizeARB(1, 1, 1, 1, 1, 1);

Compute shader:

#version 440 core
#extension GL_ARB_compute_variable_group_size : enable

layout(local_size_variable) in;

layout(std430, binding = 0) buffer ssbo
{
    uint pixels[18][18][80][45];
};

void main()
{
    // ...
}