Vertex shader stops passing values on AMD Hardware

Hey,

We stumbled into a strange issue with our software and AMD hardware/drivers.

We are rendering large amounts of point cloud data and noticed that on our AMD system (R7 370 with Radeon driver 18.3.1, Win10) most of the point cloud flickers or is black.
Closer examination with Renderdoc showed that the color data is copied and passed to the vertex shader, but after the 4096th element in memory it simply stops outputting the color attribute data it receives.
Renderdoc also surfaced the following two warnings (VERTEX_ATTRIB[0] is the color attribute):


glDrawArrays uses input attribute 'VERTEX_ATTRIB[0]' which is specified as 'type = GL_UNSIGNED_BYTE size = 3'; this combination is not a natively supported input attribute type
glDrawArrays uses input attribute 'VERTEX_ATTRIB[0]' with stride '3' that is not optimally aligned; consider aligning on a 4-byte boundary

Neither of these warnings appears when the program is run on our Nvidia GPU (GTX 980, Win10).
Changing the stride to 4 “solves” the problem in the sense that the color values are passed and the flickering stops, but padding every color with an additional byte and increasing the data size seems a bit excessive to me, especially since passing colors as RGB888 vertex attributes doesn’t seem very exotic.
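
For reference, the padded workaround looks roughly like this. It is only a sketch using the demo buffer sizes from the repro below, not our exact production code, and the colorsRGBA name is just for illustration:

// Pad every color from 3 to 4 bytes so each attribute starts on a 4-byte boundary
const std::vector<unsigned char> colorsRGBA(16777216 * 4, 255);
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glBufferData(GL_ARRAY_BUFFER, colorsRGBA.size(), colorsRGBA.data(), GL_DYNAMIC_DRAW);

// Still read 3 components per vertex, but step through the buffer in 4-byte strides
glVertexAttribPointer(colorAttrib, 3, GL_UNSIGNED_BYTE, GL_TRUE, 4, nullptr);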

Are we doing something wrong in OpenGL and hitting unspecified behaviour (that happens to work on Nvidia by chance), or is there an issue with the AMD driver?
Has anyone here encountered something similar?

We wrote a minimal version demonstrating the problem. I will post the relevant code for rendering below, but if you want to have a closer look or build it yourself you can find it here. If it shows a white dot, everything is fine; if the dot flickers, you’ve got the same problem as us.

Additional Information:
GLVendor: ATI Technologies Inc.
GLVersion: 3.2.13507 Core Profile Forward-Compatible Context 23.20.15027.1004
GLRenderer: AMD Radeon ™ R7 370 Series

Rendering:


GLuint program = newProgram(vertexShader, fragmentShader);

glUseProgram(program);

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
		

const auto colorAttrib = glGetAttribLocation(program, "color");
const auto posAttrib = glGetAttribLocation(program, "position");

// Create Vertex Buffer
GLuint vertexBuffer = 0;
glGenBuffers(1, &vertexBuffer);
				
// Fill vertex buffer with demo data (134,217,728 floats, i.e. 67,108,864 vec2 values, all zero)
const std::vector<float> vertices = std::vector<float>(134217728, 0.f);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), vertices.data(), GL_DYNAMIC_DRAW);
		
// Create Color Buffer
GLuint colorBuffer = 0;
glGenBuffers(1, &colorBuffer);

// Fill color buffer with demo data (16,777,216 RGB Values, all 255)
const std::vector<unsigned char> colors = std::vector<unsigned char>(50331648, 255);
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glBufferData(GL_ARRAY_BUFFER, colors.size(), colors.data(), GL_DYNAMIC_DRAW);
		
// Bind color buffer and set format
glEnableVertexAttribArray(colorAttrib);
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glVertexAttribPointer(colorAttrib, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, nullptr);

// Bind vertex buffer and set format
glEnableVertexAttribArray(posAttrib);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, nullptr);


while (!glfwWindowShouldClose(window)) {
	glfwPollEvents();
	if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
		glfwSetWindowShouldClose(window, GLFW_TRUE);

	glClearColor(0.1f, 0.1f, 0.1f, 1.f);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
			
	glPointSize(10.f);
	glDrawArrays(GL_POINTS, 2048, 4096);

	glfwSwapBuffers(window);
}

Vertex shader:


#version 450
precision mediump float;

in vec2 position;
in vec3 color;

out vec4 vertColor;

void main() {
	gl_Position = vec4(position, 0.0, 1.0);
	vertColor = vec4(color, 1.f);
}

Fragment shader:


#version 450
precision mediump float;

out vec4 outColor;

in vec4 vertColor;

void main() {
	outColor = vertColor;
}

[QUOTE=FThiel;1290741]we stumbled into a strange issue with our software and AMD hardware/drivers.

We are rendering large amounts of point cloud data and noticed that on our AMD system (R7 370 with Radeon v18.3.1, Win10)
most of the point cloud will be flickering or black.

It also surfaced the two following warnings (VERTEX_ATTRIB[0] is the color attribute):


glDrawArrays uses input attribute 'VERTEX_ATTRIB[0]' which is specified as 'type = GL_UNSIGNED_BYTE size = 3'; this combination is not a natively supported input attribute type
glDrawArrays uses input attribute 'VERTEX_ATTRIB[0]' with stride '3' that is not optimally aligned; consider aligning on a 4-byte boundary

Neither of that is present when the program is run on our Nvidia GPU (Gtx 980, Win10).
Changing the stride to 4 “solves” the problem in the way that the color values are passed and the flickering stops, but passing an additional byte for every color and increasing the data size seems a bit excessive to me. …

Are we doing something wrong in OpenGL and encountering unspecified behaviour (that works with Nvidia by chance) or is there some issue with the AMD driver?

Additional Information:
GLVendor: ATI Technologies Inc.
GLVersion: 3.2.13507 Core Profile Forward-Compatible Context 23.20.15027.1004
GLRenderer:AMD Radeon ™ R7 370 Series
[/QUOTE]

That’s interesting. Sure looks like a driver bug in its reading of normalized UBYTE[3] vertex attributes (what you’re doing with the color attrib looks reasonable to me). A perf warning that this format might be suboptimal for the driver seems reasonable, but failing to convert the format properly? Not cool.

I see that this came up with the AMD/ATI drivers 9 years ago with glColorPointer (link), but there was a comment that it had been fixed.

You might generate a short stand-alone repro case and submit it to the AMD driver folks for a fix.

Sorry for the late response; I was traveling for our institution.

Thank you for the input. I had really hoped it would be our fault, because then we could easily fix it…

I already included a stand-alone repro case in the first post of this thread, but I assume it got a bit lost among all the information.
It can be found here.

I’ll submit it to AMD and keep you posted if something comes back.

No news from AMD yet, but we found something.

If we change the OpenGL profile from Core to Compatibility everything works as expected.
Not a real solution, but probably a reasonable workaround for us and anyone else who experiences this behaviour until it is fixed.
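
For anyone who wants to try the same workaround: assuming the context is created with GLFW as in our repro, the change boils down to the profile hint before creating the window. This is just a sketch; the window size and title are placeholders:

// Request a Compatibility profile context instead of Core
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE); // was GLFW_OPENGL_CORE_PROFILE
GLFWwindow* window = glfwCreateWindow(800, 600, "point cloud repro", nullptr, nullptr);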
