Nothing drawn but the clear color on ATI

My renderer, which is working fine on my desktop with an NVIDIA card, won’t display anything but the clear color on my laptop with an ATI Mobility Radeon.

I’m creating a 3.2 forward-compatible context and all buffer objects are created successfully (checked with gDebugger as well). However, nothing’s drawn to the screen. I’ve simplified the source code to the following… do you see any (obvious?) problems with this? I’m still confusing legacy stuff with new functionality sometimes… Thank you!

I have the latest Catalyst drivers (10.12), btw.

Initialization:

// GEOMETRY

float *vert = new float[9];

vert[0] = -0.3f; vert[1] =  0.5f; vert[2] = -1.0f;
vert[3] = -0.8f; vert[4] = -0.5f; vert[5] = -1.0f;
vert[6] =  0.2f; vert[7] = -0.5f; vert[8] = -1.0f;
	
glGenBuffers( 1, &vboID );
glBindBuffer( GL_ARRAY_BUFFER, vboID );
glBufferData( GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW );

glBindBuffer( GL_ARRAY_BUFFER, 0 );
		
// SHADER

string vsText = loadTextfile( "Shaders\\GLSL\\text.vs" );
string fsText = loadTextfile( "Shaders\\GLSL\\text.fs" );

const char *vsString = vsText.c_str();
const char *fsString = fsText.c_str();

vshaderID = glCreateShader( GL_VERTEX_SHADER );
glShaderSource( vshaderID, 1, &vsString, 0 );
glCompileShader( vshaderID );

fshaderID = glCreateShader( GL_FRAGMENT_SHADER );
glShaderSource( fshaderID, 1, &fsString, 0 );
glCompileShader( fshaderID );

programID = glCreateProgram();

glAttachShader( programID, vshaderID );
glAttachShader( programID, fshaderID );

glBindAttribLocation( programID, 0, "in_Position" );

glLinkProgram( programID );

Render loop:

glViewport( 0, 0, 800, 600 ); 

glClearColor( 0.5, 0.5, 0.5, 1 );
glClear( GL_COLOR_BUFFER_BIT );

glUseProgram( programID );

glBindBuffer( GL_ARRAY_BUFFER, vboID );

glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 0, 0 );
glEnableVertexAttribArray( 0 );

glDrawArrays( GL_TRIANGLES, 0, 3 );

glBindBuffer( GL_ARRAY_BUFFER, 0 );

Vertex shader:

#version 140

in vec3 in_Position;

void main(void)
{
	gl_Position = vec4(in_Position, 1.0);
}

Fragment shader:

#version 140

out vec4 out_Color;

void main(void)
{
	out_Color = vec4(1.0);
}

Probably due to the usual suspect: no VAO bound. According to the appendix of the OpenGL spec, a core/forward-compatible context should raise INVALID_OPERATION from glVertexAttribPointer and the drawing commands when no VAO is bound, but NVIDIA still allows using the default VAO, so you won't catch the problem until you test on ATI.

For a quick fix, you can create a named VAO that impersonates the default VAO, by using:

//at start-up stage
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
...
// at shut-down stage
glDeleteVertexArrays(1, &vaoID);

or consider using VAOs more fully.
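To use VAOs more fully, you'd typically create one VAO per mesh at init time, record the attribute setup into it once, and then just bind the VAO before drawing. A rough sketch, reusing the vboID from the original post (needs a current GL context, so treat it as illustrative):

```cpp
// init: record vertex format + buffer binding into the VAO once
glGenVertexArrays( 1, &vaoID );
glBindVertexArray( vaoID );

glBindBuffer( GL_ARRAY_BUFFER, vboID );
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 0, 0 );
glEnableVertexAttribArray( 0 );

glBindVertexArray( 0 );

// render loop: binding the VAO restores all attribute state in one call
glBindVertexArray( vaoID );
glDrawArrays( GL_TRIANGLES, 0, 3 );
glBindVertexArray( 0 );
```

This way the per-frame attribute calls disappear from the render loop entirely.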

You’re right, that’s it! Thank you! :-*

Geometry is being drawn properly now, but only as long as I don't use uniforms contained in uniform buffers.

This is how I set up uniform buffers:

shared_ptr<UniformBuffer> OGL3GraphicsEngine::createUniformBuffer( const string &blockName, unsigned long blockSize, const void *data )
{
	// setup a new uniform buffer
	shared_ptr<UniformBuffer> uniformBuffer = shared_ptr<UniformBuffer>( new UniformBuffer() );

	uniformBuffer->blockSize = blockSize;
	uniformBuffer->blockName = blockName;
	
	// register uniform buffer with OpenGL
	glGenBuffers( 1, &uniformBuffer->id );
	
	// define a new slot for this uniform block
	uniformBuffer->slot = m_uniformBuffers.size();

	// associate uniform block to its binding point (=slot)
	glBindBufferBase( GL_UNIFORM_BUFFER, uniformBuffer->slot, uniformBuffer->id );

	// upload data (if provided)
	updateUniformBuffer( uniformBuffer, data, "" );

	// register uniform block with all programs
	for( uint i = 0; i < m_programs.size(); i++ )
	{
		registerUniformBlock( m_programs.at(i), uniformBuffer );
	}

	m_uniformBuffers.push_back( uniformBuffer );

	return uniformBuffer;
}

bool OGL3GraphicsEngine::updateUniformBuffer( shared_ptr<UniformBuffer> uniformBuffer, const void *data, const string &element )
{
	if( !uniformBuffer )
		return false;

	if( m_activeUniformBuffer != uniformBuffer )
	{
		glBindBuffer( GL_UNIFORM_BUFFER, uniformBuffer->id );
		m_activeUniformBuffer = uniformBuffer;
	}

	if( element.length() )
	{
		const int &size = uniformBuffer->blockElements[element].first;
		const int &offset = uniformBuffer->blockElements[element].second;

		glBufferSubData( GL_UNIFORM_BUFFER, offset, size, data );
	}
	else
	{
		glBufferData( GL_UNIFORM_BUFFER, uniformBuffer->blockSize, data, GL_DYNAMIC_DRAW );
	}

	return true;
}

bool OGL3GraphicsEngine::registerUniformBlock( shared_ptr<Program> program, shared_ptr<UniformBuffer> uniformBuffer )
{
	static unordered_map<int, int> datatypeSizes = initializeDatatypeSizes();

	// check whether uniform block is used in program; get its index if it is
	uint index = glGetUniformBlockIndex( program->id, uniformBuffer->blockName.c_str() );

	if( index == GL_INVALID_INDEX )
		return false;

	// assign uniform block to associated slot in program
	glUniformBlockBinding( program->id, index, uniformBuffer->slot );

	// get number of uniforms
	int activeUniformsInBlock;
	glGetActiveUniformBlockiv( program->id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORMS, &activeUniformsInBlock );

	// retrieve the associated indices
	int *indices = new int[activeUniformsInBlock];
	glGetActiveUniformBlockiv( program->id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices );

	int type, offset;
	char uniformName[256];
		
	for( int i = 0; i < activeUniformsInBlock; i++ )
	{
		const uint uniformIndex = (uint)indices[i];

		// get uniform's name
		glGetActiveUniformName( program->id, uniformIndex, 256, 0, uniformName );

		// get uniform's data type and its offset from the block's beginning
		glGetActiveUniformsiv( program->id, 1, &uniformIndex, GL_UNIFORM_TYPE, &type );
		glGetActiveUniformsiv( program->id, 1, &uniformIndex, GL_UNIFORM_OFFSET, &offset );

		// retrieve data type size of uniform
		const int &size = datatypeSizes[type];

		// store uniform's size and offset inside the program (for packed uniform buffers)
		program->uniformBlocks[uniformBuffer->blockName].elements[uniformName].first = size;
		program->uniformBlocks[uniformBuffer->blockName].elements[uniformName].second = offset;

		// store the same information in the uniform buffer
		// itself, possibly overwriting previous information!
		uniformBuffer->blockElements[uniformName].first = size;
		uniformBuffer->blockElements[uniformName].second = offset;
	}

	delete[] indices;

	return true;
}

And this is how I use it in my sample program:

struct ProjectionBlock
{
	mat4 perspective;
	mat4 orthographic;
};

ProjectionBlock projection;
projection.perspective = glm::perspective( 60.0f, 800.0f/600.0f, 1.0f, 100.0f );
projection.orthographic = glm::ortho( 0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f );

m_projectionBuffer = Engine::getGraphicsEngine()->createUniformBuffer( "Projection", sizeof(ProjectionBlock), &projection );

On my NVIDIA card, this successfully creates a uniform buffer and uploads the two matrices. On my ATI card, the uniform buffer is created as well but no data is being uploaded; gDebugger lists the uniform buffer and the two matrices it contains but lists their values as “N/A”.

Am I creating/updating uniform blocks properly?

I downloaded gDebugger and tested it on a simple test app that uses uniform blocks, and I get "N/A" displayed for the values too, but the uniforms have the correct effect in the shader program, and gDebugger also shows the correct values in the buffer viewer.

You could try putting a known test value into one of the matrices and checking whether it's handled correctly, with something like "out_Color = perspective[0];" in the fragment shader.

As for whether you're creating/updating them correctly: I think so (but I'm not 100% sure). I don't think it would be guaranteed to work with all possible uniform block definitions, though; it might have problems with an (undesirable) structure like:

struct ProjectionBlock
{
	float val;
	mat4 perspective;
	mat4 orthographic;
};

Because, according to the spec, with the default (shared) layout the offsets of uniforms within a block and the block's total data size are implementation-dependent and have to be queried at runtime. So it would be unsafe to rely on the struct being the same size/layout as the uniform block, which you're doing by using:

// unsafe unless you've declared your uniform block with "layout(std140)"
// and laid out the fields of ProjectionBlock according to the rules
m_projectionBuffer = Engine::getGraphicsEngine()->createUniformBuffer( "Projection", sizeof(ProjectionBlock), &projection );

I guess an OpenGL implementation could even rearrange the order of the fields in the buffer object, so unless you're using layout(std140), you couldn't fill the whole buffer in one go even if the sizes matched, unless you'd first queried the offsets and laid out the data at the correct positions.

For my (Delphi) test app, using that structure (with the extra float), I get 132 for SizeOf(ProjectionBlock), but 144 when querying the data size required for the uniform block on an ATI implementation using:

glGetActiveUniformBlockiv( program->id, index, GL_UNIFORM_BLOCK_DATA_SIZE, &dataSize );

The ATI implementation requires 12 bytes of padding between val and perspective, although another implementation could choose to move val to the end of the buffer, or not require any padding and report a GL_UNIFORM_BLOCK_DATA_SIZE of 132.

Thank you for your reply! I checked again and you're right: according to the buffer viewer, the matrices are uploaded correctly and in the correct order. Still, their values are all zero when used in the vertex shader... I'm confused.

Btw, I always declare uniform blocks with layout(std140), so I don't think the layout should be a problem.

here’s the vertex shader for the sake of completeness:

#version 150

layout(location = 0) in vec4 in_vertexPos;

layout(std140) uniform Projection
{
	mat4 perspective;
	mat4 orthographic;
} projection;

void main()
{
	gl_Position = projection.orthographic*vec4(in_vertexPos.xyz, 1.0);
}

Update: I tried the same with a uniform buffer that contains only a single float value and got the same result: the buffer's value is shown correctly in the buffer viewer but reads as 0.0 in the shader.

Found the solution… posting it here, in case anyone faces the same problem:

On ATI hardware, you have to call

glGetUniformBlockIndex( program, blockName )
glUniformBlockBinding( program, blockIndex, slot )

BEFORE

glBindBufferBase( GL_UNIFORM_BUFFER, slot, UBO )

On NVIDIA hardware, the reverse order works as well.
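In code, the working order looks like this (a sketch, assuming a linked programID, a filled uboID, and a chosen binding point slot; it needs a current GL context):

```cpp
// 1) query the block index and assign it to the binding point FIRST
GLuint blockIndex = glGetUniformBlockIndex( programID, "Projection" );
glUniformBlockBinding( programID, blockIndex, slot );

// 2) only then attach the buffer object to the same binding point
glBindBufferBase( GL_UNIFORM_BUFFER, slot, uboID );
```

Doing step 2 before step 1 is what silently breaks on ATI while still working on NVIDIA.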

I can’t reproduce it on my ATI card.
Could you provide a simple demo? Thanks!

my email: quentin.lin@amd.com