View Full Version : Nothing drawn but the clear color on ATI
01-21-2011, 03:04 AM
My renderer, which is working fine on my desktop with an NVIDIA card, won't display anything but the clear color on my laptop with an ATI Mobility Radeon.
I'm creating a 3.2 forward-compatible context and all buffer objects are created successfully (checked with gDebugger as well). However, nothing is drawn to the screen. I've simplified the source code to the following... do you see any (obvious?) problems with it? I still confuse legacy stuff with the new functionality sometimes. Thank you!
I have the latest 10.12 Catalyst drivers, btw.
float *vert = new float[9];
vert[0] = -0.3f; vert[1] =  0.5f; vert[2] = -1.0f;
vert[3] = -0.8f; vert[4] = -0.5f; vert[5] = -1.0f;
vert[6] =  0.2f; vert[7] = -0.5f; vert[8] = -1.0f;
glGenBuffers( 1, &vboID );
glBindBuffer( GL_ARRAY_BUFFER, vboID );
glBufferData( GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW );
glBindBuffer( GL_ARRAY_BUFFER, 0 );
string vsText = loadTextfile( "Shaders\\GLSL\\text.vs" );
string fsText = loadTextfile( "Shaders\\GLSL\\text.fs" );
const char *vsString = vsText.c_str();
const char *fsString = fsText.c_str();
vshaderID = glCreateShader( GL_VERTEX_SHADER );
glShaderSource( vshaderID, 1, &vsString, 0 );
glCompileShader( vshaderID );
fshaderID = glCreateShader( GL_FRAGMENT_SHADER );
glShaderSource( fshaderID, 1, &fsString, 0 );
glCompileShader( fshaderID );
programID = glCreateProgram();
glAttachShader( programID, vshaderID );
glAttachShader( programID, fshaderID );
glBindAttribLocation( programID, 0, "in_Position" );
glLinkProgram( programID );
glViewport( 0, 0, 800, 600 );
glClearColor( 0.5, 0.5, 0.5, 1 );
glClear( GL_COLOR_BUFFER_BIT );
glUseProgram( programID );
glBindBuffer( GL_ARRAY_BUFFER, vboID );
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 0, 0 );
glEnableVertexAttribArray( 0 );
glDrawArrays( GL_TRIANGLES, 0, 3 );
glBindBuffer( GL_ARRAY_BUFFER, 0 );
Vertex shader (text.vs):
#version 150
in vec3 in_Position;
void main()
{
    gl_Position = vec4(in_Position, 1.0);
}
Fragment shader (text.fs):
#version 150
out vec4 out_Color;
void main()
{
    out_Color = vec4(1.0);
}
01-21-2011, 04:43 AM
Probably due to the usual suspect: no VAO bound. According to the appendix of the OpenGL spec, calling glVertexAttribPointer or drawing commands without a VAO bound should generate INVALID_OPERATION in a forward-compatible context, but NVIDIA still allows using the default VAO, so you won't catch the problem until you test on ATI.
For a quick fix, you can create a named VAO that impersonates the default VAO, by using:
// at start-up stage
glGenVertexArrays( 1, &vaoID );
glBindVertexArray( vaoID );
// at shut-down stage
glDeleteVertexArrays( 1, &vaoID );
or consider using VAOs more fully.
01-21-2011, 06:23 AM
You're right, that's it! Thank you! :-*
01-22-2011, 06:08 AM
Geometry is being drawn properly now but only as long as I don't use uniforms contained in uniform buffers.
This is how I set up uniform buffers:
shared_ptr<UniformBuffer> OGL3GraphicsEngine::createUniformBuffer( const string &blockName, unsigned long blockSize, const void *data )
{
    // set up a new uniform buffer
    shared_ptr<UniformBuffer> uniformBuffer = shared_ptr<UniformBuffer>( new UniformBuffer() );
    uniformBuffer->blockSize = blockSize;
    uniformBuffer->blockName = blockName;
    // register uniform buffer with OpenGL
    glGenBuffers( 1, &uniformBuffer->id );
    // define a new slot for this uniform block
    uniformBuffer->slot = m_uniformBuffers.size();
    // associate uniform block with its binding point (= slot)
    glBindBufferBase( GL_UNIFORM_BUFFER, uniformBuffer->slot, uniformBuffer->id );
    // upload data (if provided)
    updateUniformBuffer( uniformBuffer, data, "" );
    // register uniform block with all programs
    for( uint i = 0; i < m_programs.size(); i++ )
        registerUniformBlock( m_programs.at(i), uniformBuffer );
    m_uniformBuffers.push_back( uniformBuffer );
    return uniformBuffer;
}
bool OGL3GraphicsEngine::updateUniformBuffer( shared_ptr<UniformBuffer> uniformBuffer, const void *data, const string &element )
{
    if( !uniformBuffer )
        return false;
    if( m_activeUniformBuffer != uniformBuffer )
    {
        glBindBuffer( GL_UNIFORM_BUFFER, uniformBuffer->id );
        m_activeUniformBuffer = uniformBuffer;
    }
    if( element.length() )
    {
        // update a single element of the block
        const int &size = uniformBuffer->blockElements[element].first;
        const int &offset = uniformBuffer->blockElements[element].second;
        glBufferSubData( GL_UNIFORM_BUFFER, offset, size, data );
    }
    else
    {
        // (re)upload the whole block
        glBufferData( GL_UNIFORM_BUFFER, uniformBuffer->blockSize, data, GL_DYNAMIC_DRAW );
    }
    return true;
}
bool OGL3GraphicsEngine::registerUniformBlock( shared_ptr<Program> program, shared_ptr<UniformBuffer> uniformBuffer )
{
    static unordered_map<int, int> datatypeSizes = initializeDatatypeSizes();
    // check whether the uniform block is used in the program; get its index if it is
    uint index = glGetUniformBlockIndex( program->id, uniformBuffer->blockName.c_str() );
    if( index == GL_INVALID_INDEX )
        return false;
    // assign the uniform block to its associated slot in the program
    glUniformBlockBinding( program->id, index, uniformBuffer->slot );
    // get the number of uniforms in the block
    int activeUniformsInBlock = 0;
    glGetActiveUniformBlockiv( program->id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORMS, &activeUniformsInBlock );
    // retrieve the associated uniform indices
    int *indices = new int[activeUniformsInBlock];
    glGetActiveUniformBlockiv( program->id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices );
    int type, offset;
    char uniformName[256];
    for( uint i = 0; i < (uint)activeUniformsInBlock; i++ )
    {
        const uint &index = (uint)indices[i];
        // get the uniform's name
        glGetActiveUniformName( program->id, index, 256, 0, uniformName );
        // get the uniform's data type and its offset from the block's beginning
        glGetActiveUniformsiv( program->id, 1, &index, GL_UNIFORM_TYPE, &type );
        glGetActiveUniformsiv( program->id, 1, &index, GL_UNIFORM_OFFSET, &offset );
        // look up the data type size of the uniform
        const int &size = datatypeSizes[type];
        // store the uniform's size and offset inside the program (for packed uniform buffers)
        program->uniformBlocks[uniformBuffer->blockName].elements[uniformName].first = size;
        program->uniformBlocks[uniformBuffer->blockName].elements[uniformName].second = offset;
        // store the same information in the uniform buffer
        // itself, possibly overwriting previous information!
        uniformBuffer->blockElements[uniformName].first = size;
        uniformBuffer->blockElements[uniformName].second = offset;
    }
    delete[] indices;
    return true;
}
And this is how I use it in my sample program:
projection.perspective = glm::perspective( 60.0f, 800.0f/600.0f, 1.0f, 100.0f );
projection.orthographic = glm::ortho( 0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f );
m_projectionBuffer = Engine::getGraphicsEngine()->createUniformBuffer( "Projection", sizeof(ProjectionBlock), &projection );
On my NVIDIA card, this successfully creates a uniform buffer and uploads the two matrices. On my ATI card, the uniform buffer is created as well but no data is being uploaded; gDebugger lists the uniform buffer and the two matrices it contains but lists their values as "N/A".
Am I creating/updating uniform blocks properly?
01-22-2011, 05:20 PM
I downloaded gDebugger and tested it on a simple test app that uses uniform blocks. I also get "N/A" displayed for the values, but the uniforms have the correct effect in the shader program, and gDebugger shows the correct values in the buffer viewer.
You could try putting a known test value into one of the matrices and seeing whether it's being handled correctly, by putting something like "out_Color = perspective[0];" in the fragment shader.
As for whether you're creating/updating them correctly, I think so (but not 100% sure). Although I don't think it would be guaranteed to work with all possible uniform block definitions; it might have problems with a (non-desirable) structure like:
uniform Projection
{
    float val;
    mat4 perspective;
    mat4 orthographic;
};
Because, according to the spec:
By default, uniforms contained within a uniform block are extracted from buffer storage in an implementation-dependent manner.
So it would be unsafe to rely on the struct being the same size/layout as the uniform block, which you're doing with:
// unsafe unless you've declared your uniform block with "layout(std140)"
// and laid out the fields of ProjectionBlock according to the rules
m_projectionBuffer = Engine::getGraphicsEngine()->createUniformBuffer( "Projection", sizeof(ProjectionBlock), &projection );
I guess an OpenGL implementation could even re-arrange the order of the fields in the buffer object, so unless you're using layout(std140), you couldn't fill the whole buffer in one go even if the sizes match, unless you've first queried the offsets and laid out the data in the correct positions.
For my (Delphi) test app, using that structure (with the extra float), I get 132 for SizeOf(ProjectionBlock), but 144 when querying the data size required for the uniform block on an ATI implementation using:
glGetActiveUniformBlockiv( program->id, index, GL_UNIFORM_BLOCK_DATA_SIZE, &dataSize );
The ATI implementation requires 12 bytes of padding between val and perspective, although another implementation could choose to move val to the end of the buffer, or not require any padding, giving a GL_UNIFORM_BLOCK_DATA_SIZE of 132.
01-23-2011, 04:26 AM
Thank you for your reply! I checked again and you're right: according to the buffer viewer, the matrices are uploaded correctly and in the correct order. Still, their values are all zero when used in the vertex shader... I'm confused.
Btw, I always declare uniform blocks as using std140, so I don't think layout should be a problem.
here's the vertex shader for the sake of completeness:
#version 150
layout(location = 0) in vec4 in_vertexPos;

layout(std140) uniform Projection
{
    mat4 perspective;
    mat4 orthographic;
} projection;

void main()
{
    gl_Position = projection.orthographic * vec4(in_vertexPos.xyz, 1.0);
}
01-24-2011, 02:50 AM
Update: I tried the same with a uniform buffer that contains only a single float value and got the same result: the buffer's value is shown correctly in the buffer view window but reads as 0.0 in the shader.
01-25-2011, 01:45 PM
Found the solution... posting it here, in case anyone faces the same problem:
On ATI hardware, you have to make these calls in this order:
glGetUniformBlockIndex( program, blockName );
glUniformBlockBinding( program, blockIndex, slot );
glBindBufferBase( GL_UNIFORM_BUFFER, slot, UBO );
That is, glBindBufferBase must come after glUniformBlockBinding. On NVIDIA hardware, the reverse order works as well.
01-29-2011, 10:39 PM
I can't reproduce it on my ATI card.
Could you help to provide a simple demo? thanks!
my email: firstname.lastname@example.org
Powered by vBulletin® Version 4.2.3 Copyright © 2016 vBulletin Solutions, Inc. All rights reserved.