View Full Version : Using glReadBuffer with GL_RG32I buffer

06-01-2015, 02:19 PM
I have a deferred shading pipeline wherein I keep an "ID" attachment in my G-buffer. Its format is GL_RG32I, where each channel holds an ID. Red is a material ID, which is looked up in a UBO of materials. Green is the ID of the model so I can do fast mouse picking. The G-buffer also has attachments for color (albedo), encoded normals, and depth.

I'm using the Cinder creative coding framework, but trust that this code wraps its GL counterparts properly.

int32_t MyApp::pick( const vec2& v )
{
	vec2 offset( 8.0f );
	Area area( v - offset, v + offset );
	area = area.getClipBy( mFboGBuffer->getBounds() );

	const gl::ScopedFramebuffer scopedFramebuffer( mFboGBuffer );
	gl::readBuffer( GL_COLOR_ATTACHMENT1 );

	GLint x = area.x1;
	GLint y = mFboGBuffer->getHeight() - area.y2;
	GLsizei h = area.getHeight();
	GLsizei w = area.getWidth();
	size_t len = w * h;
	GLint* data = new GLint[ len ];
	glReadPixels( x, y, w, h, GL_GREEN, GL_INT, (GLvoid*)data );

	// Tally how often each model ID appears in the picked region.
	map<int32_t, size_t> count;
	for ( size_t i = 0; i < len; ++i ) {
		int32_t modelId = data[ i ];
		if ( modelId >= 0 ) {
			++count[ modelId ];
		}
	}
	delete [] data;

	// Return the most frequently sampled model ID, or -1 if none.
	// (Note: compare by mapped count, not by key, for a majority vote.)
	int32_t id = -1;
	if ( !count.empty() ) {
		id = max_element( count.begin(), count.end(),
			[]( const pair<const int32_t, size_t>& a, const pair<const int32_t, size_t>& b ) {
				return a.second < b.second;
			} )->first;
	}
	return id;
}

glReadPixels gives me a GL_INVALID_ENUM. However, it works if I change the attachment to 0 (albedo) or 2 (encoded normals). I'm wondering what I need to do to read the green channel from a GL_RG32I-formatted buffer texture. My pack alignment is 4, btw. I messed around with glPixelStorei to no avail. I also tried reading the value as GL_FLOAT so no conversion happens. Nothing. The data in the attachment is valid, as I can draw it with a shader. The values are actual integers (e.g., 392), not floats.

Alfonse Reinheart
06-01-2015, 02:40 PM
The GL_INVALID_ENUM appears to be because you are reading from an integer texture but using the wrong pixel transfer format.

If you want to read integer data from an integer image format, you must set the pixel transfer format (https://www.opengl.org/wiki/Pixel_Transfer#Pixel_format) accordingly. Specifically, your format should end in "_INTEGER". All other formats are for reading floating-point values (whether pure float or normalized integers (https://www.opengl.org/wiki/Normalized_Integer)).

Also, if you want this to have something not entirely unlike performance, you should read all of the available channels back. Just pick out the channel of interest.

06-01-2015, 02:48 PM
Thanks! All I did was change GL_GREEN to GL_GREEN_INTEGER and it worked. :D