View Full Version : How to read 1 bit framebuffer into 32bits integers

11-26-2015, 03:29 AM
Hi all,

I'm looking for a way to efficiently read a 1-bit (single bitplane), indexed-color framebuffer into an array of 32-bit integers (one integer representing 32 pixels).

The reason I'm looking for this is that my OpenGL image is around 7500x7500 pixels, just black or white, and I have around 1000 of these images that need to be processed and written to the file system quickly.
Using glReadPixels with 8 bits per pixel is already getting slow; doing it asynchronously would, I assume, also be slow because all 1000 images are different and drawn one after another.

Ultimately I would like to store this as a 1-bit .png image.
But before I can do that I need to manipulate the data, and the most efficient representation seems to be an array of 32-bit integers, where one integer represents 32 pixels.

I looked at glReadPixels and the glPixelTransfer/glPixelStore functions but can't figure out whether it's possible yet. Somehow it should be possible to transfer only 1 bit per pixel instead of 8..?

Does anybody have an idea?
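For reference, here is a minimal sketch of the CPU-side packing I have in mind (my own illustration, not from any library): take one row of 8-bit pixels as returned by glReadPixels with GL_UNSIGNED_BYTE, and fold each group of 32 pixels into one 32-bit word, leftmost pixel in the most significant bit.

```c
#include <stdint.h>
#include <stddef.h>

/* Pack one row of 8-bit pixels (0 = black, nonzero = white) into 32-bit
 * words, 32 pixels per word, leftmost pixel in the most significant bit.
 * Note that 7500 is not a multiple of 32, so the last word of each row
 * is only partially filled. */
void pack_row_msb(const uint8_t *pixels, size_t width, uint32_t *out)
{
    size_t words = (width + 31) / 32;
    for (size_t w = 0; w < words; ++w) {
        uint32_t bits = 0;
        for (size_t b = 0; b < 32; ++b) {
            size_t x = w * 32 + b;
            if (x < width && pixels[x] != 0)
                bits |= 1u << (31 - b);   /* MSB-first, matching 1-bit PNG row order */
        }
        out[w] = bits;
    }
}
```

The MSB-first bit order is chosen here because it matches how 1-bit PNG scanlines are laid out, which would save a reshuffle before encoding.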

11-26-2015, 07:37 AM
After some hours of searching:

It should be possible with glReadPixels to read with type GL_BITMAP when the context is color-indexed.

However, my context returns only 0s and no 1s.

Also, glGet with GL_INDEX_MODE returns false, even though I do initialize in color-index mode.

I'm not sure whether my 'new' graphics card supports color-index mode.. anyone?

11-28-2015, 04:57 AM
We need some more context/information to be able to help. What version of OpenGL are you targeting? What hardware is it supposed to run on? Any other limitations?

Modern (core-profile) OpenGL does not support color-index mode at all.

The smallest texel values you can work with are bytes.
The most realistic way to do it would be to render in "black and white" to 1-byte texels, and then use a simple shader to pack each texel into a single bit.
Then you can send the data from VRAM to system memory without the overhead.

But from the sound of it, your application in its current state was made before the concept of shaders even existed?!

11-28-2015, 06:07 AM
Hi Osbios,

Thanks for the reply; I think that's enough for me to continue the search.

The application is a new piece of code for a very accurate 3D printer. It has a CAD 3D view and should output slices as 1-bit or 8-bit .pngs.
I would like the software to produce the 7500x7500-pixel slices very rapidly, so you can check them visually before a print job.
It should run on any modern PC. It has already worked fine for some time, but the wait of a couple of minutes for slicing is sometimes irritating.

Since I only recently learned OpenGL, I started with the oldest book I could find, which taught the basics of 1.1 (since I had to write the 3D editor/view part of the program as well).
Along the way I added some OpenGL 4, but it can run with just 1.1 if necessary.

I hoped to use color-index mode to speed up the GPU even more, but from what I read on the internet it's almost never used anymore..

So I haven't touched shaders yet, but I'll look into that next to see if it's possible.
The current bottleneck is sending the data from the GPU to the CPU, which takes around 200 ms for each 7500x7500-pixel slice.
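To put rough numbers on that bottleneck (using only the figures from this thread; nothing measured here): an 8-bit readback of one slice moves about 56 MB, while a 1-bit-per-pixel packing (rows rounded up to whole 32-bit words) would be about 7 MB, close to an 8x reduction.

```c
#include <stdint.h>

/* Payload size for one slice at 8 bits per pixel. */
uint64_t bytes_8bpp(uint64_t w, uint64_t h)   { return w * h; }

/* Payload size at 1 bit per pixel, each row padded to whole 32-bit words. */
uint64_t bytes_packed(uint64_t w, uint64_t h) { return ((w + 31) / 32) * 4 * h; }
```

At the reported 200 ms per readback, 56 MB works out to roughly 280 MB/s, so shrinking the payload eightfold should help even before any asynchronous-transfer tricks.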

Kind regards,

11-28-2015, 10:37 AM
I hoped to use color-index mode to speed up the GPU even more, but from what I read on the internet it's almost never used anymore..

Your best bet is likely to be to create a framebuffer object whose colour attachment is a single-channel 8-bpp texture (GL_R8 or GL_R8UI). Once you've finished rendering to it, you can use it as a source texture while rendering with a GL_R32UI or GL_RGBA32UI texture as the colour buffer in order to reduce the depth from 8 bpp to (effectively) 1 bpp.
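The reduction pass itself would be a small fragment (or compute) shader; the layout here is an illustration, not something from this thread. Assuming output column ox of the GL_R32UI target covers source columns ox*32 .. ox*32+31 of the same row, each shader invocation computes the equivalent of this CPU model:

```c
#include <stdint.h>
#include <stddef.h>

/* CPU model of the 8 bpp -> 1 bpp reduction pass: the shader invocation
 * for output column ox samples 32 consecutive GL_R8 texels from the same
 * row and packs them into one 32-bit word (leftmost pixel in the MSB). */
uint32_t reduce_texel(const uint8_t *src_row, size_t src_width, size_t ox)
{
    uint32_t bits = 0;
    for (size_t b = 0; b < 32; ++b) {
        size_t x = ox * 32 + b;
        if (x < src_width && src_row[x] != 0)
            bits |= 1u << (31 - b);
    }
    return bits;
}
```

The packed target would then be 235 texels wide for a 7500-pixel row (7500/32 rounded up), and glReadPixels on that target transfers an eighth of the data.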

The main limitation is that an implementation isn't guaranteed to support a 7500x7500 texture, although current-generation hardware probably will.