rendering on the back buffer

I’m working on color picking in my project, and I have it working: each primitive gets its own color, and a mouse event correctly identifies one. The problem is that I can only seem to read from the front buffer, i.e. when the primitives are rendered on screen. I need this to happen ‘behind the scenes’. I thought I could use something like:

glDrawBuffer(GL_BACK);
// render
glReadBuffer(GL_BACK);
glReadPixels(…);

This doesn’t appear to work. Should it? Alternatives?

In advance, thanks for your time!

It should work.
Even when you render to the back buffer, you still have to make sure the region you read is not covered by another window and that your window is not minimized (the pixel ownership test).

Sounds like you might need to ensure that you’ve created a double-buffered context.
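
For instance, if you happen to be using GLUT, requesting a double-buffered context is just a matter of the display-mode flags (a minimal sketch; the window title is a placeholder):

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);  // GLUT_DOUBLE requests double buffering
glutCreateWindow("picking test");

Under WGL you'd set the PFD_DOUBLEBUFFER flag in the PIXELFORMATDESCRIPTOR instead.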

I’ve done this before, a long time ago, so I may not remember correctly, and hardware may behave differently nowadays. Off the top of my head…

Make sure you call glFinish so that everything is properly rendered and ready before you read it back. SwapBuffers supposedly does this (IIRC it causes the well-known pipeline stall), and glReadPixels seems to call glFinish under the hood with most drivers, but you never know (glFinish appears to be there for a reason).
By the way, I don’t think v-sync has any implications in this case, since we’re not calling SwapBuffers.
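
In other words, the conservative order would look something like this (just a sketch; renderPickingPass is a hypothetical helper that draws each primitive in its ID color, and x/y are window coordinates):

GLubyte pixel[3];

glDrawBuffer(GL_BACK);
renderPickingPass();   // hypothetical: draw each primitive in its ID color
glFinish();            // conservatively wait until rasterization is complete
glReadBuffer(GL_BACK);
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);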

Ensure a double buffered context as mentioned by others.

Pixel ownership is a big issue. If the window is partially obscured by another overlapping window (or by dropdown menus from the application), the pixels under that region may be black or, more typically, garbage, and not what you intended to render there. For picking/selection this is usually not an issue, but it might be for area selection, screen grabs, radiosity lightmap rendering, texture baking and other fancy stuff.
I should note that on nVidia hardware with the latest drivers, pixels are actually rasterized properly (the spec allows this), but don’t rely on this behavior. In combination with FSAA and glViewport calls, you may still get garbage (or, on nVidia, pixels that fail the pixel ownership test take the gray window background color).

Use a PBO or FBO if you want to be sure you get pixels on offscreen surfaces. I’ve found PBOs easier for non-realtime-critical image feedback stuff, since they are easier to set up than FBOs, but if you want total control and maximum hardware acceleration, go for the FBO. PBO support may also have diminished in quality in modern drivers due to neglect, as FBOs are supposed to replace them.
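
A minimal FBO sketch for an offscreen picking pass (GL 3.0-style entry points; the EXT-suffixed equivalents work the same way, and width/height/renderPickingPass are assumptions):

GLuint fbo, colorRb, depthRb;
GLubyte pixel[3];

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// color attachment for the ID colors
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

// depth attachment so occlusion still works during picking
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, width, height);
    renderPickingPass();  // hypothetical: draw ID colors into the FBO
    glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the window's framebuffer

A nice bonus: pixels in an FBO always pass the ownership test, so the obscured-window problem goes away entirely.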

Ensure that the drivers don’t force full-screen anti-aliasing, or transparency anti-aliasing (alpha-to-coverage) if you’re using alpha testing. Either will obviously screw up your colors, and thus your indices!

Also, the OpenGL spec still doesn’t (AFAIK) guarantee pixel accuracy, for either color or raster position.

As for raster position accuracy: for single-click picking, read back a 3x3 pixel block to ensure you always catch 1x1 pixels (e.g. for vertex selection). While on the subject, use glPointSize/glLineWidth to make vertices/lines easier to catch. Bonus points if you push occluding geometry back with glPolygonOffset (so that lines and vertices don’t z-fight with polygons) to make vertex/edge selection easier for the user.
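
A sketch of that 3x3 readback (mouseX/mouseY/windowHeight are assumptions; colorToIndex is a hypothetical decoder, like the one sketched further down, that returns -1 for a non-ID color):

// glReadPixels uses a bottom-left origin, so flip the mouse Y first
int rx = mouseX - 1;
int ry = (windowHeight - 1 - mouseY) - 1;

glPixelStorei(GL_PACK_ALIGNMENT, 1);  // rows of 9 bytes aren't 4-byte aligned
GLubyte block[3 * 3 * 3];             // 3x3 pixels, RGB
glReadPixels(rx, ry, 3, 3, GL_RGB, GL_UNSIGNED_BYTE, block);

// check the center pixel first, then the neighbors
static const int order[9] = { 4, 1, 3, 5, 7, 0, 2, 6, 8 };
int picked = -1;
for (int i = 0; i < 9 && picked < 0; ++i) {
    const GLubyte *p = block + order[i] * 3;
    picked = colorToIndex(p[0], p[1], p[2]);
}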

When you do color picking, allow some margin: e.g. treat RGB 25/25/25 through RGB 27/27/27 as the same index, rather than requiring exactly RGB 26/26/26. Color picking will likely fail if the user has the desktop bit depth set to 16-bit. With some effort you can adapt the color margin to the bit depth; this will greatly reduce the range of indices you can use, though multi-pass rendering can be a solution (yikes!). If you only have a handful of objects/triangles/vertices, this should not be an issue though.

Watch out for rounding errors when doing the RGB <-> index conversion. Assume the colors will be slightly off (margin!), and always do a bounds check on the read-back index to catch index-out-of-bounds errors.
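
For instance, a sketch of a conversion with a built-in margin (the step of 8 assumes at least 5 usable bits per channel, which also survives 16-bit desktop depths; MAX_PICK_INDEX is a hypothetical bound on your index range):

#define PICK_STEP      8     // index stride per channel; assumes >= 5 usable bits/channel
#define MAX_PICK_INDEX 1000  // hypothetical upper bound on valid indices

// pack a 15-bit index into an RGB color, 5 bits per channel
void indexToColor(int index, GLubyte rgb[3])
{
    rgb[0] = (GLubyte)(((index >> 10) & 0x1F) * PICK_STEP);
    rgb[1] = (GLubyte)(((index >>  5) & 0x1F) * PICK_STEP);
    rgb[2] = (GLubyte)(( index        & 0x1F) * PICK_STEP);
}

// decode with rounding, so colors that are slightly off still map back
int colorToIndex(GLubyte r, GLubyte g, GLubyte b)
{
    int ri = (r + PICK_STEP / 2) / PICK_STEP;
    int gi = (g + PICK_STEP / 2) / PICK_STEP;
    int bi = (b + PICK_STEP / 2) / PICK_STEP;
    if (ri > 0x1F || gi > 0x1F || bi > 0x1F)
        return -1;  // not one of our ID colors
    int index = (ri << 10) | (gi << 5) | bi;
    return (index <= MAX_PICK_INDEX) ? index : -1;  // the bounds check
}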

If you’re smart, you will do a conformance test when the application starts, and notify the user if the bit depth or pixel accuracy fails acceptable standards. Ideally, display this along with the GL_VENDOR string so the user can report to the manufacturer that they need to fix their crappy drivers (cough Intel).
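
Something along these lines (a sketch; drawFullscreenQuad is a hypothetical helper that fills the viewport with a constant unlit color, and the margin of 4 matches the picking margin above):

// returns nonzero if the pipeline reproduces our ID colors accurately enough
int passesPickingConformanceTest(void)
{
    const GLubyte expected[3] = { 8, 16, 24 };  // a color our encoding can produce
    GLubyte got[3];

    glDisable(GL_DITHER);  // dithering would perturb the exact colors
    glDrawBuffer(GL_BACK);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    drawFullscreenQuad(expected);  // hypothetical: unlit, untextured quad
    glFinish();

    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, got);

    return abs(got[0] - expected[0]) < 4 &&  // abs() from <stdlib.h>
           abs(got[1] - expected[1]) < 4 &&
           abs(got[2] - expected[2]) < 4;
}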

Ultimately, picking/selection is best done with pure software rendering/plain math calculations, but if you take care of the things I mentioned it should be fairly reliable.

Hope that helps.

Make sure you call glFinish so that everything is properly rendered and ready before you read it back.

Never seen this, are you sure?

You never need glFlush or glFinish unless you are dealing with a single-buffered window. On Vista/7 it’s required for single-buffered windows, otherwise you’ll never draw anything.

The description in the manual seems to imply that whenever you use feedback functions (glGet, glReadPixels, etc.) you need to call glFinish:

glFinish does not return until the effects of all previously called GL commands are complete. Such effects include all changes to GL state, all changes to connection state, and all changes to the frame buffer contents.

That said, no, I can’t say that I’ve ever seen strange things. As the poster above mentioned, it seems only absolutely necessary for single-buffered contexts.

I kind of assumed that driver writers invoked glFinish in the background on feedback function calls to simplify implementation. I can imagine that a user reading from the framebuffer (while it’s still being rendered to) might cause trouble, especially with internal framebuffer compression and other optimizations.

Anyway, AFAIK there are no side effects to glFinish calls, I’ve always stuck it in my code without much thought.

The only function that requires an implied glFinish is glReadPixels; it must wait for the GPU to finish. glGet doesn’t need the GPU.

A single-buffered context needs a glFlush. See the Red Book. Why would anyone want a single-buffered context anyway? I know that in the past people did some crazy things, like rendering with GDI.

glFinish is important in benchmarking. All other apps and games should not use this performance killer.

glFinish breaks CPU/GPU parallelism, so I agree that it should normally never be used unless you’ve determined for certain that you do have a need for it. glReadPixels should automatically block until all previous commands have completed, but you do need to be certain that you’re drawing to the same buffer as you’re reading from.

All of this assumes a fully conformant driver, of course.

I see, makes sense.

glGet doesn’t need the gpu.

I assume that’s because most state is cached by the driver to avoid the round-trip time?
