glCopyPixels() problem on some Intel cards

Hello all,

My application uses glCopyPixels() to copy the contents of the back buffer to the front buffer (leaving the back buffer contents intact).
It works fine, except on some Intel cards.

Any idea of what I may be doing wrong?
Has anyone experienced a similar problem?

Thanks in advance.
Hugo.
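For reference, the copy described above can be sketched roughly as follows. This is a minimal sketch, not a drop-in implementation; it assumes a double-buffered context is current and that `width`/`height` are the drawable size (both names are placeholders, not from the original post):

```c
#include <GL/gl.h>

/* Sketch: copy the intact back buffer to the front buffer. */
void copy_back_to_front(int width, int height)
{
    glReadBuffer(GL_BACK);      /* source: the back buffer          */
    glDrawBuffer(GL_FRONT);     /* destination: the front buffer    */

    /* Anchor the copy at the window's lower-left corner.  With the
     * default (identity) matrices, vertex (-1,-1) maps to NDC (-1,-1),
     * i.e. window coordinate (0,0). */
    glRasterPos2i(-1, -1);

    glCopyPixels(0, 0, width, height, GL_COLOR);
    glFlush();                  /* front-buffer drawing needs a flush */
    glDrawBuffer(GL_BACK);      /* restore the usual draw buffer      */
}
```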

Intel + OpenGL = tons of problems.

You didn't explain what doesn't work. What result do you get, and what did you expect?

Sorry.

I expect to see the rendered image but what I get is a blank viewport.

As I said, the problem only occurs on some Intel cards.

Hugo.

Hmmm… trying this is inviting bugs and performance hits.

I’ve seen flags or attributes that guarantee back-buffer behavior after a swap. The default is discard; there are also copy and flip. You want copy, but that’s from D3D.

Depending on the application, you could render to a texture (RTT) or copy to a texture and use that to restore the back buffer. It’s not free, but thinking your back-buffer-to-front-buffer copy is free is probably a mistake too.

It won’t be optimal and is probably inviting trouble unless it’s done through a mechanism like SwapEffect.
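The copy-to-texture approach mentioned above could be sketched like this. It is only a sketch under assumptions not stated in the post: a GL 1.1+ context is current, `tex` is a texture already allocated at window size with glTexImage2D, the matrices are identity, and `width`/`height` are the window size:

```c
#include <GL/gl.h>

/* After rendering the frame, snapshot the back buffer into a texture. */
void snapshot_backbuffer(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glReadBuffer(GL_BACK);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
}

/* After SwapBuffers, restore the back buffer by redrawing the texture
 * as a fullscreen quad (fixed-function pipeline, identity matrices). */
void restore_backbuffer(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```

This sidesteps any assumption about what a swap does to the back buffer, since the texture copy is under the application's control.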

I can take the performance hit - it is hardly noticeable anyway. But why do you say it is an invitation for bugs?

I’ve seen flags or attributes that guarantee back-buffer behavior after a swap. The default is discard; there are also copy and flip. You want copy, but that’s from D3D.

If I understand correctly, in OpenGL those flags are only hints and may be ignored - needless to say, that makes them pretty much useless if you distribute your application.
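For what it's worth, on Windows the closest analog is the WGL_SWAP_METHOD_ARB pixel-format attribute. The fragment below is illustrative only (it assumes `wglext.h` and the WGL_ARB_pixel_format extension), and, as said above, the attribute describes a preference that drivers are free to ignore:

```c
/* Partial attribute list for wglChoosePixelFormatARB(); not complete. */
int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_SWAP_METHOD_ARB,    WGL_SWAP_COPY_ARB, /* request copy-on-swap;
                                                  drivers may ignore it */
    0
};
```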

Depending on the application, you could render to a texture (RTT) or copy to a texture and use that to restore the back buffer. It’s not free, but thinking your back-buffer-to-front-buffer copy is free is probably a mistake too.

It won’t be optimal and is probably inviting trouble unless it’s done through a mechanism like SwapEffect.

Yes, I could. But why is that better than copying the back buffer to the front buffer using glCopyPixels? (Sure, it may be faster, but performance is not the issue.)

Hugo.

Having debugged more OpenGL apps than I care to recall: just don’t bother. Redraw the damn scene; it’ll probably be faster than what you are doing.

I can take the performance hit - it is hardly noticeable anyway. But why do you say it is an invitation for bugs?


Some stuff is well-used, well-tested, and important to implementors. Other stuff isn’t, and may work for you, but there will be some crappy card out there somewhere that will screw it up. You learn to avoid these problem areas and, to some degree, to code to the lowest common denominator.

Persistent back buffers smell to me like one of those areas you don’t want to rely on.

I suggest you do this.

Implement it your way (or a similar simple test), then read back the front buffer after the second swap and check the results.

If it matches your expectations within the bounds of a diff, do it the fast way. Otherwise, fall back to a glReadPixels/SwapBuffers/glDrawPixels on the back buffer and let that hardware take the hit. If they don’t fix their drivers, your app will be a little less optimal on their card.
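The "within the bounds of a diff" check could look like the helper below. It is a sketch; the per-channel tolerance and the RGBA layout are assumptions, and the buffer names are hypothetical:

```c
#include <stdlib.h>

/* Returns 1 if every byte of the two RGBA buffers differs by at most
 * `tolerance`, 0 otherwise.  `pixels` is the number of pixels. */
int buffers_match(const unsigned char *a, const unsigned char *b,
                  size_t pixels, int tolerance)
{
    for (size_t i = 0; i < pixels * 4; i++) {
        int diff = (int)a[i] - (int)b[i];
        if (diff < 0) diff = -diff;
        if (diff > tolerance) return 0;
    }
    return 1;
}
```

A small tolerance (rather than exact equality) allows for drivers that dither or round differently between the two paths while still catching a blank or corrupted buffer.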

Thank you all for your answers.

I think I’ll give up and just redraw the whole scene whenever I need.

Thanks again,
Hugo.