Accessing the rendering backbuffer

Is there any way to access the backbuffer directly such that, if you have a simple imaging pipeline with a mixture of GL 3D filters & non-GL image filters, you could operate on that buffer as you would a standard memory buffer?

I am currently implementing a GL filter using p-buffers to take advantage of HW acceleration and want to minimize or avoid copying to/from system memory.

I know this can be done with DirectX (DirectDraw) using surfaces that are created in video memory.
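Roughly the kind of direct access I mean (a sketch only; 'surface' is assumed to be an IDirectDrawSurface7 created in video memory, and error handling is omitted):

[code]
/* Sketch of DirectDraw-style direct surface access; requires <ddraw.h>.
   'surface' is assumed to be an IDirectDrawSurface7 in video memory. */
DDSURFACEDESC2 ddsd;
ZeroMemory(&ddsd, sizeof(ddsd));
ddsd.dwSize = sizeof(ddsd);

if (SUCCEEDED(surface->Lock(NULL, &ddsd, DDLOCK_WAIT, NULL))) {
    BYTE *pixels = (BYTE *)ddsd.lpSurface;  /* direct pointer to the bits */
    LONG  pitch  = ddsd.lPitch;             /* bytes per scanline */
    /* ... run a CPU image filter straight over 'pixels' ... */
    surface->Unlock(NULL);
}
[/code]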

I believe the answer is no. I know you can use glReadPixels(), glDrawPixels(), or glCopyPixels() to access the buffer. I was generally looking to see if there was some extension to support something comparable to DirectDraw surfaces, which can be allocated in video memory and do allow access to the actual bytes.
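The copy-based path I already know about looks something like this (a sketch; 'width' and 'height' are the drawable size, and an 8-bit RGBA framebuffer is assumed):

[code]
/* Read the back buffer into system memory, filter on the CPU, then
   write the result back. 'width'/'height' are placeholders. */
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);

glReadBuffer(GL_BACK);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* ... apply the non-GL image filters to 'pixels' ... */

glDrawBuffer(GL_BACK);
glRasterPos2i(-1, -1);  /* bottom-left corner, assuming default matrices */
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
free(pixels);
[/code]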

This could be answered in the beginner forum, but:
no, there isn't any equivalent to the DirectDraw features when you are using OpenGL.

The only options are using pbuffers, as you are starting to do, or, much simpler, using glReadPixels().

Anyhow, if you need to apply your modifications back to the backbuffer, you will certainly notice a significant performance hit.

Just to let you know… mixing 2D and 3D stages is not recommended at all (even with DX).
That simply explains why OpenGL has not been designed for these purposes… the same goes for some of the restrictions on textures, lights and so on… Constraints lead the code design most of the time, but it's all about performance issues.

Example: why are there only 8 lights in GL and not 255?
Because you don't need that many!

Hope this helps.

PS: why don't you read the OpenGL Programming Guide before starting to code with GL? It's a goldmine of information with very good samples.

Originally posted by wsalivar:
[b]Is there any way to access the backbuffer directly such that, if you have a simple imaging pipeline with a mixture of GL 3D filters & non-GL image filters, you could operate on that buffer as you would a standard memory buffer?

I am currently implementing a GL filter using p-buffers to take advantage of HW acceleration and want to minimize or avoid copying to/from system memory.

I know this can be done with DirectX (DirectDraw) using surfaces that are created in video memory.[/b]

Well, it really depends on what type of filters you wish to apply. I've seen plenty of image filters applied with fragment shaders and rendered with full hardware acceleration. You can do this by rendering your scene to an off-screen buffer (as you are already doing), then drawing a screen-aligned quad using the pbuffer as a texture and doing the filtering in a fragment shader.
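In rough code, that pass might look like this (a sketch assuming WGL_ARB_render_texture and ARB_fragment_program; 'pbufferTex', 'hPBuffer' and 'filterProgram' are placeholders):

[code]
/* Bind the pbuffer's contents as a texture, then draw one
   screen-aligned quad with the filter fragment program enabled. */
glBindTexture(GL_TEXTURE_2D, pbufferTex);
wglBindTexImageARB(hPBuffer, WGL_FRONT_LEFT_ARB);

glEnable(GL_TEXTURE_2D);
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, filterProgram);

glBegin(GL_QUADS);  /* full-screen quad, identity modelview/projection */
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();

glDisable(GL_FRAGMENT_PROGRAM_ARB);
wglReleaseTexImageARB(hPBuffer, WGL_FRONT_LEFT_ARB);
[/code]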

There are a number of presentations and demos at the ATI developer site, http://www.ati.com/developer/ , that involve 2D image filtering using fragment (pixel) shaders.


Ozzy,

I doubt this could be fully answered in the beginner forum, since I am drawing on the experience of more advanced users to see if there is any extended functionality that would help. Also, based on my own reading of OpenGL's capabilities, my own answer was no.

I have found the Programming Guide, Reference Guide and OpenGL SuperBible to be excellent references for OpenGL, but they typically don't go into any detail beyond the basics of OpenGL (e.g. they say little about using p-buffers).

Although not an expert, I have been developing with OpenGL and DirectX long enough to understand the drawbacks of mixing 3D and non-3D operations, and why T&L features in OpenGL must be used in moderation (only 8 lights are guaranteed). DirectX is more flexible in allowing access to video-memory surfaces for manipulation without significant performance degradation (depending on what you do).

Your response was not terribly helpful or constructive. The whole purpose of this forum is to help, not be derogatory. I much prefer chrisATI’s post.

ChrisATI,

What I am trying to accomplish is to insert my OpenGL code into our SDK, which is essentially a non-accelerated imaging pipeline made up of one or more filters. It is entirely based on memory buffers. My OpenGL code replaces the current rendering and filtering pieces of this pipeline; hence the use of p-buffers for offscreen rendering.

We are currently trying to determine whether we need to re-implement the pipeline to be completely hardware-based (OpenGL), or whether there is some mechanism by which we could get a reference to the back buffer and use it throughout the pipeline as a destination. From what I have been able to tell, the back buffer is not available for direct access in the manner we require. P-buffers seem to be the only way, but that would either require the OpenGL filter to be last in the pipeline (pre-render) or require a copy to system memory, which makes p-buffers largely irrelevant.
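For completeness, that copy would look roughly like this (a sketch; 'pbufferDC', 'pbufferRC', 'pbufWidth', 'pbufHeight' and 'dstBuffer' are placeholders):

[code]
/* If the GL filter is not last in the pipeline: make the pbuffer's
   context current and read its contents back into the pipeline's
   system-memory buffer. All names here are placeholders. */
wglMakeCurrent(pbufferDC, pbufferRC);
glReadBuffer(GL_FRONT);  /* assuming a single-buffered pbuffer */
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, pbufWidth, pbufHeight,
             GL_RGBA, GL_UNSIGNED_BYTE, dstBuffer);
[/code]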

From your previous post, it appears that converting the pipeline to be 100% OpenGL based may be the only answer. Thanks for your input.

The contemporary idea of GL-accelerated 2D filters on PC hardware is to load a texture with your image (render to texture, for example) and use multitexture to tap the fragments with multiple registers.

This means that instead of going through the GL image-processing pipeline in the classic sense, you construct your filter kernel using a fragment program and texture-coordinate perturbations on each texture unit while tapping the same source image.
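A minimal sketch of that idea, written in GLSL for readability and held in a C string (an era-appropriate version would be ARB_fragment_program assembly; 'src' and 'texel' are placeholders):

[code]
/* The multi-tap kernel idea: every tap reads the SAME source image
   at a perturbed texture coordinate, and the kernel weights live in
   the shader. Names are illustrative only. */
const char *laplacianFS =
    "uniform sampler2D src;\n"
    "uniform vec2 texel;   /* 1.0 / image size */\n"
    "void main() {\n"
    "    vec2 uv = gl_TexCoord[0].st;\n"
    "    /* 3x3 Laplacian: centre tap minus the four neighbours */\n"
    "    gl_FragColor = 4.0 * texture2D(src, uv)\n"
    "        - texture2D(src, uv + vec2( texel.x, 0.0))\n"
    "        - texture2D(src, uv + vec2(-texel.x, 0.0))\n"
    "        - texture2D(src, uv + vec2(0.0,  texel.y))\n"
    "        - texture2D(src, uv + vec2(0.0, -texel.y));\n"
    "}\n";
[/code]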

It’s not really a new idea (I think Haeberli and Akeley published this, albeit multipass), but there’s much more you can do with it now that you have multitexture and subtraction and more in your fragment programs.