
View Full Version : GPU to GPU (copying data in vram using PBO's)



MeneerDePeer
01-28-2008, 11:58 AM
Like I mentioned in my other topic, I'd like to copy data from a depth (render)buffer in an FBO to the main depth buffer. Since there seems to be no other way for me to accomplish that goal, I'm currently trying to use PBOs to speed up the whole glReadPixels/glDrawPixels round trip.

I initialize my FBO as usual, it's all fine. I generate a new PBO, like this:


GLuint pbo;

glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); // pack target, since we read into it first
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * sizeof(GLfloat), NULL, GL_DYNAMIC_COPY); // sizeof(GLfloat) - sizeof(GL_FLOAT) would take the size of an enum constant
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);


Then, at runtime, I copy the depth buffer:


// Bind FBO here
// ...

// Note: glReadBuffer() selects a *color* source; a GL_DEPTH_COMPONENT
// read always comes from the bound FBO's depth attachment, so this call
// isn't what picks the depth source.
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, 0); // 0 = byte offset into the PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);


Then, I draw the buffered data using:


// Switching to main framebuffer
// ...

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
// The current raster position and depth mask determine where/whether this lands.
glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, 0); // 0 = byte offset into the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);


Could someone tell me if this is the correct way to copy data around (in vram)?

It seems to work once, but when I run the test program more than once, everything goes bad: odd crashes from random libraries, not being able to restart the program (immediate crash inside the OpenGL library), cat gets killed, etc.

This whole "copy-fbo-depth-to-main-depth"-issue is starting to drive me mad. Am I writing bad code? Is the ATI driver bad? Are aliens to blame?

V-man
01-28-2008, 11:24 PM
I've never tried that myself, but more importantly, writing to the depth buffer isn't such a great idea, since it disables hierarchical depth optimizations.

XChanger
01-28-2008, 11:30 PM
As far as I know, the ATI driver does not support pixel buffers in hardware; that's more of an Nvidia strong point :)

XChanger

Zengar
01-29-2008, 12:55 AM
I think you should have a look at this: http://www.opengl.org/registry/specs/EXT/framebuffer_blit.txt

Does ATI at last support this extension, btw?
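In case that extension does work on a given driver, the blit itself is only a few lines. A sketch, assuming the GL_EXT_framebuffer_blit entry points are actually usable; `fbo`, `width` and `height` are placeholder names:

```c
/* Copy the FBO's depth buffer into the window-system framebuffer.
 * Depth blits must use GL_NEAREST filtering per the extension spec. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo); /* source: the FBO            */
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);   /* destination: main buffer   */
glBlitFramebufferEXT(0, 0, width, height,           /* source rectangle           */
                     0, 0, width, height,           /* destination rectangle      */
                     GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);        /* back to the single binding */
```

This stays entirely on the GPU, so it avoids the glReadPixels/glDrawPixels path altogether.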

MeneerDePeer
01-29-2008, 01:41 AM
I've never tried that myself, but more importantly, writing to the depth buffer isn't such a great idea, since it disables hierarchical depth optimizations.

You're right :) I think it's not a big issue though. I'm only trying to draw to the depth buffer just before doing the actual *real* rendering.

It's for updating static/unmodified image data in a 3D view (as happens in many DCC apps). For that, I need to transfer depth buffer information from the FBO to the main depth buffer, so that everything rendered on top of the static image data is still clipped correctly (i.e. when there's an object in the foreground, a newly rendered object should not appear in front of it).

A small (not so useful) example:

This is what I currently get by copying the static image data (the gray teapots in the background are rendered only once, not on each frame update) from the FBO to the main (color) buffer, using a screen-aligned quad. There's no depth buffer information, so it looks like the object is in front of or on top of the others:
http://xs223.xs.to/xs223/08052/t1421.png

This is the effect I'm looking for. The yellow teapot is correctly rendered, at the same position as in the image above, between the other teapots:
http://xs223.xs.to/xs223/08052/t2876.png
(sample image taken by brute force rendering all the teapots, which is painfully slow, hence the need for buffered data)

Normally it should also be possible to use the buffer region extensions, but those are not available on my hardware/driver, not even the Kinetix one.

If anyone knows another (quick) way to do this, I'm open and thankful to suggestions :)

In the other topic it was also proposed to use fragment programs and copy the data using gl_FragDepth, but that does not seem to work for the main depth buffer.


As far as I know, the ATI driver does not support pixel buffers in hardware; that's more of an Nvidia strong point :)

You mean the actual PBO's (Pixel Buffer Objects)? Or PBuffers (Pixel Buffers)? :)


I think you should have a look at this: http://www.opengl.org/registry/specs/EXT/framebuffer_blit.txt

Does ATI at last support this extension, btw?

Yeah, that extension would be awesome to have :) it has been proposed in my other topic as well, but I can't use it for some reason.

The entry points seem to be available in Catalyst 7.11, but the extension isn't listed in the extension string, and calling the functions just crashes.

bobvodka
01-29-2008, 03:13 AM
Granted, I haven't seen the rest of your topics, but I'm left wondering why you can't just keep rendering to FBOs and make your 'final' pass simply render a screen-aligned quad with the final image texture attached to it?

That way you at least only require one common depth buffer which you can reuse as required between passes.

MeneerDePeer
01-29-2008, 03:55 AM
I've thought about that too, but I'm a little afraid of the memory requirements. It's yet another screen-sized buffer, and I'm going to need a lot of vram for (other) textures as well, so I'd rather save the memory.

Also, a common depth buffer would mean I cannot draw convex objects on top of the static data, because I would not have the option to write new depth information to this common depth buffer (the background data has to stay available every frame, without re-rendering).

If that could be solved somehow, that seems to be a nice method indeed :)

-NiCo-
01-29-2008, 04:00 AM
You can always create two framebuffer objects, each with a color and depth texture. Write your static objects to the first FBO. Then, on each dynamic object update, use glCopyPixels to copy the color/depth data from the first FBO to the second (this is basically what the blit extension does) and draw your dynamic object. Finally, draw the second FBO's color texture as a fullscreen quad in your main window.
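A sketch of that copy step, assuming the separate read/draw bindings from EXT_framebuffer_blit are present (plain EXT_framebuffer_object has only one binding point, so glCopyPixels can't cross two FBOs without them); `fboStatic` and `fboWork` are placeholder names:

```c
/* Refresh the working FBO from the static one; the dynamic object can
 * then be drawn on top with normal depth testing. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fboStatic);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboWork);
glWindowPos2i(0, 0);                          /* write starting at lower-left */
glDepthMask(GL_TRUE);                         /* depth copy needs writes on   */
glCopyPixels(0, 0, width, height, GL_COLOR);  /* static color                 */
glCopyPixels(0, 0, width, height, GL_DEPTH);  /* static depth                 */
/* ...render the dynamic object into fboWork here... */
```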

Like you already pointed out, this will take up twice the amount of vram.

N.

MeneerDePeer
01-29-2008, 08:26 AM
Well, I guess I can finally close this chapter. I don't need the double FBOs after all :) it appears I've just figured out how to make the fragment programs work correctly.

Contrary to what I was thinking a few days ago, depth testing has to be *enabled* (along with depth writing, of course) for a fragment program's gl_FragDepth output to reach the depth buffer. Seems a bit odd to me, but hey, it works!!

Unfortunately, writing depth fragments with a fragment program isn't really much faster than using glReadPixels/glDrawPixels, but it's meant for partial updates anyway, so basically I'm happy with what I've got now :)
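For anyone who lands here later, the state setup that made it work looks roughly like this (a sketch; `prog` is assumed to be a shader program that does `gl_FragDepth = texture2D(depthTex, uv).r;` over a screen-aligned quad):

```c
/* The depth buffer is only updated while the depth test is enabled,
 * hence the surprise above: enable the test but let everything pass. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);                               /* pass every fragment */
glDepthMask(GL_TRUE);                                 /* allow depth writes  */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* touch depth only    */
glUseProgram(prog);
/* ...draw the screen-aligned quad sampling the saved depth texture... */
glUseProgram(0);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);                                 /* restore defaults    */
```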

Thanks for all the comments!