rendering, saving and recalling

Hi all.
I posted this in the beginners section, but maybe I’ll get more responses here.
I’ve been trying to get an answer to this for a while now.
I want to render a 3D scene, save it (somewhere in the card’s memory), and then recall it. In between I may want to render a different scene, and later recall that one too. I also want to manage the whole thing: I want to know how much memory I still have free, so I don’t overwrite previous scenes.
I thought about using textures, i.e. rendering a scene and then copying it to a texture (glCopyTexImage), and managing an array of such textures. This is not great, because my scenes are 3D (I need the depth component as well).
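Roughly, this is the kind of thing I mean (just a sketch; the 256x256 size is a placeholder, and it only grabs the color buffer, which is exactly the problem):

[code]
GLuint savedScene;
glGenTextures(1, &savedScene);
glBindTexture(GL_TEXTURE_2D, savedScene);

// ...render the scene as usual...

// copy the color buffer into the texture; 256x256 is just a placeholder size
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);

// later: bind savedScene and draw a screen-sized quad to "recall" the image,
// but the depth buffer is gone
[/code]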
This is very frustrating, because the task seems very useful (in games, for example): you have a complex scene, you want to do the expensive rendering only once, but display it many times.
And by the way, aux buffers and pbuffers didn’t look very attractive to me either.
Thanks for any help.

If you want to keep everything in graphics memory, using render-to-texture (that is, using pbuffers) is the only reasonable way to go.
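Something along these lines, as a rough sketch: it assumes you have already fetched wglChoosePixelFormatARB, wglCreatePbufferARB and wglGetPbufferDCARB via wglGetProcAddress, that the WGL_* tokens come from wglext.h, and that windowDC/windowRC are your existing window’s DC and rendering context (those names are made up here); no error checking.

[code]
// ask for a pbuffer-capable RGBA format that also has a depth buffer
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      24,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int  format;
UINT count;
wglChoosePixelFormatARB(windowDC, attribs, NULL, 1, &format, &count);

HPBUFFERARB pbuffer   = wglCreatePbufferARB(windowDC, format, 512, 512, NULL);
HDC         pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC       pbufferRC = wglCreateContext(pbufferDC);

wglMakeCurrent(pbufferDC, pbufferRC);
// ...render the expensive scene once; its color and depth stay on the card...
wglMakeCurrent(windowDC, windowRC);
[/code]

One pbuffer per saved scene; the contents stick around until you destroy it (barring display mode changes, which can invalidate pbuffers).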

OK, thanks for the response.
Is this what games or simulator applications do?
Is there anyone with a code example of using pbuffers with resource management (i.e. querying how much memory I have free and so on)?

Try WGL_ARB_buffer_region; you should be able to make copies of the color and depth buffers…
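Something like this, roughly (assumes wglCreateBufferRegionARB and friends were already fetched with wglGetProcAddress, and that dc/width/height refer to your window; those names are placeholders):

[code]
// one region can snapshot the back color buffer and the depth buffer together
HANDLE region = wglCreateBufferRegionARB(dc, 0,
    WGL_BACK_COLOR_BUFFER_BIT_ARB | WGL_DEPTH_BUFFER_BIT_ARB);

// after rendering the expensive scene once:
wglSaveBufferRegionARB(region, 0, 0, width, height);

// ...render something else, swap buffers, whatever...

// put the saved color and depth back into the framebuffer:
wglRestoreBufferRegionARB(region, 0, 0, width, height, 0, 0);

wglDeleteBufferRegionARB(region);
[/code]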

pbuffers are great for saving both depth and color information.

However, if your camera moves, a saved scene is no use. A saved image of something simpler, like a single tree, might be useful (that’s known as an impostor). Games that let the camera move around typically won’t save the scene bits, because that would be useless. Some games copy out the rendered scene for post-processing, though, which is related but different :)

Does anyone have a working example of pbuffers? I’m using windows, with vc6, I have opengl ver. 1.3.1 (that’s what glGetString (GL_VERSION) says), with nvidia geforce4, but I can’t compile wglGetExtensionsStringARB.

Thanks.

Originally posted by naf:
[b]Does anyone have a working example of pbuffers? I’m using windows, with vc6, I have opengl ver. 1.3.1 (that’s what glGetString (GL_VERSION) says), with nvidia geforce4, but I can’t compile wglGetExtensionsStringARB.

Thanks.[/b]

I assume you mean wglGetProcAddress() returns NULL when you pass in “wglGetExtensionsStringARB”? If so, you need to be sure that you have a current context when you call wglGetProcAddress() or it won’t call into the OpenGL driver.

Generally what this means is that you create a dummy window (doesn’t have to be displayed), choose a hardware pixel format, create a dummy context, and call wglMakeCurrent() first. Then you can destroy the dummy context/window when you’ve gotten the information you need.
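A bare-bones sketch of that bootstrap (uses the built-in STATIC window class so there’s nothing to register; error checking omitted):

[code]
#include <windows.h>
#include <GL/gl.h>

typedef const char * (WINAPI *PFNWGLGETEXTENSIONSSTRINGARBPROC)(HDC);

void QueryWGLExtensions()
{
    // invisible dummy window; never shown, only needed for a DC
    HWND wnd = CreateWindowA("STATIC", "dummy", WS_POPUP, 0, 0, 1, 1,
                             NULL, NULL, GetModuleHandle(NULL), NULL);
    HDC dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 24;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);   // now wglGetProcAddress can reach the driver

    PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB =
        (PFNWGLGETEXTENSIONSSTRINGARBPROC)wglGetProcAddress("wglGetExtensionsStringARB");
    if (wglGetExtensionsStringARB)
    {
        const char *ext = wglGetExtensionsStringARB(dc);
        // copy the string and grab any other entry points you need before teardown
    }

    // destroy the dummy context/window again
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(wnd, dc);
    DestroyWindow(wnd);
}
[/code]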

It is confusing that you need to create a window/pixel format so you can query function pointers that help you work with pbuffers and pixel formats.

Yes, I know it’s stupid, but that’s the way it works on Windows.

Originally posted by naf:
[b]Does anyone have a working example of pbuffers? I’m using windows, with vc6, I have opengl ver. 1.3.1 (that’s what glGetString (GL_VERSION) says), with nvidia geforce4, but I can’t compile wglGetExtensionsStringARB.

Thanks.[/b]

nvidia has a demo with a spinning teapot, rendered into a pbuffer, copied to a texture, and mapped onto a spinning quad.

It’s called pbuffer_to_texture_rectangle

V-man

regarding the wgl thingy: as stated above, since it’s an extension (not in headers/libs), you have to get the dll entry point manually, using wglGetProcAddress().

it’s not necessary to waste time & effort with dummy windows - you just have to make sure that (warning, this is a bit technical) the opengl dlls are loaded before calling any win32 api functions that depend on the opengl subsystem (win32 won’t load opengl32.dll unless the linker sees a reason to pull it in).

in practical terms, if you’re linking statically (ie #include &lt;gl/gl.h&gt; and link with opengl32.lib), you must force the linker to keep the opengl32.dll import by making a call into opengl proper, like glClear() or whatever, even if it’s in a function that’s never called. as far as i know, the simplest way to do this is wglMakeCurrent(NULL, NULL) (unbind any context for the current thread); i’m pretty sure that special form is guaranteed to be harmless.
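in code, the trick amounts to something like this (just a sketch; the function name is made up and never has to be called):

[code]
#include <windows.h>
#include <GL/gl.h>   // and link with opengl32.lib

// its only job is to reference opengl32.dll so the import isn't dropped
// and the dll is mapped in before any wgl* querying starts
void force_opengl32_load()
{
    wglMakeCurrent(NULL, NULL);   // unbinds any context for the current thread
}
[/code]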

if you’re loading opengl32.dll dynamically (quake2/3 style), make sure the dll init is done prior to making any wgl* calls…
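for that path, something along these lines (sketch only, names made up):

[code]
#include <windows.h>

typedef PROC (WINAPI *PFNWGLGETPROCADDRESS)(LPCSTR);

HMODULE              gl_module          = NULL;
PFNWGLGETPROCADDRESS qwglGetProcAddress = NULL;

bool gl_dll_init()
{
    gl_module = LoadLibraryA("opengl32.dll");   // dll init happens here...
    if (!gl_module)
        return false;

    qwglGetProcAddress =
        (PFNWGLGETPROCADDRESS)GetProcAddress(gl_module, "wglGetProcAddress");
    return qwglGetProcAddress != NULL;          // ...before any wgl* call is made
}
[/code]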