render to texture

What’s the best way to render to texture? A pbuffer, or does glCopyTexSubImage2D do the same thing?

I’m guessing pbuffers are used so that a scene can be changed, and glCopyTexSubImage2D() is for copying straight from the framebuffer?

[This message has been edited by robert (edited 01-10-2003).]

If I understand it correctly, the render-to-texture extension literally allows you to render into a texture image (the pbuffer). Alternatively you can render to the backbuffer and then pixel copy into a texture. I assume the extension provides a performance increase.
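For reference, the copy path described above might look roughly like this (a minimal sketch; the texture object tex is assumed to exist elsewhere, drawScene() is a hypothetical helper, and a 256x256 region is used for illustration):

```c
/* Sketch of the copy-to-texture path. Assumes a current GL context,
 * a 256x256 RGB texture object 'tex' created elsewhere, and a
 * hypothetical drawScene() helper. */
drawScene();                           /* render into the back buffer */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  /* target, mip level           */
                    0, 0,              /* x/y offset within texture   */
                    0, 0, 256, 256);   /* framebuffer region to copy  */
```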

I assume the extension provides a performance increase

Well, at the moment, nVidia drivers don’t actually provide a performance increase with the render-to-texture extension. This is for an unspecified reason that should, hopefully in the near future, be corrected.

Actually the performance of render-to-2D-texture with WGL_ARB_render_texture is quite ok in newer NVIDIA drivers (gives still worse performance than DX 8 though). But rendering to a cube map is still horrible.
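For anyone who hasn’t used the extension, the usage pattern looks roughly like this (a sketch only; the pbuffer, its DC/RC pair, the texture object and the extension function pointers are assumed to be set up already, and the draw*() helpers are hypothetical):

```c
/* Sketch of the WGL_ARB_render_texture pattern. Note the context
 * switch, which is one of the annoyances with the pbuffer approach. */
wglMakeCurrent(hPbufferDC, hPbufferRC);   /* switch to the pbuffer    */
drawReflection();                         /* render the texture image */
wglMakeCurrent(hWindowDC, hWindowRC);     /* back to the main context */

glBindTexture(GL_TEXTURE_2D, tex);
wglBindTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);  /* pbuffer as texture */
drawSceneUsingTexture();
wglReleaseTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);
```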

Btw, another important occasion for using pbuffers is when you’re rendering to a window (and not in full screen mode), or when the size of the texture you want to render to is bigger than the framebuffer. In these cases, copying from the framebuffer won’t work.

There’s also discussion going on about a non-pbuffer based render-to-texture. The new-style API would make rendering to texture a lot simpler.

That would be really great. Render-to-texture is pretty much the only thing where the Direct3D interface is better.

If you’re fixing that, how about also allowing attaching various depth surfaces to various render targets, and perhaps multiple render targets, too? DX9, which is now released, does both of those pretty well.

Originally posted by Asgard:
Btw, another important occasion for using pbuffers is when you’re rendering to a window (and not in full screen mode), or when the size of the texture you want to render to is bigger than the framebuffer. In these cases, copying from the framebuffer won’t work.

Also, with a pbuffer you can control FSAA separately from your main window. You don’t necessarily want 4x FSAA enabled while you’re rendering to a texture.
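Selecting a non-multisampled format for the pbuffer might look like this (a sketch, assuming the WGL_ARB_pixel_format, WGL_ARB_pbuffer and WGL_ARB_multisample extensions and their function pointers are available, and that hdc is valid):

```c
/* Sketch: choose a pbuffer pixel format without multisampling, so the
 * render target's FSAA is independent of the window's setting. */
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_COLOR_BITS_ARB,      24,
    WGL_DEPTH_BITS_ARB,      24,
    WGL_SAMPLE_BUFFERS_ARB,  0,        /* no FSAA for the pbuffer */
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);
```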

– Tom

Originally posted by Asgard:
But rendering to a cube map is still horrible.

Phew! Here I was thinking it was just me…

Originally posted by cass:
There’s also discussion going on about a non-pbuffer based render-to-texture. The new-style API would make rendering to texture a lot simpler.

my dreams come true?
you’re joking, right?

This is exactly what I was hoping for. Actually, I was expecting some of those “GL2” extensions to be made available ( again, in one form or another ). I’ll be surprised if it’s something not related to GL2.

Originally posted by PH:
This is exactly what I was hoping for. Actually, I was expecting some of those “GL2” extensions to be made available ( again, in one form or another ). I’ll be surprised if it’s something not related to GL2.

There are a whole set of issues with the current pbuffer extensions (potential need for separate contexts, windowing system dependencies) that aren’t insurmountable, but are annoying. The current solutions also lack some flexibility (e.g., you might need a Z buffer when rendering to a render-texture pbuffer, but you don’t need it after the final image has been rendered). We’ve also recently added several pieces of interesting functionality that suffer from the same general problems (rendered shadow maps, floating-point pbuffers). This is an area of interest for several OpenGL vendors.

The closest thing in “GL2” whitepapers appears to be image buffers in the memory management whitepapers, but it doesn’t seem to be obviously connected to render texture, pbuffers, and the like.

I personally think this is very important functionality that I’d like to see in an OpenGL 1.5, OpenGL 2.0, or whatever you want to call it.

Originally posted by Asgard:
But rendering to a cube map is still horrible.

Originally posted by rgpc:
Phew! Here I was thinking it was just me…

My understanding is that rendering to a cubemap should be significantly better in the next NVIDIA driver release.

[This message has been edited by pbrown (edited 01-13-2003).]


pbrown:
I personally think this is very important functionality that I’d like to see in an OpenGL 1.5, OpenGL 2.0, or whatever you want to call it.

I fully agree, but I also would like to clarify one thing:

The closest thing in “GL2” whitepapers appears to be image buffers in the memory management whitepapers, but it doesn’t seem to be obviously connected to render texture, pbuffers, and the like.
The “OpenGL 2.0 Objects” v1.2 paper describes glAttachBufferObject()/glDetachBufferObject() functions, which could allow achieving the same effects as DX9’s SetRenderTarget().

This is how, according to the proposal, RTT with a shared depth+stencil buffer might look:

// initialisation:
GLhandle WindowFrameBuffer = glCreateFrameBufferObject(…);    /* main FB */
GLhandle OffscreenFrameBuffer = glCreateFrameBufferObject(…); /* ‘pbuffer’ */
GLhandle DepthAndStencilBuffer = glCreateBufferObject(GL_DEPTH, …);
GLhandle RenderTexture = glCreateTextureObject(…);

GLhandle OffscreenColorBuffer = /* acquire GL_FRONT_LEFT_BUFFER_HANDLE from the
                                   OffscreenFrameBuffer; the proposal doesn’t say
                                   how to do this, there is no GetObjectParameter()
                                   returning values of GLhandle type */
glAttachImageObject(RenderTexture, OffscreenColorBuffer, 0);
(…)

// rendering loop:
{
    // detach depth+stencil from the window:
    glDetachObject(WindowFrameBuffer, DepthAndStencilBuffer);
    // attach depth+stencil to the ‘pbuffer’:
    glAttachBufferObject(OffscreenFrameBuffer, DepthAndStencilBuffer);
    // set render target:
    glUseFrameBufferObject(GL_DRAW, OffscreenFrameBuffer);

    // now render to the ‘pbuffer’
    (…)

    // detach depth+stencil from the ‘pbuffer’:
    glDetachObject(OffscreenFrameBuffer, DepthAndStencilBuffer);
    // attach depth+stencil to the window:
    glAttachBufferObject(WindowFrameBuffer, DepthAndStencilBuffer);
    // set render target:
    glUseFrameBufferObject(GL_DRAW, WindowFrameBuffer);

    // now render to the window
    (…)

    // bind the ‘pbuffer’ as texture:
    glUseTextureObject(RenderTexture, /* texture unit */);
    (…)
}

IMO the GL2 proposal includes solutions for all the RTT-related problems I’m aware of.

Originally posted by pbrown:
There are a whole set of issues with the current pbuffer extensions (potential need for separate contexts, windowing system dependencies) that aren’t insurmountable, but are annoying. The current solutions also lack some flexibility (e.g., you might need a Z buffer when rendering to a render-texture pbuffer, but you don’t need it after the final image has been rendered). We’ve also recently added several pieces of interesting functionality that suffer from the same general problems (rendered shadow maps, floating-point pbuffers). This is an area of interest for several OpenGL vendors.

The closest thing in “GL2” whitepapers appears to be image buffers in the memory management whitepapers, but it doesn’t seem to be obviously connected to render texture, pbuffers, and the like.

I personally think this is very important functionality that I’d like to see in an OpenGL 1.5, OpenGL 2.0, or whatever you want to call it.

I would definitely like to see this in GL 1.5 ( the sooner the better ). The reason I mentioned GL2 was because I think it would make sense to get this solved once and for all. If this new API replaces the proposed GL2 buffer objects then that’s fine with me too. And like MZ mentioned, GL2 proposed a method that seems to be just as flexible as DX9.

Anyway, do you have an approximate timeframe for this? Are we talking a few months, or more/less?

It’s really good to hear that work is being done on a better render-to-texture solution for OpenGL.
I’d much prefer it if any new extension is not created as just an interim solution, but rather, as PH and MZ suggest, designed with an upcoming GL 2.0 in mind, so that the functionality can be integrated into core OpenGL 2.0 in the future.

so let’s hope to see a vao and a render-texture extension soon…

i am currently in love with the clean and awesomely easy way of working with dx9, but i find that i always step back to gl anyway; it’s fun… can’t wait to see gl without its bugs (which i’ve started to love-hate…)

i hope the only os-specific thing in gl will be a common frontbuffer to blit to (or a front/back buffer pair to swap), and that’s it. no other wgl/glx/agl/whatever-other-os-gl functions, please; there’s no need for any others. just look at d3d9: i think it would be very easy to port, actually. only one function actually needs a hwnd (except some gdi access to render with gdi, but you don’t actually need that), and that’s about it…

that’s what i want to see from gl again, too… gl should be portable… wgl kills that

www.gametutorials.com has a tutorial on rendering to a texture.

Can render2texture be done with pbuffers under Linux using the latest Nvidia drivers?

Originally posted by djdjdjdjdj:
Can render2texture be done with pbuffers under Linux using the latest Nvidia drivers?

The 41.91 drivers support GLX_SGIX_pbuffer. So I am assuming so.
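Creating a pbuffer with that extension might look like this (a sketch; dpy and a config obtained from glXChooseFBConfigSGIX() are assumed to exist already):

```c
/* Sketch: creating a 256x256 pbuffer through GLX_SGIX_pbuffer. */
int attribs[] = { GLX_PRESERVED_CONTENTS_SGIX, True, None };
GLXPbufferSGIX pbuffer =
    glXCreateGLXPbufferSGIX(dpy, config, 256, 256, attribs);
/* Note: GLX has no equivalent of wglBindTexImageARB, so after
 * rendering into the pbuffer you still copy the result into a
 * texture with glCopyTexSubImage2D(). */
```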