MRT, render to texture

It’s time we had an extension for rendering to multiple textures and the frame buffer at the same time.

I guess p-buffers are out of the picture because they have their own contexts, but perhaps a p-buffer plus other textures is a valid combination.

Has this been suggested before? Is the problem being addressed?

You mean like rendering to a texture and the framebuffer at the same time? Instead of rendering to a texture (pbuffer) and then copying it to the framebuffer?

-SirKnight

Who said anything about both targets containing the same thing?

MRT renders to AUX buffers. Any RTT extension worth the effort allows you to bind to AUX buffers, so you can render to multiple textures.

Originally posted by Korval:
MRT renders to AUX buffers. Any RTT extension worth the effort allows you to bind to AUX buffers, so you can render to multiple textures.
I know that, but what is the procedure? If I render to an AUX buffer, I think I would have to copy it to a texture with glCopyTexSubImage2D.

Rendering to a texture directly would be better.

I’ve never tried this, but surely you’d just bind a texture to the correct AUX buffer and the contents would be mapped to it?

I know that, but what is the procedure?
What “procedure”? There’s no RTT extension yet (unless you want to count WGL_ARB_RTT), so there’s no rendering to a texture in any way, shape, or form.

WGL_ARB_render_texture may be ugly, but it does work.

You can use render to texture with MRT; you just bind the AUX buffers as textures just as you would the front buffer. There is an example in our SDK (multiple draw buffers) that demonstrates this:
http://download.nvidia.com/developer/SDK/Individual_Samples/samples.html
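
For what it’s worth, the binding step itself is small. Here’s a rough sketch, assuming the pbuffer was created with a bindable (render-texture) format and the WGL_ARB_render_texture entry points are already loaded; hPBuffer and auxTex are placeholder names:

glBindTexture(GL_TEXTURE_2D, auxTex);
wglBindTexImageARB(hPBuffer, WGL_AUX0_ARB);    // texture now sources AUX0

// ... draw with the texture in the main context ...

wglReleaseTexImageARB(hPBuffer, WGL_AUX0_ARB); // release before rendering
                                               // to the pbuffer again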

Originally posted by Korval:
I know that, but what is the procedure?
What “procedure”? There’s no RTT extension yet (unless you want to count WGL_ARB_RTT), so there’s no rendering to a texture in any way, shape, or form.

I don’t understand what’s “ugly” about WGL_ARB_render_texture. It’s fairly well defined, and it provides the actual functionality you typically need when rendering to textures.

Between CopyTexSubImage() and WGL_ARB_render_texture, the only thing I really miss from Direct3D is the ability to re-attach depth buffers to different render surfaces, and the ability to use a single context (device) for various disparate pbuffers (render targets) – say, float vs int, and double-buffer vs single-buffer.

Between CopyTexSubImage() and WGL_ARB_render_texture, the only thing I really miss from Direct3D is the ability to re-attach depth buffers to different render surfaces, and the ability to use a single context (device) for various disparate pbuffers (render targets) – say, float vs int, and double-buffer vs single-buffer.
Let’s not forget the performance…

Thanks simon,

Always good to have an example.

What is WGL_NO_TEXTURE_ARB?

GLint texFormat = WGL_NO_TEXTURE_ARB;
wglQueryPbufferARB(m_hPBuffer, WGL_TEXTURE_FORMAT_ARB, &texFormat);

if (texFormat != WGL_NO_TEXTURE_ARB)
    m_bIsTexture = true;

and where is it explained?

Jwatte, using these WGL extensions is not straightforward.
I know I struggled back when I began using them. They really require an example because the spec alone is not enough.
The WGL functions could also be improved to give better error reporting.

And as for my subject: from what I understand, it’s not possible to render to the frame buffer and a texture at the same time.
The example renders to a single p-buffer’s 4 AUX buffers.
For MRT… well, how am I supposed to render to multiple textures plus a frame buffer?

Does anyone else see the problem?

Does anyone else see the problem?
No. The normal fragment output goes to the framebuffer. The rest of them go to the AUX buffers. So, if you bind textures to the AUX buffers, but leave the framebuffer alone, you’re fine.
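
In other words, something along these lines (just a sketch, and it assumes your pixel format actually has AUX buffers):

GLenum bufs[3] = { GL_BACK_LEFT, GL_AUX0, GL_AUX1 };
glDrawBuffersATI(3, bufs); // fragment output 0 -> back buffer,
                           // outputs 1 and 2 -> AUX0 and AUX1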

Yes, I understand the problem. I’ve been looking for a solution to this as well, but it does not appear to exist.

To get everyone on the same page, the problem at hand is this. Suppose that you have a single rendering context with front and back color buffers, a depth/stencil buffer, and a single AUX buffer. Now you render using a shader that uses MRT to render to the back color buffer and the AUX buffer simultaneously, and also writes to the depth buffer. Once you’re done with that, you switch to a different shader for which you want to modify the color buffer, but only read from the AUX buffer as a texture. You also still need the depth buffer for your depth test. Unfortunately, this cannot be done yet, and the only solution is to copy the contents of the AUX buffer to a texture using glCopyTexImage.
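
The copy workaround looks roughly like this (a sketch; auxCopyTex is a placeholder for a texture already allocated at the right size, and the copy is done with the context that owns those buffers still current):

glReadBuffer(GL_AUX0);                    // source of the pixel copy
glBindTexture(GL_TEXTURE_2D, auxCopyTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
glReadBuffer(GL_BACK);                    // restore the default read buffer

The second shader then textures from auxCopyTex instead of the AUX buffer itself.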

You may be thinking “Just render everything to a pbuffer with color, depth, and AUX components, and then bind the pbuffer’s AUX buffer as a texture when rendering to the main frame buffer.” But you can’t perform your depth test using the depth buffer in the pbuffer when rendering to the main frame buffer. Again, this requires a copy. We’re screwed.

I just hope the upcoming frame buffer object extension addresses things like this.

OK, this is kind of weird. The “simple_draw_buffer” demo from NVIDIA is running slow on ATI with Catalyst 4.10 on a Radeon 9500: 10 FPS or so.
Is it that it can’t do MRT and is simulating it?
I had to modify it, and there were also some GL errors for the first call to display, but after that it’s OK.

No. The normal fragment output goes to the framebuffer. The rest of them go to the AUX buffers. So, if you bind textures to the AUX buffers, but leave the framebuffer alone, you’re fine.
But binding would make the p-buffer not renderable, I think. The NVIDIA demo is binding it and using it to render 4 textured quads (teapots).
I will try it shortly.

Once you’re done with that, you switch to a different shader for which you want to modify the color buffer, but only read from the AUX buffer as a texture. You also still need the depth buffer for your depth test. Unfortunately, this cannot be done yet, and the only solution is to copy the contents of the AUX buffer to a texture using glCopyTexImage.
This is that whole, “If you unbind the texture from the pbuffer, it loses its contents” problem, right? Well, as a general matter of basic sanity, if you can’t unbind a texture from an RC and retain the data in that texture, you didn’t really render to a texture (despite the name of the extension), and the extension bites. It’s one of the reasons why I don’t consider WGL_ARB_RTT as real render-to-texture.

If ARB_FBO “works” like that, I’m never going to use OpenGL again.

The problem isn’t that you’ll lose the contents of your pbuffer when you unbind it. It’s that you can’t render to any of the pbuffer’s color buffers if any one of the pbuffer’s buffers is bound as a texture, even if you’re not rendering to the one that you’re texturing from.

In NVIDIA’s MRT demo, they’re rendering four outputs to four buffers of a pbuffer, but then they render to a different buffer when they texture from those four.

MRT was supposed to provide support for deferred rendering methods – you render extra data to auxiliary buffers in one pass (such as a base material color or a world-space normal direction) to be used in the calculations for a subsequent pass. But there’s no way to render to the same output color buffer in both passes or to depth test from the same depth buffer in both passes, and still be able to texture from the auxiliary buffer(s), without doing at least one full-frame copy.
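
To be concrete about that first pass: with ATI_draw_buffers the extra outputs come from the fragment program itself, via the ATI_draw_buffers program option. A sketch (the program body is purely illustrative):

static const char *fp =
    "!!ARBfp1.0\n"
    "OPTION ATI_draw_buffers;\n"
    "MOV result.color[0], fragment.color;       # shaded color -> buffer 0\n"
    "MOV result.color[1], fragment.texcoord[1]; # e.g. a normal -> buffer 1\n"
    "END\n";
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(fp), fp);

result.color[i] goes to the i-th entry of the glDrawBuffersATI list.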

The problem isn’t that you’ll lose the contents of your pbuffer when you unbind it. It’s that you can’t render to any of the pbuffer’s color buffers if any one of the pbuffer’s buffers is bound as a texture, even if you’re not rendering to the one that you’re texturing from.
But the problem ultimately stems from not being able to unbind the texture and preserve the data. Not being able to use a bound texture from the current pbuffer wouldn’t be a limitation if you could unbind the texture and preserve its contents.

The logical way to think about rendering to a texture is to think of it as rendering to a texture. That is, you render data, and it is stored in that texture. The dependency on keeping that texture bound to the pbuffer is what is causing the problem.

Originally posted by Korval:
The logical way to think about rendering to a texture is to think of it as rendering to a texture. That is, you render data, and it is stored in that texture. The dependency on keeping that texture bound to the pbuffer is what is causing the problem.
I think Eric’s first post suggested a problem with preserving contents, but the problem IS NOT preserving contents.

Perhaps this example will help:

I want to render to 4 textures at once. Let’s see what ATI_draw_buffers offers us along with other extensions.

Step 1: Create a p-buffer with 4 AUX buffers.
Step 2: Write your fragment program with 4 outputs. Load it into the p-buffer’s context.
Step 3: Make the p-buffer current.
Step 4: Set up the targets (glDrawBuffersATI).
The targets are AUX0, AUX1, AUX2, AUX3.
Step 5: Render a teapot. (A rough sketch of steps 1-5 follows below.)
Step 6: So it’s time to use the 4 AUX buffers at once, but can we? No! Only a single AUX buffer can be bound as a texture at a time.
A single p-buffer was created in the beginning. It represents a single RTT surface, although it has 4 AUX buffers.
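
Roughly, the setup for steps 1-5 looks like this (a sketch with error checking omitted; the WGL_ARB_pixel_format, WGL_ARB_pbuffer, WGL_ARB_render_texture and ATI_draw_buffers entry points are assumed to be loaded, and pbufRC is a placeholder for the p-buffer’s context):

int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB,      GL_TRUE,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, GL_TRUE,
    WGL_COLOR_BITS_ARB,           32,
    WGL_DEPTH_BITS_ARB,           24,
    WGL_AUX_BUFFERS_ARB,          4,    // Step 1: ask for 4 AUX buffers
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(hDC, attribs, NULL, 1, &format, &count);

int pbAttribs[] = {
    WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
    WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,
    0
};
HPBUFFERARB pbuf = wglCreatePbufferARB(hDC, format, width, height, pbAttribs);
wglMakeCurrent(wglGetPbufferDCARB(pbuf), pbufRC);  // Step 3

GLenum bufs[4] = { GL_AUX0, GL_AUX1, GL_AUX2, GL_AUX3 };
glDrawBuffersATI(4, bufs);  // Step 4: route the 4 fragment outputs to AUX0-AUX3
// Step 2's fragment program gets bound here; Step 5: render the teapot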

What is needed is 4 p-buffers, with all 4 rendered to at once. ATI_draw_buffers doesn’t allow this, probably because the p-buffer model is the limitation.

Conclusion : I know that MRT means multiple render targets, but what I’m looking for is multiple render textures.

So it’s time to use the 4 AUX buffers at once, but can we? No! Only a single AUX buffer can be bound as a texture at a time.
A single p-buffer was created in the beginning. It represents a single RTT surface, although it has 4 AUX buffers.
And there is the problem.

The idea behind using rendered textures ought not to be “bind an AUX buffer as a texture,” because that’s not what you want. When you rendered to those 4 AUX buffers, your data should have been permanently stored in 4 bound texture objects. Once stored there, those objects could be used as normal textures, and there would then be no problem. So, ultimately, it is the fact that the texture cannot be unbound without losing data that causes the problem.

So it’s time to use the 4 AUX buffers at once, but can we? No! Only a single AUX buffer can be bound as a texture at a time.
A single p-buffer was created in the beginning. It represents a single RTT surface, although it has 4 AUX buffers.

Why can’t you texture from all four AUX buffers at once? I don’t see any problem there – you just bind each one individually to different texture units. The spec even says that a pbuffer can be written to again once all of the color buffers have been released from being used as textures.
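
That is, something like this (a sketch; auxTex[] holds four texture objects with a matching format, and pbuf is the pbuffer):

int i;
for (i = 0; i < 4; i++) {
    glActiveTextureARB(GL_TEXTURE0_ARB + i);
    glBindTexture(GL_TEXTURE_2D, auxTex[i]);
    wglBindTexImageARB(pbuf, WGL_AUX0_ARB + i); // the AUX constants are consecutive
}
// texture from all four units at once, then release all four bindings
// (wglReleaseTexImageARB) before rendering to the pbuffer again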

My big problem is that I can’t texture from an AUX buffer as I’m rendering to the primary color buffer of the same pbuffer. Here’s what I want to do:

  1. Create a pbuffer with a color buffer, depth/stencil buffer, and one AUX buffer.
  2. Render an ambient pass using a shader that writes a color to the color buffer, and a different color to the AUX buffer. The depth buffer is also filled in here. The color written to the color buffer is the typical ambient lighting equation, and the color written to the AUX buffer is the plain material color.
  3. Render standard lighting passes to the color buffer without affecting the depth buffer or AUX buffer.
  4. Render a deferred lighting effect using information from the depth buffer and the AUX buffer, and add the results to the color buffer. Also, keep depth testing against the same depth buffer that was rendered in the ambient pass.

Unfortunately, I currently have to copy both the depth and AUX buffers out to texture maps because I can’t render to the color buffer when either the depth buffer or AUX buffer is bound to a texture (not to mention depth testing from a depth buffer that is also bound as a texture).
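
Concretely, the per-frame copies look like this (a sketch; materialTex and depthTex are placeholders, and the depth copy relies on ARB_depth_texture):

glReadBuffer(GL_AUX0);
glBindTexture(GL_TEXTURE_2D, materialTex);               // RGBA texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);

glBindTexture(GL_TEXTURE_2D, depthTex);                  // GL_DEPTH_COMPONENT24 texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h); // reads from the depth buffer

That’s two full-screen copies per frame, which is exactly the cost I’m trying to avoid.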

If anyone has any ideas about improving this situation, I’d love to hear them. Right now, I’m just praying that FBO allows this to be handled efficiently.