EXT_render_target

Since nobody’s started a thread on it, I guess I will.

EXT_render_target, coupled with PBO, gives you 70-90 percent of the important functionality of the superbuffers extension. Which, therefore, makes superbuffers superfluous.

So, basically, we’re being asked to choose between PBO/EXT_render_target and superbuffers. Well, I’d choose PBO/EXT_render_target simply because I have extension specs I can read, and I don’t for superbuffers. It’s hard to compare two things when you can’t gain access to the alternative.

One thing that concerns me about the extension is issue #3: the requirement that all buffers in the texture drawable be the same size. It would be really nice if this weren’t the case. If there is to be a separate extension to relax this, it should be available alongside EXT_render_target.

I particularly like the way that the idea of a drawable object type is neatly ducked by using the state-based mechanism. This allows actual users to decide if objects are needed, and if they are, another extension can be provided. Although I don’t know if I entirely agree with the rationale for not providing objects.
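For context, the state-based flow reads roughly like this. This is a pseudocode sketch only: the names glDrawable, glDrawableParameter, FRAMEBUFFER_EXT, and TEXTURE come from the draft as discussed in this thread, but the exact signatures here are my guess, the draft was never finalized, and none of this will compile against real GL headers:

```c
/* Pseudocode sketch of the draft's state-based usage; entry-point
 * signatures are guessed, not taken from the spec. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glDrawableParameteri(GL_COLOR, tex);   /* attach the color target (guessed form) */
glDrawable(GL_TEXTURE);                /* redirect rendering into the texture    */
/* ... draw the scene ... */
glDrawable(GL_FRAMEBUFFER_EXT);        /* back to the window-system framebuffer  */
```

Note that no drawable object is ever created; everything is context state, which is exactly why an object interface could be layered on later.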

Should there be N MRT-style depth textures?

I would think no. To do so, the MRT shader would also need to output a depth value for each render target, which neither glslang nor the multiple-render-target extension to ARB_fp supports. In general, once you get into MRT-like functionality, you’re going to be willing to create your own depth-like texture (as a 32-bit fp luminance or something like that) and do the comparison in the fragment program. As such, explicitly having multiple depth textures is not necessary.

Why is it that various commands to set the current render target state are not stored in display lists?

LOL… looks like you typed too much, I beat ya by 2 mins! :wink:

Let’s keep it in this thread, since you did a bit more typing than me. :smiley:

Originally posted by Korval:
One thing that concerns me about the extension is issue #3: the requirement that all buffers in the texture drawable be the same size. It would be really nice if this weren’t the case.
Korval, what would you use that functionality for? The difficulty is in defining what happens when you have, say, a depth texture that is bigger (or smaller) than the color texture bound to the drawable. Also, what happens when you re-use such a depth texture with yet a different-sized color texture? Our initial idea was to keep it simple and not allow this. I would be interested in hearing otherwise.

Why is it that various commands to set the current render target state are not stored in display lists?
I’ll let Jeff or Cass talk to this.

Barthold

The reason for keeping the sizes the same was to provide simple rules that everyone could agree to and were not arduous for developers to follow.

I fully expect the rules to relax over time (some sooner than others) but the goal was to get to a spec that everyone could implement soon and without reservation.

Regarding the display list issue, I’m not crazy about aggrandizing display list functionality, but you can certainly make the case that it would be inconsistent to omit this support.

Being consistent, while sometimes annoying, almost always pays off. I’ll get Jeff to add an issue for this. It’ll either get changed or we’ll have documentation about why we spec’d it this way.

Thanks for the feedback!

Cass

I love the PBO and EXT_render_target extensions! Especially when they significantly increase performance :smiley:

The superbuffers extension has some good functionality, but it is too complicated. However, I like the proxy stuff, mipmap levels, etc.

As long as we can get the same API for binding rendering to vertex buffers, image buffers, and stencil buffers (I really want access to this), I think this new extension will be great.

I would really dislike having both extensions.

I like it; it should make RTT a lot simpler for a lot of ppl.

Can’t think of any questions that haven’t been answered by the spec.

I suppose it’s out of the question to be able to render to multiple targets at once, or to the framebuffer and a texture simultaneously?

Nutty

From a first look (at the examples), it looks great, just the way I wanted it all along.

Some notes & doubts:

  • Why not STENCIL-only textures? If the graphics card doesn’t support stencil-only, it can always internally create a combined surface with the minimum depth size. I guess the problem comes when the app specifies one drawable for STENCIL and another for DEPTH separately? Is it too much driver work to create a combined texture on the fly and then copy back to one and the other whenever the render target is changed (only necessary when the hardware does not support separate stencil & depth addresses)?

  • Interactions with textures with borders. In theory, using textures with borders as render targets shouldn’t pose a problem.

  • Interactions with compressed textures. You probably won’t be able to render to these.

  • Regarding issue 15, why not make it possible to use the same texture as drawable and texture source (as long as you don’t render to and read from the same levels/faces/slices, in which case you just say that the result is undefined)?
    This is very useful for doing programmable mipmap level generation (render to the lower-detail level while reading from the higher-detail one). Allowing this shouldn’t be a problem even if the graphics card doesn’t support rendering to textures in hardware, i.e. the rendering is done in a temp buffer and then copied to the texture (the renderbuffer-to-texture copy happens when you switch to a new render target with either glRenderTarget or glDrawable). A workaround is to ping-pong between two textures, but that is nasty.
    This cannot be trivially extended to say that you can render to arbitrary texels of the same level/face/slice if render-to-texture is not supported in hardware (also, there’s no way to indicate when a texel you’ve rendered becomes readable; maybe with a glRenderTarget to the same texture, so the data is flushed?).

  • What’s the interaction with SwapBuffers? In theory none (i.e. SwapBuffers always swaps the FRAMEBUFFER drawable), but note that this means that if you want to do things like triple buffering or offscreen rendering, whenever you want to present the results you need to render a full screen quad, is that desirable?

  • Interactions with glReadPixels, glCopyPixels, and glDrawPixels when the drawable is TEXTURE.

  • Interactions of the texture format with the previous functions: what happens if you do a glReadPixels when the internal format of the texture is GL_RED? What about packed-component textures (GL_R5G6B5…)? Is any texture format supported as a render target? If not, how can the application know which formats are available, by trial and error?

  • Interactions with glGetTexImage over the same texture object used as drawable. Can you do any glGetTexImage at all? What results would you get?

  • Interactions with the current pixel format: what happens if the current pixel format has no alpha but the texture does? Is destination alpha available for rendering when the drawable is TEXTURE? There’s some mention of this in the spec part; I think the pixel format should be changed to match that of the texture when you change the drawable (so you can do destination-alpha rendering even if your FRAMEBUFFER doesn’t have alpha).

  • Interactions with texture sharing (wglShareLists). Does wglMakeCurrent force a copy of the current render target to the texture (this would solve all the single-thread problems)? Cases:

    • when the current rendertarget texture is used as source on another context. In the multithread case this should have the same limitations as when using the
    • when the given texture object is used as a render target in two different contexts. In the multithreaded case, do you have to resort to saying that rendering to the same texture object from two different threads is undefined?
  • Do you really need glDrawable? Why not say that when glDrawableParameter for COLOR and DEPTH is zero, rendering goes to the FRAMEBUFFER? This would allow things like rendering to the color buffer of the FRAMEBUFFER while storing the depth in a texture (is that desirable?). I guess the main reason to have glDrawable is for future use of render-to-vertex-array as a glDrawable parameter?
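To make the issue-15 mip-generation point above concrete, the idea would look something like this if same-texture source/target on disjoint levels were allowed. This is pseudocode: the glRenderTarget signature is guessed (the draft was never finalized), and drawFullscreenQuad is a hypothetical helper, not a real GL call:

```c
/* Pseudocode: custom mipmap generation, assuming issue 15 were relaxed so a
 * texture could be both source and render target on disjoint levels.
 * glRenderTarget's signature is guessed; drawFullscreenQuad is hypothetical. */
for (int level = 1; level <= maxLevel; ++level) {
    /* Clamp sampling to the previous level so the target level is never read
     * (GL_TEXTURE_BASE_LEVEL / GL_TEXTURE_MAX_LEVEL are core since GL 1.2). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, level - 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,  level - 1);
    glRenderTarget(GL_COLOR, tex, level);  /* render into level N (guessed)  */
    drawFullscreenQuad();                  /* filter level N-1 into level N  */
}
```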

Originally posted by Nutty:
[b]I like it; it should make RTT a lot simpler for a lot of ppl.

Can’t think of any questions that haven’t been answered by the spec.

I suppose it’s out of the question to be able to render to multiple targets at once, or to the framebuffer and a texture simultaneously?

Nutty[/b]
Hi Nutty,

None of these questions is “out of the question” but they aren’t addressed by this spec.

We already know how we want to handle multiple color targets, but we want to provide that as an extension to this framework.

Rendering to fb and texture simultaneously is not something we’ve thought a great deal about. What do you mean exactly? (Or at least more specifically…)

Thanks -
Cass

Yeah, MRT is actually mentioned as something to be supported in future extensions…

For framebuffer/texture interchange, it would be cool if the framebuffer (or parts of it) were a simple texture which we could bind and so on. That way, MRT especially would become fully transparent.

I’ve read through it now, but it’s too big; my brain hurts :D I’ll have to read it again. Hope to see (experimental) support for it soon on both NVIDIA and ATI hw.

btw, Cass… this is the sort of thing I mean by “cleaning up OpenGL”. If this ext is done, what use are all the WGL render-texture exts, which are hellishly complicated and in the end expose the same thing? They become mostly useless; nobody will want to use them afterwards. Even pbuffers become rather questionable (while still possibly useful… but not without being able to create them without any real buffer… so it’s more of an OS thing).

It would be nice if multisample support (issue 23) were added in the base extension and not layered on top. Having multisampling forced on by the driver control panel can really screw with render-to-texture effects, and having it in the base extension would encourage proper handling of multisampling and make it easier for those wanting to do it the right way. Besides, multisampling is a core feature, isn’t it?

I also think it would be good to be able to source from and render to the same texture (issue 15). Of course, this probably won’t work in all cases due to concurrency issues, but it would be nice to see certain conditions specified where it will work rather than leaving the results undefined. Like Evan said, rendering to one mip level while reading from another, or rendering to part of one level while sourcing from a disjoint part. I don’t know how much commonality there is between hw, but at least specifying behaviour in the lowest-common-denominator cases where it will work would be useful. Any more esoteric cases can be handled in a separate extension. If the results are undefined, we lose out on a whole lot of useful functionality that some hw might support, since we can’t rely on undefined behaviour.

Well, I haven’t absorbed everything yet, but from what I’ve read it removes any context related issues associated with render to texture and pbuffers. It looks as though it will make shadow mapping and similar algorithms much cleaner to implement.

I want to add that EXT_render_target replaces not only ARB_pbuffer and ARB_render_texture, but NV_render_depth_texture too.

Very much like what I said here: http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=7;t=000412

This cannot be trivially extended to say that you can render to arbitrary texels of the same level/face/slice if render to texture is not supported in hardware
The extension provides for specifying which level to bind to easily enough. However, what it does not do is provide a means to prevent texture reads from any particular level. I’m not sure such a way exists in OpenGL, and LOD biasing doesn’t count. Unless such a way exists, there’s nothing to prevent the user from accidentally fetching a texel from the bound texture.

Like Cass said, first get it implemented and working, then extend it later to relax various restrictions.

Is it too much driver work to create a combined texture on the fly and then copy back to one and the other whenever the rendertarget is changed (only necessary when the hardware does not support separate stencil & depth addresses).
Well, consider that most hardware combines the two as standard operating procedure, and that it’s much easier (and faster) for hardware that doesn’t to hide separate depth and stencil textures behind a single texture object than it is for hardware that uses combined ones to expose separate textures.

Interactions with glReadpixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE
Already laid out in the spec: “When <drawable> is FRAMEBUFFER_EXT the normal framebuffer is used as the sink of fragment operations and as the source of pixel reads such as ReadPixels, as described in chapter 4. When <drawable> is TEXTURE the texture drawable is used instead for these operations.”

Interactions with the current pixelformat
The pixel format isn’t even something that OpenGL defines; it’s an OS-binding thing more than anything else. As such, I don’t think there should be any interactions with it. If the texture supports alpha, then there can be alpha. If it doesn’t, then there isn’t.

Can’t seem to find it…
Did you not notice the top news item on the main page?

First of all : Nice work!
Both with the extension spec and with the fact that you share it among developers to get feedback; more extensions should be presented like this first (at least ARB and EXT ones).

As I understand it, this spec is here because superbuffers has a bit more to fix, and existing hardware may not be able to implement everything that may end up in that spec, correct?

I like the fact that it seems to be MUCH easier to use render-to-texture, and that you still utilize the standard texture interface (GenTextures, TexImage and so on), so it will be a breeze to implement on top of existing engines.

I would rather see a unified buffer that could be bound as a texture or VBO or something else with no restrictions (but I’ll be happy to wait for that until new HW, if we get this very fast on current hardware).

I guess that rect targets and float targets will work if the corresponding extensions for those texture types exist?

I hope that you figure out the MRT binding and provide an extension for that in GLSLANG very soon after this is implemented (or even at the same time; I totally lack patience when I hear about new fun stuff :slight_smile: )

[add:] Does GenerateMipmapEXT work on normally uploaded (glTexImage) textures as well? I like consistent handling of things…

This really looks like a nice extension. Easy to use, and there are some great features (mipmap generation, …).

As Mazy said I also like the idea of sharing the specification draft with developers.

What about non-power-of-two textures like NV_texture_rectangle or ARB_texture_non_power_of_two? Are they supported too? I didn’t find anything about them in the specification.

Regarding MRT:
Will this be added in a way like ATI_draw_buffers? Simply add an AUXi enum to <target> in glRenderTargetEXT?

I would rather see a unified buffer that could be bound as a texture or VBO or something else with no restrictions ( but ill be happy to wait for that until new HW if we get this very fast in current one.)

This would be a great idea! Why not add three internal attributes (width, height, and format) to each buffer object and use buffer objects directly instead of textures?
If this, MRT, and render-to-vertex-array were supported, then I think all the superbuffers extension features would be covered by VBO, PBO, and EXT_render_target. So where’s the advantage of superbuffers then?

Hi, here’s a little cosmetic question.

Why not use the standard OpenGL function style to create/bind render targets? I mean:

*GenRenderTarget(1, &id)
*BindRenderTarget(id)
*RenderTarget(<target>, …) // actually this one would be the same as in the current spec

and then to switch between rendertarget simply call :

*BindRenderTarget(id) // id = 0 mean framebuffer

I really think it would be more intuitive, but perhaps I’ve missed some drawback in this proposition.
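Concretely, switching targets under that proposal would look like this (a hypothetical sketch of the API suggested above; these entry points are not part of any spec):

```c
/* Hypothetical object-style API per the proposal above; not in any spec. */
GLuint rt;
glGenRenderTargets(1, &rt);
glBindRenderTarget(rt);
glRenderTarget(GL_COLOR, tex, 0);  /* state is stored in the bound object */

/* Later, per frame: */
glBindRenderTarget(rt);            /* render into the texture */
/* ... draw ... */
glBindRenderTarget(0);             /* id 0 = the framebuffer  */
```

The attraction is that all the attachment state lives in the object, so a switch is a single bind instead of re-specifying drawable parameters each time.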

Simple, pretty, makes you want to render to textures even if you don’t need to :slight_smile: