updated EXT_render_target spec (rev 2)

Creating a thread for feedback on the latest public draft of the EXT_render_target spec.

This looks great and I can’t wait to try it.

Wow. So clear, so simple, so why did it take so long?
Cass, please can you tell us if this extension will be supported on nvidia hardware that currently supports standard RTT? Just juggle the driver around and bob’s yer uncle?

Looks great. Just one thing – don’t forget to change

void glDeleteRenderTargetsEXT(GLsizei n, GLuint *renderTargets);

to

void glDeleteRenderTargetsEXT(GLsizei n, const GLuint *renderTargets);

(since the array pointed to by renderTargets is not modified).
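
For example, once the prototype is const-qualified, a caller holding read-only names needs no cast (just a sketch; rtColor and rtDepth are placeholder names):

// Sketch only: assumes the const-qualified prototype above.
const GLuint renderTargets[2] = { rtColor, rtDepth };
glDeleteRenderTargetsEXT(2, renderTargets);  // fine with const GLuint *; would need a cast otherwise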

– Eric

Originally posted by knackered:
Wow. So clear, so simple, so why did it take so long?
Cass, please can you tell us if this extension will be supported on nvidia hardware that currently supports standard RTT? Just juggle the driver around and bob’s yer uncle?

Hi Knackered,

The plan is for this extension to be supported on all NVIDIA hardware. Certainly any hardware that supports standard RTT.

Thanks -
Cass

Originally posted by Eric Lengyel:
Looks great. Just one thing – don’t forget to change…
– Eric

Good point, Eric, thanks! I’ll pass that along to Jeff.

Cass

Originally posted by cass:
Originally posted by knackered:
Wow. So clear, so simple, so why did it take so long?
Cass, please can you tell us if this extension will be supported on nvidia hardware that currently supports standard RTT? Just juggle the driver around and bob’s yer uncle?

Hi Knackered,

The plan is for this extension to be supported on all NVIDIA hardware. Certainly any hardware that supports standard RTT.

Thanks -
Cass

Besides the extension itself, this is the best news of the day :smiley:

Is there any timeframe in which we can expect the first implementations to show up, or is this extension still too early in development?

There is one thing I don’t quite understand:

(18) Should there be N MRT-style depth textures?

What exactly does that mean? Does this extension support multiple render targets? I didn’t understand it that way.
And what does this have to do with depth textures?

Anyway, I would really be happy if multiple render targets were supported (at least on hardware that supports them).

Jan.

I really hope we’ll see it on ATI too, but I doubt it for the time being at least :frowning:

I don’t have the money to buy a 6800 anytime soon at all… (and I’d have to get one that works in an old XPC :smiley: )

I really like the improvements in the second draft, mainly the framebuffer objects. The extension didn’t gain much extra complexity but became much more general. Great work; I hope this extension will be released soon (maybe for ATI, etc. too?).

Originally posted by Jan:
Is there any timeframe in which we can expect the first implementations to show up, or is this extension still too early in development?

Unfortunately there’s no way to give a timeframe right now. Discussions with other ARB members may still affect the final form of this extension.

Originally posted by Jan:
There is one thing I don’t quite understand:

(18) Should there be N MRT-style depth textures?

What exactly does that mean? Does this extension support multiple render targets? I didn’t understand it that way.
And what does this have to do with depth textures?

You probably shouldn’t worry about this. We plan to allocate multiple sequential enumerants for color buffers for MRT (but a separate extension will enable using that functionality). The question was really about whether we should do the same for depth. The answer for now is no. You need something more than “MRT” for this to be useful, and no hardware that exists today supports multiple output depths and/or depth buffers. (At least none that I’m aware of.)
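
Just to sketch the idea (the color tokens below are hypothetical and not in the current draft, and the attach call and its parameters are placeholders, not the spec’s actual entry point):

/* Hypothetical sketch only: GL_COLOR0_EXT, GL_COLOR1_EXT, ... do not exist in the draft. */
/* "Sequential enumerants" means color attachment i is just a base token plus i:          */
GLenum attachment = GL_COLOR0_EXT + i;        /* GL_COLOR1_EXT == GL_COLOR0_EXT + 1, etc.  */
glRenderTargetBuffer(GL_FRAMEBUFFER, attachment, GL_TEXTURE_2D, colorTex[i]);  /* call form and parameters purely illustrative */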

Originally posted by Jan:
Anyway, I would really be happy if multiple render targets were supported (at least on hardware that supports them).

Jan.

The extension that enables MRT will be very simple, and you should expect support to happen at the same time or very shortly after real implementations of this spec show up.

Thanks -
Cass

Regarding issue #12:

(12) If a texture is bound for both rendering and texturing purposes, should the results of rendering be undefined or should INVALID_OPERATION be generated at glBegin()?

UNRESOLVED
Undefined results allow an application to render to a section of the texture that is not being sourced by normal texture operations. However it is highly desirable to define results in all cases, even if the result is defined as an error.

I realize that the results are undefined, but this kind of functionality is useful for things like building summed-area tables, doing digital filtering, image processing, and other non-conventional uses of the GPU.

For game programming, I can understand that you might prefer an INVALID_OPERATION… but for other types of applications, we have a pretty good idea of when one area of the texture hasn’t been written to for long enough that it can safely be used as a source for another area of the texture… because we are accessing different areas of the texture in some predictable, algorithmic fashion rather than in the arbitrary way most game applications would.

Perhaps you could make the error behaviour toggleable… so applications could choose whether they want to receive an INVALID_OPERATION error or not.
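
Purely as a hypothetical sketch of that idea (the enable token below is invented, not from the spec):

/* Hypothetical sketch only: GL_RENDER_TARGET_ERROR_CHECK_EXT is an invented token. */
glDisable(GL_RENDER_TARGET_ERROR_CHECK_EXT);  /* opt out of INVALID_OPERATION at glBegin() */
/* ... render to a texture that is also bound for texturing; results remain undefined,
   but the application takes responsibility by reading and writing disjoint regions ... */
glEnable(GL_RENDER_TARGET_ERROR_CHECK_EXT);   /* restore strict error reporting */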

Originally posted by Stephen_H:
I realize that the results are undefined, but this kind of functionality is useful for things like building summed area tables, doing digital filtering, image processing, and other kinds of non-conventional usages for the GPU.
[…]

I’m not sure whether there’s precedent for it, but perhaps INVALID_OPERATION could be set while still allowing potentially useful undefined behavior via an NVEmulate-style check-box.

I appreciate the desire to play with features like this, but using them reliably from one architecture to the next (or even one driver version to the next) is pretty much impossible.

Things like caching mess things up.

In general, while I would like to allow naughty behavior for experimentation and research, I (and pretty much all IHVs) worry that people will come to rely on behavior that is unsupported and fundamentally unreliable.

Thanks -
Cass

edit: clarify last paragraph

This blows the socks off the whole WGL mess, both in terms of the simplicity of the spec and the clean usage model - very nice indeed.

Another idea is to add a render target parameter that magically adds an accumulation buffer to the render target object.

I’d be interested to see if you can pull this one off. :slight_smile:
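
Purely as a sketch of what that could look like (both the entry point and the token below are invented; nothing like this is in the draft):

/* Hypothetical sketch only: invented entry point and token. */
glRenderTargetParameteriEXT(GL_FRAMEBUFFER, GL_ACCUM_BUFFER_EXT, GL_TRUE);  /* request an accumulation buffer on this render target */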

Issue 7:

An open question is what happens when both COLOR and DEPTH render
target buffers are set to zero.

Wouldn’t it be sensible to just not produce any results? Just like what happens if you go

glDepthMask(0);
glColorMask(0,0,0,0);
glStencilMask(0);
glDisable(GL_DEPTH_TEST);
glDisable(GL_STENCIL_TEST);

Unless I misunderstood this issue, or there’s some interaction with other functionality (feedback, histogram?), I don’t see why there would be much debate over this.

edit:
added stencil state, added depth test - I know it’s redundant, it’s just supposed to be extra obvious

Good stuff. Objects out of the box.

While the lack of forced relaxation of the size restriction is unfortunate, it is also understandable… as long as there is a relaxed extension available from Day 1.

Oh, and if it matters, I support the

glRenderTargetBuffer(GL_FRAMEBUFFER, GL_COLOR, GL_BUFFER_OBJECT, buffer_obj);

syntax for using VBOs as render targets. It looks like what should be happening: you’re binding the buffer object to the frame buffer as a render target.
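
Roughly, the whole render-to-vertex-array flow might then look like this (a sketch only: the glRenderTargetBuffer line is just the proposed syntax above, the buffer calls are plain ARB_vertex_buffer_object, and the sizes are made up):

/* Sketch of render-to-vertex-array under the proposed syntax (illustrative only). */
GLsizei width = 256, height = 256;
GLuint buf;
glGenBuffersARB(1, &buf);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, width * height * 4 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW_ARB);

/* Attach the buffer object as the color render target and render into it. */
glRenderTargetBuffer(GL_FRAMEBUFFER, GL_COLOR, GL_BUFFER_OBJECT, buf);
/* ... draw a pass that writes one RGBA value per intended vertex ... */

/* Reinterpret the rendered data as a vertex array. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, (const GLvoid *)0);  /* offset 0 into the buffer object */
glDrawArrays(GL_POINTS, 0, width * height);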

BTW, this extension also nicely handles copying data to and from various textures (and, with VBO, buffer objects).

Build this, and superbuffers is utterly superfluous (pretty much: VBO provides the memory allocation ability, and EXT_RTVA provides everything else). Plus, we don’t need another way to bind vertex arrays :wink:

Do all this (and fix your GLSlang impl to be perfectly conformant :wink: ), and I could easily see myself picking up a nice NV4x card. I’ll never justify a $500 purchase, but a nice mid-range $200 card would work…

Perfection is reached when there is nothing more you can take away… good work!

Ah, and watch that const-correctness!

Originally posted by Korval:
Oh, and if it matters, I support the

glRenderTargetBuffer(GL_FRAMEBUFFER, GL_COLOR, GL_BUFFER_OBJECT, buffer_obj);

syntax for using VBOs as render targets. It looks like what should be happening: you’re binding the buffer object to the frame buffer as a render target.
This would be nice and easy, but how does the driver know which internal format you use, how many components per pixel, …?

Why not just make a buffer object, create a texture in it, render to it, and then bind it as a VBO?

Is rendering to floating-point textures supported, and is it vendor independent?

Questions regarding render-to-vertex-array:

(1) Which coordinates are you supposed to render to (the x coordinate: 0, 1, 2, 3, …)?

(2) Can you render to integer render targets and use them as vertex arrays (for instance, vertex colors in RGBA32, or texture coordinates in 16-bit)?