
View Full Version : EXT_render_target



Korval
04-01-2004, 08:43 PM
Since nobody's started a thread on it, I guess I will.

EXT_render_target, coupled with PBO, gives you 70-90 percent of the important functionality of the superbuffers extension, which makes superbuffers superfluous.

So, basically, we're being asked to choose between PBO/EXT_render_target and superbuffers. Well, I'd choose PBO/EXT_render_target simply because I have extension specs I can read, and I don't for superbuffers. It's hard to compare two things when you can't gain access to the alternative.

One thing that concerns me about the extension is issue #3: the requirement that all buffers in the texture drawable be the same size. It would be really nice if this weren't the case. If there is to be a separate extension to relax this, it should be available alongside EXT_render_target.

I particularly like the way the idea of a drawable object type is neatly ducked by using the state-based mechanism. This lets actual users decide whether objects are needed, and if they are, another extension can provide them. Although I don't know if I entirely agree with the rationale for not providing objects.
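For concreteness, the state-based flow might look roughly like this (C-like pseudocode; the glDrawable*/glRenderTarget entry points and enums come from the draft as discussed in this thread, and the exact signatures here are my assumption, not final API):

```
/* Hypothetical sketch of the draft's state-based mechanism -
   entry-point names and signatures are assumptions. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glDrawableParameteriEXT(GL_COLOR, tex);  /* attach texture as the color buffer */
glDrawableEXT(GL_TEXTURE);               /* the "Big Switch": render to texture */
/* ... draw the render-to-texture pass ... */
glDrawableEXT(GL_FRAMEBUFFER_EXT);       /* switch back to the window */
glBindTexture(GL_TEXTURE_2D, tex);       /* now source the result as a texture */
```

No new object type is needed: the texture object itself is the render target, selected purely through context state.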


Should there be N MRT-style depth textures?
I would think no. To do so, the MRT shader would also need to output a depth value for each render target, which neither glslang nor the multiple-render-target extension to ARB_fp supports. In general, once you get into MRT-like functionality, you're going to be willing to create your own depth-like texture (as a 32-bit floating-point luminance texture or something like that) and do the comparison in the fragment program. As such, explicit support for multiple depth textures is unnecessary.

Why is it that various commands to set the current render target state are not stored in display lists?

Elixer
04-01-2004, 08:46 PM
LOL... looks like you typed too much, I beat ya by 2 mins! ;)

Let's keep it in this thread, since you did a bit more typing than me. :D

barthold
04-01-2004, 09:34 PM
Originally posted by Korval:
One thing that concerns me about the extension is issue #3: the requirement that all buffers in the texture drawable be the same size. It would be really nice if this weren't the case.

Korval, what would you use that functionality for? The difficulty is in defining what happens when you have, say, a depth texture that is bigger (or smaller) than the color texture bound to the drawable. Also, what happens when you re-use such a depth texture with yet a different sized color texture? Our initial idea was to keep it simple and not allow this. I would be interested in hearing otherwise.



Why is it that various commands to set the current render target state are not stored in display lists?

I'll let Jeff or Cass talk to this.

Barthold

cass
04-01-2004, 10:01 PM
The reason for keeping the sizes the same was to provide simple rules that everyone could agree to and were not arduous for developers to follow.

I fully expect the rules to relax over time (some sooner than others) but the goal was to get to a spec that everyone could implement soon and without reservation.

Regarding the display list issue, I'm not crazy about aggrandizing display list functionality, but you can certainly make the case that it would be inconsistent to omit this support.

Being consistent, while sometimes annoying, almost always pays off. I'll get Jeff to add an issue for this. It'll either get changed or we'll have documentation about why we spec'd it this way.

Thanks for the feedback!

Cass

Sancho
04-01-2004, 10:15 PM
I love the PBO and EXT_render_target extensions! Especially when they significantly increase performance :D

ToolTech
04-01-2004, 11:00 PM
The superbuffers extension has some good functionality, but it is too complicated. However, I like the proxy stuff, mipmap levels, etc.

As long as we can get the same API for binding rendering to vertex buffers, image buffers, and stencil buffers (I really want access to those), I think this new extension will be great.

I would really dislike having both extensions.

Nutty
04-01-2004, 11:17 PM
I like it; it should make RTT a lot simpler for a lot of people.

Can't think of any questions that haven't been answered by the spec.

I suppose it's out of the question to be able to render to multiple targets at once, or to the framebuffer and a texture simultaneously?

Nutty

davepermen
04-01-2004, 11:20 PM
from a first look (at the examples), it looks great, just the way i've wanted it all along

evanGLizr
04-01-2004, 11:36 PM
Some notes & doubts:

- Why not STENCIL-only textures? If the graphics card doesn't support stencil-only, it can always create internally a combined format with the minimum depth size. I guess the problem comes when the app specifies one drawable for STENCIL and another for DEPTH separately? Is it too much driver work to create a combined texture on the fly and then copy back to each whenever the render target is changed (only necessary when the hardware does not support separate stencil & depth addresses)?

- Interactions with textures with borders. In theory, using textures with borders as render targets shouldn't pose a problem.

- Interactions with compressed textures. Probably you won't be able to render to these.

- Regarding issue 15, why not make it possible to use the same texture as drawable and texture source, as long as you don't render to and read from the same levels/faces/slices (in which case you just say that the result is undefined)?
This is very useful for doing programmable mipmap level generation (render to the lower-detail level while reading from the higher-detail one). Allowing this shouldn't be a problem even if the graphics card doesn't support rendering to textures in hardware, i.e. the rendering is done in a temp buffer and then copied to the texture (the renderbuffer-to-texture copy happens when you switch to a new render target with either glRenderTarget or glDrawable). A workaround is to ping-pong between two textures, but that's nasty.
This cannot be trivially extended to say that you can render to arbitrary texels of the same level/face/slice if render-to-texture is not supported in hardware (and there's no way to indicate the period of time during which you can read back a texel you've rendered - maybe with a glRenderTarget call to the same texture, so the data is flushed?).

- What's the interaction with SwapBuffers? In theory none (i.e. SwapBuffers always swaps the FRAMEBUFFER drawable), but note that this means that if you want to do things like triple buffering or offscreen rendering, whenever you want to present the results you need to render a full screen quad, is that desirable?

- Interactions with glReadPixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE.

- Interactions of the texture format with the previous functions: what happens if you do a glReadPixels when the internal format of the texture is GL_RED? What about packed component textures (GL_R5G6B5...) ? Is any texture format supported as rendertarget? If not, how can the application know which formats are available, by trial and error?

- Interactions with glGetTexImage over the same texture object used as drawable. Can you do any glGetTexImage at all? What results would you get?

- Interactions with the current pixel format: what happens if the current pixel format has no alpha but the texture does? Is destination alpha available for rendering when the drawable is TEXTURE? There's some mention of this in the spec part; I think the pixel format should be changed to match the texture's when you change the drawable (so you can do destination-alpha rendering even if your FRAMEBUFFER doesn't have alpha).

- Interactions with texture sharing (wglShareLists). Does wglMakeCurrent force the copy of the current render target to the texture (this would solve all the single-thread problems)? Cases:
- when the current render target texture is used as a source on another context. In the multithread case this should have the same limitations as when using the
- when the given texture object is used as a render target in two different contexts. In the multithread case do you have to resort to saying that rendering to the same texture object from two different threads is undefined?

- Do you really need glDrawable? Why not make it so that when glDrawableParameter for COLOR and DEPTH is zero, rendering goes to the FRAMEBUFFER? This would allow things like rendering to the color buffer of the FRAMEBUFFER while storing the depth in a texture (is that desirable?). I guess the main reason to have glDrawable is for future use of render-to-vertex-array as a glDrawable parameter?

cass
04-01-2004, 11:45 PM
Originally posted by Nutty:
I like it; it should make RTT a lot simpler for a lot of people.

Can't think of any questions that haven't been answered by the spec.

I suppose it's out of the question to be able to render to multiple targets at once, or to the framebuffer and a texture simultaneously?

Hi Nutty,

None of these questions is "out of the question" but they aren't addressed by this spec.

We already know how we want to handle multiple color targets, but we want to provide that as an extension to this framework.

Rendering to fb and texture simultaneously is not something we've thought a great deal about. What do you mean exactly? (Or at least more specifically...)

Thanks -
Cass

davepermen
04-01-2004, 11:56 PM
yeah, MRT is actually mentioned as something to be supported in future extensions..

for framebuffer/texture interchange, it would be cool if the framebuffer (or parts of it) were a simple texture which we could bind, and so on. that way, MRT especially would become fully transparent.

i've read through it now, but it's too big, my brain hurts :D i'll have to read it again. hope to see (experimental) support for it soon on both nvidia and ati hw.

btw, cass.. this is the kind of thing i mean by "cleaning up opengl". if this ext is done, what use are all the WGL render-texture exts, which are hellishly complicated, expose the same thing in the end, and are mostly useless then? nobody will want to use them afterwards, and even pbuffers become rather questionable (while still useful, possibly.. but not without being able to create them without any real buffer.. so that's more of an OS thing).

harsman
04-02-2004, 12:26 AM
It would be nice if multisample support (issue 23) was added in the base extension and not layered on top. Having multisampling forced on by the driver control panel can really screw with render-to-texture effects, and having it in the base extension would encourage proper handling of multisampling and make it easier for those wanting to do it the right way. Besides, multisampling is a core feature, isn't it?

I also think it would be good to be able to source from and render to the same texture (issue 15). Of course, this probably won't work in all cases due to concurrency issues, but it would be nice to see certain conditions specified where it will work rather than leaving the results undefined. Like Evan said, rendering to one mip level while reading from another, or rendering to a part of one level while sourcing from a disjoint part. I don't know how much commonality there is between hardware, but at least specifying behaviour in the lowest common denominator of cases where it will work would be useful. Any more esoteric cases can be handled in a separate extension. If the results are undefined we lose a whole lot of useful functionality that some hardware might support, since we can't rely on undefined behaviour.

paulc
04-02-2004, 12:50 AM
Well, I haven't absorbed everything yet, but from what I've read it removes any context related issues associated with render to texture and pbuffers. It looks as though it will make shadow mapping and similar algorithms much cleaner to implement.

pashamol
04-02-2004, 12:51 AM
I want to add that EXT_render_target replaces not only ARB_pbuffer and ARB_render_texture, but NV_render_depth_texture too.

KRONOS
04-02-2004, 12:53 AM
Very much like what I said here: http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=7;t=000412

Korval
04-02-2004, 12:57 AM
This cannot be trivially extended to say that you can render to arbitrary texels of the same level/face/slice if render to texture is not supported in hardware

The extension provides for specifying which level to bind easily enough. However, what it does not do is provide a means to prevent texture reads from any particular level. I'm not sure such a way exists in OpenGL, and LOD biasing doesn't count. Unless such a way exists, there's nothing to prevent the user from accidentally fetching a texel from the bound level.

Like Cass said, first get it implemented and working, then extend it later to relax various restrictions.


Is it too much driver work to create a combined texture on the fly and then copy back to one and the other whenever the rendertarget is changed (only necessary when the hardware does not support separate stencil & depth addresses).

Well, consider that most hardware combines the two as standard operating procedure. It's also much easier (and faster) for hardware that keeps them separate to hide distinct depth and stencil textures behind a single texture object than it is for hardware that combines them to present separate textures.


Interactions with glReadPixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE

Already laid out in the spec: "When <drawable> is FRAMEBUFFER_EXT the normal framebuffer is used as the sink of fragment operations and as the source of pixel reads such as ReadPixels, as described in chapter 4. When <drawable> is TEXTURE the texture drawable is used instead for these operations."


Interactions with the current pixelformat

The pixel format isn't even something that OpenGL defines; it's an OS-binding thing more than anything else. As such, I don't think there should be any interactions with it. If the texture supports alpha, then there can be alpha. If it doesn't, then there isn't.


Can't seem to find it...

Did you not notice the top news item on the main page?

Mazy
04-02-2004, 01:44 AM
First of all: nice work!
Both on the extension spec, and on the fact that you're sharing it among developers to get feedback. More extensions should be presented like this first (at least ARB and EXT ones).

As I understand it, this spec exists because superbuffers has a bit more to fix, and existing hardware may not be able to implement everything that may end up in that spec, correct?

I like the fact that render-to-texture seems MUCH easier to use, and that you still use the standard texture interface (GenTextures, TexImage and so on), so it will be a breeze to implement on top of existing engines.

I would rather see a unified buffer that could be bound as a texture or VBO or something else with no restrictions (but I'll be happy to wait for that until new HW if we get this very fast on current hardware).

I guess that rect targets and float targets will work if the corresponding extension for those texture types exists?

I hope you figure out the MRT binding and produce an extension for that in glslang very soon after this is implemented (or even at the same time; I totally lack patience when I hear about new fun stuff :) )

[add:] GenerateMipmapEXT - does that work on normally uploaded (glTexImage) textures as well? I like consistent handling of things..

Corrail
04-02-2004, 02:06 AM
This really looks like a nice extension: easy to use, with some great features (mipmap generation, ...)

As Mazy said I also like the idea of sharing the specification draft with developers.

What about non-power of 2 textures like NV_texture_rectangle or ARB_texture_non_power_of_two? Are they supported too? I didn't find anything about them in the specification.

Regarding MRT:
Will this be added in a way similar to ATI_draw_buffers? Simply add AUXi enums to <target> in glRenderTargetEXT?



I would rather see a unified buffer that could be bound as a texture or VBO or something else with no restrictions (but I'll be happy to wait for that until new HW if we get this very fast on current hardware).

That would be a great idea! Why not add three internal attributes - width, height and format - to each buffer object and use buffer objects directly instead of textures?
If this, MRT, and render-to-vertex-array are supported, then I think all the superbuffers features are covered by VBO, PBO and EXT_render_target. So where's the advantage of superbuffers then?

glitch
04-02-2004, 03:09 AM
hi, here's a little cosmetic question

Why not use the standard OpenGL function style to create/bind render targets? I mean:

*GenRenderTarget(1, &id)
*BindRenderTarget(id)
*RenderTarget(<target>, ...) // this one would be the same as in the current spec

and then, to switch between render targets, simply call:

*BindRenderTarget(id) // id = 0 means the framebuffer

I really think it would be more intuitive, but perhaps I've missed some drawback in this proposal.

EG
04-02-2004, 04:28 AM
Simple, pretty, makes you want to render to textures even if you don't need to :)

Jens Scheddin
04-02-2004, 04:36 AM
It's really nice to finally see this long-awaited extension showing up. There really IS a god :) . Here's what quickly came to mind while flying over the spec:
I think the ability to create mipmaps via glGenerateMipmapsEXT() should have been exposed as an independent ARB extension a long time ago. It seems a bit strange to have all these high-level ARB extensions but automatic mipmap generation covered by just an SGIS extension.
Another thing that would be great is a render target that behaves like the framebuffer, where you can switch color/depth/stencil writes with gl*Mask(). So you just call glBindRenderTarget() and render to a texture or the framebuffer. I didn't think a lot about this, so there may be issues that prevent it from working.

cass
04-02-2004, 05:18 AM
...I wrote this last night, but the forums went down before I could post it.


Originally posted by evanGLizr:
Some notes & doubts:

- Why not STENCIL only textures? If the graphics card doesn't support stencil only, it can always create internally a stencil depth of the minimum depth size. I guess the problem comes when the app specifies one drawable for STENCIL and another for DEPTH separately? Is it too much driver work to create a combined texture on the fly and then copy back to one and the other whenever the rendertarget is changed (only necessary when the hardware does not support separate stencil & depth addresses).

The goal here was to make the common case easy. There are no STENCIL textures today, but if that changed in the future, we could easily add that support.



- Interactions with textures with borders. In theory using textures with borders as rendertargets shouldn't impose a problem.
Agreed that there's no obvious inability to support borders.



- Interactions with compressed textures. Probably you won't be able to render to these.

You can render to a texture whose internal format you've requested to be compressed. The *actual* internal format you get will almost certainly *not* be compressed.

This shouldn't be a big deal, because if you want to render-to-texture, you probably want a format that can be rendered to.



- Regarding issue 15, why not make it possible to use the same texture as drawable and texture source (as long as you don't render and read from the same levels/faces/slices, in which case you just say that the result is undefined).
This is very useful for doing programmable mipmap level generation (render to the lower-detail level reading from the higher-detail one).
...
I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?



- What's the interaction with SwapBuffers? In theory none (i.e. SwapBuffers always swaps the FRAMEBUFFER drawable), but note that this means that if you want to do things like triple buffering or offscreen rendering, whenever you want to present the results you need to render a full screen quad, is that desirable?

Interaction with the swap chain will be provided by a layered extension. It was recognized that this was important, but not something that should delay the completion of this spec.



- Interactions with glReadPixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE.

What interactions? I think this is WYEIWYG (what-you-expect-is-what-you-get). That's the goal, at least.



- Interactions of the texture format with the previous functions: what happens if you do a glReadPixels when the internal format of the texture is GL_RED? What about packed component textures (GL_R5G6B5...)? Is any texture format supported as rendertarget? If not, how can the application know which formats are available, by trial and error?

The internal format you request is a hint. It's a hint when you do TexImage2D() and it's a hint when you render to it. The driver is supposed to do the best it can based on your hint.

Good bets for render-ability are not too hard to guess: RGBA8, RGBA16F, RGBA32F, and their RGB equivalents.



- Interactions with glGetTexImage over the same texture object used as drawable. Can you do any glGetTexImage at all? What results would you get?

Why wouldn't this work? I'm not sure I understand.



- Interactions with the current pixelformat: What happens if the current pixelformat has no alpha but the texture does, is the destination alpha available for rendering when the drawable is TEXTURE? There's some mention of this in the spec part, I think that the pixelformat should be changed to match the one of the texture when you change the drawable (so you can do destination alpha rendering even if your FRAMEBUFFER doesn't have alpha).

If the texture you're rendering to has alpha, then your TEXTURE drawable has dstalpha.

There's no interaction with the FRAMEBUFFER drawable. They're completely separate.



- Interactions with texture sharing (wglShareLists). Does wglMakeCurrent force the copy of the current render target to the texture (this would solve all the single-thread problems). Cases:
- when the current rendertarget texture is used as source on another context. In the multithread case this should have the same limitations as when using the
- when the given texture object is used as rendertarget in two different contexts. in the multithread case you have to resort to say that rendering to the same texture object from two different threads is undefined?

I would defer to the way that Tex{Sub}Image*() behaves in these situations. Is there reason to do otherwise?



- Do you really need glDrawable? Why not make that when glDrawableParameter for COLOR and DEPTH is zero the rendering is done to the FRAMEBUFFER? This would allow things like rendering to the color buffer of the FRAMEBUFFER but storing the depth in a texture (is that desirable?). I guess that the main reason to have glDrawable is for the future use of render to vertexarray as glDrawable parameter?

There is a significant distinction between rendering to the framebuffer and rendering offscreen. There are no shared resources, no pixel ownership tests, no window system and display peculiarities when you're just rendering to texture. That makes it simpler and "better" to just have a Big Switch. I expect we'll add that complexity when it's needed, as a separate extension. In the meantime most people will be able to get along fine without it.

Thanks -
Cass

cass
04-02-2004, 05:55 AM
Originally posted by glitch:
hi, here a little cosmetic question

...

and then to switch between rendertarget simply call :

*BindRenderTarget(id) // id = 0 mean framebuffer

One of the nice simplicities of this spec (IMO) is that it allows rendering to regular old Texture Objects. There's no need to create a new object and associated API.

There will likely be desire to create Drawable objects in the future, but that was intentionally left out to keep from bogging down on issues that were not on the critical path.

Thanks -
Cass

Won
04-02-2004, 06:31 AM
Does this mean we can do all our offscreen rendering into textures, assuming POT resolutions? Are pixel operations going to be slow? For example, glReadPixels performs pretty badly on RTT pbuffers right now.

Does it make sense to have an OFFSCREEN render target?

A much simpler API, no context switches, the ability to have simultaneous read/write access to the texture, the ability to render to 3D textures - this is pretty good. But Korval raises an interesting point wrt superbuffers. With VBO, PBO and RenderTarget, we're pretty close to the functionality of superbuffers. It is still missing the offscreen stuff and the swap-chain stuff, but I take it this will be layered in the future. Is there a need for superbuffers if all it does is provide a unified API? Or are there some features of superbuffers that would still be missing?

-Won

harsman
04-02-2004, 06:44 AM
I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?

Well, the spec says the following regarding issue 15:


15) If a texture is bound for both render and texturing purposes, should the results of rendering be undefined or should INVALID_OPERATION be generated at glBegin()?

UNRESOLVED

Undefined results allow an application to render to a section of the texture that is not being sourced by normal texture operations.

That sounds like the behaviour is undefined, but maybe I missed something? Having undefined behaviour isn't very good, because it doesn't allow you to rely on the functionality. It would be better to define results when the source and destination pixels/texels are disjoint and otherwise leave the results undefined.

Overall I think the proposal is great, very simple and elegant API.

davepermen
04-02-2004, 07:00 AM
Originally posted by Won:
Is there a need for superbuffers if all it does is provide a unified API? Or are there some features of superbuffers that would still be missing?

i think they plan on dropping superbuffers.. not sure, but it definitely looks that way.

zeckensack
04-02-2004, 07:46 AM
Originally posted by Won:
Is there a need for superbuffers if all it does is provide a unified API? Or are there some features of superbuffers that would still be missing?

Superbuffers would allow you to manage and attach sub-memories. You could take two "classic" mipmapped textures and mix and match individual mipmap levels to form a new mipmap pyramid, without doing copies.
This doesn't seem to be terribly useful.

barthold
04-02-2004, 07:53 AM
- Regarding issue 15, why not make it possible to use the same texture as drawable and texture source (as long as you don't render and read from the same levels/faces/slices, in which case you just say that the result is undefined).
This is very useful for doing programmable mipmap level generation (render to the lower-detail level reading from the higher-detail one).
...
I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?

That is the intent indeed. The spec does say that it is possible to render to a mip-map level of a texture object while sourcing from a different mip-map level. See section 4.4.4 and the custom mip-map generation example at the end. Looks to me like we need to add something about cube-maps to the spec though!

Barthold

dorbie
04-02-2004, 08:50 AM
On rendering to textures with borders... the issue here is not whether there's some incompatibility with the spec, but what the actual outcome in the texture is. There are two possible outcomes: the border is rendered to, or the border isn't touched. I can imagine both outcomes being desirable to different developers; however, without rendering inclusive of the border, some desirable things would be impossible. This is an observation, not a feature request. Rendering to a texture border, given the way these things are sometimes laid out in memory, potentially has some extremely serious implications for the design and complexity of an implementation, so it would help if things were clear on the intended outcome when you render to a texture with a border image specified. Even the addressability of a texture border is unclear.

I assume by 'not a problem' the inherent assumption is that texture border images are still entirely separate things and you can only render to the texture proper. I tend to think that, given the usage borders get, they ain't worth the hassle/complexity, especially with other features like multitexture available now, but that's just an opinion.

Zengar
04-02-2004, 09:15 AM
I would make glGenerateMipmapsEXT a separate extension (it should have been done a long time ago).
I don't know whether a render target object would be desirable. Maybe it could save some state if one does a lot of RT switching...

And:
Who invented the render_to_texture extension, and why? Funny, I wanted to post a topic like "Do you agree that ARB_rtt is crap?", but I held myself back. That's what I call destiny.


Why is this stuff only coming now? :)

P.S. There is one more problem I would like to discuss... I wrote a simple bloom-effect demo some weeks ago. The bloom was done using a gaussian blur (via a fp - it was my first encounter with glslang :) ). But I had to repeat this many times. Like:

1. Bind Blurtex;
2. DoBlur;
3. CopyBlurTex;
4. goto 1.

It would be nice if this extension provided a simple way to do such stuff (rendering to a texture based on that same texture) - like the ability to copy a texture (glCopyTexture(tex, newtex)), or allowing texture reads while it is bound as a render target.

Corrail
04-02-2004, 09:36 AM
Although it is a little bit off-topic:
Something like a "pixel shader" (NOT the same as a DX pixel shader) would be nice too - a program which operates on each pixel stored in a render target.

davepermen
04-02-2004, 09:42 AM
i think the blur would be better done directly with ping-pong, as you need 2 textures anyway.. it would mean one copy less..

int i = 0; // toggles between 0 and 1

textures blur[2];

foreach(pass) {
readfrom(blur[i]);
writeblurto(blur[i ^ 1]);
i ^= 1;
}

final blurred texture in blur[i];

for me, the clearest method.

davepermen
04-02-2004, 09:44 AM
Originally posted by Corrail:
Although it is a little bit off-topic:
Something like a "pixel shader" (NOT the same as a DX pixel shader) would be nice too - a program which operates on each pixel stored in a render target.

that would then be a pixel transform, right? theoretically it should be doable by simply drawing a huge quad over the buffer while using it at the same time as a texture.. dunno, the spec is relaxed, it should be possible. depends on hw, though..

cass
04-02-2004, 12:13 PM
Originally posted by dorbie:
On rendering to textures with borders... the issue here is not whether there's some incompatibility with the spec but what the actual outcome in the texture is. There are two possible outcomes: the border is rendered to, and the border isn't touched.

My expectation is that the border would be rendered to. I agree there could be performance consequences, because some implementations might require render->copy.

I'm ok with that for the sake of keeping the common case simple.

Thanks -
Cass

Edit: Finish my thought.

rosasco
04-02-2004, 12:46 PM
Jens, good points on GenerateMipMaps and gl*Mask. Will
raise masking as an issue.

Thanks for the feedback.

JR


Originally posted by Jens Scheddin:
It's really nice to finally see this long-awaited extension showing up. There really IS a god :) . Here's what quickly came to mind while flying over the spec:
I think the ability to create mipmaps via glGenerateMipmapsEXT() should have been exposed as an independent ARB extension a long time ago. It seems a bit strange to have all these high-level ARB extensions but automatic mipmap generation covered by just an SGIS extension.
Another thing that would be great is a render target that behaves like the framebuffer, where you can switch color/depth/stencil writes with gl*Mask(). So you just call glBindRenderTarget() and render to a texture or the framebuffer. I didn't think a lot about this, so there may be issues that prevent it from working.

Korval
04-02-2004, 01:47 PM
One of the nice simplicities of this spec (IMO) is that it allows rendering to regular old Texture Objects. There's no need to create a new object and associated API.
Absolutely agreed.

I looked at one of ATi's presentations on superbuffers, and I found the API to be very involved. If I recall correctly, it went something like this. First, you have to allocate a memory buffer. To fill it, you have to bind it to a texture and use glTexSubImage. To render to it, you unbind it as a texture and rebind it to a framebuffer object (probably one you create), then do your rendering.

This one... I allocate a texture as normal, bind it as the render target, and render with it. Simple.

When the time comes to allow VBO's to be used directly as render targets, the API doesn't change. Instead of specifying a texture object, I specify a VBO object. It doesn't get any easier than that.

Superbuffers might offer a bit more control over the memory buffers of textures and so forth, but I think EXT_render_target is a better abstraction of the functionality that we need.

In a way, superbuffers compared to EXT_render_target reminds me of how VAO compares to VBO. VAO is very complicated, forcing the use of an entirely new API for vertex array binding. VBO simply overloads the normal conventions that we're used to.

This extension, outside of clarifying the few lingering issues being discussed, is just clearly the best way to expose this kind of functionality. We don't really need any new functions to create memory buffers; glTexImage and glBufferData are both perfectly acceptable. All that was really needed was a way to bind the texture to a conceptual framebuffer. Now, we have that (or will soon enough, once the spec is finalized and implemented).


Why are you coming with this stuff only now?
I wondered that about VBO for the longest time. It seems so obvious in hindsight.
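For concreteness, the flow described above might look something like the following. This is only pseudocode: the RFC is not final, and the exact names and signatures of glDrawableEXT/glRenderTargetEXT are still being discussed in this very thread, so every call here is hypothetical.

```
// create a perfectly ordinary texture object
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// redirect rendering into it
glDrawableEXT(GL_TEXTURE_EXT);       // drawable is now "a texture", not the window
glRenderTargetEXT(GL_COLOR, tex);    // hypothetical: tex receives color writes

// ... draw the scene ...

// switch back to the window and use the result
glDrawableEXT(GL_FRAMEBUFFER_EXT);
glBindTexture(GL_TEXTURE_2D, tex);
// ... draw with the texture ...
```

Note there is no superbuffers-style allocation API anywhere in the sketch, which is exactly the simplicity being praised.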

Zengar
04-02-2004, 02:31 PM
If I am allowed to summarize ;) : yes, give it to us! We don't want superbuffers anymore :p !

@dave:
I also came up with this solution, but: changing the drawable could be more expensive than copying.
Who knows, though...
Two buffers are also nice, however one virtual one would be kind of more... elegant

castano
04-02-2004, 04:27 PM
I'd like to thank the contributors of the extension being discussed for the initiative to release the extension spec as an RFC. I think this will be really useful for developers, who will be able to learn about the new features sooner and get used to the new APIs. I also hope it will be useful for IHVs, who will learn earlier what developers think. I hope this won't be a singular event, but a common practice in the future.

That said, I think that the extension still lacks some functionality. However, as someone mentioned, I agree that it's better to have a clean and minimal spec that *works*, and to extend it later with the required extensions.

evanGLizr
04-02-2004, 04:42 PM
Originally posted by cass:


- Interactions with textures with borders. In theory using textures with borders as render targets shouldn't pose a problem.
Agreed that there's no obvious inability to support borders.


Ok, so one can use scissor/stencil to render only to the interior of the texture and stencil to render to the border. For this stencil usage, maybe it would be desirable to be able to use the framebuffer depth/stencil instead of having to create a depth-stencil texture for each texture size?
This is one of the cases where it would be simpler to be able to use heterogeneous render target sizes: when, for example, you are only interested in the color buffer texture, but you still want depth/stencil testing without having to create one depth-stencil texture for each texture size you have.





- Interactions with compressed textures. Probably you won't be able to render to these.
You can render to a texture whose internal format you've requested to be compressed. The *actual* internal format you get will almost certainly *not* be compressed.

This shouldn't be a big deal, because if you want to render-to-texture, you probably want a format that can be rendered to.


Sure, but the problem is "which formats can be rendered to"? From what you say, there are no restrictions (other than "color-formats" must be rendered as COLOR rendertargets and "depth-formats" as DEPTH/DEPTHSTENCIL). More on this below.





- Regarding issue 15, why not make it possible to use the same texture as drawable and texture source
...
I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?

As some other people already pointed out, issue nr. 15 says "UNRESOLVED" and 4.4.4 doesn't say anything about rendering to different faces/slices.





- Interactions with glReadPixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE.
What interactions? I think this is WYEIWYG (what-you-expect-is-what-you-get). That's the goal, at least.

The interactions come with the texture format, see below.



The internal format you request is a hint. It's a hint when you do TexImage2D() and it's a hint when you render to it. The driver is supposed to do the best it can based on your hint.

Good bets for render-ability are not too hard to guess: RGBA8, RGBA16F, RGBA32F, and their RGB equivalents.
When it comes to specs and standards, I don't like guessing. The issue here is: if GL_LUMINANCE is a valid color rendertarget (as spec'ed in 4.4.6), what's the value of glGetInteger(GL_GREEN_BITS) when that texture is set as a rendertarget? Should it follow table 3.15 of the OpenGL 1.5 spec (where a single component is mapped to R)? Or should it follow the convention described in "Pixel Transfer Operations" on pg. 192 of OpenGL 1.5 (where, for example, Luminance pixels are read back as R+G+B)?

I know the internal format is a hint, but the driver will have already allocated the texture in a given internal format and if that internal format doesn't match a renderable format, it can do three things:
a) Fail to set it as a rendertarget (if this is valid, it should be noted in the spec).
b) Reallocate the memory for that texture in a renderable internalformat. This will probably cause an expansion of the texture.
c) Use a temp rendertarget and reformat it at rendertarget flush.
The other bottom line is: are you exposing the internal format of a texture with this extension? If you set a texture as a rendertarget and do a glGetInteger(GL_RED_BITS), is it obliged to return the actual internal format? (So option b) above wouldn't be valid.)

Ok, I overlooked the following paragraph from the spec


When a texture is first bound for rendering the internal format of the
texture might change to a format that is compatible as a rendering
destination. If the format changes the new format will be guided by
the texture's requested format, and the existing contents of the
texture will be converted to the new format. Queries with glGet of
GL_DEPTH_BITS, GL_RED_BITS, etc. can be used to determine the actual
precision provided.
So it looks like alternative b) is the one the spec favors, although it should be noted that this goes against p. 128 of OpenGL 1.5 spec:


A GL implementation may vary its allocation of internal component resolution
or compressed internal format based on any TexImage3D, TexImage2D (see below),
or TexImage1D (see below) parameter (except target), but the allocation and
chosen compressed image format must not be a function of any other state and cannot
be changed once they are established. In addition, the choice of a compressed
image format may not be affected by the data parameter. Allocations must be invariant;
the same allocation and compressed image format must be chosen each
time a texture image is specified with the same parameter values. These allocation
rules also apply to proxy textures, which are described in section 3.8.11.




- Interactions with glGetTexImage over the same texture object used as drawable. Can you do any glGetTexImage at all? What results would you get? Why wouldn't this work? I'm not sure I understand.
What happens if you do a glGetTexImage on the same texture that is currently bound as rendertarget? Is that allowed at all? If it is, will that get the current texture values or the ones before setting it as a rendertarget? (i.e. will a glGetTexImage cause a flush of the drawable?)

[Removed the issues on pixelformats, they get answered with the paragraph I overlooked from the spec]





- Interactions with texture sharing (wglShareLists). Does wglMakeCurrent force the copy of the current render target to the texture (this would solve all the single-thread problems). Cases:
- when the current rendertarget texture is used as source on another context. In the multithread case this should have the same limitations as when using the
- when the given texture object is used as rendertarget in two different contexts. In the multithread case do you have to resort to saying that rendering to the same texture object from two different threads is undefined?
I would defer to the way that Tex{Sub}Image*() behaves in these situations. Is there reason to do otherwise?


Good point, too bad that is not specified anywhere :/. So from what you say wglMakeCurrent doesn't cause a flush of the rendertarget.
Would this be a good time to "de facto" relax the condition that wglShareLists only works if all the contexts share the same pixel format? What MSDN says is that if they don't, the result is "implementation dependent", so the implementation could allow sharing among different pixel formats as long as all the renderers are the same.

Humus
04-03-2004, 01:07 AM
I must say that the extension looks very good. :) I've wanted something like this all along. I hoped the super_buffers extension would replace the old WGL_ARB_render_texture stuff, but after hearing "soon" about it for over a year now, I think I can safely declare it dead. If EXT_render_target is what I get, I'm more than happy. It has all the functionality I need. I'm even doubtful that render to vertex array will be all that useful once hardware has access to textures in the vertex shader.

V-man
04-03-2004, 06:47 AM
The parameter IMAGE_EXT determines the active image layer of
3-dimensional textures attached to the texture drawable. If an
attached texture is not 3-dimensional, then the value of IMAGE_EXT is
ignored for that texture.
Maybe this part should mention the orientation of the layers, as in parallel to the s,t-plane.

A few things I didn't understand :

The interaction between a color and a depth texture is not clear to me. How do you make them both drawable and the current render target?

If you have many color and depth textures, how do you pick the color and depth to render to?

glDrawableEXT(GL_TEXTURE_EXT); doesn't convey this information and glBindTexture(GL_TEXTURE_2D, 0); says what here?

cass
04-03-2004, 07:23 AM
Originally posted by V-man:
Maybe this part should mention the orientation of the layers, as in parallel to the s,t-plane.
This is intended to work effectively in the same way as rendering to the back buffer followed by CopyTexSubImage3D(). Do you feel we need an issue or spec language to make this more explicit?



A few things I didn't understand :

It's not clear to me the interaction between color and depth texture. How do you make them both drawable and the current render target?

If you have many color and depth textures, how do you pick the color and depth to render to?

glDrawableEXT(GL_TEXTURE_EXT); doesn't convey this information and glBindTexture(GL_TEXTURE_2D, 0); says what here?
RenderTarget() is how you specify which textures are the current render targets for color and depth. I'm not sure what you mean by interaction.

Thanks for the feedback -
Cass

glitch
04-03-2004, 12:27 PM
hi,

i've had some conceptual reflection about opengl texture/pbo/vbo and rendertarget, and how it can be made the most intuitive for all developers and easy for us to integrate in a current state-of-the-art 3D engine.

IMHO, the only 2 low level objects in opengl seem to be pbo and vbo; a texture is a subset of pbo (i don't think it's wired this way in opengl).

So wouldn't it be simplest to treat pbo and vbo as the only potential render targets (the <target> parameter of glRenderTarget) and after that have the ability to use a pbo as a texture?

This is pure conceptual brainstorming, so i don't expect any feedback on this approach (as technical specs aim at technical advice :-) ). I was just trying to find the nicest / most intuitive way to handle all the pbo/vbo/texture/rendertarget stuff.

++

Korval
04-03-2004, 12:39 PM
IMHO, the only 2 low level objects in opengl seem to be pbo and vbo; a texture is a subset of pbo (don't think it's wired this way in opengl).
Well, since VBOs and PBOs are the same thing (buffer objects: different uses, but the same kind of memory, usable interchangeably), this would really provide only one thing.

Also, since buffer objects have no intrinsic concept of dimensionality (they are all flat arrays), it's kind of difficult to bind them as a render target/source image directly. There's a difference between using glTexSubImage with a buffer object to fill a texture, and actually saying that the primary location of the texture data is in the buffer object (which PBO doesn't provide). That functionality doesn't exist. PBOs are used to copy pixel data, as a means of transferring pixel data to/from various textures/framebuffers, while still providing the other functionality of buffer objects (that is, sourcing vertex data).

glitch
04-03-2004, 01:59 PM
I just thought that making opengl fully pbo/vbo (say, buffer object) centric would have been really interesting... but that's not the question here and it's all about my dreams ;-)

Anyway, thx for your answer Korval

cheers

V-man
04-04-2004, 09:35 AM
Originally posted by cass:
This is intended to work effectively in the same way as rendering to the back buffer followed by CopyTexSubImage3D(). Do you feel we need an issue or spec language to make this more explicit?
Sure, there is something to be said for clarity.
Someone might want an extension that allows rendering in other orientations (t,r or s,r).

glRenderTargetEXT! OK, that answers my question.

Then in the "New Tokens" section would be nice to see what the functions take.



New Tokens

Accepted by <drawable> parameter of Drawable, RenderTarget,
DrawableParameter, and GetDrawableParameter:

FRAMEBUFFER_EXT 0x????

Accepted by <drawable> parameter of RenderTarget
DrawableParameter, and GetDrawableParameter
GL_TEXTURE
Yes, other specs do this and they don't mention the hex value for the old tokens.

=======other stuff
Don't some vendors provide 16 bit or 32 bit stencil? Maybe there is a need to have a target GL_STENCIL besides GL_COLOR and GL_DEPTH.

=======other stuff2
It seems as if not all vendors can provide render to depth(texture) alone. So what does this mean when it comes to this extension?

Korval
04-04-2004, 10:16 AM
Then in the "New Tokens" section would be nice to see what the functions take.
Well, as Cass pointed out, the spec isn't complete yet.


don't some vendors provide 16 bit stencil, 32 bit stencil?
Not that I'm aware of. Certainly no consumer hardware does.


It seems as if not all vendors can provide render to depth(texture) alone.
Who? If they supported color mask, and rendering to a depth texture at all, then they can support this.

Dirk
04-04-2004, 01:36 PM
I might have missed something, but just to make sure I didn't: ;)

The text says GenerateMipmapEXT() is applied to the currently bound texture. The texture used as the render target is generally not bound when it's being rendered to (exception: last example, building MM levels from the base level). So to use GenerateMipmapEXT() after finishing my texture rendering I have to bind it, even if I'm not going to use it as a texture here. Correct?

Given that texture binds are not free, what was the reason for not allowing the GENERATE_MIPMAP texture parameter to take effect? There is no clearly defined "done drawing" point, but RenderTarget and Drawable look like reasonable candidates.

The separate GenerateMipmap function has its uses, but being able to generate the pyramid at the point the base level is created seems an easier design for generic systems. Alternatively, you could promise me that BindTexture is free at that point, but I'm a little skeptical about that.
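For a sense of what a full regeneration costs, the pyramid GenerateMipmapEXT() has to rebuild contains floor(log2(max(w,h)))+1 levels. A small illustrative C helper (not part of any spec) that counts them using GL's halving convention:

```c
/* Number of mipmap levels in a full pyramid for a w x h base level,
 * halving (rounding down) each dimension until both reach 1. */
static int mip_level_count(int w, int h)
{
    int levels = 1;                 /* the base level itself */
    while (w > 1 || h > 1) {
        w = (w > 1) ? w / 2 : 1;
        h = (h > 1) ? h / 2 : 1;
        ++levels;
    }
    return levels;
}
```

A 256x256 render target drags 8 extra levels along with it, which is why *when* regeneration happens (after every render pass, or only on an explicit call) matters for performance.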

Thanks

Dirk

cass
04-04-2004, 02:47 PM
Originally posted by dirk:
Alternatively you could promise me that BindTexture is free at that point, but I'm a little skeptical about that.
Hi Dirk,

We talked about this, and one idea was to have a dummy binding point, just used for modifying objects but was not rendering state. Something like

ActiveTexture( DUMMY );
BindTexture(...);
GenerateMipmap(...);

Binding texture objects to the "DUMMY" slot would be cheap. I'm ambivalent about this approach, though. I like it because it avoids modifying render state just to modify an object, but it does nothing to solve the problem of too much indirection in the texture API.

Thanks -
Cass

MZ
04-04-2004, 09:33 PM
I'm concerned with 2 things:

Missing Feature #1: (issue 3) Ability to use a depth-texture of larger size than the bound color-render-target. (mentioned by Korval)

Missing Feature #2: Ability to use the framebuffer's backbuffer bound to some targets (example: DEPTH and STENCIL) at the same time as textures bound to the remaining targets (example: COLOR). (mentioned by evanGLizr)

(castano) That said, I think that the extension still lacks some functionality. However, as someone mentioned I agree that it's better to have a clean and minimal spec that *works*, to extend it later with the required extensions.
I could agree, if we were talking about some new, unproven, experimental functionality. But this isn't so.

If anyone happens to still have the DirectX 7 SDK, see the chapter "Common techniques and special effects / Cubic environment mapping", or the code sample "envcube". The described technique explicitly reuses the framebuffer's depth buffer when rendering to each cube face. No need to mention that the framebuffer almost always has a different size than a face of the cube map texture. These are examples of #1 and #2 in use. In DirectX 8 and 9 these abilities remained; only the interface was changed, making usage much more obvious. (Actually, DirectX 6 has SetRenderTarget too, but lacking that version's full SDK, I can't tell anything about its render-target flexibility.)

So, we are not talking about anything exotic. Both #1 and #2 are actually bread-and-butter features of DirectX, about 4 years old, in use from the introduction of DirectX 7 and the GeForce 256 until today. If #1 or #2 require any specific HW support, then it is effectively a requirement for any HW which exposes cube mapping under DirectX.

Should we refrain from including in the extension features which are de facto standard and which have been proven useful? I think all RTT methods in OpenGL have lagged behind D3D long enough. If the intention of developing a minimal spec was to provide granularity in exposing features, in order to implement render-target for pre-GeForce 256 HW too, then I'd understand. But in that case, I hope there will be no big delay between the 'minimal' and 'full' render target releases.

(cass) There is a significant distinction between rendering to the framebuffer and rendering to offscreen. There are no shared resources, no pixel ownership tests, no window system and display peculiarities when you're just rendering to texture.
As I wrote above, DirectX has no problem with that. I've tried to imagine what sort of problems you actually mean (does it apply only to color (displayable) data, thus not to depth/stencil? or only single-buffered pixel formats? or overlapping GL windows?). I think in the most pessimistic case any such problem could be solved by allocating an additional color buffer and doing a copy operation once per frame. It is only a matter of deciding which side should do the job: the driver or the user (with use of the new render_target extension). If my guesses are correct, I'd vote for the driver side, because in fullscreen mode pixel ownership and related issues cease to exist, so the driver could enter the more effective path automatically.

(barthold) Korval, what would you use that functionality for? The difficulty is in defining what happens when you have say a depth-texture that is bigger (or smaller) than the color-texture bound to the drawable. (...) Our initial idea was to keep it simple and not allow this. I would be interested in hearing otherwise.
I tried to provide an example of usefulness in this thread: (http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=010198)
Also, what happens when you re-use such a depth-texture with yet another differently sized color-texture? As in DirectX, the contents of the depth texture are invalid when the sizes don't match. However, it would be useful to retain the contents of the depth texture if COLOR were bound to 0 (a "NULL" render target), because the color-texture which was bound while rendering to the depth-texture might later be used in texturing when rendering to a depth-only render-target with our depth-texture still bound.

(evanGLizr) Do you really need glDrawable? Why not make it so that when glDrawableParameter for COLOR and DEPTH is zero the rendering is done to the FRAMEBUFFER? This would allow things like rendering to the color buffer of the FRAMEBUFFER but storing the depth in a texture (is that desirable?). I guess that the main reason to have glDrawable is for the future use of render to vertexarray as a glDrawable parameter?
If Missing Feature #1 is going to be included in the spec, then zero will be needed for a "NULL" render target, just as proposed in issue 9. It is necessary to have the ability to bind 'something' that doesn't have dimensions, because otherwise it might interfere with other bound targets.

If Missing Feature #2 is going to be included in the spec, the ability to bind the framebuffer to individual targets would require rewriting the interface, which might look like this:
glEnable/Disable(RENDER_TARGET) replaces glDrawable(TEXTURE/FRAMEBUFFER)
RenderTarget(COLOR/DEPTH/etc, GL_NONE, 0 /*ignore me*/)
RenderTarget(COLOR/DEPTH/etc, GL_FRAMEBUFFER, 0 /*ignore me*/)
RenderTarget(COLOR/DEPTH/etc, GL_TEXTURE, my_texture_object)

Dr^Nick
04-04-2004, 09:34 PM
Looks like by the time I got around to reading the spec and the messages, a lot of things had already been discussed. As such I'm just going to say the extension is a good step forward, and I hope this sort of cleanup continues.

Thanks to those that are making it happen.
DN.

V-man
04-04-2004, 09:45 PM
Not that I'm aware of. Certainly no consumer hardware does.
I thought some 3Dlabs cards could give 16 bit stencil. If we had a database of pixel formats for every card, it would be useful.


Who? If they supported color mask, and rendering to a depth texture at all, then they can support this.
ATI (R3xx). In D3D, you can't do render to depth on ATI. Also, is there an ARB extension for render to depth texture?

This extension is curious because you can choose a color and a depth buffer to go together, as opposed to the old p-buffer approach where they are always paired.
I'm sure that some vendors will have issues with this extension.

And if that is the case, a special internal format for glTexImage: GL_RGB8_DEPTH24_STENCIL8 ...

and allow glTexImage to fail if the implementation can't handle certain formats.

Luis Perenna
04-04-2004, 11:14 PM
Originally posted by zeckensack:
Superbuffers would allow you to manage and attach sub-memories. You could take two "classic" mipmapped textures and mix and match individual mipmap levels to form a new mipmap pyramid, without doing copies.
This doesn't seem to be terribly useful.
Maybe it can be very useful for doing some kind of SGI "ClipMapping", which is very interesting for "continuous" terrain texturing.
ClipMapping Paper from SGI (http://www.cs.virginia.edu/~gfx/Courses/2002/BigData/papers/Texturing/Clipmap.pdf)
Wishes,
Luis

Licu
04-05-2004, 03:29 AM
I really hope this extension will be modified to permit binding render targets separately for color and depth/stencil. Like MZ said, this is already possible in DirectX and it is very useful. A simple example: I want to apply a filter to only a subset of scene objects. I render the non-filtered objects to the framebuffer color, altering its depth buffer. Then I set only the color drawable to a texture and render the filtered objects while using and altering the framebuffer depth (depth drawable set to zero or some other specific value). To this texture I apply the specific filters and finally add it to the framebuffer color, etc. With the extension as it is proposed now, I cannot do this without an additional copy of the depth.
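The technique described above can be sketched in pseudocode using the three-argument RenderTarget style MZ proposed earlier in the thread (hypothetical interface; nothing here is in the current RFC):

```
// 1. non-filtered objects: normal rendering to the framebuffer
RenderTarget(COLOR, GL_FRAMEBUFFER, 0);
RenderTarget(DEPTH, GL_FRAMEBUFFER, 0);
draw(non_filtered_objects);

// 2. filtered objects: color into a texture, depth still the framebuffer's
RenderTarget(COLOR, GL_TEXTURE, filter_tex);
draw(filtered_objects);          // depth-tested against the scene's depth

// 3. filter the texture, then composite it back over the framebuffer color
apply_filters(filter_tex);
RenderTarget(COLOR, GL_FRAMEBUFFER, 0);
draw_fullscreen_quad_with(filter_tex);   // additive blend
```

Step 2 is exactly what the current proposal disallows: it mixes a texture color target with the framebuffer's depth buffer, which would otherwise force the extra depth copy.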

Eric Lengyel
04-05-2004, 04:29 AM
Missing Feature #2: Ability to use framebuffer's backbuffer bound to some targets (example: DEPTH and STENCIL) at the same time with textures bound to remaining targets (example: COLOR). (mentioned by evanGLizer)
I agree that this feature is important. I think a clean API for all of this would get rid of the glDrawableEXT() and glDrawableParameter{if}EXT() functions and define a single glRenderTargetiEXT() function like the following.

void glRenderTargetiEXT(GLenum target, GLenum pname, GLint param);

<target> is either GL_COLOR or GL_DEPTH.

<pname> is GL_TARGET_DRAWABLE_EXT, GL_TARGET_TEXTURE_EXT, GL_TEXTURE_FACE_EXT, GL_TEXTURE_IMAGE_EXT, or GL_TEXTURE_LEVEL_EXT.

If <pname> is GL_TARGET_DRAWABLE_EXT, then <param> is GL_FRAMEBUFFER_EXT or GL_TEXTURE and determines where writes go for the buffer named by the <target> parameter. This allows situations such as rendering color to a texture while still rendering depth to the back buffer. The pixel ownership test should only apply to the depth buffer when both the color and depth targets are the framebuffer. (In my opinion, the pixel ownership test really ought to only apply if rendering directly to the front buffer. The spec seems to be ambiguous about this.)

If <pname> is GL_TARGET_TEXTURE_EXT, then <param> specifies a texture object as the target for writes to the buffer named by the <target> parameter when the drawable is GL_TEXTURE.

If <pname> is GL_TEXTURE_FACE_EXT, GL_TEXTURE_IMAGE_EXT, or GL_TEXTURE_LEVEL_EXT, then <param> specifies the face, image, or mipmap level, respectively, for the texture drawable corresponding to the buffer named by the <target> parameter.

A glGetRenderTarget{if}vEXT() function would retrieve state in the expected way.

Is the glGenerateMipmapsEXT() function really necessary? It seems like a texture with GL_GENERATE_MIPMAP turned on should have its mipmaps implicitly generated as soon as it is no longer the active rendering target for either the color or depth buffers.
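A usage sketch of the API proposed above, rendering color to a texture while depth writes still go to the framebuffer's depth buffer. The entry points are of course hypothetical; they exist only in this proposal:

```
// color writes go to level 0 of my_texture
glRenderTargetiEXT(GL_COLOR, GL_TARGET_DRAWABLE_EXT, GL_TEXTURE);
glRenderTargetiEXT(GL_COLOR, GL_TARGET_TEXTURE_EXT,  my_texture);
glRenderTargetiEXT(GL_COLOR, GL_TEXTURE_LEVEL_EXT,   0);

// depth writes keep going to the framebuffer's depth buffer
glRenderTargetiEXT(GL_DEPTH, GL_TARGET_DRAWABLE_EXT, GL_FRAMEBUFFER_EXT);

// ... draw ...
```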

Korval
04-05-2004, 10:47 AM
I think something Cass pointed out bears repeating.

Regardless of whether Direct3D supports a feature or not, this extension does not have to expose everything. It can be restrictive to allow for easier initial implementation, and those restrictions can be relaxed in later extensions. Otherwise, you may have a situation where it is implemented on some cards, but not universally, due to the lack of restrictions.

And the D3D implementation could easily be hiding copying and so forth as well, though D3D's design makes this somewhat difficult.

Dirk
04-07-2004, 01:26 PM
Hi Cass,


Originally posted by cass:
Hi Dirk,

We talked about this, and one idea was to have a dummy binding point, just used for modifying objects but was not rendering state. Something like

ActiveTexture( DUMMY );
BindTexture(...);
GenerateMipmap(...);

This binding texture objects to the "DUMMY" slot would be cheap. I'm ambivalent about this approach though. I like it because it avoids modifying render state just to modify an object, but it is does nothing to solve the problem of too much indirection in the texture API.

Thanks -
Cass
Hmm, ok, that would work, too. What were the feelings about this? Is there a chance this would get added?

Admittedly, it's not the cleanest solution, but it's usable. I suppose the main reason for not honoring automatic mipmap generation was the problem of defining when to do it. Wouldn't it be cleaner to define either implicit or explicit flush conditions, and regenerate the mipmaps at those points? You could use a glFlush or glFinish while the Drawable is not bound to FRAMEBUFFER, or the calls to RenderTarget and Drawable.

Thanks

Dirk

c0ff
04-08-2004, 02:05 AM
I think that glGenerateMipmaps() is redundant. It's much better and more OpenGL-ish to reuse the SGIS_generate_mipmap texture property and regenerate the higher mipmap levels on glFinish().

If, for some reason, someone needs a glGenerateMipmaps() call, it would be better to add an integer argument to indicate which mipmap levels should be generated.

But in my strong opinion it's MUCH better to reuse SGIS_generate_mipmap and define glFinish/glFlush behaviour for rendering to texture.

---
oops, it seems to be already answered:

Also, please relax the dimensions constraint. As was said, in DirectX it is already possible to use one big Z-buffer for rendering to textures of different sizes. That means hardware can already do it. If the ARB wants to support some hardware which can't do such a trick, it would be good if this constraint were removed by some extension immediately available alongside EXT_render_target.
---
Thanks,
Dmitry.

Waiting impatiently for first implementations.

marco_dup1
04-08-2004, 02:35 AM
Originally posted by c0ff:
I think that glGenerateMipmaps() is redundant. It's much better and more OpenGL-ish to reuse the SGIS_generate_mipmap texture property and regenerate the higher mipmap levels on glFinish().
Isn't mipmap generation a part of OpenGL since 1.3?

idr
04-08-2004, 08:24 AM
What happens if you do a glGetTexImage on the same texture that is currently bound as rendertarget? Is that allowed at all? If it is, will that get the current texture values or the ones before setting it as a rendertarget? (i.e. will a glGetTexImage cause a flush of the drawable?)
evanGLizr,

I would expect this to work the same as doing glReadPixels on the current drawable. After all pending drawing operations are completed, the pixel data is read. You can think of rendering to a texture as being analogous to calling TexSubImage on a texture.

idr
04-08-2004, 08:36 AM
(MZ) As I wrote above, DirectX has no problem with that. I've tried to imagine what sort of problems you actually mean (does it apply only to color (displayable) data (thus not to depth/stencil)? or only single-buffered PF? or overlapping GL windows?).

The problem is with a double-buffered display that has a single back-buffer for all windows. If two windows overlap, then, in the overlapping region, the back-buffer only gets rendered to for the front window. If a window is completely obscured by another window, rendering to the obscured window's back-buffer becomes a no-op.

I don't know (or care, really) how D3D is specified to work, but that's how OpenGL is specified to work. I think that would cause nasty surprises for people if rendering to a texture did nothing because their window was obscured. :)

c0ff
04-08-2004, 10:28 AM
Originally posted by Marco Bubke:
Isn't mipmap generation a part of OpenGL since 1.3?

You are absolutely right. I'm too lazy to update my code and, as a result, my spelling. :)

Of course, I meant the standard mipmap generation feature but used the wrong name.

MZ
04-08-2004, 01:04 PM
Originally posted by idr:
The problem is with a double-buffered display that has a single back-buffer for all windows. If two windows overlap, then, in the overlapping region, the back-buffer only gets rendered to for the front window. If a window is completely obscured by another window, rendering to the obscured window's back-buffer becomes a no-op.

I don't know (or care, really) how D3D is specified to work, but that's how OpenGL is specified to work. I think that would cause nasty surprises for people if rendering to a texture did nothing because their window was obscured. :)

Using glCopyTexImage as the RTT method under such conditions might cause the same surprise...

I take the point that the GL rules reduce the usability of M.F.#2, but I don't think they remove it completely. It depends on the nature of the particular RTT effect in use: whether you might ever access texels originating from non-owned pixels (and thus undefined) or not.

evanGLizr
04-08-2004, 01:07 PM
Originally posted by idr:

What happens if you do a glGetTexImage on the same texture that is currently bound as rendertarget? Is that allowed at all? If it is, will that get the current texture values or the ones before setting it as a rendertarget? (i.e. will a glGetTexImage cause a flush of the drawable?)

evanGLizr,

I would expect this to work the same as doing glReadPixels on the current drawable. After all pending drawing operations are completed, the pixel data is read. You can think of rendering to a texture as being analogous to calling TexSubImage on a texture.

Well, if we are talking about expectations, I would expect that to be implementation dependent: on some implementations the texture memory may not be the same as the surface you render to, so unless there is an explicit flush of the drawable back to the texture memory, you will get stale data.

I'm not arguing about whether it will work or not; my point is that the spec has to state one behaviour or the other unambiguously, the same way it states how glTexSubImage and glTexImage work (or, as currently worded, stop working) when the texture is bound as a drawable.

evanGLizr
04-08-2004, 01:16 PM
Originally posted by idr:

(MZ) As I wrote above, DirectX has no problem with that. I've tried to imagine what sort of problems you actually mean (does it apply only to color (displayable) data (thus not to depth/stencil)? or only single-buffered PF? or overlapping GL windows?). The problem is with a double-buffered display that has a single back-buffer for all windows. If two windows overlap, then, in the overlapping region, the back-buffer only gets rendered to for the front window. If a window is completely obscured by another window, rendering to the obscured window's back-buffer becomes a no-op.

I don't know (or care, really) how D3D is specified to work, but that's how OpenGL is specified to work. I think that would cause nasty surprises for people if rendering to a texture did nothing because their window was obscured. :)

Actually, OpenGL doesn't specify anything that makes rendering to occluded surfaces a no-op. It just says that:
a) reading from occluded regions is undefined: "If any of these pixels lies outside of the window allocated to the current GL context, the values obtained for those pixels are undefined." (OpenGL 1.5, p. 188)
b) rendering to occluded regions is discarded at the pixel-ownership level: "The first test is to determine if the pixel at location (xw, yw) in the framebuffer is currently owned by the GL (more precisely, by this GL context). If it is not, the window system decides the fate of the incoming fragment." (OpenGL 1.5, p. 171)

In fact, I believe that on MacOS X each window has its own backbuffer, as one is needed for the compositor. On Windows, the "unified backbuffer" approach (which is what NVIDIA calls sharing the back & z buffers) used to be a "workstation class" feature (so on non-workstation cards you would get a backbuffer per window), because it allowed workstation apps (with lots of contexts and windows) to run with a smaller memory footprint.

So, to recap: the fact that a driver discards certain OpenGL calls when the window is not visible is a driver optimization; it is neither implied nor promoted by the OpenGL spec (in fact, OpenGL still needs to respond to things like glGetFloatv(GL_MODELVIEW_MATRIX)).

(Huh, is there any way to do italics without that pedantic red tint?)

idr
04-08-2004, 01:39 PM
So it's actually worse than no-ops. Some drivers will give one behavior and some will give another, and both are perfectly valid. The result is that an app could be written on Mac OS X expecting the data to be written, but would get very different behavior on Windows or Linux (where the data may be discarded). That's a very compelling argument for not allowing a window-system-owned backbuffer to be shared with an OpenGL-owned texture drawable, huh?