
View Full Version : framebuffer_object?



ffish
11-15-2004, 04:45 PM
Still waiting ... any ARB members want to update us on a timeline? I know NVIDIA has provisional (hidden) support in their recent drivers. I know most of the names of the new functions, just not how to use them. There must be some sort of agreement on a spec out there for most 65 series drivers to have it. Any news? :(

Korval
11-15-2004, 07:06 PM
What are the names of these functions? Maybe we can figure out what they do?

MZ
11-15-2004, 08:06 PM
glGenFramebuffersEXT
glDeleteFramebuffersEXT
glBindFramebufferEXT
glGetFramebufferBufferParameterfvEXT
glGetFramebufferBufferParameterivEXT
glFramebufferStorageEXT
glFramebufferTextureEXT
glValidateFramebufferEXT
glGenerateMipmapEXT

ffish
11-15-2004, 08:28 PM
^ What he said.

I mistyped. I pretty much know what they do in a vague sense. Most of those functions are probably analogous to render_target, just with different names, so render_target probably describes the functionality pretty well. What I want is the spec, so I know exactly what they do, and driver support. If those function stubs exist in many recent driver versions, why can't we use them? And why no spec? I've forgotten now which driver versions have had the strings, but pretty much all recent ones I've tried have. I'd suspect all 65 series drivers do. I'd prefer NVIDIA to either obfuscate the function strings or not put them in the drivers at all. Or maybe it's just that I shouldn't be reading the dlls - who knows. Oh well, at least they're obviously testing it out so full marks to them. It's the ARB I'm upset with. It'd be kinda funny if after all this time framebuffer_object == render_target. Kinda funny in an "extremely annoying" way.

Bah, just ranting at still waiting ...

ffish
11-15-2004, 08:45 PM
There's also the hint in ARB_texture_float that says:

"Internal format names have been updated to the same convention as the EXT_framebuffer_object extension."

so we know pretty much (with the non-extended OpenGL texture internal formats) all the texture internal formats that can be used with it. Most of the info is there - we just can't use it. :mad:

Korval
11-16-2004, 08:02 AM
My guess would be that "glFramebufferStorageEXT" allocates the framebuffer object (much as glTexImage allocates storage for a texture object). The kinds of parameters that get passed in would define the specifics of the framebuffer (width, height, etc.).

"glFramebufferTextureEXT" would likely be how one binds and unbinds a texture object to a framebuffer.

I'm not sure what "glValidateFramebufferEXT" would really do, though. What kind of validation does a framebuffer need? What would render it invalid?

I'm not sure why this extension would even expose a "glGenerateMipmapEXT".

Given some of the discussions we have had on this board, it is likely that there are some new internal texture formats for textures that will give the driver hints that these textures will be used as render targets. It is even possible that only textures that are created with these internal formats can be used as render targets. That might be what the "glValidateFramebufferEXT" function is for.
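To make the guesswork concrete, here's a purely hypothetical sketch of how these entry points might fit together. Every signature and token below is invented from the function names alone; nothing here comes from a published spec:

/* Hypothetical usage -- extrapolated from the entry-point names only. */
GLuint fb, tex;   /* tex: an ordinary texture object, created elsewhere */

glGenFramebuffersEXT(1, &fb);                     /* reserve a name */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);     /* make it current */
glFramebufferStorageEXT(GL_FRAMEBUFFER_EXT,       /* guessed: define the */
                        GL_RGBA8, 512, 512);      /* storage, glTexImage-style */
glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT, tex); /* guessed: attach texture */
if (!glValidateFramebufferEXT(GL_FRAMEBUFFER_EXT))/* guessed: renderable? */
    ; /* fall back to pbuffers / glCopyTexSubImage */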

nrg
11-16-2004, 09:30 AM
Does somebody know the date(s) for the December ARB meeting? :)

Korval
11-16-2004, 12:07 PM
I did a little research on how VBO came into existence. There had been rumors (and a mention in Carmack's .plan file) that the ARB was either finished or nearly finished with some kind of cross-platform server-side vertex array extension in January of last year. However, the spec itself wasn't released until GDC, around 3/17/03. We got a functioning (if not fast) implementation out of nVidia almost immediately. By contrast, it took about two months for ATi to expose it, though it could be said that ATi's initial implementation was much better/faster than the beta versions nVidia exposed.

Basically, the fact that we see these entrypoints in the drivers could be taken to be a positive sign. Or, it could be nothing.

bobvodka
11-16-2004, 01:37 PM
Originally posted by Korval:
Basically, the fact that we see these entrypoints in the drivers could be taken to be a positive sign. Or, it could be nothing.
Well, I hope this means something will appear soon (although waiting for GDC05 would be a pain...). That said, I'll have to wait for ATI to add it anyway, so even if the spec was released tomorrow I wouldn't count on seeing it in ATI's drivers pre-February (4.11s are out, 4.12s are in 'beta', 5.1 will be January and is probably already in the pipe, so 5.2 is probably the most likely time), unless of course the spec is complete and they are holding off a release until NV/ATI/3DLabs all have something in place... well, I can hope, can't I ;)

zeckensack
11-16-2004, 02:19 PM
ATI seems to have something in the works, too.
Catalyst 4.11 final (not the beta) contains these:
glGetFramebufferParameterivATI
glGetFramebufferParameterfvATI
glFramebufferParameterivATI
glFramebufferParameteriATI
glFramebufferParameterfvATI
glFramebufferParameterfATI
glIsFramebufferATI
glGetFramebufferATI
glBindFramebufferATI
glDeleteFramebufferATI
glCreateFramebufferATI

Notable differences are the ATI suffix instead of EXT, Create instead of Gen and the IsFramebuffer entry point. Nothing in the extension string.

While I was at it, I stumbled across a whole lot of functions that look to be related to super buffers. AttachMem, DetachMem, GetSubMemImage{1|2|3}D etc.

bobvodka
11-16-2004, 02:26 PM
Ah, in which case disregard my above ponderings, as things could well be on target for it appearing in a newer release :)

Korval
11-16-2004, 04:27 PM
Notable differences are the ATI suffix instead of EXT, Create instead of Gen and the IsFramebuffer entry point. Nothing in the extension string.
Hmm, these differences could signal the shift from render_target to the actual, more finalized, FBO. After all, render_target was an EXT extension, and nVidia was one of the principal developers pushing the idea. As such, it makes sense that some version of EXT_render_target might show up in their drivers. As for the ATI extension, they're probably not claiming ownership so much as making sure nobody mistakes an incomplete implementation for the full ARB extension.

There is a Gen, just like for other texture objects, but the Create function is there specifically to initialize the framebuffer (width, height, pixel format, etc.). But there is a distinct lack of an analog for glFramebufferTextureEXT (which would do the binding of a texture to the framebuffer). Maybe one of the framebuffer parameter functions would work, but it would be kinda ugly to set something as important as the render target through a generic API.


While I was at it, I stumbled across a whole lot of functions that look to be related to super buffers. AttachMem, DetachMem, GetSubMemImage{1|2|3}D etc.
Not surprising. ATi's had pseudo-implementations of superbuffers in their drivers since last year. I wonder if the above functions are super-buffers related, and not ARB_FBO related.

ffish
11-16-2004, 10:43 PM
In the superbuffers thread, Barthold said the new extension would be EXT_fbo, not ARB_fbo. So I'd be banking on NVIDIA's implementation being closer to the final spec. Dunno why ATI would introduce an ATI_fbo version into their drivers (if that's what it is).

Korval, you don't have to guess very hard at what the functions do. Just read the EXT_render_target spec (http://www.opengl.org/resources/features/GL_EXT_render_target.txt) and I'd imagine you'll find strong parallels, seeing as the function names are almost the same. That spec probably describes pretty well what we're going to be getting. I'd be surprised if there were any major differences. Of course, that's pure speculation on my part, based solely on EXT_fbo replacing EXT_rt and the function name similarities.

V-man
11-18-2004, 08:42 AM
Originally posted by ffish:
Dunno why ATI would introduce an ATI_fbo version into their drivers (if that's what it is).
This is old. Maybe 2003, as part of ATI's work on superbuffers.

Superbuffers is serving as a model for this EXT_fbo.

idr
11-18-2004, 09:19 AM
I'm not sure why this extension would even expose a "glGenerateMipmapEXT".
I can answer that. ;)

There is a problematic interaction with rendering to a texture and enabling GENERATE_MIPMAP. If an application wants automatic mipmap generation on a texture that is being rendered to, how do you define when the mipmaps are generated? This is especially quirky since you can render to the level-0 (the base) mipmap while the other mipmap levels are being sourced as textures (by using LOD clamping).

Truthfully, I wish that SGIS_generate_mipmap hadn't been made part of the core for this very reason. Alas, hindsight is 20/20. :(

Our resolution was to create an explicit function to do the generation. For textures that are being rendered to, don't use GENERATE_MIPMAP; explicitly use glGenerateMipmapEXT.
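In code, roughly (the exact signature isn't public yet, so the target-based form here is an assumption):

/* Explicit generation instead of GENERATE_MIPMAP. The target-based
   signature is an assumption; the spec is unpublished as of this post. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE); /* no auto gen */

/* ... render into level 0 of tex ... */

glGenerateMipmapEXT(GL_TEXTURE_2D); /* rebuild levels 1..N from level 0 */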

KRONOS
11-18-2004, 09:28 AM
idr: Date of release?! Just the spec! Please...... :D

Korval
11-18-2004, 09:34 AM
I can answer that.
As such, I take it that nVidia's entrypoints are more likely to correlate to the actual FBO extension than ATi's?

Korval
11-18-2004, 02:39 PM
So I take it that glGenerateMipMaps is more of a "general-purpose" function, and doesn't have to be used specifically on render targets?

zeckensack
11-18-2004, 05:09 PM
Originally posted by idr:
There is a problematic interaction with rendering to a texture and enabling GENERATE_MIPMAP. If an application wants automatic mipmap generation on a texture that is being rendered to, how do you define when the mipmaps are generated? This is especially quirky since you can render to the level-0 (the base) mipmap while the other mipmap levels are being sourced as textures (by using LOD clamping).

Truthfully, I wish that SGIS_generate_mipmap hadn't been made part of the core for this very reason. Alas, hindsight is 20/20. :(
It may be too late now, but anyway ... I would have preferred a change in the SGIS_generate_mipmap language.

Instead of "If GENERATE_MIPMAP_SGIS is enabled, [generation of mipmaps] occurs whenever any change is made to the interior or edge image values of the base level texture array", I'd prefer
a)mark the texture with a "mipgen pending" flag when the base level is changed. Alongside the flag, if the flag is clear, store the current base level. If the flag is already set, store min(current base level,stored base level).

b)check this flag when the texture is used as a source (glBegin or GetTexImage). If it's set, generate the mipmaps and reset the flag before carrying on.

This change would not modify results. It's a pure performance/implementability tweak.
_____________________
Issues:
#1 - Should the mipmaps be generated if the base level array was modified while GENERATE_MIPMAP_SGIS was true, but GENERATE_MIPMAP_SGIS has been set to false afterwards?

Resolution: Yes. Once. Always check the flag and act accordingly, even if the texture no longer has the GENERATE_MIPMAP_SGIS attribute set at the time of glBegin/glGetTexImage.

#2 - Will the mipmaps be generated if the base level array was modified, but the base level selection changed before the check? Which levels will be generated?

Resolution: Yes. All levels from the modified initial base level downwards.
_______________________
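A minimal C sketch of the bookkeeping above (driver-internal pseudo-implementation; TexMipState and generate_mipmaps_from are made-up names):

typedef struct {
    GLboolean mipgen_pending; /* set when the base level is modified */
    GLint     pending_base;   /* lowest base level seen while pending */
} TexMipState;

/* a) on any modification of the base level while GENERATE_MIPMAP_SGIS is on */
void on_base_level_store(TexMipState *t, GLint base_level)
{
    if (!t->mipgen_pending) {
        t->mipgen_pending = GL_TRUE;
        t->pending_base   = base_level;
    } else if (base_level < t->pending_base)
        t->pending_base = base_level;
}

/* b) on sourcing the texture (glBegin or glGetTexImage) */
void on_texture_sourced(TexMipState *t)
{
    if (t->mipgen_pending) {
        generate_mipmaps_from(t->pending_base); /* made-up helper */
        t->mipgen_pending = GL_FALSE;
    }
}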

*shrug*

Korval
11-18-2004, 07:49 PM
Admittedly, I prefer to have an explicit glGenerateMipMaps call, rather than the implicit stuff. It lets me know exactly where my performance is going, as opposed to trying to guess when the driver might decide to do a generate (which, almost certainly, constitutes a state change).

V-man
11-19-2004, 07:51 AM
Agreed, glGenerateMipMaps is better.

With SGIS, what you could have done is automatically turn off the flag when the texture is made available to render to.
After the user is done, he would have to turn on the flag.

Still, glGenerateMipMaps looks cleaner.

idr
11-19-2004, 08:13 AM
So I take it that glGenerateMipMaps is more of a "general-purpose" function, and doesn't have to be used specifically on render targets?
GenerateMipmapEXT is general. It can be used with any texture.


Instead of "If GENERATE_MIPMAP_SGIS is enabled, [generation of mipmaps] occurs whenever any change is made to the interior or edge image values of the base level texture array", I'd prefer
a)mark the texture with a "mipgen pending" flag when the base level is changed. Alongside the flag, if the flag is clear, store the current base level. If the flag is already set, store min(current base level,stored base level).

b)check this flag when the texture is used as a source (glBegin or GetTexImage). If it's set, generate the mipmaps and reset the flag before carrying on.

This change would not modify results. It's a pure performance/implementability tweak.As long as the result seen by the application / user is the same, the driver is free to do whatever it likes. A number of implementations are possible, and I think some drivers do what you describe. There are a couple other corner cases that need to be handled, though. For example, you have to handle the case where the base level is modified, then a non-base-level is modified (that's the easy one). You also have to deal with the case where the base-level is modified, GENERATE_MIPMAP is set to FALSE, and the base-level is modified again.

zeckensack
11-19-2004, 06:33 PM
Originally posted by idr:
As long as the result seen by the application / user is the same, the driver is free to do whatever it likes. A number of implementations are possible, and I think some drivers do what you describe.
I don't know if any drivers do, but I'm sure they could.

I can't get rid of the feeling that technically there's no need to ditch/change SGIS_generate_mipmap. The language of the spec may raise a red flag; people might think that it absolutely must kill performance during render-to-texture. I believe there's a way to implement the behaviour/results exactly as if "whenever any change is made", but without the performance implications.

Perhaps it would have been sufficient to amend SGIS_generate_mipmap with another issue and not change its actual spec at all. Something like
_____
Q: Won't this kill performance if the texture object in question is used for render-to-texture?

A: No. Trust us.
______

Much easier than describing the implementation logic. Besides, describing implementation "tricks" isn't really appropriate for a spec anyway.

NV_texture_shader has some "precedent":

How is the mipmap lambda parameter computed for dependent texture fetches?

RESOLUTION: Very carefully. NVIDIA's implementation details are NVIDIA proprietary, but mipmapping of dependent texture fetches is supported.
Instead of speccing it out exactly, the authors just state that this is taken care of.


There are a couple other corner cases that need to be handled, though. For example, you have to handle the case where the base level is modified, then a non-base-level is modified (that's the easy one). You also have to deal with the case where the base-level is modified, GENERATE_MIPMAP is set to FALSE, and the base-level is modified again.
I didn't think of these. Thanks for pointing them out :)

zeckensack
11-19-2004, 06:51 PM
Originally posted by V-man:
Agreed, glGenerateMipMaps is better.

With SGIS, what you could have done is automatically turn off the flag when the texture is made available to render to.
After the user is done, he would have to turn on the flag.
I think user-visible state should be persistent. State-changing side effects are without precedent, and IMO with good reason.

Besides, it wouldn't work. If mipgen is forced off during RTT and you turn it back on after you've modified the base level, you still won't get mipmaps. The generation is triggered by modifications while mipgen is on.

Originally posted by V-man:
Still, glGenerateMipMaps looks cleaner.
It certainly offers a lot more control.
But then, depending on what you do, you might have to start tracking a lot of things to find the right moment to call GenerateMipmap. The goal is obviously to call it a minimum number of times.

l_belev
11-20-2004, 06:36 AM
What would the driver behaviour be when rendering to a texture while the GENERATE_MIPMAP flag for that texture is true?
One possible solution: while a texture is bound to a render target, it is as if GENERATE_MIPMAP is false.
i.e. the auto mipgen behaviour is in effect when (GENERATE_MIPMAP == true && !texture_is_current_render_target)

V-man
11-20-2004, 11:17 AM
Originally posted by zeckensack:
I think user-visible state should be persistent. State-changing side effects are without precedent, and IMO with good reason.
Yes


Besides, it wouldn't work. If mipgen is forced off during RTT and you turn it back on after you've modified the base level, you still won't get mipmaps. The generation is triggered by modifications while mipgen is on.
I think you are right. It's a shame the way it is defined.


But then, depending on what you do, you might have to start tracking a lot of things to find the right moment to call GenerateMipmap. The goal is obviously to call it a minimum number of times.
The purpose of SGIS would be to not introduce another entry point, not to make our lives easier. If it's going to be supported, then the driver has to track when it is going to be used, for the sake of good performance.
GenerateMipmap adds an entry point, but now the user is responsible for calling it only once and at the right moment.

With SGIS, the user may try accessing a texture in their vs or fs, so the driver would have to check on those as well. It adds a lot of interaction with other features.

ffish
11-23-2004, 04:37 PM
*bump*

Anyone know when the next ARB meeting is? Maybe we'll get something after that, because at the moment all I'm hearing is silence on this topic :( .

bobvodka
11-23-2004, 06:00 PM
The meeting notes for the March 04 meeting say December, assuming of course someone agreed to host it ;)

Hopefully we'll hear something soon after that; then it's just a matter of the IHVs getting their drivers together (hopefully with MRT support in GLSL via this extension) and then I'll be a happy bunny (and more inclined to get my arse moving on a project I've been stalling for the last year or so because I didn't want to deal with pbuffers ;) )

ffish
11-23-2004, 06:58 PM
I know it's in December, but I'm desperate enough to want a date :D . Early December vs. late December is a big difference to me. I too am stalling on a project due to not liking pbuffers (more importantly not liking the context-switch penalty).

KRONOS
11-24-2004, 01:12 AM
I am stalling a project also because of framebuffer_object... :mad: :(

And in my opinion, someone from the working group (someone like idr) should be telling us more about this particular extension. At least an early spec or something! Not everyone can sign in and attend ARB meetings halfway across the world...

bobvodka
11-24-2004, 04:36 AM
Originally posted by ffish:
I know it's in December, but I'm desperate enough to want a date :D . Early December vs. late December is a big difference to me.
I'd guess it will be early December; what with Xmas and all that, placing it late December would be a silly idea indeed :p

I mean, putting it on Dec 28th and having the ARB members turn up drunk and deciding to turn OpenGL into D3D wouldn't be my idea of fun ;)

ffish
11-24-2004, 04:07 PM
Originally posted by KRONOS:
I am stalling a project also because of framebuffer_object...
I find this interesting. That's three of us - the last three posters on this thread. Wonder how many other projects are stalled waiting for this extension? I've even considered moving to D3D because of it - like I said, I'm desperate ;) .

davepermen
11-24-2004, 10:38 PM
i stalled opengl dev a while ago because my work would have used render to (fp) textures and i never wanted to touch pbuffers either...

Korval
11-24-2004, 11:50 PM
Aren't they behind on 2 ARB meeting notes by now? Maybe we should try e-mailing the guy responsible for these notes.

Adrian
11-25-2004, 12:05 AM
Originally posted by ffish:
That's three of us - the last three posters on this thread. Wonder how many other projects are stalled waiting for this extension?
I have a project waiting for this extension too.

kehziah
11-25-2004, 01:48 AM
I also need RTT for a project and I refuse to go through pbuffers. There is no point spending time learning a cumbersome extension when a replacement has been in the works for months (not counting the superbuffers fiasco).
I have a fallback path which uses the back buffer, but it is neither fast nor flexible enough for my needs. I won't release the app until I can include a framebuffer_object path. And that is annoying, to put it politely.

Jan
11-25-2004, 01:57 AM
I am working in other areas, since I also refuse to use pbuffers. Fortunately, it's only a fun project.

Jan.

Overmind
11-25-2004, 05:16 AM
Same for me...

knackered
11-25-2004, 05:22 AM
Stalling a project for want of an opengl extension?
Err...don't know if you're aware of this, but it would take you a couple of days to write an inline'd generic render interface over opengl and direct3d, then you could use direct3d to *unstall* your project, and leave the opengl implementation with a stub function doing bugger all....I think after a while you'd probably abandon opengl altogether as your direct3d implementation screams ahead doing all sorts of cool things, while opengl's proudly showing off its shiny new(!) 'vertex buffer' ability.
OpenGL's dead - face it. The semantics might be nicer, but the functionality's ancient.
Unless of course you're developing for linux/irix/mac, in which case good luck to you!

BTW, isn't Half Life 2 great!? Direct3d, don't y'know.

Oops, perhaps I'm airing this opinion on the wrong site?!!?

Ffelagund
11-25-2004, 05:36 AM
Oops, perhaps I'm airing this opinion on the wrong site?!!?
I think that yes, you are.

Also, Doom 3 is a nice OpenGL game, don't you know? And which one has better graphics?

Well, I think this kind of opinion is not good. Most of us are wary of flamers.

marco_dup1
11-25-2004, 06:11 AM
Thanks for your good-luck wishes. But the problem with pbuffers is, IMHO, not the interface, it's the context switch. I use GLX pbuffers and it was surprisingly easy.

Korval
11-25-2004, 11:45 AM
Oops, perhaps I'm airing this opinion on the wrong site?!!?
Ignore him. He's just being his usual troll self. Why he hasn't been banned is beyond me.


But the problem with pbuffers is, IMHO, not the interface, it's the context switch.
To me, the biggest problem with pbuffers and RTT is that you're not actually rendering to the texture. If you unbind the texture from the pbuffer, the data gets completely lost. Instead, you're rendering to some conglomerate surface that gets created when you bind a texture, and you get an interface to bind that conglomerate surface as a source texture.

This is not really rendering to a texture. Rendering to a texture would be rendering such that the resultant image data is stored in a texture. The pbuffer RTT doesn't do this.
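For reference, the WGL_ARB_render_texture flow I'm describing (pbuffer, contexts and hPbuffer assumed created elsewhere):

wglMakeCurrent(pbufDC, pbufRC);   /* context switch into the pbuffer */
/* ... render the image ... */
wglMakeCurrent(winDC, winRC);     /* context switch back (expensive) */
glBindTexture(GL_TEXTURE_2D, tex);
wglBindTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);    /* tex now sources
                                                        the pbuffer surface */
/* ... draw using tex ... */
wglReleaseTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB); /* tex no longer holds
                                                        the image */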

knackered
11-25-2004, 02:13 PM
Ignore him. He's just being his usual troll self. Why he hasn't been banned is beyond me.
It'd all be different if you ruled the world, eh korval?


This is not really rendering to a texture. Rendering to a texture would be rendering such that the resultant image data is stored in a texture. The pbuffer RTT doesn't do this.
That's a naive view of a rendering pipeline. You could never have a pipeline like that, korval. I would have thought you'd have known that, for all your sound and fury. There must always be a copy - and there must always be a mechanism for telling the driver when a copy will be necessary. It's the same in d3d's render targets. Render to texture is just an expression for simpletons.

rgpc
11-25-2004, 03:32 PM
There must always be a copy
Why? If you can render directly to the front/back buffer, and you can render directly to a pbuffer, why couldn't you render directly to a texture? After all, they're just memory allocations on the card or in AGP...

V-man
11-25-2004, 03:54 PM
Originally posted by rgpc:

There must always be a copy
Why? If you can render directly to the front/back buffer, and you can render directly to a pbuffer, why couldn't you render directly to a texture? After all, they're just memory allocations on the card or in AGP...
I think it has to do with how they are stored in memory, and textures can have formats the hw can't render to directly. There must be a few other technical issues.

ffish
11-25-2004, 03:59 PM
Originally posted by knackered:
Err...don't know if you're aware of this, but it would take you a couple of days to write an inline'd generic render interface over opengl and direct3d
Yeah, I've considered this. I'm stubborn though, and I've invested significant time into learning OpenGL and GLSL. I'm not an expert, but I'm pretty well advanced at both. At this stage I don't want to learn DirectX 9.0c. That might change with WGF (or whatever it's called). Plus I'm doing GPGPU stuff, to which OpenGL seems better suited for a couple of (maybe invalid?) reasons. Every time I sit down to read an online D3D tutorial I find it very hard to follow. Blah, several reasons.

Korval
11-25-2004, 04:17 PM
I think it has to do with how they are stored in memory, and textures can have formats the hw can't render to directly. There must be a few other technical issues.
While this is true (framebuffers aren't laid out in memory like most textures), many cards can still render to these textures (how do you think they implement mipmap generation?). They're just slower than rendering to linear formats. Plus, if you can tag a texture as being frequently used as a render target, you can tell the driver to make it a linear texture.

knackered
11-26-2004, 02:00 AM
You think they use the rendering pipeline to generate mipmaps, korval!?
Why would they do that? It's a simple downsample operation (a fancy memcpy).
Do you imagine they draw a texture-sized quad like they were doing a DOF effect? Transforms and all? :)

I would have thought that rendering to a texture block of memory would do a little more than stall the pipeline, wouldn't you?

knackered
11-26-2004, 02:07 AM
Originally posted by ffish:

Originally posted by knackered:
Err...don't know if you're aware of this, but it would take you a couple of days to write an inline'd generic render interface over opengl and direct3d
Yeah, I've considered this. I'm stubborn though, and I've invested significant time into learning OpenGL and GLSL. I'm not an expert, but I'm pretty well advanced at both. At this stage I don't want to learn DirectX 9.0c. That might change with WGF (or whatever it's called). Plus I'm doing GPGPU stuff, to which OpenGL seems better suited for a couple of (maybe invalid?) reasons. Every time I sit down to read an online D3D tutorial I find it very hard to follow. Blah, several reasons.
D3D9's docs are a good bit easier to digest than previous versions. In most cases you can dig out the one-to-one mapping from GL to D3D. They both have to communicate with the same driver layer, so the differences are never going to be that large. The vertex declaration bit is a little tricky to map across - but that's more to do with us being spoilt by OpenGL, and shielded into thinking that certain vertex mappings in GL are efficient when in d3d it would be obvious they weren't, just because of the hoops you have to jump through.
I would certainly recommend you sit for maybe a couple of days reading through the whole D3D9 Graphics docs to get a full grasp on what's going on.
BTW, ignore the FVF (flexible vertex format) stuff, as it is now obsolete; even though it gets mentioned an awful lot in the docs, that's more to do with laziness on Microsoft's side than anything else. Just stick with the new mechanism called "vertex declarations".

knackered
11-26-2004, 02:15 AM
Originally posted by Korval:
They're just slower than rendering to linear formats. Plus, if you can tag a texture as being frequently used as a render target, you can tell the driver to make it a linear texture.
I love this - korval, what do you mean by "linear formats"? It's like the overuse of the word "normalize"....I've heard "normalize" used in some bizarre contexts recently. It's like it's the word of the year or something.
So korval, in this context, and for the benefit of all of us outside your beautifully formed brain - what do you mean by "linear"?

harsman
11-26-2004, 09:37 AM
I thought it was pretty obvious: a linear format stores texels/pixels in linear order, row- or column-major, as opposed to some sort of swizzling to improve locality of reference. You know, like the address bit-twiddle nvidia used to do to swizzle textures? Heck, they probably still do it.
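To illustrate with one well-known swizzle: Morton (Z-order) interleaves the bits of x and y so nearby texels land at nearby addresses. Actual hardware layouts are proprietary; this is just the idea:

unsigned morton2d(unsigned x, unsigned y) /* coordinates below 2^16 */
{
    unsigned addr = 0, i;
    for (i = 0; i < 16; i++) {
        addr |= ((x >> i) & 1u) << (2 * i);     /* x bits -> even bits */
        addr |= ((y >> i) & 1u) << (2 * i + 1); /* y bits -> odd bits  */
    }
    return addr;
}
/* a linear layout, by contrast, is simply: addr = y * width + x */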

Zengar
11-26-2004, 03:49 PM
lol

i ask myself when this korval vs. knackered war actually started ;)

as for the extension: a copy may or may not be required, depending on the hardware & software implementation. Maybe some cards can effectively write to cacheable texture space (which i doubt), but you never know.
That it's too slow in coming is true. Well, the usual ARB procedure :(

bobvodka
11-26-2004, 05:19 PM
the point is, the app programmer shouldn't have to care; they should be able to say 'render this image and link it with this texture id' so that every time afterwards the texture ID will refer to that rendered image.
Don't care about copies.
Don't care how it does it (fast is better of course).
Just care that that's what happens, instead of currently either having to keep a pbuffer hanging around or to copy the data yourself if you want the data later and need to render more stuff.

Korval
11-26-2004, 07:15 PM
i ask myself when this korval vs. knackered war actually started
As far as I'm concerned, he's just being a troll. He rarely contributes anything more than venom to a discussion on these boards. While I've been something of an outspoken critic of the ARB, it's always with the purpose of improving OpenGL or bringing a problem to light. Knackered spews venom for, apparently, his own amusement.

Take his behavior here, for example. I used a (very commonly used) term that he didn't understand. Rather than simply asking for clarification on the specific meaning for that term, he accuses me, effectively, of making up the term. Rather than treating me with some form of respect, rather than starting with the assumption that I know what I'm talking about, he assumes that I'm just making up a term.


a copy may or may not be required, depending on the hardware & software implementation. Maybe some cards can effectively write to cacheable texture space (which i doubt), but you never know.
There's no need to have the user tell the driver when to perform a copy (assuming that the RTT operation required a copy at all). A copy is needed when a texture is about to be used. It's that simple. This is why we have client-side driver code.

V-man
11-26-2004, 08:29 PM
Originally posted by harsman:
I thought it was pretty obvious: a linear format stores texels/pixels in linear order, row- or column-major, as opposed to some sort of swizzling to improve locality of reference. You know, like the address bit-twiddle nvidia used to do to swizzle textures? Heck, they probably still do it.
If I remember correctly, I read something about J. Carmack telling vendors to store their backbuffer in a non-swizzled form. Sorry, I don't remember the details at all, but it's possible the backbuffer is just as swizzled as textures.

But GPUs should be great at copying blocks of memory. The pipeline would probably need to be flushed before doing anything, which should be costly. A dedicated GPU might help.
Maybe NVidia's SLI already helps in this department.

knackered
11-27-2004, 03:35 AM
So by linear korval meant unswizzled. Right, ok - wrong wording but a fair enough mistake to make through the dribble of a typical pompous korval rant.
Swizzle...linear....still not getting the connection, and English is my 1st language.
There's no war between myself and korval - I'm just the latest poor sap to accidentally end up in a conversation with him.
Korval, just out of interest - what do you do for a living? In what area do you work?
You've probably told us before, but I don't keep as close an eye on these forums as I used to - unlike you.

al_bob
11-27-2004, 11:41 AM
Originally posted by knackered:
So by linear korval meant unswizzled. Right, ok - wrong wording but a fair enough mistake to make through the dribble of a typical pompous korval rant.
What's wrong with the linear/swizzled terminology again? Some of us actually use the term 'linear' to mean 'unswizzled'.

Ffelagund
11-27-2004, 12:20 PM
What's wrong with the linear/swizzled terminology again? Some of us actually use the term 'linear' to mean 'unswizzled'.
There is no problem with it. It's only another excuse to flame, because he is bored. Ignore his messages. Linear is a perfectly valid word for that description.

knackered
11-27-2004, 02:21 PM
Says the spaniard.

KRONOS
11-27-2004, 04:03 PM
Originally posted by knackered:
Says the spaniard.
You're an idiot. -> says the Portuguese...

Ffelagund
11-27-2004, 11:36 PM
XD

Obli
11-29-2004, 01:05 AM
For quite some time now I've been considering starting D3D 9. Luckily enough, I don't need RTT and advanced stuff yet, so I can put it off, but it's a shame this functionality is so late.

I chose GL for portability. I have an app which runs on win32 and linux, and it could go on mac. When that's the environment, there's no choice.
It's unclear to me why the ARB is leaving this issue behind. While various render-texture "patches" have been included, I still don't understand how they managed to NOT get a final solution.

I see, however, why vendors leave this behind. The market pays money for speed, after all.

Korval
11-29-2004, 01:30 AM
It's unclear to me why the ARB is leaving this issue behind. While various render-texture "patches" have been included, I still don't understand how they managed to NOT get a final solution.
The ARB's problems in this regard are a mixture of the fundamental problems with the ARB as well as some bad decisions that they made in the past.

The ARB is a committee. That leads immediately to two problems:

Committees react slower than a single authority. Microsoft can dictate what D3D is by themselves. In order for a committee to come up with a solution to a problem, there must be meetings, statements, arguments, and resolutions. This means that the authoritarian method can react faster to changing conditions. However, the committee method is more likely to arrive at the better solution.

Secondly, committees can fall prey to politics. Look at the irrationality surrounding the 3DLabs glslang proposal vs. the nVidia Cg one. Neither language was significantly better than the other, but the Cg syntax was already in use. It really made no sense to create a competing syntax, but that's what the 3DLabs ARB coalition did (a coalition of everyone who competed with nVidia). They made the wrong choice because nVidia was the one who suggested the right choice.

These factors, in the case of RTT, are combined with a rather egregious mistake the ARB made. For a good year and a half, they were working on an extension called "superbuffers". It would allow for RTT, among other pieces of functionality.

However, for some reason, the superbuffers workgroup didn't get the level of interaction it needed out of ARB members for a while (it seems that ATi was the primary one interested in it, so they were designing the extension in a near vacuum). When the other members started to notice superbuffers, and began to actually discuss it and make arguments about it, they began to discover that the entire extension was too far-reaching. It was trying to abstract so much that no real implementation would be possible.

As such, they abandoned superbuffers and moved to working on an extension that nVidia, 3DLabs, and Apple proposed called EXT_render_target. The current name for this WIP extension is ARB_framebuffer_object.

So, basically, you have the ARB acting as a committee, coupled with a ridiculously egregious mistake. That's why this functionality isn't really available.

Cab
11-29-2004, 02:27 AM
Originally posted by knackered:
Says the spaniard.
:cool:

Carlos Abril
Madrid - Spain :)

ffish
11-29-2004, 03:11 AM
Korval, surely superbuffers weren't too far-reaching. Not sure how much of it was implemented at the time, but this paper (http://www.ati.com/developer/Eurographics/Kipfer04_UberFlow_eghw.pdf), which I'm sure you've probably come across, details superbuffers usage on 9800 GPUs. IIRC there are other papers out there that used early (ATI hardware) superbuffers implementations too.

Korval
11-29-2004, 09:48 AM
Korval, surely superbuffers weren't too far-reaching.
Hey, don't ask me. I'm just repeating what was said in another thread by someone on the ARB. In this thread, to be precise: http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=011406;p=1

ffish
11-29-2004, 04:45 PM
Oh yeah, I remember that. I was kind of surprised though, since the functionality is/was there in 9800+ hardware, in 6800 (and lower?) hardware, and in DirectX. Maybe he meant the spec was too ambitious and should be broken up. I mean, surely RTT, RTVA, buffer swaps, etc. could be supported now? A naive RTT/fbo implementation could wrap pbuffers and the current RTT. It might not be optimal in terms of speed, but at least it would work. RTVA could wrap PBO (like the example in the PBO spec). Buffer swaps could wrap current tech too. I'm guessing it was just a matter of too much in one spec. Still, you'd think a consensus could've been reached on the half-dozen or so functions in fbo in a relatively short time, considering it's sooo similar to a subset of superbuffers (supported by ATI) and render_target (supported by NVIDIA). I realise there are other players, but two's enough for an EXT.

jwatte
11-29-2004, 04:55 PM
I believe there's also a question of just how much extra work we can expect the vendors to put into their drivers. If a spec requires a lot of re-work of some very fundamental parts of a driver, and those same parts also have to interface with a muddy and under-specified GDI driver interface, then I, if I were a vendor, would be very hesitant in endorsing such an extension. Instead, I'd be looking for a way to get 80% of the benefit with 20% of the work and risk involved.

ffish
11-29-2004, 11:05 PM
Hmm. NVIDIA reg devs can search for the EXT_framebuffer_object string in the most recently published document on the dev download page. I'm probably not allowed to comment, so I won't. Doesn't make me any happier, though.

Toni
11-29-2004, 11:25 PM
Originally posted by Cab:

Originally posted by knackered:
Says the spaniard.
:cool:

Carlos Abril
Madrid - Spain :)
Hehehe, Spaniards are cool ;)

Toni,
Barcelona - Spain

knackered
11-30-2004, 12:47 AM
At the end of the day, all the features that are supposedly stalling projects can be implemented using existing functionality....you effectively create your own API (see the sketch below). Any features that simply don't exist in OpenGL force you to use the D3D implementation of your API.
I understand that it p*sses you off that the ARB are dragging their heels on finalising the mechanisms, but I don't see why it should have a major impact on your development schedules. There are ways around it.
This isn't a paraphrase of something I've read in another thread, by the way - I wouldn't see the point in doing that....except to make myself appear more widely read than I actually am.
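Something like this, off the top of my head (a bare-bones sketch; all names made up):

typedef struct RenderDevice RenderDevice;
struct RenderDevice {
    void (*begin_render_to_texture)(RenderDevice *dev, unsigned tex);
    void (*end_render_to_texture)(RenderDevice *dev);
    void (*draw_scene)(RenderDevice *dev);
};

/* GL backend today: draw into the back buffer, then copy out */
static void gl_end_render_to_texture(RenderDevice *dev)
{
    glBindTexture(GL_TEXTURE_2D, gl_current_target(dev)); /* made-up helper */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);
}

The app only ever calls through the struct, so when framebuffer_object finally ships you swap the backend and nothing upstream changes; the D3D backend just maps begin/end to SetRenderTarget.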

ffish
11-30-2004, 03:04 AM
I'm lazy. Learning D3D might be good for me, but it's not the goal of my research. Same goes for writing an API wrapper. I already know OpenGL. I prefer GLSL to Cg or HLSL. Lots of reasons. Mostly I'm just being stubborn. Hey, stubborn people are what will keep OpenGL alive, right?

Anyway, yeah, I'm giving up on fbo. But as you can see in my wglShareLists question thread, I've got 4 pbuffers that I have to ping-pong between, so context switches are gonna kill my app.

V-man
11-30-2004, 08:26 AM
Superbuffers was absolutely beautiful. I never got to try it.

If I understood correctly, it was complex and there were technical issues in implementing it.
Vendors don't want to spend time on something that won't get implemented immediately.

I hope they will bring it back.