framebuffer_object?

Still waiting … any ARB members want to update us on a timeline? I know NVIDIA has provisional (hidden) support in their recent drivers. I know most of the names of the new functions, just not how to use them. There must be some sort of agreement on a spec out there for most 65 series drivers to have it. Any news? :frowning:

What are the names of these functions? Maybe we can figure out what they do?

glGenFramebuffersEXT    
glDeleteFramebuffersEXT 
glBindFramebufferEXT
glGetFramebufferBufferParameterfvEXT    
glGetFramebufferBufferParameterivEXT    
glFramebufferStorageEXT 
glFramebufferTextureEXT 
glValidateFramebufferEXT    
glGenerateMipmapEXT

^ What he said.

I mistyped. I pretty much know what they do in a vague sense. Most of those functions are probably analogous to the ones in render_target, just with different names, so render_target probably describes the functionality pretty well. What I want is the spec, so I know exactly what they do, and driver support. If those function stubs exist in so many recent driver versions, why can't we use them? And why no spec? I've forgotten now which driver versions have had the strings, but pretty much all recent ones I've tried have; I'd suspect all 65 series drivers do. I'd prefer NVIDIA to either obfuscate the function strings or not put them in the drivers at all. Or maybe it's just that I shouldn't be reading the DLLs - who knows. Oh well, at least they're obviously testing it out, so full marks to them. It's the ARB I'm upset with. It'd be kinda funny if after all this time framebuffer_object == render_target. Kinda funny in an "extremely annoying" way.

Bah, just ranting at still waiting …

There’s also the hint in ARB_texture_float that says:

“Internal format names have been updated to the same convention as the EXT_framebuffer_object extension.”

so, together with the non-extended OpenGL texture internal formats, we pretty much know all the texture internal formats that can be used with it. Most of the info is there - we just can't use it. :mad:
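For reference, the convention ARB_texture_float uses looks like this (these are standard tokens from that spec; whether the eventual FBO spec will actually accept all of them as render targets is still anyone's guess):

/* ARB_texture_float internal formats follow the base-format + component
   size + "F" naming convention, e.g.: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 256, 256, 0, GL_RGBA, GL_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 256, 256, 0, GL_RGBA, GL_FLOAT, NULL);
/* Other variants: GL_RGB16F_ARB, GL_LUMINANCE32F_ARB, GL_INTENSITY16F_ARB, ... */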

My guess would be that “glFramebufferStorageEXT” allocates the framebuffer object (much like glTexImage allocates a texture object). The kinds of parameters that get passed in would define the specifics of the framebuffer (width, height, etc).

“glFramebufferTextureEXT” would likely be how one binds and unbinds a texture object to a framebuffer.

I’m not sure what “glValidateFramebufferEXT” would really do, though. What kind of validation does a framebuffer need? What would render it invalid?

I’m not sure why this extension would even expose a “glGenerateMipmapEXT”.

Given some of the discussions we have had on this board, it is likely that there are some new internal texture formats that hint to the driver that a texture will be used as a render target. It is even possible that only textures created with these internal formats can be used as render targets. That might be what the "glValidateFramebufferEXT" function is for.
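Purely as a thought experiment, the guesses above might fit together something like the following. Only the entrypoint names are known to exist in the drivers; every signature, token, and argument below is invented for illustration:

/* Pure speculation: the entrypoint names come from the driver strings, but
   all signatures, tokens, and arguments here are made up to illustrate the
   guesses in this thread. */
GLuint fb;
GLuint tex;                       /* an ordinary 2D texture, created elsewhere */
GLsizei width = 512, height = 512;

glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

/* Guess: allocate the framebuffer's storage, glTexImage-style. */
glFramebufferStorageEXT(GL_FRAMEBUFFER_EXT, GL_RGBA8, width, height);

/* Guess: attach level 0 of a texture as the color render target. */
glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT, GL_COLOR, tex, 0);

/* Guess: ask the driver whether this combination is renderable, e.g. whether
   the texture's internal format is allowed as a render target. */
if (!glValidateFramebufferEXT(GL_FRAMEBUFFER_EXT)) {
    /* fall back to glCopyTexSubImage2D or a pbuffer */
}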

Does somebody know the date(s) for the December ARB meeting? :slight_smile:

I did a little research on how VBO came into existence. There had been rumors (and a mention in Carmack's .plan file) that the ARB was either finished or nearly finished with some kind of cross-platform server-side vertex array extension in January of last year. However, the spec itself wasn't released until GDC, on or around 3/17/03. We got a functioning (if not fast) implementation out of nVidia almost immediately. By contrast, it took about 2 months for ATi to expose it, though it could be said that ATi's initial implementation was much better/faster than nVidia's exposed beta versions.

Basically, the fact that we see these entrypoints in the drivers could be taken to be a positive sign. Or, it could be nothing.

Originally posted by Korval:
Basically, the fact that we see these entrypoints in the drivers could be taken to be a positive sign. Or, it could be nothing.
Well, I hope this means something will appear soon (although waiting for GDC05 would be a pain…). That said, I'll have to wait for ATI to add it anyway, so even if the spec was released tomorrow I wouldn't count on seeing it in ATI's drivers pre-Feb (4.11s are out, 4.12s are in 'beta', 5.1 will be January and is probably already in the pipe, so 5.2 is probably the most likely time), unless of course the spec is complete and they are holding off the release until NV/ATI/3DLabs all have something in place… well, I can hope, can't I :wink:

ATI seems to have something in the works, too.
Catalyst 4.11 final (not the beta) contains these:
glGetFramebufferParameterivATI
glGetFramebufferParameterfvATI
glFramebufferParameterivATI
glFramebufferParameteriATI
glFramebufferParameterfvATI
glFramebufferParameterfATI
glIsFramebufferATI
glGetFramebufferATI
glBindFramebufferATI
glDeleteFramebufferATI
glCreateFramebufferATI

Notable differences are the ATI suffix instead of EXT, Create instead of Gen and the IsFramebuffer entry point. Nothing in the extension string.

While I was at it, I stumbled across a whole lot of functions that look to be related to super buffers. AttachMem, DetachMem, GetSubMemImage{1|2|3}D etc.

Ah, in which case disregard my ponderings above, as things could well be on track for it to appear in a newer release :slight_smile:

Notable differences are the ATI suffix instead of EXT, Create instead of Gen and the IsFramebuffer entry point. Nothing in the extension string.
Hmm, these differences could signal the shift from render_target to the actual, more finalized, FBO. After all, render_target was an EXT extension, and nVidia was one of the principal developers pushing the idea. As such, it makes sense that some version of EXT_render_target might show up in their drivers. As for the ATI extension, they're probably not claiming ownership so much as making sure nobody mistakes it for an incomplete implementation of the full ARB extension.

There is a Gen, just like for other texture objects, but the Create function is there specifically to initialize the framebuffer (width, height, pixel format, etc). But there is a distinct lack of an analog for glFramebufferTextureEXT (which would do the binding of a texture to the framebuffer). Maybe one of the framebuffer parameter functions would work, but it would be kinda ugly to set something as important as the render target through a generic API.

While I was at it, I stumbled across a whole lot of functions that look to be related to super buffers. AttachMem, DetachMem, GetSubMemImage{1|2|3}D etc.
Not surprising. ATi’s had pseudo-implementations of superbuffers in their drivers since last year. I wonder if the above functions are super-buffers related, and not ARB_FBO related.

In the superbuffers thread, Barthold said the new extension would be EXT_fbo, not ARB_fbo. So I’d be banking on NVIDIA’s implementation being closer to the final spec. Dunno why ATI would introduce an ATI_fbo version into their drivers (if that’s what it is).

Korval, you don't have to guess very hard at what the functions do. Just read the EXT_render_target spec and I'd imagine you'll find strong parallels, seeing as the function names are almost the same. That spec probably describes pretty well what we're going to be getting; I'd be surprised if there were any major differences. Of course, that's pure speculation on my part, based solely on EXT_fbo replacing EXT_rt and the function name similarities.

Originally posted by ffish:
Dunno why ATI would introduce an ATI_fbo version into their drivers (if that’s what it is).

This is old. Maybe 2003, as part of ATI’s work on superbuffers.

Superbuffers is serving as a model for this EXT_fbo.

I’m not sure why this extension would even expose a “glGenerateMipmapEXT”.
I can answer that. :wink:

There is a problematic interaction with rendering to a texture and enabling GENERATE_MIPMAP. If an application wants automatic mipmap generation on a texture that is being rendered to, how do you define when the mipmaps are generated? This is especially quirky since you can render to the level-0 (the base) mipmap while the other mipmap levels are being sourced as textures (by using LOD clamping).

Truthfully, I wish that SGIS_generate_mipmap hadn’t been made part of the core for this very reason. Alas, hindsight is 20/20. :frowning:

Our resolution was to create an explicit function to do the generation. With textures that are being rendered to, don’t use GENERATE_MIPMAP. Explicitly use glGenerateMipmapEXT.
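So, assuming glGenerateMipmapEXT ends up taking a texture target and operating on the currently bound texture, much like the automatic SGIS mechanism it replaces (a guess until the spec is public), a render-to-texture pass would look roughly like:

/* Sketch of the explicit approach idr describes.  The single texture-target
   argument to glGenerateMipmapEXT is an assumption; the spec isn't out yet. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
/* Note: GENERATE_MIPMAP_SGIS is deliberately left disabled. */

/* ... attach tex to the framebuffer object and render into its level 0 ... */

/* When the render-to-texture pass is done, rebuild the rest of the mip chain
   from level 0, at a point of our choosing. */
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmapEXT(GL_TEXTURE_2D);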

idr: Date of release?! Just the spec! Please… :smiley:

I can answer that.
As such, I take it that nVidia’s entrypoints are more likely to correlate to the actual FBO extension than ATi’s?

So I take it that glGenerateMipmapEXT is more of a "general-purpose" function, and doesn't have to be used specifically on render targets?

Originally posted by idr:
There is a problematic interaction with rendering to a texture and enabling GENERATE_MIPMAP. If an application wants automatic mipmap generation on a texture that is being rendered to, how do you define when the mipmaps are generated? This is especially quirky since you can render to the level-0 (the base) mipmap while the other mipmap levels are being sourced as textures (by using LOD clamping).

Truthfully, I wish that SGIS_generate_mipmap hadn't been made part of the core for this very reason. Alas, hindsight is 20/20. :frowning:
It may be too late now, but anyway … I would have preferred a change in SGIS_generate_mipmap language.

Instead of "If GENERATE_MIPMAP_SGIS is enabled, [generation of mipmaps] occurs whenever any change is made to the interior or edge image values of the base level texture array", I'd prefer:

a) Mark the texture with a "mipgen pending" flag when the base level is changed. Alongside the flag, if the flag is clear, store the current base level. If the flag is already set, store min(current base level, stored base level).

b) Check this flag when the texture is used as a source (glBegin or glGetTexImage). If it's set, generate the mipmaps and reset the flag before carrying on.

This change would not modify results. It's a pure performance/implementability tweak. (There's a rough sketch of the idea below, after the issues.)


Issues:
#1 - Should the mipmaps be generated if the base level array was modified while GENERATE_MIPMAP_SGIS was true, but GENERATE_MIPMAP_SGIS has been set to false afterwards?

Resolution: Yes. Once. Always check the flag and act accordingly, even if the texture no longer has the GENERATE_MIPMAP_SGIS attribute set at the time of glBegin/glGetTexImage.

#2 - Will the mipmaps be generated if the base level array was modified, but the base level selection changed before the check? Which levels will be generated?

Resolution: Yes. All levels from the modified initial base level downwards.
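Roughly, the per-texture bookkeeping a driver could keep under this proposal (names and structure are invented purely for illustration):

#include <GL/gl.h>

/* Invented sketch of the "mipgen pending" state described above. */
typedef struct {
    GLboolean mipgen_pending;   /* set when the base level image changes   */
    GLint     pending_base;     /* lowest base level seen while pending    */
} MipgenState;

/* Called whenever the base level image is modified while
   GENERATE_MIPMAP_SGIS is enabled. */
static void note_base_level_change(MipgenState *s, GLint current_base)
{
    if (!s->mipgen_pending) {
        s->mipgen_pending = GL_TRUE;
        s->pending_base   = current_base;
    } else if (current_base < s->pending_base) {
        s->pending_base   = current_base;   /* keep min(current, stored) */
    }
}

/* Called when the texture is about to be sourced (glBegin, glGetTexImage).
   Per issue #1 this runs even if GENERATE_MIPMAP_SGIS has since been
   disabled; per issue #2 generation starts at the stored base level. */
static void resolve_pending_mipgen(MipgenState *s, GLuint texture)
{
    if (s->mipgen_pending) {
        generate_mipmaps_from(texture, s->pending_base); /* hypothetical helper */
        s->mipgen_pending = GL_FALSE;
    }
}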


shrug

Admittedly, I prefer to have an explicit glGenerateMipmapEXT call, rather than the implicit stuff. It lets me know exactly where my performance is going, as opposed to trying to guess when the driver might decide to do a generate (which, almost certainly, constitutes a state change).