PDA

View Full Version : FBO spec posted



cass
01-17-2005, 12:58 PM
Well it will be shortly anyway. It's been sent to the OpenGL webmaster for posting on the main page.

zeckensack
01-17-2005, 01:05 PM
Argh! Just a teaser :(

But thanks for the heads up. I'll make sure to reload the front page and the registry every five seconds until it's there :p

Corrail
01-17-2005, 01:47 PM
They're finally online! ;)
Big thanks to the FBO team!!

Let's hope that it will be available in drivers soon!

glitch
01-17-2005, 02:22 PM
OMG they did it !
The new fbo interface seems to be quite intuitive !

Good Job guys !!! :D :D :D

ffish
01-17-2005, 02:58 PM
So Cass, I heard a rumour these would be available in 75 series Forcewares. Any timelines for us? Or even better, how about a registry key that enables the extension (which has been present in all drivers since 65.xx)?

3k0j
01-17-2005, 02:59 PM
Before it's too late, let me please suggest something: it would be great if we refrained from posting three dozen messages that say nothing more than "OMG!! I can't believe this happened!!! I've been waiting for this for so long!! Thanks ARB, we love you!!! This is the happiest day of my life!! Good job ARB, God bless you!!".

Thanks In Advance (tm).

kehziah
01-17-2005, 03:03 PM
Amazing! (100+ revisions :eek: )

While I have no doubt NVIDIA and ATI will support this rather quickly, I hope Intel (and others who have a significant market share in the industry) will do the same.

edit:
Status

Ubercomplete :D

SirKnight
01-17-2005, 03:05 PM
mmm beefy. :D

-SirKnight

cass
01-17-2005, 04:02 PM
Originally posted by ffish:
So Cass, I heard a rumour these would be available in 75 series Forcewares. Any timelines for us? Or even better, how about a registry key that enables the extension (which has been present in all drivers since 65.xx)?

Hi ffish,

Obviously I can't respond to rumors, but with 100+ revisions, you can imagine that we've been working on this for a while. We'll expose the extension as soon as we're confident it's stable, fast, and correct.

Thanks -
Cass

ffish
01-17-2005, 04:19 PM
(i) I like Issue 1 :D .

(ii) I like the way issues were moved to the end of the spec. This makes sense to me since it's the last information I read. I like to read the spec and how to use it then if I'm still unclear go through the issues which sometimes explain things in a little more depth (the reasons why, not just how). I hope this precedent is followed in future extensions.

(iii) There are a bunch of new procedures and functions that I didn't expect. Some have been present for a long time in Forcewares, but some are completely new to me.

(iv) I like CheckFramebufferStatusEXT().

(v) Issues take up 2/3 of the spec :eek: . No wonder it took so long.

(vi) Any comments from vendors on issue 44? Will "undefined" mean "it works" on your hardware?

(vii) Issue 58 "(besides attempt ..." :D . I can see how this would suck for spec-writers.

(viii) I can't think that Issue 76 would be a major concern. Off the top of my head, I can't see why you'd need multiple RCs with this extension (at least I can't see why I'd need them).

(ix) I can see that Issue 79 would be a bit hairy - different texture and framebuffer <internal formats>.

(x) I hope Issue 84 gets cleared up. I personally prefer "lower-left" over "semi-undefined".

(xi) Revision #1: it actually was called "EXT_compromise_buffers" as per Issue 1 :eek: .

All in all, I'm happy. Just not with how long it took.
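For anyone who hasn't waded through the spec yet, the CheckFramebufferStatusEXT() flow mentioned in (iv) looks roughly like this. A minimal sketch, assuming a current GL context and that the EXT entry points have been loaded; the 512x512 size and variable names are illustrative, not from the spec:

```c
/* Minimal EXT_framebuffer_object setup sketch.  Assumes a current GL
 * context and loaded EXT entry points; sizes and names are mine. */
GLuint fbo, color_tex, depth_rb;

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

/* Color attachment: an ordinary 2D texture. */
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, color_tex, 0);

/* Depth attachment: a renderbuffer, since the depth is never sampled. */
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_rb);

/* Ask the implementation whether this combination is usable. */
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* e.g. GL_FRAMEBUFFER_UNSUPPORTED_EXT: fall back to other formats */
}
```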

ffish
01-17-2005, 04:23 PM
Originally posted by cass:
We'll expose the extension as soon as we're confident it's stable, fast, and correct.

Thanks cass. I guess I'm sticking with D3D and SetRenderTarget for a little while then :( . BTW, I posted pretty much the same question on your dev rel forum if you want to let any more information slip there ;) .

Zengar
01-17-2005, 05:07 PM
Oh boy :)

We obviously won't call this "ARB_compromise_buffers", so what name should we use?


The workgroup also seems to be happy about it :-)

I didn't quite get the renderbuffers idea. Are they there simply to provide some intermediate buffer like depth/stencil? What about performance? I mean, will renderbuffers be faster to render to ( :rolleyes: sry for style) than textures? If so, some copy command would be nice to have (if we do layered multipass, for example) to move the contents of the renderbuffer to a texture.

Nice work guys. :-D
Although it would never have taken SO long if the workgroup had been reduced to 2-3 people. Well, politics is hard stuff (diplomacy too).

P.S. The issues list...
:eek:

Korval
01-17-2005, 05:45 PM
Off the top of my head, I can't see why you'd need multiple RCs with this extension (at least I can't see why I'd need them).

Multiple windows. Some people do use them. Now that pbuffers are officially dead (and good riddance), the only real reason to open up an entirely new context is to deal with rendering to multiple actual windows.


We'll expose the extension as soon as we're confident it's stable, fast, and correct.

So this isn't going to be one of those magic nVidia launches where you deploy a fully functional, reasonably bug-free implementation of a spec on the day it releases?

You guys are slacking off. And I was that close to buying a 6600GT... ;)


I didn't quite get the renderbuffers idea. Are they there simply to provide some intermediate buffer like depth/stencil?

Textures are still restricted to powers of two (unless you have NPOT to relieve this). Renderbuffers are not. So, if memory is of great concern, and performance is of lesser concern, you can still render to an off-screen renderbuffer and read it back with a CopyTexSubImage or a ReadPixels.

Also, if you are doing RTT, and you need depth tests, but you don't want to keep the depth data around, it's probably best to use a Renderbuffer for the depth buffer.


If so, some copy command would be nice to have (if we do layered multipass, for example) to move the contents of the renderbuffer to a texture.

We have one: glCopyTexSubImage. When you bind a framebuffer, all operations to or from the framebuffer go to the bound buffer.
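To sketch that point (assuming `fbo` is a framebuffer object with a renderbuffer bound as color attachment 0, and that `dest_tex`, `width`, and `height` already exist; the names are mine):

```c
/* Copy a renderbuffer's contents into a texture with glCopyTexSubImage2D.
 * While the framebuffer object is bound, copies read from its attachments. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
/* ... render the pass whose result should end up in the texture ... */
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);       /* copies read from this attachment */
glBindTexture(GL_TEXTURE_2D, dest_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the window framebuffer */
```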

BTW, I found the logic for issue #79 to be dubious at best. What, specifically, would be "too problematic to introduce this type of invariance?" It seems pretty simple to specify: when a texture is bound to a framebuffer, the implementation is allowed to screw with the internal format as it wishes. These changes should be persistent after the texture is unbound. And it makes it much more difficult to build a framebuffer that comes up "FRAMEBUFFER_UNSUPPORTED_EXT", as unsupported formats can be converted into supported ones.

At the very least, we should have hints that allow us to say, "framebuffer-compatible RGBA texture", as our internal format.

I've noticed that a whole lot of stuff is being left to layers. Not truly external (and large) functionality like RTVA, but fairly important stuff like enumerating formats/selecting formats and binding buffers of different sizes. Is there any real ETA on these? Are they in active discussion?

ffish
01-17-2005, 07:19 PM
Originally posted by Korval:
Multiple windows. Some people do use them. Now that pbuffers are officially dead (and good riddance), the only real reason to open up an entirely new context is to deal with rendering to multiple actual windows.

Fair enough. I've just never come across a situation where I might need separate RCs and shared fbos, apart from the whole current pbuffer mess where sharing resources is totally necessary. But I can see where people might use them.

Originally posted by Korval:
I've noticed that a whole lot of stuff is being left to layers. Not truly external (and large) functionality like RTVA, but fairly important stuff like enumerating formats/selecting formats and binding buffers of different sizes. Is there any real ETA on these? Are they in active discussion?

I can't see how binding buffers of different sizes would work. How do you simultaneously write to pixel (x,y) when the buffer sizes are different? Or do you mean bind 2 buffers A & B and write to A then write to B in separate passes? I guess on this (and the multiple RCs issue) I'm hoping that fbos are lightweight enough that creating multiple fbos won't impact _that_ heavily on performance over one main fbo. Multiples don't seem to be so bad with D3D RenderTargets and I'm using about 12 RTs or so per frame.

On the formats issue, I guess that's up to the programmer. I have a pretty good idea of what's happening in my work so this wouldn't be an issue for me. I can see how a really big shared project might have to worry about it a bit more, but IMHO it's not a major concern. Just requires a consistent framework in the project for texture management.

idr
01-17-2005, 08:50 PM
Off the top of my head, I can't see why you'd need multiple RCs with this extension (at least I can't see why I'd need them).

Multiple rendering contexts are used in multithreaded and/or multiwindowed code.


I can't see how binding buffers of different sizes would work.

It doesn't. Everything bound to a framebuffer object must have the same dimensions.


I've noticed that a whole lot of stuff is being left to layers.

You noticed. :) Basically, everyone involved in the working group is exhausted. Some of us have been on this for two years. The "107 revisions" happened in the last 6 months. We're going to take a break, make some language tweaks to the spec, then launch into layered functionality. Wish us luck. :)

zed
01-17-2005, 09:17 PM
Any ballpark figures on what speedup we can expect to see with this vs. CopyTexSubImage? 10%? 100%? 1000%?
My app (near alpha release, btw) does a large (1-100) number of texture updates per frame.

Korval
01-17-2005, 09:49 PM
I can't see how binding buffers of different sizes would work. How do you simultaneously write to pixel (x,y) when the buffer sizes are different?

The same way you do in D3D: the smallest buffer defines the largest defined rendering region.


On the formats issue, I guess that's up to the programmer.

No, it isn't. Which formats are available (in which combinations) is implementation defined. While you can assume that 32-bit color and depth buffers will work (to some degree), what about combinations of 16-bit color and 32-bit depth? On some hardware it works, and on some it doesn't. FBO gives us no way to know which bound buffer (or combination thereof) caused the system to reject the framebuffer. The proposed extension would give us that ability.


It doesn't. Everything bound to a framebuffer object must have the same dimensions.

No, EXT_FBO requires this. That doesn't mean that you can't define such behavior; only that the spec doesn't allow for it.

CrazyButcher
01-18-2005, 12:03 AM
I hope this runs on older hardware, too. Would be nice to have an "easy" way for RTT on non-state-of-the-art engines.

ffish
01-18-2005, 01:48 AM
Korval, from the "Multiple Render Targets" help page from the December 2004 DX SDK update:

"All surfaces of a multiple render target should have the same width and height."

I can't see how anything else would make sense.

Re the formats, I don't need portability and I have a 6800GT so the issue wouldn't come up for me. Plus I never use depth buffers. But yeah, in hindsight I see how it might get tricky managing things. I guess you could conceivably come up with a wrapper to create framebuffers effectively, but that might get a bit complex.

KRONOS
01-18-2005, 04:54 AM
It is like the second coming of Christ! :D
Is there any Forceware leaked driver with an implementation? Please.... :)

V-man
01-18-2005, 05:13 AM
I just cruised through the document. I will have to re-read.

There are errors in the examples.
Look at #4, where the for loop starts: it uses color_tex_array[N] instead of color_tex_array[i].

The examples following it also have the same error.

I'm guessing you guys are not experimenting with a sample implementation while you write these specs, right? It would be good, because as a bonus you could release a software renderer for us.

An important question for me: when we use glFramebufferTexture2DEXT (or the others), are the previous contents of the buffer considered undefined, or are they preserved?
What about the other buffers like depth and stencil? Preserved or not?

Another question---------
Do you think it would be an interesting feature to be able to render to 2 faces of a cubemap at the same time?

zeckensack
01-18-2005, 05:24 AM
Originally posted by V-man:
An important question for me: when we use glFramebufferTexture2DEXT (or the others), are the previous contents of the buffer considered undefined, or are they preserved?
What about the other buffers like depth and stencil? Preserved or not?

From my first pass of reading, the contents of the previously attached image, if any, are preserved.
Wouldn't make much sense otherwise.

Won
01-18-2005, 05:44 AM
Got about halfway through reading the spec. Good job, guys. I eagerly await an implementation. Korval might complain about ARB slowness, but I'm pretty glad you guys are as thorough as you are.

As for rendering to multiple cube-map faces, I believe this falls (in spirit) under issue 44, which resolves as "undefined behavior." (This is my interpretation, so grain of salt):

Issue 44 deals with potential read/write hazards, and rendering to multiple cube-map faces is kind of a write/write hazard. Of course, rendering to different faces of a cube map guarantees you to be hazard-free, but the language about valid render textures is always regarding texture objects, not texture targets. This is the "concern" in the issue about binding one level of a mipmap as texture and another as target for custom mipmap generation.

My guess is that it is technically undefined, but probably safe (like custom mipmap generation). Certainly, it seems a lot less dodgy than rendering to a bound texture to avoid ping-pong rendering in GPGPU applications. AFAIK, the latter behavior has been unofficially "blessed" by ATI and NVIDIA.

But, why do you want to render to multiple cubemap faces at once, anyway? To do useful stuff you probably need multiple post-transform vertex streams!

-Won

Won
01-18-2005, 05:47 AM
Thought I'd bring up a number of typos in the "Issues" section. Don't know if anybody on the ARB cares.

Issue 8: "of absense of" ==> "or absence of"

Issue 37: "combersome" ==> "cumbersome", also glDrawElement is redundant with glDraw{Array|Element}

Issue 41: "realted" ==> "related"

Issue 41, B3: (capitalization) "STENCIl" ==> "STENCIL"

Issue 55: "ont" ==> "not"

And that's as far as I got.

zeckensack
01-18-2005, 06:03 AM
More errors:
Issue 37: glMultiDrawElements and glMultiDrawArrays are missing.

Issue 28: "This parameter could be have been called <...>"
=> "This parameter could have been called <...>"

Korval
01-18-2005, 07:57 AM
Korval, from the "Multiple Render Targets" help page from the December 2004 DX SDK update:

"All surfaces of a multiple render target should have the same width and height."

I can't see how anything else would make sense.

From this MSDN page (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_c/directx/graphics/reference/d3d/interfaces/idirect3ddevice9/SetRenderTarget.asp) , you can clearly see that the size of the depth/stencil buffer is not bound to the size of the color buffer(s). It only needs to be bigger than the color buffer(s). While, yes, the size of color buffers must all be the same, they are not bound to the size of depth buffers.

Won
01-18-2005, 08:47 AM
One last error from me:

Issue 62: "Contxext" ==> "Context"

As Korval points out, a lot of things in this extension are deferred to layers or to ARB/core promotion, but after going through all the issues I realize even the decision of which issues to defer (and how to defer them) is pretty involved.

Was there really such a thing as EXT_Compromise_Buffers?

-Won

Korval
01-18-2005, 09:48 AM
Was there really such a thing as EXT_Compromise_Buffers?

My guess is that this was just a working name after they abandoned Superbuffers. No need to waste time debating an actual name when more substantive issues are on the table.

Oh, one question about ARB_FBO. It seems like one can only bind textures and so forth to application-created framebuffers (as opposed to the default one in the context). If so, why was this put into the spec? It seems rather limiting, though I can kinda see how interacting with the default framebuffer can be somewhat... difficult to specify.

cass
01-18-2005, 12:37 PM
Originally posted by Korval:
Oh, one question about ARB_FBO. It seems like one can only bind textures and so forth to application-created framebuffers (as opposed to the default one in the context). If so, why was this put into the spec? It seems rather limiting, though I can kinda see how interacting with the default framebuffer can be somewhat... difficult to specify.

Hi Korval,

Note this is EXT_FBO. It was developed by the ARB, but I'm glad we didn't rush to put the ARB stamp on it without putting some miles on the odometer. OpenGL is better served by proving extensions before carving them in stone.

On your specific question, interaction with the window-system framebuffer was a hornet's nest. It has issues like pixel ownership test, multisample, and other stuff that would only have slowed things down.

My philosophy on this was "the simpler, the sooner" and even trying to keep things simple, there were tons of issues to work out.

Thanks -
Cass

Overmind
01-18-2005, 12:43 PM
My guess is the problem is pixel ownership. What should happen if you bind, for example, a color texture to the default framebuffer, leaving the depth buffer alone, and you don't own some pixels because of overlapping windows?

Just don't own the pixels where you don't own them in some buffers? That's counterproductive: you want the whole image in the texture, not just part of it. On the other hand, the ownership test can't possibly pass everywhere, because in the depth buffer some pixels just don't exist...

EDIT: Too slow... :D

ffish
01-18-2005, 02:44 PM
Korval, my mistake. I thought you were talking about multiple render targets with different sized colour buffers. Dunno enough about depth buffers to discuss them.

Corrail
01-18-2005, 03:23 PM
Just a short question about color attachment points: why didn't you stick with the AUX buffers instead of adding these new attachment points?

ffish
01-18-2005, 03:26 PM
The rationale is discussed in one of the issues.

Corrail
01-18-2005, 03:41 PM
Thanks, I'll take a look at that.


When a texture object is deleted while one or more of its images is
attached to one or more framebuffer object attachment points, the
texture images are first detached from all attachment points in all
framebuffer objects and then the texture is deleted.

If a texture object is deleted while its image is attached to one or
more attachment points in the currently bound framebuffer, then it
is as if FramebufferTexture{1D|2D|3D}EXT() had been called, with a
<texture> of 0, for each attachment point to which this image was
attached in the currently bound framebuffer. In other words, this
texture image is first detached from all attachment points in the
currently bound framebuffer. Note that the texture image is
specifically *not* detached from any non-bound framebuffers.
Detaching the texture image from any non-bound framebuffers is the
responsibility of the application.
this is a little bit confusing...
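If I read the second paragraph correctly, it means something like this (fbo_a, fbo_b, and tex are hypothetical names):

```c
/* Attach the same texture image to two framebuffer objects. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo_a);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo_b);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

glDeleteTextures(1, &tex);
/* fbo_b is bound, so its attachment is reset to 0 automatically.
 * fbo_a (not bound) still references the deleted texture; the
 * application must detach it itself before using fbo_a again. */
```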

Is there any other way to get the data out of a Renderbuffer (like a glGetRenderbufferData) than attaching the Renderbuffer to a Framebuffer object, binding the framebuffer, calling glReadBuffer, and calling glReadPixels? If not, why?

Typos:
section 4.4.3
Doing so could lead to the creation of of a "feedback loop"...

Korval
01-18-2005, 06:04 PM
Is there any other way to get the data out of a Renderbuffer (like a glGetRenderbufferData) than attaching the Renderbuffer to a Framebuffer object, binding the framebuffer, calling glReadBuffer, and calling glReadPixels? If not, why?

No, you can't. It's probably the same rationale as for not having a RenderbufferImage call. See issues 9 and 10.

ffish
01-18-2005, 06:58 PM
Corrail, it's Issue 54.

supagu
01-18-2005, 09:14 PM
- EXT_framebuffer_object

is this meant to replace pbuffers?
or is there some key difference between fbo and pbuffer I'm not seeing?

Lurker_pas
01-19-2005, 12:22 AM
I haven't read the whole spec yet, but from the early presentations I assumed they should be more versatile (e.g. detaching/attaching depth buffers - am I correct?) and faster.
I hope ATI will implement them before the end of my end-of-term examinations :)

Jan
01-19-2005, 01:28 AM
Originally posted by supagu:
- EXT_framebuffer_object

is this meant to replace pbuffers?
or is there some key difference between fbo and pbuffer I'm not seeing?

See issue 2:



RESOLUTION: This extension should fully replace the pbuffer API.
Jan.

Corrail
01-19-2005, 01:52 AM
Originally posted by Korval:
No, you can't. It's probably the same rationale as for not having a RenderbufferImage call. See issues 9 and 10.

I see why there's no glRenderbufferImage, but getting the data out of a renderbuffer that way seems a little too complex to me. What about GPGPU applications which need the rendered data for further processing? I think a glGetRenderbufferData function would be handy.


Originally posted by ffish:
Corrail, it's Issue 54.

Thanks, now it's clear.

idr
01-19-2005, 07:08 AM
I see why there's no glRenderbufferImage, but getting the data out of a renderbuffer that way seems a little too complex to me.

Part of the rationale was that an application has to do all that work to get data into the renderbuffer, so it should be able to get the data out the same way. After all, you can't read from a pbuffer or a window that isn't current either.

Of course, little things like this are all the more reason we made it GL_EXT_ instead of going directly for GL_ARB_. ;)

V-man
01-19-2005, 07:30 AM
It looks like ARB_draw_buffers will have to be updated for this extension, and we have to assume that if the driver says it supports EXT_FBO and also has ATI_draw_buffers, it has been updated for EXT_FBO.

Ditto for ARB_draw_buffers and shaders too.

Korval
01-19-2005, 09:57 AM
What about GPGPU applications which need the rendered data for further processing? I think a glGetRenderbufferData function would be handy.

Simply call glReadPixels after you're done doing your GPGPU rendering stuff. You do it before you unbind the buffer. You can even use ARB_PBO to make it asynchronous.
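A sketch of that flow, assuming the FBO is still bound and a pixel-pack PBO with enough storage has already been created (`fbo`, `pbo`, `w`, and `h` are assumptions, not spec names):

```c
/* Read GPGPU results back through a pixel-pack PBO so the copy can be
 * asynchronous (ARB_pixel_buffer_object / ARB_vertex_buffer_object calls). */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
/* ... GPGPU passes ... */
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, 0);   /* 0 = offset into the PBO */

/* ... do other CPU work while the transfer proceeds ... */

GLfloat *results = (GLfloat *) glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB,
                                              GL_READ_ONLY_ARB);
/* ... use results ... */
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
```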

michael.bauer
01-20-2005, 02:05 AM
Korval,

is it (or will it be) possible to use FBO and PBO to do simultaneous upload of textures, rendering, and readback? I would like to do streaming (pipelined) image processing for lots of images. This would only be efficient if upload, rendering, and readback could be done at the same time (needless to say: for different images).

If not all, which parts of the described application could be done simultaneously?

Korval
01-20-2005, 09:20 AM
It depends far more on the hardware than on the API. But I wouldn't count on being able to have any of the specified functionality asynchronously overlap with any of the others.

qzm
01-20-2005, 12:34 PM
Does anyone have any sign on where we will see this implemented first?
Getting started on using this API is a large priority for me, so I am quite willing to purchase (just about) any hardware that supports it ASAP.

Of course, it would suit me the most if that happened to be NVidia, but hey.

Regards.

SirKnight
01-20-2005, 04:48 PM
Well...wouldn't all hardware that supports pbuffers be able to support this? It does just about the same thing but doesn't have the problems like pbuffers have with the context stuff. It's just a better interface for rendering to a texture and whatnot.

-SirKnight

Korval
01-20-2005, 05:57 PM
Well... wouldn't all hardware that supports pbuffers be able to support this? It does just about the same thing but doesn't have the problems like pbuffers have with the context stuff. It's just a better interface for rendering to a texture and whatnot.

In theory, yes. But this is a substantially different extension from pbuffers. The API is different, as are a number of other things. Internally, the biggest difference is that there is no context associated with the framebuffer. Plus, they have to actually be able to unbind the texture from the pbuffer and have it retain its data. What they've created is not trivial to implement. But 2-3 months ought to do it, including testing.

GKW
01-21-2005, 09:05 AM
I was hoping that this spec would allow for offscreen buffers without the creation of an onscreen context. I use OpenGL for image processing which I then display in a Java application. Opening an OS window which I never use is a waste.

Korval
01-21-2005, 09:37 AM
I was hoping that this spec would allow for offscreen buffers without the creation of an onscreen context.

And precisely how would it be able to do that without being window-system dependent (and therefore a WGL spec)? Besides, it isn't that much of a waste to just spawn a small, hidden window and leave it at that.

MZ
01-21-2005, 09:44 AM
Originally posted by GKW:
I was hoping that this spec would allow for offscreen buffers without the creation of an onscreen context. I use OpenGL for image processing which I then display in a Java application. Opening an OS window which I never use is a waste.

http://www.opengl.org/about/arb/notes/glP_presentation.pdf

V-man
01-21-2005, 11:59 AM
Originally posted by Korval:
In theory, yes. But this is a substantially different extension from pbuffers. The API is different, as are a number of other things. Internally, the biggest difference is that there is no context associated with the framebuffer. Plus, they have to actually be able to unbind the texture from the pbuffer and have it retain its data. What they've created is not trivial to implement. But 2-3 months ought to do it, including testing.

There are some features that may prove to perform really badly, like rendering to 3D texture slices.
It should be worse when mipmaps need to be generated for 3D textures.

Preserving the contents of textures would involve copying and de-swizzling when you render to them.

CheckFramebufferStatus might make it easier on the implementors.

Christian Schüler
01-21-2005, 12:47 PM
As far as I know, the only thing you need to have to create a GL rendering context is a device context. You could get a windows device context from an offscreen bitmap. I don't know if this works though.

Korval
01-21-2005, 02:38 PM
If your device context isn't an actual window, I'm pretty sure the system gives you an unaccelerated GL context.

SirKnight
01-21-2005, 02:42 PM
A few months eh, yeah I can see that. The simpler the interface the more difficult the implementation. :D

Well, on the bright side this will give me enough time to actually read the WHOLE spec. ;)

Not only that, I already have a header/source I made with everything defined, so that when drivers do come out with support I'll be instantly ready to go. :D

This is a pretty exciting addition to GL imo.

-SirKnight

GKW
01-21-2005, 09:39 PM
Rendering to a bitmap is an MCD-only option, so no hardware acceleration or any functions added after 1.2. It isn't a big deal, just annoying.

Humus
01-22-2005, 12:17 PM
glGetFramebufferAttachmentParameterivEXT <-- Possibly the longest entry point name in GL history. :)

3B
01-24-2005, 03:34 AM
Originally posted by Humus:
glGetFramebufferAttachmentParameterivEXT <-- Possibly the longest entry point name in GL history. :)

A quick look at glext.h shows SUN_vertex (http://oss.sgi.com/projects/ogl-sample/registry/SUN/vertex.txt) is still in the lead with glReplacementCodeuiTexCoord2fColor4fNormal3fVertex3fvSUN :eek:

Korval
01-24-2005, 08:06 AM
A quick look at glext.h shows SUN_vertex is still in the lead with glReplacementCodeuiTexCoord2fColor4fNormal3fVertex3fvSUN

I can't believe that Sun actually added this nonsense API as an extension. Do they even support this thing anymore? After GL 1.1's addition of vertex arrays, didn't they realize that immediate mode wasn't that important?

idr
01-24-2005, 09:03 AM
I can't believe that Sun actually added this nonsense API as an extension.

What's even more surprising is the number of workstation-oriented CAD apps and the like that actually use that extension. :(

Korval
01-24-2005, 03:48 PM
The more I think about it, the more concerned I am about renderbuffers. Specifically, the lack of a specific upload call to initialize them. I'm not concerned to the point of calling it a bad idea; it just seems... shortsighted. If renderbuffers become useful for something other than storing the output of a rendered image, being able to upload to them without binding and calling glDrawPixels will be quite useful.

ffish
01-24-2005, 08:30 PM
What do you want to upload? If you want to initialise them with procedural data, bind the renderbuffer as the current target and render whatever you want into it. Isn't that the whole purpose of them? If you want to upload a texture, bind the renderbuffer and render a textured line/quad/slices of 3D image into it.
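Roughly like this, assuming an FBO with the renderbuffer attached as color attachment 0 and `src_tex` holding the data to "upload" (fixed-function; the names are mine):

```c
/* "Upload" into a renderbuffer by drawing a full-screen textured quad. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glMatrixMode(GL_PROJECTION); glLoadIdentity();
glOrtho(0, 1, 0, 1, -1, 1);                /* unit quad fills the attachment */
glMatrixMode(GL_MODELVIEW); glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, src_tex);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
```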

Korval
01-24-2005, 11:19 PM
If you want to initialise them with procedural data, bind the renderbuffer as the current target and render whatever you want into it

Um, there's a non-trivial performance hit for doing that. You're tying up the entire render pipeline for no reason, wasting tons of cycles of fragment and vertex shaders, when all you really need is a simple DMA transfer.

Right now, the express purpose of renderbuffers is as destinations for rendered data. If we decide to add functionality to renderbuffers that makes uploads of data more critical/important, we will want the ability to upload via PBO and standard gl*Image calls, rather than tying up the entire rendering pipeline.

bobvodka
01-25-2005, 03:07 AM
Well, such functionality could well be added in another extension, following the ARB's trend of "get the basics down, then extend".

As you say, it might well be a nice addition, but it isn't critical to the stated use of the thing (and what I dare say the majority of people will be using it for in the first place). Other things will no doubt crop up over time which might need it to be extended in some way, and I'd rather have a working extension which does the basics now vs. all the bells and whistles and having to wait X months longer for a spec and then X+N months for drivers to implement it.

cyclone
01-25-2005, 05:39 AM
>I can't believe that Sun actually added this
>nonsense API as an extension. Do even they
>support this thing anymore? After GL 1.1's
>addition of vertex arrays, didn't they realize
>that immediate mode wasn't that important?

Your logic is very limited...

What about a very, very large set of vertex values that are dynamically generated "on the fly" and that cannot be stored in RAM (or VRAM)?

Here, only immediate mode can really help you...

Now, the only real problem with immediate mode is the number of function calls, so why would you fight this wonderful extension?

zeckensack
01-25-2005, 06:36 AM
Originally posted by cyclone:
>I can't believe that Sun actually added this
>nonsense API as an extension. Do even they
>support this thing anymore? After GL 1.1's
>addition of vertex arrays, didn't they realize
>that immediate mode wasn't that important?

Your logic is very limited...

What about a very, very large set of vertex values that are dynamically generated "on the fly" and that cannot be stored in RAM (or VRAM)?

Here, only immediate mode can really help you...

Now, the only real problem with immediate mode is the number of function calls, so why would you fight this wonderful extension?

ArrayElement(0)
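Spelled out, the one-liner seems to mean something like this - a sketch using only standard GL 1.1 calls; the struct layout and the generator functions are hypothetical:

```c
/* Point the client arrays at a single vertex record once, then per
 * generated vertex overwrite the record and make ONE call instead of
 * one call per attribute. */
struct vtx { GLfloat tc[2]; GLfloat col[4]; GLfloat n[3]; GLfloat pos[3]; } v;

glTexCoordPointer(2, GL_FLOAT, sizeof v, v.tc);
glColorPointer(4, GL_FLOAT, sizeof v, v.col);
glNormalPointer(GL_FLOAT, sizeof v, v.n);
glVertexPointer(3, GL_FLOAT, sizeof v, v.pos);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);

glBegin(GL_TRIANGLES);
while (more_vertices()) {        /* hypothetical generator */
    generate_next_vertex(&v);    /* fill in all attributes */
    glArrayElement(0);           /* emits the whole vertex in one call */
}
glEnd();
```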

idr
01-25-2005, 06:48 AM
...the lack of a specific upload call to initialize them.

Are there cases that you have in mind where a texture (initialized using TexImage[123]D) couldn't be used? The only thing currently supported by the spec that can't be initialized that way (that I can think of off the top of my head) is a renderbuffer used for stencil data.

We didn't define a method to initialize a renderbuffer, but we knew it might be desired. Basically, we took a wait-and-see approach.

cyclone
01-25-2005, 07:40 AM
Could you explain this ArrayElement(0)?

Do you want to redefine the vertex arrays for each component of each vertex???

In that case, a single glColor*Normal*TexCoord*Vertex*SUN call seems more elegant to me (and really more optimized) than using glVertexPointer/glColorPointer/glNormalPointer/glTexCoordPointer/glArrayElement for each vertex :)

Korval
01-25-2005, 10:06 AM
Are there cases that you have in mind where a texture (initialized using TexImage[123]D) couldn't be used? The only thing currently supported by the spec that can't be initialized that way (that I can think of off the top of my head) is a renderbuffer used for stencil data.

Well, using a texture can be done, but we currently have no way of creating a texture that we can guarantee is compatible with what the hardware wants. We can't hint to the card, "This texture's primary purpose is to be used as a render target." The only kind of surface we can create that is virtually guaranteed to be framebuffer compatible is the renderbuffer.

Also, textures are likely swizzled, while renderbuffers are probably in a format that is best used for framebuffer rendering. Thus, renderbuffers will likely be faster than textures. But this is a minor point.

Plus, renderbuffers don't have power of 2 restrictions, and textures do (without the NPOT extension, of course).

I agree that it isn't a problem yet. But I believe in being proactive about problem solving; better not to have a problem at all than to let it become one in the future.
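For reference, the two attachment paths being compared look roughly like this with the entry points the EXT_framebuffer_object spec defines. This is a fragment, not a complete program: it assumes a current GL context with the extension's functions resolved, the 512x512 size is arbitrary, and error handling is omitted.

```c
GLuint fbo, color_tex, depth_rb;

/* Path 1: a texture as the color target. It works, but at
   TexImage2D time the driver gets no hint that this image will be
   rendered to, so it may pick a swizzled, sampling-friendly layout. */
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Path 2: a renderbuffer as the depth target. It exists only to be
   rendered to, so the implementation is free to choose a
   framebuffer-friendly layout. */
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                         512, 512);

/* Attach both to a framebuffer object and check completeness. */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_rb);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
    GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* this format combination isn't supported; fall back */
}
```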

OT:

What about a very, very large set of vertex values that are dynamically generated on the fly and that cannot be stored in RAM (or VRAM)?

Here, only immediate mode can really help you ...

If you have so many vertices that you can't store them all, then don't. Double-buffer them and send them in short(er) batches with glDrawArrays calls.
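A quick sketch of that batching scheme. Everything here is made up for illustration (the `Vtx` struct, `BATCH_SIZE`, the function names); the actual `glDrawArrays` call is left as a comment so the fragment compiles without a GL context, and `flush_calls` exists only to make the behavior visible.

```c
#include <stddef.h>

#define BATCH_SIZE 1024          /* vertices per batch; tune to taste */

typedef struct { float x, y, z; } Vtx;

static Vtx    batch[BATCH_SIZE]; /* fixed-size CPU-side staging buffer */
static size_t batch_count = 0;
static size_t flush_calls = 0;   /* illustration only */

/* Submit whatever has accumulated so far as one draw call. */
static void flush_batch(void)
{
    if (batch_count == 0) return;
    /* In real code, with glVertexPointer already aimed at `batch`:
       glDrawArrays(GL_POINTS, 0, (GLsizei)batch_count); */
    flush_calls++;
    batch_count = 0;
}

/* Append one generated vertex; flush automatically when full. */
static void emit_vertex(float x, float y, float z)
{
    batch[batch_count].x = x;
    batch[batch_count].y = y;
    batch[batch_count].z = z;
    if (++batch_count == BATCH_SIZE)
        flush_batch();
}
```

The generated data never has to exist in memory all at once: only one batch is live at a time.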


Now, the only real problem with immediate mode is the number of function calls, so why would you fight against this wonderful extension?

It is not a "wonderful" extension. It is an extension that is useful for a very rare case which, ultimately, can be solved in other ways (see above).

I noticed the lack of basic multitexture support as well (let alone generic vertex attributes for ARB_vp or glslang), which makes it far too limited for any modern graphics use.

zeckensack
01-25-2005, 10:35 AM
Originally posted by cyclone:
Could you explain this ArrayElement(0)?

Do you want to redefine the vertex arrays for each component of each vertex???

No. I suggested setting aside the space required for a single vertex, and initializing the attribute pointers once.

In that case, a single glColor*Normal*TexCoord*Vertex*SUN call seems more elegant to me (and really more optimized) than using glVertexPointer/glColorPointer/glNormalPointer/glTexCoordPointer/glArrayElement for each vertex :)

Elegant? A matter of taste, obviously. I don't think it's elegant.
I also strongly doubt that it's faster. That largely depends on the machine and compiler you're using, and on how efficient stack operations are compared to direct moves. My guess is that the performance difference will be minimal at best.
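Spelled out, the single-scratch-vertex trick might look like this. The names are invented, and the GL calls are shown as comments so the sketch compiles without a context; the point is that the pointers are set once, and each generated vertex is just a struct overwrite plus one glArrayElement(0).

```c
#include <string.h>

/* One vertex worth of scratch storage; the array pointers aim here. */
typedef struct {
    float pos[3];
    float color[4];
} ScratchVertex;

static ScratchVertex scratch;

/* Done once, outside glBegin/glEnd. */
static void stream_setup(void)
{
    /* glVertexPointer(3, GL_FLOAT, sizeof(ScratchVertex), scratch.pos);
       glColorPointer(4, GL_FLOAT, sizeof(ScratchVertex), scratch.color);
       glEnableClientState(GL_VERTEX_ARRAY);
       glEnableClientState(GL_COLOR_ARRAY); */
}

/* Per generated vertex: overwrite the scratch slot, then source it. */
static void stream_vertex(const float pos[3], const float color[4])
{
    memcpy(scratch.pos, pos, sizeof scratch.pos);
    memcpy(scratch.color, color, sizeof scratch.color);
    /* glArrayElement(0);  -- legal between glBegin and glEnd; reads
       whatever the scratch vertex currently holds. */
}
```

One call per vertex, regardless of how many attributes the vertex has, which is the same call-count win the SUN_vertex functions offer.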

Humus
01-25-2005, 01:26 PM
Plus, I don't think performance of immediate mode rendering depends primarily on function call overhead, but rather on the overhead of the driver assembling things into a vertex array under the hood and updating current state.

cyclone
01-25-2005, 02:18 PM
>Plus, I don't think performance of immediate mode
>rendering depends primarily on function call
>overhead, but the overhead for the driver to
>assemble things into a vertex array under the hood
>and update current states.

In another thread, I gave that to convert all glVertex/glNormal/glColor/glTexCoord into one vertex

cyclone
01-25-2005, 03:00 PM
Excuse me for involuntarily pushing Add Reply before finishing my last message :)

In another thread, I gave some code for a hypothetical gl2999 API, and it seems clear to me that converting a lot of glColor / glNormal / glTexCoord / glVertex calls into one vertex array isn't very difficult (and making it multi-indexed too, but that is another story ...)

And calling one glDraw* when you reach the glEnd isn't very hard either :)

So converting from immediate mode to indexed mode isn't really a problem ...

About the glVertex/Color/Normal/TexCoord call overhead: the calls can be inlined or #defined to hide the push/pop of function parameters, but OK, that can certainly be problematic if you want to ship it as an external library ...

And yes, glVertex/Color/Normal/TexCoord call overhead can really be a limit ...
=> why do you think they want us to always use the vertex array mechanism?

And about calling glArrayElement(0) on each vertex: I haven't actually tested it, but it does seem like a good idea to me :)
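The conversion being described, an immediate-mode front end that merely accumulates into an array and submits once at End, can be sketched in a few lines. The `im_*` names are invented, `GL_POINTS`-style mode handling is ignored, and the draw call a real driver would issue is left as a comment.

```c
#include <stddef.h>

#define IM_MAX_VERTS 65536

static float  im_data[IM_MAX_VERTS][3];  /* accumulated positions */
static size_t im_count;

/* Stand-in for glBegin: just reset the accumulator. */
static void im_begin(void)
{
    im_count = 0;
}

/* Stand-in for glVertex3f: append to the array instead of drawing. */
static void im_vertex3f(float x, float y, float z)
{
    if (im_count >= IM_MAX_VERTS) return;  /* sketch-level guard */
    im_data[im_count][0] = x;
    im_data[im_count][1] = y;
    im_data[im_count][2] = z;
    im_count++;
}

/* Stand-in for glEnd: submit everything in one batch. */
static size_t im_end(void)
{
    /* glVertexPointer(3, GL_FLOAT, 0, im_data);
       glDrawArrays(mode, 0, (GLsizei)im_count); */
    return im_count;  /* vertices submitted in this one batch */
}
```

This is essentially what Humus describes the driver doing under the hood, which is why the per-call overhead, not the conversion itself, is the interesting cost.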

Dez
02-01-2005, 04:59 AM
It seems that it finally got to the registry:
http://oss.sgi.com/projects/ogl-sample/registry/EXT/framebuffer_object.txt
So let the best-and-earliest-implementation contest begin ;)