GL_ARB_pixel_buffer_object



husakm
10-25-2003, 08:23 AM
The documentation for the latest nVidia 52.16 drivers says they support GL_ARB_pixel_buffer_object. After installing the drivers, the extension is not visible in the extension string, and documentation for it does not seem to be accessible either. Does anybody have any information?

Adrian
10-25-2003, 08:37 AM
It could be the ARB version of NV_pixel_data_range. I hope it is.

[This message has been edited by Adrian (edited 10-25-2003).]

Ostsol
10-25-2003, 02:28 PM
Either that or a redefinition of how textures are used/defined. Upon noticing that none of the functions in ARB_vbo uses the word vertex, I thought it'd be interesting if the same functionality were used for -any- data buffered on the video card. Also, it's hard not to notice that textures often store data other than colour maps, which is what 'texture' generally refers to by definition.

I'm probably entirely wrong, though. I don't really think there's any reason to redefine textures in OpenGL 1.x.

Zengar
10-25-2003, 02:31 PM
Originally posted by Ostsol:
Upon noticing that none of the functions for ARB_vbo uses the word vertex, I thought it'd be interesting if the same functionality were used for -any- data buffered on the video card.

= superbuffers

ARB_pixel_buffer_object would be nice, yes

Korval
10-26-2003, 05:33 PM
ARB_pixel_buffer_object would be nice, yes

This was something of a minor controversy when ATi's presentation on the progress of the superbuffers extension came out. The API for using a buffer as source vertex data was very different from the VBO API. The best reasoning for this that we were able to come up with was the fact that, at the time, VBO was still an extension, and not part of the eventual 1.5 core. As such, they probably felt that binding the superbuffers extension to another extension would be a bad idea.

In any case, whether they use the VBO API or their own, ARB_superbuffers should be able to handle NV_pixel_data_range just fine. Unless there's some functionality there that I don't know about.

Zengar
10-27-2003, 12:32 AM
I guess pixel_buffer_object could be the long-awaited superbuffers extension. Moreover, in the release notes of the 5x.xx drivers NVidia claims to support floating-point textures under DirectX (currently disabled). If that is true, ARB_pixel_buffer_object could provide the same functionality, maybe eliminating the need for pbuffers.

MZ
10-27-2003, 06:44 AM
Moreover, in the release notes of the 5x.xx drivers NVidia claims to support floating-point textures under DirectX (currently disabled)
GL_FALSE (http://discuss.microsoft.com/SCRIPTS/WA-MSD.EXE?A2=ind0310b&L=directxdev&D=1&F=&S=&P=15610) ;)

Also, I installed 52.16 and the DX Caps Viewer showed no FP formats.



[This message has been edited by MZ (edited 10-27-2003).]

cass
10-27-2003, 08:26 AM
VBO is to VAR as PBO is to PDR.

Superbuffers is a whole other thing with a much broader scope (and some inevitable overlap as well).

Thanks -
Cass

Zengar
10-27-2003, 09:08 AM
Yuhuu! Nice to hear from you, cass. You always turn up just when one has given up hope :D BTW, I find your nvidia presentations simply excellent! When will we have the spec (if it's not a secret)?

cass
10-27-2003, 09:35 AM
Hi Zengar,

Thanks! :)

I can't really say when specs will be available, as they depend on how long the standardization process takes. It's possible that the ARB could supply a public draft as was done with the GLSL effort.

Thanks -
Cass

husakm
10-27-2003, 11:28 AM
I was rather expecting ARB_pixel_buffer_object to be a somewhat more standardized version of the NV_pixel_data_range (PDR) extension ... Does anybody know whether this extension will be supported by ATI, or whether it will offer any benefits compared to PDR (e.g. support for 8-bit data; PDR currently works only with 16-bit and 32-bit data)?

cass
10-27-2003, 12:39 PM
I should also point out (thanks to MrBill@ATI for bringing this to my attention :) ) that ARB_pixel_buffer_object is only a proposal at this point, and it was an error to have included it in the release notes.

It is an extension that I hope gets specified and implemented, because it is a logical extension of VBO. In fact, when we were working on VBO, we almost called it BO :) just to indicate that the Buffer Object mechanism was not limited to vertices.

Sorry if my post was misleading on the existence/completeness of PBO.

Thanks -
Cass


[This message has been edited by cass (edited 10-27-2003).]

pbrown
10-27-2003, 12:41 PM
This is an error in the release notes for the 52.16 drivers. The ARB has not approved (or even reviewed) a GL_ARB_pixel_buffer_object extension, and the 52.16 drivers don't support it in any form (as an ARB, EXT, or NV extension).

Cass' "VBO is to VAR as PBO is to PDR" above describes in a nutshell what a PBO extension would do -- it would work just like ARB_vertex_buffer_object, but for pixel operations.

Sorry for the confusion -- I'll see about getting the release notes fixed.
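
A rough sketch of how such an extension might look in use, assuming it simply reuses the VBO entry points with a new "pack" binding target; the GL_PIXEL_PACK_BUFFER_ARB token and the usage hint are placeholder names, since no spec has been published. The readback variant could look something like this:

/* entry points obtained through the usual extension mechanism */
GLsizei width = 512, height = 512;   /* example dimensions */
GLuint pbo;
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
/* allocate space for the readback; no data supplied yet */
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, width * height * 4, NULL, GL_STREAM_READ_ARB);

/* with a pack buffer bound, the pointer argument becomes a byte offset into
   the buffer, so ReadPixels could return without waiting for the copy */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)0);

/* ... do other work, then map the buffer only when the data is needed ... */
void *pixels = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
/* use pixels ... */
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);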

mrbill
10-27-2003, 02:05 PM
Originally posted by cass:
I should also point out (thanks to MrBill@ATI for bringing this to my attention :) ) that ARB_pixel_buffer_object is only a proposal at this point, and it was an error to have included it in the release notes.

I was reluctant to submit a reply since I wasn't sure if my memory was fogged (quite likely) or Homer fixed another toaster (one never knows). (Episode 606, "Treehouse of Horror V", 1994-10-30)

(insert silly smiley face)

-mr. bill

SirKnight
10-27-2003, 05:20 PM
Just wondering, why is it called pixel_buffer_object and not fragment_buffer_object? Wouldn't it make more sense for it to be fragment_...? I don't know, maybe I'm just weird. :D


-SirKnight

Korval
10-27-2003, 07:36 PM
Just wondering, why is it called pixel_buffer_object and not fragment_buffer_object? Wouldn't it make more sense for it to be fragment_...?

Well, ARB_pixel_buffer_object is clearly about storing/copying pixels, not fragments. A fragment is just the set of state that goes into creating an on-screen sample. Any fragment programs, blending, antialiasing, etc. have already happened by that point.

cass
10-28-2003, 05:14 AM
Originally posted by SirKnight:
Just wondering, why is it called pixel_buffer_object and not fragment_buffer_object? Wouldn't it make more sense for it to be fragment_...? I don't know, maybe I'm just weird. :D


Korval's got the right idea. This extension is about accelerating (and making asynchronous) glReadPixels, glDrawPixels, glTex{Sub}Image2D, etc. It's not about fragments. Fragments are a pipeline entity that correspond to pixel locations and carry auxiliary information for texturing, fogging, depth test, etc.

You shade fragments, you read pixels.

Of course, if you supported a first-class f-buffer, you might also support ReadFragments and DrawFragments. But the extension that supported that asynchronously would probably be called FBO. ;)

Thanks -
Cass


[This message has been edited by cass (edited 10-28-2003).]
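
To make the other direction of the pixel path concrete, here is a comparable sketch for uploads, e.g. streaming data into a texture via glTexSubImage2D. Again this assumes VBO-style entry points; the GL_PIXEL_UNPACK_BUFFER_ARB target and the fill_image() helper are hypothetical placeholders, not from any published spec:

GLsizei width = 512, height = 512;   /* example dimensions */
GLuint pbo, tex = 1;                 /* tex: an existing RGBA texture object */
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, width * height * 4, NULL, GL_STREAM_DRAW_ARB);

/* write the new texels directly into driver-owned memory */
void *dst = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
fill_image(dst, width, height);      /* hypothetical application routine */
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);

/* with an unpack buffer bound, TexSubImage would read from the buffer at the
   given offset, letting the driver DMA the data without an extra CPU copy */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);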

SirKnight
10-28-2003, 05:29 AM
Ok... I thought it had something to do with fragments instead of pixels. I haven't really read much about it, so I was just guessing here. :D And yeah, I know what a fragment is. ;) Since it was said that pixel_buffer_object is to pixels what vertex_buffer_object is to vertices, I just figured the "pixel" part really meant fragments. In that case, this extension sounds quite nifty. Is it only going to be for NV40+ and whatever ati's next chip is?

Tom Nuydens
10-28-2003, 07:57 AM
Originally posted by SirKnight:
Is it only going to be for NV40+ and whatever ati's next chip is?

There's no reason it should be. Look at VBO for instance: it's available even on TNT2.

-- Tom

cass
10-28-2003, 08:19 AM
Originally posted by Tom Nuydens:
There's no reason it should be. Look at VBO for instance: it's available even on TNT2.

-- Tom

Right - this is just like VBO in the sense that it provides asynchronous I/O and the ability to transparently support driver-managed memory and goodies like hardware DMA. Even if the hardware doesn't support those things, there are numerous potential performance advantages that can be exploited.

Thanks -
Cass

Ostsol
10-28-2003, 08:19 AM
Originally posted by Tom Nuydens:
There's no reason it should be. Look at VBO for instance: it's available even on TNT2.

-- Tom
That doesn't necessarily mean that it's hardware accelerated.

cass
10-28-2003, 03:14 PM
Originally posted by Ostsol:
That doesn't necessarily mean that it's hardware accelerated.

One good thing about VBO is that it's a better interface for software T&L than compiled vertex arrays and pretty much all the other previous attempts to solve the "vertex buffer" problem.

So while VBO isn't necessarily "hardware accelerated" on all platforms, it should still provide the opportunity for "better acceleration" than other APIs for vertex array management.

Thanks -
Cass




[This message has been edited by cass (edited 10-28-2003).]
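
For comparison, the VBO interface being referred to looks roughly like this (a minimal sketch; the triangle data here simply stands in for the application's own vertex data). The point is that the data is handed to the driver once and then referenced by offset, so the driver decides where the buffer actually lives:

static const GLfloat vertices[] = {   /* one example triangle */
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
};
const GLsizei numVerts = 3;

GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(vertices), vertices, GL_STATIC_DRAW_ARB);

/* the array "pointer" is now a byte offset into the bound buffer */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (GLvoid *)0);
glDrawArrays(GL_TRIANGLES, 0, numVerts);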

Ostsol
10-28-2003, 04:58 PM
True, though my point was simply that just because a given extension is supported in drivers for a given video card, it doesn't mean that the feature is hardware accelerated. Granted, software support is certainly better than no support at all (as you have shown in your example).

cass
10-28-2003, 08:54 PM
Originally posted by Ostsol:
True, though my point was simply that just because a given extension is supported in drivers for a given video card, it doesn't mean that the feature is hardware accelerated. Granted, software support is certainly better than no support at all (as you have shown in your example).

Software support isn't necessarily better. You don't want a GeForce2 claiming to support ARB_fragment_program only to fall back to software when you use one. :)

In general, an extension is supported when it can be supported without hurting performance.

davepermen
10-29-2003, 12:07 AM
why wouldn't you want that?

it would be cool to have some sort of switch in gl:
full gl
fast gl

:D

well, on nvidia cards you can always enable an emulator of some other card.. but that's a hacky solution.. i would like to be able to use ARB_fp as a general backend for performing some tasks in a raytracer. it wouldn't only be for realtime use.. in a non-realtime solution it would be great to just use ARB_fp: in hw if supported, in software if not.

i think a properly installed opengl should support everything..

it could be a driver option..

oh, and ps2.0 runs rather fast in sw-shader :D i know why you wanted to hire nick :D

Christian Schüler
10-29-2003, 12:19 AM
Originally posted by davepermen:
why wouldn't you want that?


You demand a general shift of workload from the application programmer to the driver programmer. Guess who doesn't like this :-)