Radeon / GeForce performance question



heeb
05-12-2001, 12:23 AM
Hi,

I have a demo app that creates a reflection on water by rendering the scene into a texture (using glCopyTexSubImage) and then blending it with the water surface using a texture projection, so the reflection distorts as the water ripples. The frame rates on 3 different systems are shown below, which brings me to my questions:

Why is there a massive performance difference between the GeForce & Radeon cards? The bottleneck on the Radeon is the glCopyTexSubImage call (see Radeon performance with and without render to texture).

Why is the software renderer's performance unaffected by the call to glCopyTexSubImage, while the Radeon's is?

Source and/or exe can be downloaded here if anyone wants to try it. www.futurenation.net/glbase (http://www.futurenation.net/glbase)
Both downloads are less than 300k.
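For reference, the copy-to-texture pass boils down to roughly this (a minimal sketch, not the demo's exact code; the 512x512 size, the plain GL_RGB format and the helper/variable names are placeholders):

/* One-time setup: allocate an empty 512x512 texture to copy into. */
GLuint reflTex;
glGenTextures(1, &reflTex);
glBindTexture(GL_TEXTURE_2D, reflTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Each frame: render the mirrored scene into a 512x512 corner of the
   back buffer (the window must be at least 512x512 for this to work),
   copy it into the texture, then draw the real view. */
glViewport(0, 0, 512, 512);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawReflectedScene();                     /* hypothetical helper */
glBindTexture(GL_TEXTURE_2D, reflTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

glViewport(0, 0, winWidth, winHeight);    /* back to the window size */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene();                              /* hypothetical helper */
drawWaterWithProjectedTexture(reflTex);   /* hypothetical helper */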

Performance stats:

AMD Thunderbird 800 / GeForce2 MX (tested by BigB)
1024x768/32 bits - between 90 and 140 fps, very smooth
1024x768/16 bits - between 25 and 30 fps

Celeron 466 / Radeon 64DDR (my system)
1024x768/32 bits - 2 fps with render to texture on (250 fps with render to texture off)

Pentium 266 / no OpenGL acceleration (my laptop)
1024x768/32 bits - 1 fps with render to texture on (1 fps with render to texture off)

Thanks
Adrian

mcraighead
05-12-2001, 10:08 AM
I can't speak for anyone else, but I do have a suggestion for why 16-bit performance might drop below 32-bit performance.

Make sure to match the texture internal format to the framebuffer format. So, for a 16-bit mode, use GL_RGB5, and for a 32-bit mode, use GL_RGB8 (or GL_RGBA8 if necessary).

Note that if you use GL_RGB, our default way of interpreting that is to go with the current color depth, i.e., you'll automatically get GL_RGB5 and GL_RGB8. Unfortunately, someone can change this behavior with the registry, so you can't necessarily rely on it.

If you use GL_RGB5, you should get faster performance in 16-bit than in 32-bit...
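In code, that just means passing a sized internal format when the texture is created (a sketch; the 512x512 size and the colorBits variable are placeholders):

/* Pick the texture's internal format to match the display depth. */
GLint internalFormat = (colorBits <= 16) ? GL_RGB5 : GL_RGB8;

glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);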

- Matt

Sylvain
05-12-2001, 10:23 AM
Well, I bought a Radeon SDR 32 because I'd already taken advantage of the VAR extension on NVIDIA boards and I was interested in testing another 'claimed' T&L board. :) So far I've only done a few tests, of course with the latest drivers (and no ATI-specific extensions), and it's awful, another joke from ATI! :) Three times (or more) slower than a TNT2 board! So what could be the reasons?
1.) The board is **** like previous ATI boards!
2.) The people writing drivers at ATI are silly, or (I hope for them) have no time to develop clean releases for their own products.
3.) The people writing what they certainly call 'code' are silly for sure, when you must install DX8 before installing an 11MB driver archive! Really nice stuff! Congratulations, and see you in hell with 3dfx!

Oops, sorry, of course 3dfx is now in purgatory with NVIDIA. :)

heeb
05-12-2001, 10:28 AM
My fault, I'm afraid. To try to fix the performance problems running the app on my Radeon, I tried forcing the texture internal format to GL_RGB8 and forgot to change it back to GL_RGB before releasing the code. Thanks for pointing it out.

heeb
05-12-2001, 10:47 AM
Oops, that last post of mine was meant as a reply to Matt of course, not to Sylvain.

Sylvain:

I totally disagree (except on the DX8 thing). When looking at which board to buy, I thought the Radeon offered better value for money than the GeForce2, and I still think that's the case. ATI are still developing the drivers, so hopefully all the issues will eventually be resolved.

Sylvain
05-12-2001, 10:57 AM
Heh, just joking! But perhaps ATI will eventually solve their problems (it's not a brand new card any more), and for sure they are crediting their bank account with our money! ;)
My conclusion: for performance and reliability, the GeForce2 is the best card going in terms of money/quality. ;)
That's my point of view. :)
Just a silly question:
what is the equivalent on the Radeon of NVIDIA's Vertex Array Range extension?

heeb
05-12-2001, 11:20 AM
ATI_vertex_array_store ?

ET3D
05-12-2001, 01:43 PM
I thought that maybe the Radeon switches completely to software rendering. But after checking your code, and assuming that you're just using the 'updtRefl' flag to disable the copy and that this is what speeds up the Radeon, I can only assume it's just this particular operation that's done in software. That would mean copying the display to main memory, modifying it, and copying it back to the card - very slow.

I can't speculate on exactly when the Radeon resorts to this, but I suggest that you use the same texture format as the display. It may not even be your program's fault - go to the Radeon's OpenGL tab in the display settings. I think there's a switch there to always use 16-bit textures, and that it's on by default. See if turning it off helps, and whether any of the other switches make a difference.

[This message has been edited by ET3D (edited 05-12-2001).]

Humus
05-12-2001, 02:54 PM
Originally posted by Sylvain:
My conclusion: for performance and reliability, the GeForce2 is the best card going in terms of money/quality. ;)

Bah, I wouldn't trade my Radeon for a GF2. It's both faster and more feature-rich than the GF2.


Originally posted by Sylvain:
what is the equivalent on the Radeon of NVIDIA's Vertex Array Range extension?

There used to be a GL_EXT_vertex_array_range extension that was essentially a copy of GL_NV_vertex_array_range, but it has been removed in newer drivers. I think nVidia has some patents on it and didn't like that. Which brings me to something off topic but really annoying: while most really useful extensions that come out of ATi and most other major players usually end up as GL_ARB or GL_EXT, the useful extensions that come out of nVidia end up, in 95% of cases, as GL_NV and proprietary, reducing their usefulness to below half. It makes me wonder whether nVidia has understood what the "Open" part of OpenGL means ...

j
05-12-2001, 03:11 PM
Well, it's true that many of nVidia's extensions are proprietary. It isn't very nice of them, but the truth is that graphics card companies are in the business to make money, and having proprietary extensions combined with a large market share helps lock people into your hardware.

The reason the non-proprietary ones aren't promoted to EXT or ARB status is that the ARB decided promoting extensions was wasteful. They often ended up with 3 different extensions that all had the same functionality but different enumerants and function names (SGIX_multitexture, EXT_multitexture, ARB_multitexture). Why bother, when the only real reason the extensions were promoted was that nVidia didn't want a 3dfx extension on their card, or ATI didn't want an SGIX extension on theirs?

Additionally, multiple extensions for the same functionality make things harder for driver writers, who have to handle code for all 3 extensions, and for developers who are using an extension that got promoted to EXT status and is no longer supported on cards whose driver writers never thought about backwards compatibility.

So instead of promoting, just use the original extension. Who cares where it came from?

j

Sylvain
05-13-2001, 12:00 AM
Well Humus, I don't know what you're using to test Radeon performance vs. the GeForce2, but I get really bad numbers on my Celeron 333MHz with the app I'm working on. I've tested both primitive types and different call styles: classic gl commands, vertex arrays, display lists, etc. :( Moreover, the Radeon drivers are not finished! There are big problems with alpha-blended primitives and lighting, plenty of rendering bugs, and specular lighting is ugly! The Radeon is NOT a brand new board. I don't like the nVidia way of development either, but I think it's a shame that months after the board was delivered to market the drivers are still buggy and unfinished, and of course we are still far from driver optimisations. :( And please remember that this driver development problem has been the same with ATI graphics since the beginning. Things were easier before, with 2D-only boards! :)

heeb
05-13-2001, 12:29 AM
ET3D: I think I tried all that and got no improvement, but I'll try again in case I missed anything.

Humus
05-13-2001, 04:07 AM
Originally posted by j:
Well, it's true that many of nVidia's extensions are proprietary. It isn't very nice of them, but the truth is that graphics card companies are in the business to make money, and having proprietary extensions combined with a large market share helps lock people into your hardware. [...]

Oh well, it's all about money as usual ... if I could live in a world without money, I'd be a happy man ...
You're right, it really doesn't matter who thought up an idea first, but I just don't like it when they lock extensions like that. If ATi could have GL_NV_vertex_array_range in their driver it would be nice, and nVidia could have all those ATi extensions in theirs too. Why name extensions after whoever thought them up anyway? They could just be called something like GL_vertex_array_range and no one would need to feel bad about having someone else's name in their extension string. I want OpenGL to live up to its name and be open. When they lock an extension, it's not like the competitors are just going to drop that functionality; they will come up with their own extension, and it'll end up as a pain in the ass for developers who have to support several extensions for the same thing.

Humus
05-13-2001, 04:28 AM
Originally posted by Sylvain:
Well Humus, I don't know what you're using to test Radeon performance vs. the GeForce2, but I get really bad numbers on my Celeron 333MHz with the app I'm working on. [...]

Well, when I judge performance I try games that are shipping, not some beta code I've come up with myself, "done a few tests" with, and that doesn't take advantage of the features the card supports.
I could just as well write an app that uses 3D textures and claim my Radeon is 250 times as fast as a GF3.
There's way too much whining about "ATi cannot write drivers ...". I'm pretty sure that if I bought a GF3 today, many of my old OpenGL apps would not work. Why? Because they've only been tested on a Radeon. I've been using a Radeon for half a year now, I've been satisfied, and I've never had any of the problems you're talking about.

Also, have you tried my benchmark GL_EXT_reme? ( http://hem.passagen.se/emiper/3d.html )
It's interesting to note that Radeons actually beat a GF3 in T&L performance with static display lists and 8 lights.


[This message has been edited by Humus (edited 05-13-2001).]

HFAFiend
05-13-2001, 11:35 AM
Originally posted by Sylvain:
"Well, i've bought a radeon SDR 32"..."it's awfull another joke from ATI!! http://www.opengl.org/discussion_boards/ubb/smile.gif) 3 times (or more) slower than a TnT2 board !!"

Two points...
1) You are not too bright if you think an SDR anything will be fast.
2) My Radeon is faster than my friend's GeForce2 in many programs (and people say it looks better)... so obviously your tests suck.

(Sorry... that post just pissed me off too much not to comment.)

[This message has been edited by HFAFiend (edited 05-13-2001).]

Sylvain
05-13-2001, 11:49 AM
Sorry, guys who have bought this Radeon board, but in a real situation (around 20,000-30,000 points in the field of view, lighting enabled) this card sucks with the generic drivers from March 20th.
Anyway, you can always listen to the marketing people at ATI, who certainly claim this card is the best ever made! I don't think testing with games that don't use T&L proves anything. Moreover, using display lists and thinking you are using T&L is just silly. On a GeForce, for example, you can really boost performance with static geometry and the Vertex Array Range, and I haven't heard of any (working) equivalent on the Radeon so far.
Anyway, if Q3 works fine with your Radeon, everything should work, eh? Blah blah blah...
My first intention was to support this card, but the drivers really suck, so I'll wait a bit.

Humus
05-13-2001, 01:14 PM
Originally posted by Sylvain:
Sorry, guys who have bought this Radeon board, but in a real situation (around 20,000-30,000 points in the field of view, lighting enabled) this card sucks with the generic drivers from March 20th. [...]

Yet another nVidiot comment ...
You get the card, and after a few tests that didn't work as expected you just call it crap. Why not try to find the source of the problem instead? If the problem can be traced to a driver bug, then send a mail to devrel@ati.com instead of coming out with stupid comments about it.
Also, a static display list is (or at least should be) transformed and lit by the T&L unit. Sure, vertex array range may be faster on some systems, but that doesn't make the benchmark invalid.
Also, have you tried any game that actually uses the T&L unit? Radeons aren't that far behind, and they're way ahead of any software T&L card.

About the "commercials @ ATI" stuff: since you're making such comments, I hope you're able to see through the commercials at nVidia too. Wasn't the GF3 supposed to give an "up to seven-fold increase in delivered performance", as stated in the GF3 product overview PDF?

D'Agostini
05-13-2001, 01:16 PM
I would just like to know whether this driver problem occurs only in Win 2000 (mainly with AMD processors), as I've heard, or elsewhere too.

Sylvain
05-13-2001, 01:43 PM
arf.

Gorg
05-13-2001, 07:34 PM
I am not going to add to this senseless and very childish discussion, but I thought Matt was clear about extensions a while ago. This is how I understood it:

When a vendor creates an extension, they name it with their company ID. If other vendors come to them and say "hey, this extension is cool, I want to help do it", then it becomes an EXT.

If the review board says "hey, this extension is cool and generic enough", they make it an ARB extension.

Oh, and having NV or ATI on an extension doesn't stop another vendor from using it.

[This message has been edited by Gorg (edited 05-13-2001).]

paddy
05-14-2001, 05:28 AM
This thread is very funny.
I wonder if it belongs in "OpenGL advanced coding" but it's funny.

JackM
05-14-2001, 10:13 AM
Humus and Sylvain, get a life, both of you. Terms like "nVidiot" may be acceptable on some fanboyish site, but not here.

This thread officially sucks....


Jack

davepermen
05-14-2001, 10:18 AM
Yeah, it sucks somehow, but anyway, the stats show a simple fact:
either the Radeon is **** or the Radeon driver is ****.
I don't think the Radeon is ****, so it has to be the driver.

AND:
the nVidia drivers now have new CopyTexSubImage routines; the old ones were much slower, and I know BigB uses the new drivers.

CopyTexSubImage means copying memory, and in the software renderer it's no big stress because there it's in fact little more than a memcpy ;) so it doesn't take much time.

Humus
05-14-2001, 11:18 AM
Originally posted by Gorg:
[...] Oh, and having NV or ATI on an extension doesn't stop another vendor from using it.


Sure, but if it's proprietary they can't implement it, nor promote it to ARB, regardless of how cool it is.
That just kills the whole idea of an open API.

Humus
05-14-2001, 11:35 AM
Originally posted by JackM:
Humus and Sylvain, get a life, both of you. Terms like "nVidiot" may be acceptable on some fanboyish site, but not here.

This thread officially sucks....

Jack

Sorry, but I just don't like it when people base their opinion on a single test and start to bash the card and/or the driver team without trying to find the source of the problems, which most likely lies somewhere within their own code. When I experience a problem I try to find the source of it. When the source is found, I correct it if it's within my own code, and if it resides in the drivers I notify the driver development team. During the six months I've owned my Radeon there have only been two times I've found a problem that wasn't in my own code. Both times I notified the driver team and got good feedback. The first time it wasn't even a driver bug but rather a hardware limitation I was unaware of, and the second time the bug got fixed quite quickly.
You certainly cannot expect the driver team to fix bugs they are unaware of.

heeb
05-14-2001, 11:36 AM
When I posted this topic I was hoping that one of the ATI guys might pick up the demo and give some insight into the problem. The original question was a serious one.

Humus
05-14-2001, 11:54 AM
Originally posted by davepermen:
Yeah, it sucks somehow, but anyway, the stats show a simple fact:
either the Radeon is **** or the Radeon driver is ****.
I don't think the Radeon is ****, so it has to be the driver. [...]

Neither the Radeon nor its drivers are ****. I haven't used glCopyTexSubImage myself, but since it seems to be really slow, I think the best solution would be to send a message to devrel@ati.com reporting the problem.

Humus
05-14-2001, 12:11 PM
Originally posted by heeb:
When I posted this topic I was hoping that one of the ATI guys might pick up the demo and give some insight into the problem. The original question was a serious one.

I just picked up the code and found one problem, though it's not THE problem. You're grabbing a 512x512 texture from a window of size 640x480 ... that 512x512 doesn't fit into the window.

Gorg
05-14-2001, 12:17 PM
Originally posted by Humus:
Sure, but if it's proprietary they can't implement it, nor promote it to ARB, regardless of how cool it is.
That just kills the whole idea of an open API.

I am not sure where you get that from. From what I understand, anybody can still implement it. And the ARB can ask the vendor to move it to ARB, just like they did with dot3.

OpenGL is open.

heeb
05-14-2001, 12:47 PM
Originally posted by Humus:
I just picked up the code and found one problem, though it's not THE problem. You're grabbing a 512x512 texture from a window of size 640x480 ... that 512x512 doesn't fit into the window.

No, look at the start and end of the updateReflectionTexture function: I resize the viewport to 512 by 512 when rendering the reflection, then resize the viewport back to the current window size before rendering the main view.

chrisATI
05-14-2001, 04:17 PM
Originally posted by heeb:
I have a demo app that creates a reflection on water by rendering the scene into a texture (using glCopyTexSubImage) ... Why is there a massive performance difference between the GeForce & Radeon cards? The bottleneck on the Radeon is the glCopyTexSubImage call. [...]


Without looking at your code, I can make the following suggestion: you might want to look at the pbuffer extension, which lets you render to an off-screen color buffer and then bind that buffer to a texture. I just finished a demo that does this and it is quite fast. Also, I downloaded and ran your binary, and I see that the reflection texture often contains pixels from my desktop (outside the GL window)... so you may have issues you aren't aware of that are making your program slow. --Chris
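For anyone who hasn't used pbuffers, the setup on Windows looks roughly like this (only a sketch: it assumes WGL_ARB_pixel_format and WGL_ARB_pbuffer are exposed, that the wgl*ARB function pointers were already fetched with wglGetProcAddress using the wglext.h typedefs, and it omits all error checking; windowDC and drawReflectedScene are placeholders):

/* Choose a pbuffer-capable pixel format matching the window's depth. */
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(windowDC, attribs, NULL, 1, &format, &count);

/* Create a 512x512 off-screen buffer and a context for it.
   (Texture objects would need wglShareLists between the two contexts.) */
static const int pbAttribs[] = { 0 };
HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format, 512, 512, pbAttribs);
HDC   pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbufferRC = wglCreateContext(pbufferDC);

/* Render the reflection into the pbuffer instead of the back buffer. */
wglMakeCurrent(pbufferDC, pbufferRC);
drawReflectedScene();   /* hypothetical helper */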

Humus
05-14-2001, 10:54 PM
Originally posted by heeb:
No, look at the start and end of the updateReflectionTexture function: I resize the viewport to 512 by 512 when rendering the reflection, then resize the viewport back to the current window size before rendering the main view.

It doesn't matter; you still need a framebuffer large enough to fit the texture, since the copy reads from the framebuffer. You didn't think it would resize the framebuffer for you each time? That would kill performance.
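So if the window can end up smaller than the texture, a guard like this (hypothetical; winWidth/winHeight are placeholders) at least keeps the copy inside the framebuffer, at the price of only updating part of the texture:

/* Clamp the copy rectangle to whatever actually fits in the window. */
int w = (winWidth  < 512) ? winWidth  : 512;
int h = (winHeight < 512) ? winHeight : 512;
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);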

Humus
05-14-2001, 10:57 PM
Originally posted by Gorg:
I am not sure where you get that from. From what I understand, anybody can still implement it. And the ARB can ask the vendor to move it to ARB, just like they did with dot3.

OpenGL is open.

How come, then, so few of nVidia's extensions have become ARB? I can hardly think they weren't cool enough. The DOT3 extension (which was invented by ATi) wasn't proprietary, so it could be promoted to ARB.

ffish
05-15-2001, 12:58 AM
I have to agree with Gorg, Humus. Check out the extension registry, especially the links at the top. Extensions are just specifications, just like OpenGL is a specification. The ARB doesn't dictate how the specification has to be implemented internally, just that an implementation complies with the spec.

All the nVidia, ATI, SGI and other extension specifications are available in the extension registry, and if any company wants to implement them, they are free to do so, as long as they comply with the specification. I think the reason there are so many NV extensions is that nVidia are forward-thinking and interested in development. Probably the reason so few NV extensions have become ARB is that the other companies don't want to implement them, because their hardware may not support them so well, or because it's a lot of work to develop a new feature like that.

Anyway, have a read of the extension registry notes for more info.

[This message has been edited by ffish (edited 05-15-2001).]

Humus
05-15-2001, 02:36 AM
Well, you tell me what "IP status: NVIDIA Proprietary" means. It's there on almost all of their extensions.
Sure, nVidia is forward-thinking, but so is ATi. The only difference is that nVidia locks their extensions with legal crap to prevent others from implementing them. I wouldn't call that "open".

Gorg
05-15-2001, 05:04 AM
If you look at the ATI extensions, they are also marked ATI proprietary.

I believe nVidia's extensions haven't been made ARB because they are too close to their hardware. Just look at the register combiners: you really need to have designed the hardware with them in mind. It is probably extremely difficult to implement that extension on hardware that wasn't designed for it.

ATi's extensions are simpler and higher level: DOT3, vertex streams, vertex blend.

I still think I might be wrong, but I don't believe so at the moment. I'll look into the whole extension story when I have the time.

davepermen
05-15-2001, 05:15 AM
First of all, the demo is cool. I have a GF2 MX, so no speed problems here, and it looks pretty nice.

Second:
I really like nVidia's straightforward low-level extensions. They are not simple to understand, but once you've got them you can get everything you want out of your GeForce, and you can make things very, very fast: compared with plain OpenGL using glBegin/glEnd, optimized VAR gives me a boost of about 10-20 times (see the sketch below), and with register combiners I can do a lot in one pass, complete per-pixel diffuse and specular shading, which is great too. With the higher-level APIs you can run on (nearly) every GPU and do nice stuff for everyone, but the low-level ones let you do much nicer stuff on a specific board. I would like it if there were low-level extensions for ATi too, and nVidia could sometimes do high-level APIs as well, because sometimes it's simply terrible sitting here reading specs and not understanding a word ;)
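For the curious, the VAR setup mentioned above looks roughly like this (just a sketch; it assumes GL_NV_vertex_array_range is present, that wglAllocateMemoryNV was fetched via wglGetProcAddress, and that numVerts/vertices/indices exist elsewhere):

/* Allocate fast (AGP-style) memory and hand the range to the driver. */
GLsizei bytes = numVerts * 3 * sizeof(GLfloat);
GLfloat *fast = (GLfloat *)wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);

glVertexArrayRangeNV(bytes, fast);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

/* Copy the static geometry into the range once, then draw from it. */
memcpy(fast, vertices, bytes);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, fast);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);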

I think both do a great job. This CopyTexSubImage "bug" is bad, though; I think it's a big fault in the ATi driver.

But the demo looks pretty cool; it gave me a little feeling of the GF3 dot3-reflect stuff in the texture shaders... just a bit, but enough to like this type of reflection ;)

daveg
05-15-2001, 05:40 AM
I spoke with the driver team about this. Currently the Radeon drivers only accelerate glCopyTexSubImage if the copy is from a pBuffer and you are using ARB_make_current_read.
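Put together with the pbuffer sketch earlier in the thread, that accelerated path would look something like this (a sketch only, not ATI-verified code; it assumes WGL_ARB_make_current_read, and that the texture object is shared between the window and pbuffer contexts, e.g. via wglShareLists):

/* Render the reflection into the pbuffer as before... */
wglMakeCurrent(pbufferDC, pbufferRC);
drawReflectedScene();                        /* hypothetical helper */

/* ...then make the window the draw surface and the pbuffer the READ
   surface, so glCopyTexSubImage2D copies out of the pbuffer. */
wglMakeContextCurrentARB(windowDC, pbufferDC, windowRC);
glBindTexture(GL_TEXTURE_2D, reflTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

/* Restore the normal draw/read state for the rest of the frame. */
wglMakeContextCurrentARB(windowDC, windowDC, windowRC);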

ET3D
05-15-2001, 07:31 AM
Good to know, Dave, but it doesn't sound too good. It means that I'll have to use pbuffers if I want to get decent performance on the Radeon. Or maybe I'm wrong in assuming that other cards that don't support pbuffers do accelerate this code path. How does the Rage 128 driver handle this? Does it have any acceleration for this function?

Humus
05-15-2001, 08:26 AM
I'm not 100% sure, but I think pbuffers would be (at least slightly) faster than using the framebuffer on all cards that support them. I'd use pbuffers where supported and fall back to the framebuffer where they aren't (see the sketch below).
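A capability check for that fallback might look like this (sketch; assumes WGL_ARB_extensions_string and the wglext.h typedef; windowDC is a placeholder):

/* Decide at startup which render-to-texture path to use. */
int usePbuffer = 0;

PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB =
    (PFNWGLGETEXTENSIONSSTRINGARBPROC)wglGetProcAddress("wglGetExtensionsStringARB");

if (wglGetExtensionsStringARB) {
    const char *wglExts = wglGetExtensionsStringARB(windowDC);
    if (strstr(wglExts, "WGL_ARB_pbuffer") &&
        strstr(wglExts, "WGL_ARB_make_current_read"))
        usePbuffer = 1;   /* render the reflection into a pbuffer */
}
/* else: fall back to rendering into the back buffer and copying from it */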

mcraighead
05-15-2001, 12:16 PM
Pbuffers are not a performance win -- they use up extra video memory and force you to do extra MakeCurrents. Using the back buffer directly should get you better performance.

The reasons to use pbuffers have more to do with functionality than performance -- independence from window size/window clipping, an extra buffer, offscreen rendering, etc.

So don't use pbuffers unless you actually need to. (That goes for any feature.)

- Matt

heeb
05-15-2001, 01:56 PM
Thanks guys. I'll look at the pbuffer extension specs.

zed
05-15-2001, 05:09 PM
Does anyone know if there are any plans to let pbuffers support greyscale buffers, e.g. GL_LUMINANCE?

ET3D
05-16-2001, 04:21 AM
Thanks, Matt. Now I really don't know what to do ;) I'll still have to use pbuffers to get good performance on the Radeon... Hopefully ATI will optimize copy-to-texture more fully.

Xmas
05-16-2001, 01:56 PM
I just tested the demo on a Voodoo5 board. It performed horribly with render to texture turned on (below 5 fps, about 60 fps with it disabled), and the reflection was totally screwed. Instead of a ball I only saw a colored bar (just as wide as the ball, with the same colors) pointing towards the mountains. It looks like a one-pixel-wide texture stretched all over the water surface.
Well, no one's going to fix those drivers...

As rendering to textures is getting more and more important, I wonder if someone will come up with an easier way to do it. OpenGL's way of managing textures doesn't seem perfect to me.

Lev
05-16-2001, 04:10 PM
If I remember correctly, Matt or Cass told us that NVIDIA is working on an easier way to do render-to-texture. But maybe I'm wrong and it was just a dream. It would be a nice thing, though.

-Lev