
VBO and vertex programs in software



Mazy
04-06-2003, 07:19 AM
I got strange results when I tried VBO on my GF4 MX (nForce2). Normally it works as it should, with increased FPS, but when I tried bump mapping the FPS dropped sharply. Moving the vertex arrays back to the normal routines (bypassing the VBO) restored the FPS.
On my GF3 it worked fine all the time.

So the question is: doesn't the GF4 MX emulate vertex programs? And if so, could the slowdown come from it having to transfer the data from AGP/video memory back to system memory, do the calculations, and transfer it back? In that case they must either let the driver take care of this (keeping a 'vertex shadow' in system memory when vertex programs are active) or let the developer find out what will happen through some glGet, so that we can avoid this behavior.

Or maybe it's just some bug in my code :)
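
For reference, the fallback Mazy describes looks roughly like this with the ARB_vertex_buffer_object entry points; useVBO, vboId, verts and vertexCount are placeholders, not names from his code:

  if (useVBO) {
      /* source the attribute from the buffer object; the "pointer" is an offset */
      glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);
      glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
  } else {
      /* binding buffer 0 switches back to plain client-side vertex arrays */
      glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
      glVertexPointer(3, GL_FLOAT, 0, verts);
  }
  glEnableClientState(GL_VERTEX_ARRAY);
  glDrawArrays(GL_TRIANGLES, 0, vertexCount);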

Korval
04-06-2003, 11:01 AM
VBO's and software vertex programs are a bad combination. Typically, VBO memory is not memory that the CPU should read from. Of course, software vp's require the CPU to read from that memory. Hence the slowdown.

I don't think nVidia is going to put too much driver emphasis on what to do with someone who is using software vertex programs.

Mazy
04-06-2003, 12:24 PM
I mean 'software vertex program' in the sense that ARB_vertex_program is emulated on the CPU where the chip isn't able to do it but the driver is (i.e. the emulation is hidden from the developer).

vincoof
04-07-2003, 01:23 AM
With VBO, vertex data may be stored in the graphics card, but the vertex program software emulation has to be performed on the CPU side.
In that case, vertex program computations need to download the vertex data from the graphics card to the CPU side every time; and because AGP is optimized for upload (not for download), downloading vertex data every time is expensive.

Zengar
04-07-2003, 07:47 AM
The problem is: on cards that support vertex programs only in software, the driver should keep a copy of the VBO in system memory to avoid the performance slowdown.
How can I detect whether vertex programs are supported in hardware, however? If it's impossible, then VBO doesn't make much sense for pre-GF3-class cards. I like to use both VBO and vertex programs in my programs, so I must be sure it will work on all cards.
nVidia should really consider it.

vincoof
04-07-2003, 12:55 PM
I don't think that "nVidia should consider it". Every nVidia chip prior to NV20 emulates vertex programs in software. So when you detect a graphics card NVxx, if xx is less than 20 you may disable vertex programs.

Korval
04-07-2003, 01:19 PM
Originally posted by vincoof:
I don't think that "nVidia should consider it". Every nVidia chip prior to NV20 emulates vertex programs in software. So when you detect a graphics card NVxx, if xx is less than 20 you may disable vertex programs.

Actually, I disagree on two points.

First, I think you should turn off VBO before you turn off vertex programs. Given a moderately fast CPU, you can still get decent speed out of the feature. Also, I would imagine that the power of a vp is more important to the user (who has an older card) than the speed of VBO's.

Secondly, there is no readily available way to "detect a graphics card NVxx." If nVidia isn't going to expose a means of testing vertex-program-with-VAR/VBO performance, then their drivers will have to deal with the situation. Granted, I seriously doubt nVidia is going to spend significant driver development resources on 2-year-old cards, but that's the correct solution.

Zengar
04-07-2003, 01:36 PM
I agree with Korval on all points. vp is more important than VBO :-)

Humus
04-07-2003, 01:42 PM
The problem is solvable on the driver side, but I don't expect this to be top priority until apps using VBO appear on the market. The driver can make system memory copies for the CPU processing. The hard problem is deciding when to do that: the driver will have to track how each buffer is used and see whether it would be beneficial to make a system memory copy. For well-behaved apps this should be feasible.
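
A toy illustration of the bookkeeping Humus describes; this is guesswork about driver internals, not real driver code, and the threshold is arbitrary:

  #include <stdlib.h>
  #include <string.h>

  #define CPU_READ_THRESHOLD 4    /* arbitrary cutoff for this sketch */

  typedef struct {
      void  *gpuCopy;    /* video/AGP memory copy used by the hardware  */
      void  *sysCopy;    /* system memory shadow, NULL until warranted  */
      size_t size;
      int    cpuReads;   /* times the software vp path read this buffer */
  } BufferInfo;

  /* Called whenever the software vertex program path sources a buffer:
     after enough CPU reads, shadow the data in system memory. */
  static const void *softwareVpSource(BufferInfo *buf)
  {
      if (++buf->cpuReads >= CPU_READ_THRESHOLD && !buf->sysCopy) {
          buf->sysCopy = malloc(buf->size);
          memcpy(buf->sysCopy, buf->gpuCopy, buf->size);
      }
      return buf->sysCopy ? buf->sysCopy : buf->gpuCopy;
  }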

vincoof
04-07-2003, 01:46 PM
Originally posted by Korval:
First, I think you should turn off VBO before you turn off vertex programs.

So far, I don't remember having written that vertex programs should be disabled 'before' VBO.
Though, it all depends on the application (as usual) since sometimes quality is a must, and sometimes speed is a must.



Originally posted by Korval:
Secondly, there is no readily available way to "detect a graphics card NVxx."
Even though I agree it still lacks some kind of glGetIntegerv(GL_CHIPSET_NV), you can actually check glGetString(GL_RENDERER), which IMO gives sufficient information for at least the next 4-5 years. And if your software survives that period, you can still deliver a patch online that tests brand-new 2008 graphics cards.
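
Something like the following is presumably what that check would look like; the substrings are guesses at typical renderer strings of the era, so anything unrecognized should fall through to safe defaults:

  #include <string.h>
  #include <GL/gl.h>

  /* Heuristic only: returns 1 if the renderer string looks like a chip
     known to emulate vertex programs (pre-NV20 parts and the GeForce4 MX). */
  int vertexProgramsLikelySoftware(void)
  {
      const char *renderer = (const char *)glGetString(GL_RENDERER);
      if (!renderer)
          return 1;    /* no current context: assume the worst */
      return strstr(renderer, "GeForce4 MX") != NULL
          || strstr(renderer, "GeForce2")    != NULL
          || strstr(renderer, "GeForce 256") != NULL
          || strstr(renderer, "RIVA")        != NULL;
  }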



Originally posted by Korval:
Granted, I seriously doubt nVidia is going to spend significant driver development resources on 2-year-old cards.
That is at least one point on which we agree :)

Zengar
04-07-2003, 01:53 PM
Originally posted by vincoof:
you can actually check glGetString(GL_RENDERER), which IMO gives sufficient information for at least the next 4-5 years.

Oh really? "Buzz/emulated by GeForce3/3DNow!/SSE/AGP"? Or do you want me to do a lexical analysis of the string in my program? That's monstrous!
NVxx, GeForcex[-MX], and the next drivers will probably write something new.

No, the only solution I see at the moment is to write in your readme.txt:

"If you experience low performance on a GeForce1/2/4MX, try disabling the 'Use VBO' option in the program options menu :)"

Zengar
04-07-2003, 01:55 PM
Eeeh, you should write it without all my mistakes. I really ought to do something with my spelling...

Greetings...

Mazy
04-07-2003, 02:04 PM
And this is by no means tied to just nVidia. All vendors can support VP on the CPU, and they can have VBO or other AGP extensions. Should we have to know about all the different vendors/cards? I don't think it has to be queryable, because this must be an issue in their DX drivers as well, and I bet it's solved there.

vincoof
04-07-2003, 02:07 PM
Originally posted by Zengar:
Oh really? "Buzz/emulated by GeForce3/3DNow!/SSE/AGP"? Or do you want me to do a lexical analysis of the string in my program? That's monstrous!
ROFL
Indeed, the string analysis is not that hard. And if the analysis fails (the string is so messy that you can't detect anything in it) you can still fall back to default settings which, for instance, take for granted that the EXTENSIONS string only exposes hardware features.



Originally posted by Zengar:
"If you experiance low performnce and run GeForce1/2/4MX thry disabling option Use VBO in program options menu"

That's a good thing too, but keep in mind that a consumer hates reading documentation, be it a short file README_NOW_OR_YOURE_DEAD_CHICKEN.txt or a holographic cover with nude girls on it.

vincoof
04-07-2003, 02:14 PM
Originally posted by Mazy:
Should we have to know about all the different vendors/cards?
Unfortunately, I think the answer is yes.



Originally posted by Mazy:
I don't think it has to be queryable, because this must be an issue in their DX drivers as well, and I bet it's solved there.
Of course it's solved. There's no compatibility across different DX API versions, and the specification (for what we can call a spec) is closed, unlike "OpenGL" which is "open".

[This message has been edited by vincoof (edited 04-07-2003).]

cass
04-07-2003, 02:38 PM
Just to clarify, VBO was designed to be able to move/keep arrays in system memory when it could recognize that as the optimal location. It will probably take some time for drivers to be able to make this determination well, as the solution to this problem is non-trivial.
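
The usage hint passed to glBufferDataARB is the main information an application gives the driver for that placement decision. A minimal sketch, with verts and numVerts as placeholders:

  GLuint vbo;
  glGenBuffersARB(1, &vbo);
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
  /* GL_STATIC_DRAW_ARB: specified once, drawn many times - a candidate for
     video/AGP memory. GL_DYNAMIC_DRAW_ARB would signal frequent updates,
     which may steer the driver toward memory the CPU can write cheaply. */
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, numVerts * 3 * sizeof(GLfloat),
                  verts, GL_STATIC_DRAW_ARB);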

Mazy
04-07-2003, 03:00 PM
Thanks Cass, that was the answer I (sort of) wanted. It doesn't help me right now, but I guess it means that eventually the driver will recognize this scenario and I can go back to always using VBO for my arrays.

(Correct me if I'm totally wrong.)

cass
04-07-2003, 06:40 PM
Originally posted by Mazy:
Thanks Cass, that was the answer I (sort of) wanted. It doesn't help me right now, but I guess it means that eventually the driver will recognize this scenario and I can go back to always using VBO for my arrays.

(Correct me if I'm totally wrong.)


That's the basic plan, Mazy. Hopefully in a few months VBO can be recommended without qualification.

Thanks -
Cass

jwatte
04-07-2003, 06:40 PM
The implementation could choose to be a memory pig and just always keep the system memory copy, choosing to copy to AGP the first time the array is used, and again later if the AGP space has been re-used for something else.

Regarding detecting real vertex program support: if there are fewer than 4 texture units, vertex programs are emulated in software.

There is hardware with 4 or more texture units where vertex programs are still emulated; the Intel i845G, for example. However, the i845 is unlikely to suffer a slowdown using VBO anyway, as it has no hardware T&L at all (and might not even implement the extension).
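
jwatte's rule of thumb, as code; GL_MAX_TEXTURE_UNITS_ARB comes from ARB_multitexture, and "fewer than 4 units means software vp" is a heuristic, not something the spec guarantees:

  #include <string.h>
  #include <GL/gl.h>

  #ifndef GL_MAX_TEXTURE_UNITS_ARB
  #define GL_MAX_TEXTURE_UNITS_ARB 0x84E2   /* from ARB_multitexture */
  #endif

  int vertexProgramsProbablyHardware(void)
  {
      const char *ext = (const char *)glGetString(GL_EXTENSIONS);
      GLint units = 0;
      if (!ext || !strstr(ext, "GL_ARB_vertex_program"))
          return 0;    /* no vertex programs at all */
      glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &units);
      return units >= 4;
  }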

Zengar
04-08-2003, 12:53 PM
Originally posted by vincoof:
That's a good thing too, but keep in mind that a consumer hates reading documentation, be it a short file README_NOW_OR_YOURE_DEAD_CHICKEN.txt or a holographic cover with nude girls on it.

Oh, I can assure you: if a consumer gets about 10 fps on his GeForce4 MX he WILL read the readme, if only to find out the name of the son of a bitch who coded the application. That's why you should write your name down near the TROUBLESHOOTING section. Or better, inside it.

vincoof
04-08-2003, 01:06 PM
I bet this consumer would browse the web for 10 years before he even thinks of opening the three-page documentation. And if that person is not connected to the Internet, s/he will call the hotline, especially if it's free.

Mind you, I love documentation. But my limited experience has shown how little interest people take in docs.

Anyway, I think this discussion is pointless. I'm not going to change your opinion any more than you're going to change mine :)
Moreover it's off-topic, so could we stop it here, please?

cass
04-08-2003, 06:34 PM
<comments deleted>

Anyway, I think this discussion is pointless. I'm not going to change your opinion any more than you're going to change mine :)
Moreover it's off-topic, so could we stop it here, please?

It's a party foul to make comments on an off-topic sub-thread and call for the sub-thread's termination all in the same post. :)

Cass

edit: Screwed up the UBB mark-up by being too clever with my edits...

[This message has been edited by cass (edited 04-08-2003).]

vincoof
04-08-2003, 11:58 PM
I'm asking him to shut up. I don't really care if he replies. It would be fair if he did (for the last-word theory), but anyhow I shall not reply then. I hate off-topic posts and I think this has gone too far here (and while I'm replying again it is "even more too far" ;))

Zengar
04-09-2003, 03:41 AM
Originally posted by vincoof:
I'm asking him to shut up. I don't really care if he replies.

Sorry, vincoof, but what do you mean by "shut up"? Did I say something funny? I guess it was you who tried to be funny. I don't really care if you like me or not, but try to choose your words.
I simply wanted to clear this up.

vincoof
04-09-2003, 12:46 PM
ROFL It's a typo!

I wanted to write "I'm not asking it to shut up" instead of "I'm asking him to shut up".

I rarely check what I write in my posts, but... this time it makes a big difference!

Sorry for the inconvenience, I did not want to flame. Sincerely.

Zengar
04-10-2003, 04:41 AM
It's all right. Sorry.