
View Full Version : GeForce2 Seems slow



lpVoid
02-01-2001, 04:31 AM
I have been writing short GL apps at work and at home. The work machine is an ~800 MHz Pentium III with a 16 MB Rage 128 card. At home I use a 1.4 GHz Pentium 4 with a 64 MB GeForce2 Ultra. The projects I build at work run fine; however, when I port them to the home machine, as both source and executable, they seem to run at less than half the framerate. I'd expect the exact opposite, but nonetheless, the app crawls on the faster machine. I am not using any OpenGL extensions. I ran the same app on a friend's GeForce2 machine and got the same results, so I'm pretty sure it's not an OS/hardware problem. Do I need to write code specifically for the GeForce in order to get standard performance? Has anyone else had a problem like this?

Thanks,
Dave

DFrey
02-01-2001, 04:51 AM
Sounds like your programs are using a pixel format that is not supported by the GeForce2. So my guess is you need to enumerate the pixel formats and choose an appropriate one.
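
Something like this is usually enough (just a sketch of the standard Win32 enumeration; the filtering criteria are only an example):

#include <windows.h>

/* Sketch: walk every pixel format the driver exposes and take the first
   hardware-accelerated, double-buffered RGBA format with a depth buffer. */
int PickAcceleratedFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int i, count;

    /* The return value of DescribePixelFormat is the highest format index. */
    count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);

    for (i = 1; i <= count; i++)
    {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL)) continue;
        if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW)) continue;
        if (!(pfd.dwFlags & PFD_DOUBLEBUFFER))   continue;
        if (pfd.dwFlags & PFD_GENERIC_FORMAT)    continue;  /* software/generic format */
        if (pfd.iPixelType != PFD_TYPE_RGBA)     continue;
        if (pfd.cDepthBits == 0)                 continue;

        return i;   /* hand this index to SetPixelFormat() */
    }
    return 0;       /* no accelerated format matched */
}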

lpVoid
02-01-2001, 04:57 AM
Thank you! You are the first person who's been able to give me a solid piece of advice. Going to try that immediately!

Dave

Eric
02-01-2001, 05:12 AM
Either that (which seems unlikely, I must say: a GeForce2 must accelerate everything that a Rage128 accelerates!) or you have polygon smoothing enabled (which the Rage128 perhaps ignores).

Try this before rendering anything:




glDisable(GL_POLYGON_SMOOTH);


Regards.

Eric

Bob
02-01-2001, 05:42 AM
Eric, maybe I misunderstood you, but isn't this about accelerated pixel formats, and not accelerated features?

You say a GeForce accelerates everything the Rage 128 accelerates. This is true if you talk about features, but not about pixel formats. As far as I know, the GeForce only accelerates 16/16+0 and 32/24+8 pixel formats (bpp/depth+stencil). The Rage 128 may accelerate other formats which the GeForce can't or doesn't, while the Rage 128 doesn't accelerate the GeForce's formats, but I don't know for sure.
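
For what it's worth, asking for the 32/24+8 case would look roughly like this (the hdc is assumed to be your window's device context; the values are only a sketch):

#include <windows.h>

/* Sketch: request 32-bit colour, 24-bit depth, 8-bit stencil. */
void Setup32_24_8(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int fmt;

    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;   /* bpp            */
    pfd.cDepthBits   = 24;   /* depth buffer   */
    pfd.cStencilBits = 8;    /* stencil buffer */

    fmt = ChoosePixelFormat(hdc, &pfd);
    if (fmt)
        SetPixelFormat(hdc, fmt, &pfd);
}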

Eric
02-01-2001, 05:48 AM
Hi Bob !

I was talking about pixel formats...

I must say, I have not checked it, but I thought a GeForce would at least accelerate the same pixel formats as a Rage128.

Actually, I didn't know the GeForce was that limited on pixel formats (probably because I only use 32/24+8!).

Sorry for the mistake! :)
And thanks for the info!

But lpVoid, you should check the polygon smoothing anyway: I see a lot of posts about a GeForce being slow when people try their app on it for the first time, and this is quite often the cause!

Regards.

Eric


[This message has been edited by Eric (edited 02-01-2001).]

lpVoid
02-01-2001, 06:01 AM
Bob, Eric: thanks!

I do have poly smoothing enabled. Sometimes in my cut-and-paste frenzies I stop paying attention to what I'm really doing. And not surprisingly, the pixel format was taken straight off MSDN without a second glance. I was using 32/24+0.

Thanks again for the help. Hopefully by tonight's end I'll have this problem resolved, and I might even be a shade less green :-)

Dave

Eric
02-01-2001, 06:36 AM
On my computer (GeForce DDR 32 MB, Windows NT 4.0 SP6a + Detonator 6.67), when I ask for 32/24+0, I obtain an accelerated pixel format. So your problem most probably lies in the polygon smoothing!

Regards.

Eric

DFrey
02-01-2001, 07:59 AM
Check the renderer string to make sure you are actually running in hardware, too. Remember, color depth and z-depth are not the only factors in determining whether a pixel format is accelerated by an OpenGL implementation. I agree with Eric at this point that your problem is probably the polygon smoothing, but checking the renderer string should be done regardless, as it makes these kinds of problems much easier to figure out.
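
Checking the strings is only a couple of calls; something like this sketch (the "GDI Generic" test is the usual way to spot Microsoft's software renderer):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

/* Call once after wglMakeCurrent() has succeeded. */
void CheckRenderer(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *vendor   = (const char *)glGetString(GL_VENDOR);

    printf("GL_VENDOR:   %s\n", vendor ? vendor : "(null)");
    printf("GL_RENDERER: %s\n", renderer ? renderer : "(null)");

    /* Microsoft's software implementation reports itself as "GDI Generic". */
    if (renderer && strstr(renderer, "GDI Generic"))
        printf("Warning: no hardware acceleration for this pixel format!\n");
}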

Elixer
02-01-2001, 10:40 AM
I have a Rage 128, and it only supports 16 or 32 bits, no 24 (that is, bit depth).

What I would do: if your app is running in a window, try it fullscreen; if it is fullscreen, have it render in a window. Just make sure your desktop is set to 32 bits. :)

AFAIK, anything the Rage 128 can do, the Geforce 2 series does it as well. You may also have to try a different driver, since you may have run into some driver bugs.

Or maybe it is a Pentium 4 bug. :D



[This message has been edited by Elixer (edited 02-01-2001).]

Rizo
02-01-2001, 11:07 AM
Could it be your CPU? I hear the P4 has a whole bunch of problems which can make your system as slow as a 486... Technically, your vid card should take care of everything, but if there is something the vid card doesn't accelerate (like what the other guys suggested), then you have to use your CPU for it.

Check out: http://www.emulators.com/pentium4.htm

If you don't have P4-optimized code (which you can only have if you are writing assembly, as compilers barely even generate PIII-optimized code), then that could be another reason why you're slow...

How fast is your RAM (PC600/PC800)? P4s are VERY memory hungry, and if you're doing a lot of loading from RAM (I don't know, like loading a data file), then your RAM is slowing you down.

Rizo

lpVoid
02-01-2001, 11:22 AM
PC600.

I've thought about the P4 issue, but commercial products run great on the system. I've never seen Quake or Maya run better!

The application that I noticed this problem on was a simple underwater caustics technique using a series of 32 bitmaps for the caustic blends. Each bitmap is using 8 color bits and is no more than 16k. There is no complex geometry in the scene and only one transformation.

I really don't think it's a CPU problem, because it ran similarly badly on an Athlon with a GeForce2.

( just want to say thanks for all the support from all of you. I've never gotten such prompt help with anything before! )

Dave

[This message has been edited by lpVoid (edited 02-01-2001).]

DFrey
02-01-2001, 11:26 AM
... anything the Rage 128 can do, the Geforce 2 series does it as well.
This is not the case. For example, the Rage 128 supports 32-bit depth buffers, whereas the maximum a TNT or GeForce supports is 24-bit.

lpVoid
02-02-2001, 04:58 AM
Hi all,

Well, I went home and tried about every pixel format variation, as well as disabling smoothing, and still got nowhere. I'm thinking I must be doing something silly somewhere in my code, so I've thrown it up on my company's website. If anyone has the time, could they take a look and tell me what the heck I'm doing wrong?
http://www.planetpolicy.com/davecode/underwater.htm

Thanks,
Dave

Bob
02-02-2001, 05:36 AM
In your PIXELFORMATDESCRIPTOR, you ask for a 24/32+8 format. Insert a DescribePixelFormat after setting the pixel format and see what it returns. That's the only thing I can see that could affect performance the way you describe.

Also check what glGetString(GL_RENDERER) returns. If it says something about Microsoft, you are on the wrong track :)
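
Roughly like this, i.e. read back what the driver actually handed you after SetPixelFormat (the printed fields are just for illustration):

#include <windows.h>
#include <stdio.h>

/* Sketch: call right after SetPixelFormat() succeeds to see what you really got. */
void ReportPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR got;
    int fmt = GetPixelFormat(hdc);              /* the format index actually set */

    DescribePixelFormat(hdc, fmt, sizeof(got), &got);

    printf("color %d, depth %d, stencil %d, alpha %d\n",
           got.cColorBits, got.cDepthBits, got.cStencilBits, got.cAlphaBits);

    if (got.dwFlags & PFD_GENERIC_FORMAT)
        printf("-> generic (unaccelerated) format\n");
}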

ribblem
02-02-2001, 07:27 AM
I think your alpha buffer being 8 bits is what's killing performance. I tried it on my PC, which has a GeForce2, and that mode dies out. Try setting it to zero and see if your fps goes up. I couldn't get a hardware alpha buffer working with any reasonable bit settings I could think of. Not sure why, though; if you find one that works, do tell.

Thaellin
02-02-2001, 10:01 AM
One thing that would keep you from getting an accelerated OpenGL context, and which I have not seen mentioned, is having a second monitor enabled.

I saw nothing in your posts to indicate this might be the case, but I had a similar situation not long ago... I realized I was getting software OpenGL, and it took about half an hour to remember that my other monitor was enabled.

Good luck either way,
-- Jeff

DFrey
02-02-2001, 10:07 AM
To get an 8-bit alpha buffer, you'll need to be in 32-bit color mode on the GeForce or TNT. I have no problem using alpha buffering on my TNT, with no appreciable slowdown. I've never looked to see what happens if you use alpha buffering without a hardware alpha buffer; perhaps in that instance it falls back to software. Also, I've seen code that sets the pixel format properly but fails to first set the display into the mode the pixel format describes. I mean, there is no point using a 32 bpp pixel format if your display is still running at 16 bpp, or vice versa.
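
Switching the desktop to match could look something like this (a sketch only; width, height, and bpp are whatever your pixel format expects):

#include <windows.h>

/* Sketch: force the desktop to 32 bpp so it matches a 32-bit pixel format.
   CDS_FULLSCREEN means the old mode comes back when the app exits. */
BOOL SetDisplayTo32bpp(int width, int height)
{
    DEVMODE dm;

    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize       = sizeof(dm);
    dm.dmPelsWidth  = width;
    dm.dmPelsHeight = height;
    dm.dmBitsPerPel = 32;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}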

lpVoid
02-02-2001, 10:19 AM
Still at a loss....

Here's what I've done so far.

Changed the pixel format to 32/24 + 8 and 32/24 + 0.
Explicitly disabled smoothing with glDisable( GL_SMOOTH ).
Checked the renderer string and got "geforce/geforce2 agp"
Added framerate checking code and here's the results.
RAGE 128: 192fps. ( this seems way out there, but I checked the math in debug mode and it seems correct )
GeForce2: 86fps.
VooDoo4: 84fps.

For all intents and purposes, 86 fps is definitely acceptable, but the fact that the GeForce2 is giving me slower results is driving me nuts. I won't be able to sleep until I get this figured out.
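
(In case anyone wants to reproduce the numbers: the framerate check is nothing fancy, roughly a counter like the sketch below. QueryPerformanceCounter is just one way to do it, and this is not the exact code used here.)

#include <windows.h>
#include <stdio.h>

/* Sketch of a once-per-second FPS counter; call once per frame, after SwapBuffers(). */
void CountFrame(void)
{
    static LARGE_INTEGER freq, last;
    static int frames = 0;
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&last);
    }

    frames++;
    QueryPerformanceCounter(&now);

    if (now.QuadPart - last.QuadPart >= freq.QuadPart)   /* a second has passed */
    {
        printf("%d fps\n", frames);
        frames = 0;
        last   = now;
    }
}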

DFrey
02-02-2001, 11:07 AM
Interesting, I just benchmarked the code on my computer (TNT, 400 MHz K6-III) and here are my results:
250 FPS at 16 bpp
142 FPS at 32 bpp

If your GeForce2 is stalling that badly, it sounds like it may be emulating the 32-bit depth buffer you request. My TNT automatically uses either a 16-bit or a 24-bit depth buffer depending on the color depth. I'm guessing the GeForce2 driver is emulating something, and I'd guess it's the depth buffer. Come on, Matt or Cass, give us a clue. :)

Humus
02-02-2001, 11:55 AM
Originally posted by lpVoid:
Still at a loss....

Here's what I've done so far.

Changed the pixel format to 32/24 + 8 and 32/24 + 0.
Explicitly disabled smoothing with glDisable( GL_SMOOTH ).
Checked the renderer string and got "geforce/geforce2 agp"
Added framerate checking code and here's the results.
RAGE 128: 192fps. ( this seems way out there, but I checked the math in debug mode and it seems correct )
GeForce2: 86fps.
VooDoo4: 84fps.

For all intents and purposes, 86 fps is definitely acceptable, but the fact that the GeForce2 is giving me slower results is driving me nuts. I won't be able to sleep until I get this figured out.


Could it be a vsync thing? 84 fps and 86 fps look kinda close to 85 Hz, which is a standard refresh rate...

DFrey
02-02-2001, 12:01 PM
Hehe, I bet that's it. (I have a plate of crow ready just in case.) I also forgot to mention: you have the animation rate of the texture dependent on the frame rate. I think for that animation you need to derive the texture index from time. That way it'll look the same on different hardware, as long as the frame rate is higher than the animation frequency.
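
A sketch of that decoupling for a 32-frame caustic cycle might look like this (the 20 Hz animation rate and the causticTex array are made up for the example):

#include <windows.h>
#include <GL/gl.h>

#define NUM_CAUSTIC_FRAMES 32

extern GLuint causticTex[NUM_CAUSTIC_FRAMES];   /* hypothetical texture objects */

/* Pick the caustic frame from wall-clock time instead of a per-frame counter,
   so the animation runs at the same speed at 86 fps and at 192 fps. */
void BindCausticForTime(void)
{
    const double animRate = 20.0;                 /* animation frames per second (made up) */
    double seconds = GetTickCount() / 1000.0;     /* coarse timer, fine for a demo         */
    int index = (int)(seconds * animRate) % NUM_CAUSTIC_FRAMES;

    glBindTexture(GL_TEXTURE_2D, causticTex[index]);
}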

lpVoid
02-02-2001, 01:27 PM
I am such an Idiot!!!

That seems to be exactly the problem. I upped the refresh rate to 100 Hz and guess what? The framerate jumped to ~100.

Seems like the max refresh rate my monitor can handle is 100. I had no idea that the monitor affects OGL performance. Talk about a learning experience!

I agree that the animation should be tied to time and not the framerate, but nonetheless, I've learned a valuable lesson here by doing the opposite.

I'd like to thank everyone for their help on this problem. I only hope that soon I'll be able to help coders with their problems as well as you all helped me.

Dave

Elixer
02-04-2001, 10:53 AM
I hate this type of "error". You wonder why your brand new card is only doing a certain fps, and then you find out it is because of the refresh rate.

Of course, you could disable vsync and see how fast it can get. :)
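
If the driver exposes WGL_EXT_swap_control (NVIDIA's drivers should), turning vsync off from code is roughly this sketch:

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

/* Sketch: call with a current GL context.  0 = vsync off, 1 = wait for refresh.
   Strictly, check for WGL_EXT_swap_control in the extension string first;
   testing the function pointer is the lazy version. */
void SetSwapInterval(int interval)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(interval);
}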

Eric
02-05-2001, 12:46 AM
Originally posted by lpVoid:

Explicitly disabled smoothing with glDisable( GL_SMOOTH ).

I understand that you have solved your problem, but I just wanted to add something. Using glDisable(GL_SMOOTH) is not correct OpenGL. You should use:




glDisable(GL_POLYGON_SMOOTH);


GL_SMOOTH is for interpolation within the polygon (i.e. color interpolation) and is used with glShadeModel, while GL_POLYGON_SMOOTH is for drawing anti-aliased polygons and is used with glEnable.

Actually, calling glEnable or glDisable with GL_SMOOTH should generate an error (and do nothing!).
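
To spell the difference out:

glShadeModel(GL_SMOOTH);        /* Gouraud shading: colours interpolated across the polygon */
glShadeModel(GL_FLAT);          /* ...or one flat colour per polygon                        */

glEnable(GL_POLYGON_SMOOTH);    /* anti-aliased polygon edges (often very slow)             */
glDisable(GL_POLYGON_SMOOTH);   /* the call you actually want here                          */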

Well, anyway, glad you found the problem !

Regards.

Eric


[This message has been edited by Eric (edited 02-05-2001).]
