GeForce2 Seems slow

I have been writing short GL apps at work and at home. The work machine is an ~800 MHz Pentium III with a 16 MB Rage 128 card. At home I use a 1.4 GHz Pentium 4 with a 64 MB GeForce2 Ultra. The projects I build at work run fine; however, when I port them to the home machine, as both source and executable, they seem to run at less than half the framerate. I’d expect the exact opposite, but nonetheless the app crawls on the faster machine. I am not using any OpenGL extensions. I ran the same app on a friend’s GeForce2 machine and got the same results, so I’m pretty sure it’s not an OS/hardware problem. Do I need to write code specifically for the GeForce in order to get standard performance? Has anyone else had a problem like this?

Thanks,
Dave

Sounds like your programs are using a pixel format that is not supported by the GeForce2. So my guess is you need to enumerate the pixel formats and choose an appropriate one.
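
Something along these lines should do it (a rough, untested sketch; "hdc" is assumed to be your window’s device context, and ChooseAcceleratedFormat is just a name I made up):

#include <windows.h>

// Sketch: walk every pixel format the driver exposes and pick one that is
// not handled by Microsoft's generic software implementation.  The ICD's
// (hardware) formats do not set PFD_GENERIC_FORMAT.
int ChooseAcceleratedFormat(HDC hdc)
{
    // With a NULL descriptor, DescribePixelFormat returns the highest format index.
    int count = DescribePixelFormat(hdc, 1, sizeof(PIXELFORMATDESCRIPTOR), NULL);

    for (int i = 1; i <= count; i++)
    {
        PIXELFORMATDESCRIPTOR pfd;
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        // Skip the generic (software) formats.
        if (pfd.dwFlags & PFD_GENERIC_FORMAT)
            continue;

        // Keep only RGBA, double-buffered, window-capable formats.
        if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW) ||
            !(pfd.dwFlags & PFD_SUPPORT_OPENGL) ||
            !(pfd.dwFlags & PFD_DOUBLEBUFFER)   ||
            pfd.iPixelType != PFD_TYPE_RGBA)
            continue;

        // First accelerated match; you would also want to compare
        // cColorBits / cDepthBits / cStencilBits against what you need.
        return i;
    }
    return 0; // nothing accelerated found
}

Pass the index it returns to SetPixelFormat instead of whatever ChoosePixelFormat gave you.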

Thank you! You are the first person who’s been able to give me a solid piece of advice. Going to try that immediately!

Dave

Either that (which seems unlikely, I must say: a GeForce2 must accelerate everything that a Rage 128 accelerates!) or you have polygon smoothing enabled (which the Rage 128 perhaps ignores).

Try this before rendering anything:

glDisable(GL_POLYGON_SMOOTH);

Regards.

Eric

Eric, maybe I misunderstood you, but isn’t this about accelerated pixel formats, and not accelerated features?

You say a GeForce accelerates everything the Rage 128 accelerates. This is true if you are talking about features, but not about pixel formats. As far as I know, the GeForce only accelerates 16/16+0 and 32/24+8 pixel formats (bpp/depth+stencil). A Rage 128 may accelerate other formats which the GeForce can’t or doesn’t, while the Rage 128 doesn’t accelerate the GeForce’s pixel formats, but I don’t know for sure.
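
For reference, a 32/24+8 request looks roughly like this in the descriptor (untested sketch; "hdc" is your window’s device context and SetupPixelFormat is just a made-up name):

#include <windows.h>

// Sketch: request a 32 bpp colour buffer, 24-bit depth buffer and
// 8-bit stencil buffer through a standard PIXELFORMATDESCRIPTOR.
bool SetupPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;  // bpp
    pfd.cDepthBits   = 24;  // depth
    pfd.cStencilBits = 8;   // stencil

    int format = ChoosePixelFormat(hdc, &pfd);
    return format != 0 && SetPixelFormat(hdc, format, &pfd) != FALSE;
}

Keep in mind that ChoosePixelFormat only returns the closest match, so you still need to check what you actually got.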

Hi Bob!

I was talking about pixel formats…

I must say, I have not checked it, but I thought a GeForce would at least accelerate the same pixel formats as a Rage 128.

Actually, I didn’t know that the GeForce was that limited on pixel formats (probably because I only use 32/24+8!).

Sorry for the mistake!
And thanks for the info!

But lpVoid, you should check the polygon smoothing anyway: I see a lot of posts about a GeForce being slow when people try their app on it for the first time, and this is quite often the cause!

Regards.

Eric

[This message has been edited by Eric (edited 02-01-2001).]

Bob, Eric Thanks

I do have poly smoothing enabled. Sometimes in my cut-and-paste frenzies I stop paying attention to what I’m really doing. And not surprisingly, the pixel format was taken straight off MSDN without a second glance. I was using 32/24+0.

Thanks again for the help. Hopefully by tonight’s end I’ll have this problem resolved, and I might even be a shade less green :)

Dave

On my computer (GeForce DDR 32 MB, Windows NT 4.0 SP6a + Detonator 6.67), when I ask for 32/24+0, I obtain an accelerated pixel format. So your problem most probably lies in the polygon smoothing!

Regards.

Eric

Check the renderer string to make sure you are actually running in hardware, too. Remember, color depth and z-depth are not the only factors in determining whether a pixel format is accelerated by an OpenGL implementation. I would agree with Eric at this point that your problem is probably the polygon smoothing, but checking the renderer string should be done regardless, as it makes these kinds of problems much easier to figure out.
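
Something as simple as this, called once the rendering context is current, will tell you (sketch; PrintRendererInfo is just a made-up name, and you can route the output wherever you like):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

// Sketch: dump the implementation strings for the current context.
// "Microsoft" / "GDI Generic" means you fell back to software rendering.
void PrintRendererInfo(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}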

I have a Rage 128, and it only supports 16 or 32, not 24 (that is, color bit depth).

What I would do: if your app is running in a window, try it fullscreen; if it is fullscreen, have it render in a window. Just make sure your desktop is set to 32 bits.

AFAIK, anything the Rage 128 can do, the GeForce2 series can do as well. You may also have to try a different driver, since you may have run into a driver bug.

Or maybe it is a Pentium 4 bug.

[This message has been edited by Elixer (edited 02-01-2001).]

Could it be your CPU? I hear the P4 has a whole bunch of problems which can make your system as slow as a 486… Technically your video card should take care of everything, but if there is something that the video card doesn’t accelerate (like what the other guys suggested), then your CPU has to do it.

Check out: www.emulators.com/pentium4.htm

If you don’t have P4-optimized code (which you can only have if you are writing assembly, since compilers barely even generate PIII-optimized code), then that could be another reason why you’re slow…

How fast is your RAM (PC600/PC800)? P4s are VERY memory hungry, and if you’re doing a lot of loading from RAM (say, loading a data file), then your RAM is slowing you down.

Rizo

PC600.

I’ve thought about the P4 issue, but commercial products run great on the system. I’ve never seen Quake or Maya run better!

The application I noticed this problem with is a simple underwater caustics effect that uses a series of 32 bitmaps for the caustic blends. Each bitmap uses 8 color bits and is no more than 16k. There is no complex geometry in the scene and only one transformation.

I really don’t think it’s a CPU problem, because it ran similarly badly on an Athlon with a GeForce2.

(Just want to say thanks for all the support from all of you. I’ve never gotten such prompt help with anything before!)

Dave

[This message has been edited by lpVoid (edited 02-01-2001).]

… anything the Rage 128 can do, the GeForce2 series can do as well.

This is not the case. For example, the Rage 128 supports 32-bit depth buffers, whereas the maximum a TNT or GeForce supports is 24-bit.
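
An easy way to see what you actually ended up with is to query the context (quick sketch; PrintDepthBits is just a made-up name):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

// Sketch: report how many depth bits the current context really has.
// If you asked for 32 and this prints 16 or 24, the driver quietly
// substituted something else.
void PrintDepthBits(void)
{
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    printf("depth buffer: %d bits\n", depthBits);
}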

Hi all,

Well, I went home and tried just about every pixel-format variation, as well as disabling smoothing, and still got nowhere. I’m thinking I must be doing something silly somewhere in my code, so I’ve put it up on my company’s website. If anyone has the time, could they take a look and tell me what the heck I’m doing wrong?
http://www.planetpolicy.com/davecode/underwater.htm

Thanks,
Dave

In your PIXELFORMATDESCRIPTOR, you ask for a 24/32+8 format. Insert a DescribePixelFormat call after setting the pixel format and see what it returns. That’s the only thing I can see that could affect performance the way you describe.

Also check what glGetString(GL_RENDERER) returns. If it says something about Microsoft, you are on the wrong track.
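
Something like this right after the SetPixelFormat call (untested sketch; DumpCurrentPixelFormat is just a made-up name and "hdc" is the window’s device context):

#include <windows.h>
#include <stdio.h>

// Sketch: read back the pixel format that was actually set on the DC
// and compare it with what was requested.
void DumpCurrentPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR got;
    DescribePixelFormat(hdc, GetPixelFormat(hdc), sizeof(got), &got);

    printf("color %d, depth %d, stencil %d, %s\n",
           got.cColorBits, got.cDepthBits, got.cStencilBits,
           (got.dwFlags & PFD_GENERIC_FORMAT) ? "generic (software)"
                                              : "ICD (hardware)");
}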

I think your alpha buffer being 8 bits is what’s killing performance. I tried it on my PC, which has a GeForce2, and that mode dies out. Try setting it to zero and see if your fps goes up. I couldn’t get a hardware alpha buffer working with any reasonable bit settings I could think of. Not sure why, though; if you find one that works, do tell.

One thing that would keep you from getting an accelerated OpenGL context, which I have not seen mentioned, is having a second monitor enabled.

I saw nothing in your posts to indicate this might be the case, but I had a similar situation not long ago… I realized I was getting software OpenGL, and it took about half an hour to remember that my other monitor was enabled.

Good luck either way,
– Jeff

To get an 8-bit alpha buffer, you’ll need to be in 32-bit color mode on the GeForce or TNT. I have no problem using alpha buffering on my TNT, with no appreciable slowdown. I never looked to see what happens if you use alpha buffering without a hardware alpha buffer; perhaps in that case it falls back to software. Also, I’ve seen code that sets the pixel format properly but fails to first set the display into the mode the pixel format describes. I mean, there is no point using a 32 bpp pixel format if your display is still running at 16 bpp, or vice versa.
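
If you want to force the issue, something like this before creating the GL window switches the display to 32 bpp (rough sketch; SwitchTo32Bpp is a made-up name and width/height are whatever resolution you run at):

#include <windows.h>

// Sketch: put the display into 32 bpp so the desktop depth and the pixel
// format agree.  CDS_FULLSCREEN makes the change temporary: it reverts
// when the program exits.
bool SwitchTo32Bpp(int width, int height)
{
    DEVMODE dm = {0};
    dm.dmSize       = sizeof(dm);
    dm.dmPelsWidth  = width;
    dm.dmPelsHeight = height;
    dm.dmBitsPerPel = 32;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}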

Still at a loss…

Here’s what I’ve done so far.

Changed the pixel format to 32/24 + 8 and 32/24 + 0.
Explicitly disabled smoothing with glDisable( GL_SMOOTH ).
Checked the renderer string and got “geforce/geforce2 agp”
Added framerate-checking code; here are the results:
Rage 128: 192 fps (this seems way out there, but I checked the math in debug mode and it seems correct).
GeForce2: 86 fps.
Voodoo4: 84 fps.

For all intents and purposes, 86 fps is definitely acceptable, but the fact that the GeForce2 is giving me slower results is driving me nuts. I won’t be able to sleep until I get this figured out.

Interesting. I just benchmarked the code on my computer (TNT, 400 MHz K6-III) and here are my results:
250 FPS at 16 bpp
142 FPS at 32 bpp

If your GeForce2 is stalling that badly, it sounds like it may be emulating the 32-bit depth buffer you request. My TNT automatically uses either a 16-bit or 24-bit depth buffer depending upon the color depth. I’m guessing the GeForce2 driver is emulating something, and I’d guess it’s the depth buffer. Come on, Matt or Cass, give us a clue.