GeForce 4 vs Quadro 4

Hi,

Does anybody know what the basic differences are between the GeForce 4 Ti 4600 and the Quadro 4 XGL 900? I’m browsing the NVidia site but I can’t find any information on how to choose the best card for what I’m doing. The only thing I found is a comparison between the Quadro 4 XGL 750 and the GeForce 4 Ti 4600. It’s a very basic comparison that you can find at http://www.nvchips-fr.com/articles/article.php?IDa=55&p=10, but it shows slightly better performance for the GeForce 4. So why is the Quadro so much more expensive? It should do something more… I hope.

Thanks for the help.

I think one of the main advantages of the Quadro is that it draws lines (antialiased or not) a lot faster than a GeForce4, which is useful if you are using CAD applications. At least that was the difference between Quadro and GeForce.
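For reference, the line path being talked about is just plain OpenGL antialiased lines, nothing card-specific. A minimal sketch, assuming a GL context is already current:

#include <GL/gl.h>

/* Enable smoothed (antialiased) lines; blending is needed for the
 * coverage values to actually show up. */
void draw_aa_line(void)
{
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBegin(GL_LINES);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, 1.0f, 0.0f);
    glEnd();
}

CAD viewers spend most of their time in exactly this kind of wireframe path, which is why a driver that accelerates it well makes such a visible difference.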

Note that it was possible to turn a GeForce card into a Quadro card by doing some soldering…

I do not know if there are more differences between Quadro4 and GeForce4.

Regards.

Eric

Let me just say though that I’m using a GeForce 4 Ti 4600 (by Gainward) and it’s amazing. I’m running Serious Sam (which seems to be the most intensive title in the majority of benchmarks) at 1920 x 1200 resolution, 32 bpp, with nearly all special effects and algorithms maxed out. And I’m getting an average of 50-100 fps. Saying that I’m a happy guy would be an understatement.

Found this: http://www.nvidia.com/docs/lo/1930/SUPP/Quadro_Versus_GeForce_Final.pdf
under technical briefs here: http://www.nvidia.com/view.asp?PAGE=pg_20020219673469

I don’t know about the Quadro4 vs. GeForce4, but the GeForce 3 didn’t support quad buffering whereas the Quadro version did. Since the GeForce4 has dual outputs, quad buffering isn’t really necessary for stereo 3D anymore. Quad buffering was a bit buggy anyway.
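For what it’s worth, quad buffering just means you get left and right back/front buffers for stereo. On Windows you ask for it through the pixel format; a sketch only, with error handling left out (it will simply fail on drivers that don’t expose stereo formats):

#include <windows.h>
#include <GL/gl.h>

/* Sketch: request a stereo (quad-buffered) pixel format.
 * 'hdc' is the device context of the window the GL context will use. */
int setup_stereo_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_STEREO;   /* stereo = quad buffered */
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);
    return format != 0 && SetPixelFormat(hdc, format, &pfd);
}

/* Per frame, once the context is current: */
void draw_stereo_frame(HDC hdc)
{
    glDrawBuffer(GL_BACK_LEFT);
    /* ... render the left-eye view ... */
    glDrawBuffer(GL_BACK_RIGHT);
    /* ... render the right-eye view ... */
    SwapBuffers(hdc);
}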

Thanks for the links, it’s great information. Just too bad they compare old cards instead of the newest ones.

I also have a GeForce 4 4600. I don’t like the one from Leadtek because it doesn’t sync with the vertical retrace. But the WinFast one is great. Sooo powerful!

I was just wondering if I could get something even more powerful.

Originally posted by Relic:
Found this: http://www.nvidia.com/docs/lo/1930/SUPP/Quadro_Versus_GeForce_Final.pdf
under technical briefs here: http://www.nvidia.com/view.asp?PAGE=pg_20020219673469

Hi
Did any of you guys pay attention to one of the benchmarks in this paper, the one that shows the benefit of using SSE2 instructions in the NVidia drivers for immediate mode (page 27)? Maybe this has a relation to this thread: http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/006395.html
Knackered uses a dual PIII there (which is maybe used for SW T&L in immediate mode…) with a GF2 GTS (which is maybe used for HW T&L with display lists)… Interesting…

Regards
Martin

One thing the article doesn’t say is that the Quadro supports buffer flipping for a non-fullscreen window (vs. buffer blitting).
Alas, with FSAA, buffer flipping is not available on the Quadro either.
I think this has a big implication for ViewPerf scores, since ViewPerf always uses a non-fullscreen window (and doesn’t use FSAA), and runs the tests without vsync. So on the higher-framerate tests you are really measuring swapbuffer times, and too many of them.
For example, if a test takes 3 milliseconds to render and 2 milliseconds to swap the 1260x960 window, you will get a very biased result, when in reality you are not going to swap buffers 200 times a second (probably only 70-80 times a second…)
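Just to put numbers on that example (the 3 ms and 2 ms figures are the made-up ones from above, not measurements):

#include <stdio.h>

int main(void)
{
    double render_ms = 3.0;  /* time to draw the frame (example figure) */
    double blit_ms   = 2.0;  /* time to blit the window (example figure) */

    double fps_blit = 1000.0 / (render_ms + blit_ms);  /* swap by copying */
    double fps_flip = 1000.0 / render_ms;              /* swap by flipping, ~free */

    printf("with blitting: %.0f fps\n", fps_blit);     /* 200 fps */
    printf("with flipping: %.0f fps\n", fps_flip);     /* ~333 fps */
    return 0;
}

So the blit alone eats around 40% of the frame time in this case, which is exactly the kind of gap a benchmark running at unrealistic framerates will exaggerate.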

Sorry if I seem ignorant (I am), but what are the differences between buffer flipping and buffer blitting? My GeForce2 MX has both options for rendering; which one is better?

Originally posted by yoale:
Sorry if I seem ignorant (I am), but what are the differences between buffer flipping and buffer blitting? My GeForce2 MX has both options for rendering; which one is better?

3D graphics is double buffered, so the next frame is being built up in one buffer while the previously completed image is being scanned out to the monitor. When rendering completes, the two buffers must switch roles. This is called “swapbuffers”. With buffer flipping, this operation costs nothing (or almost nothing) because it is essentially swapping ‘pointers’. With buffer blitting, there is a copy operation inside the graphics board’s memory, so it takes time, and more time the more pixels you have in your window.
So obviously, buffer flipping is better.
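If you want to see which one your driver is actually doing, a rough way is to time the swap itself: with flipping it should be close to free, with blitting it grows with the window size. A sketch only (it assumes an already created double-buffered GL window and context, which aren’t shown, and glFinish is only a crude way to wait for the copy):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

/* Rough swap-cost measurement; 'hdc' is the device context of the
 * double-buffered OpenGL window. */
void measure_swap(HDC hdc)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    glFinish();                      /* make sure rendering is finished first */
    QueryPerformanceCounter(&t0);
    SwapBuffers(hdc);                /* the operation we want to time */
    glFinish();                      /* crude wait for the swap/blit to complete */
    QueryPerformanceCounter(&t1);

    printf("swap took %.3f ms\n",
           1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
}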

I have one Quadro2 card at home, and a GeForce4 Ti 4600 in my office. It seems that under Windows 2000 the GeForce4 has some problems getting a decent 24-bit z-buffer, and some intersection errors occur. I don’t know why. It’s faster than the Quadro2 in most conditions, but with a heavily loaded model, above 13,000 polygons, it gets 10 fps while the Quadro2 gets 46 fps.

It seems that under Windows 2000 the GeForce4 has some problems getting a decent 24-bit z-buffer, and some intersection errors occur.

It could also be that the application is not clever enough to select an appropriate pixelformat in case the GeForce4 offers more choices.
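If that’s the suspicion, one way to rule it out is to enumerate the formats yourself and insist on 24 depth bits rather than taking the driver’s first suggestion. A rough sketch of the Windows/WGL side only (the function name is mine):

#include <windows.h>

/* Sketch: walk the pixel formats the driver exposes and pick one with a
 * real 24-bit depth buffer, instead of trusting ChoosePixelFormat's
 * first guess. 'hdc' is the window's device context. */
int find_24bit_depth_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);

    for (int i = 1; i <= count; i++) {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
        if ((pfd.dwFlags & PFD_DRAW_TO_WINDOW) &&
            (pfd.dwFlags & PFD_SUPPORT_OPENGL) &&
            (pfd.dwFlags & PFD_DOUBLEBUFFER) &&
            pfd.iPixelType == PFD_TYPE_RGBA &&
            pfd.cColorBits >= 24 &&
            pfd.cDepthBits >= 24)
        {
            return i;   /* pass this index to SetPixelFormat */
        }
    }
    return 0;   /* no suitable format found */
}

If the GeForce4 really only exposes 16-bit depth in some modes, the loop above simply won’t find a match, which at least tells you whether to blame the application or the driver.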