
OpenGL performance



meuns
03-17-2003, 12:30 AM
If I want 50 frames/second on an AMD Duron 700 with a GeForce 2 MX, what is the maximum number of polygons I should draw per frame? (Each polygon is rendered with lighting and multitexturing; the polygons are quads.)

Sorry for my very bad English!

OldMan
03-17-2003, 01:31 AM
That is something you cannot predict. You must take into account the texturing (filtering and mip mapping), the number of lights, the size of your vertex lists, your resolution and tons of other stuff.

You will need to try... It may even be that your program is not T&L-limited, so the number of polygons is not the main factor.

raverbach
03-17-2003, 04:43 AM
Are you using an OpenGL light for each polygon? You know, glEnable(GL_LIGHTx)?
That can be very expensive and may not give you the results you want.
For traditional Gouraud-style lighting, go for lightmaps: they get the job done with multitexturing, as in the sketch below.
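
A minimal sketch of lightmapped multitexturing, assuming OpenGL 1.3 entry points are available (on a GeForce 2 MX-era driver you would fetch glActiveTextureARB and friends through the extension mechanism instead). baseTex and lightTex are placeholder texture objects you are assumed to have created already, and the quad coordinates are arbitrary:

/* Draw one quad with a diffuse texture on unit 0 and a lightmap on
 * unit 1. The lighting is baked into lightTex, so no glEnable(GL_LIGHTx)
 * is needed at all. */
#include <GL/gl.h>

void drawLightmappedQuad(GLuint baseTex, GLuint lightTex)
{
    /* Unit 0: the base (diffuse) texture */
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* Unit 1: the lightmap, modulating the result of unit 0 */
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glBegin(GL_QUADS);
        glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);

        glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);

        glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);

        glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
        glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f);
        glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}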

jwatte
03-17-2003, 08:16 AM
That depends, in addition to the conditions you posted, on:
- the degree of overdraw
- the mechanism you use for submitting geometry
- the degree of blending
- how much gets Z culled (front-to-back vs back-to-front)
- how large your textures are
- how high resolution textures actually get rendered
- whether you use MIP mapping (bilinear == win, usually)
- how large the output window is
- what AGP rate your motherboard can sustain
- what RAM kind is in your computer (SDR vs DDR)
- which specific 2 MX it is (100, 200, 400, or the original?)

The ONLY way to answer your question is to build representations of the kinds of scenes you want to render, say using only balls and cubes at various tessellation levels, with representative material complexity (texture detail, number of lights, etc), and BENCHMARK.

Once you have such a tool, it can answer this question for you over and over again, for different scene compositions and different hardware configurations.
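
A rough skeleton of such a benchmark, using GLUT. All the scene parameters (SLICES, STACKS, NUM_SPHERES, the window size) are placeholders to vary per experiment, and the triangle count for glutSolidSphere is only an estimate:

#include <GL/glut.h>
#include <stdio.h>

#define SLICES      32     /* tessellation level: vary this */
#define STACKS      32
#define NUM_SPHERES 100    /* scene complexity: vary this */

static int frames = 0;
static int lastTime = 0;

static void display(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (i = 0; i < NUM_SPHERES; ++i) {
        glPushMatrix();
        glTranslatef((i % 10) * 2.5f - 12.5f, (i / 10) * 2.5f - 12.5f, -40.0f);
        glutSolidSphere(1.0, SLICES, STACKS);
        glPopMatrix();
    }
    glutSwapBuffers();

    /* Report throughput once per second */
    frames++;
    {
        int now = glutGet(GLUT_ELAPSED_TIME);
        if (now - lastTime >= 1000) {
            double fps  = frames * 1000.0 / (now - lastTime);
            double tris = (double)NUM_SPHERES * 2.0 * SLICES * STACKS;
            printf("%.1f fps, ~%.0f tris/frame, ~%.0f tris/sec\n",
                   fps, tris, fps * tris);
            frames = 0;
            lastTime = now;
        }
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("polygon throughput benchmark");
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    gluPerspective(60.0, 640.0 / 480.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glutDisplayFunc(display);
    glutIdleFunc(display);   /* redraw as fast as possible */
    glutMainLoop();
    return 0;
}

Running it at different window sizes is also a quick way to tell whether you are fill-limited (fps changes with window size) or geometry-limited (it doesn't).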

rgpc
03-17-2003, 08:25 PM
You could build your app assuming you can get near-perfect triangle rates for the card in question (whatever nVidia states as the possible triangle rate for the GPU) and implement a LOD system. Then let your app adjust the LOD bias if things get too slow, roughly as in the sketch below. (Just a thought)
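
A minimal sketch of that feedback loop. The 20 ms target (50 fps), the step sizes, and the chooseTessellation() helper are all made-up placeholders, not part of any real LOD system:

static float lodBias = 1.0f;   /* 1.0 = full detail, smaller = coarser */

/* Call once per frame with the measured frame time in milliseconds. */
void updateLodBias(float frameMs)
{
    const float targetMs = 20.0f;            /* 50 frames/second */
    if (frameMs > targetMs * 1.1f && lodBias > 0.25f)
        lodBias -= 0.05f;                    /* too slow: coarsen quickly */
    else if (frameMs < targetMs * 0.9f && lodBias < 1.0f)
        lodBias += 0.01f;                    /* headroom: refine slowly */
}

/* Hypothetical use: scale each object's tessellation by the bias. */
int chooseTessellation(int maxSlices)
{
    int slices = (int)(maxSlices * lodBias);
    return slices < 4 ? 4 : slices;
}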

Cab
03-18-2003, 12:44 AM
A good talk about OpenGL performance tuning: http://www.r3.nu/~cass/gdc_slides/GDC2003_OGL_Performance.ppt

This is from this year's OpenGL tutorial at GDC. You can find more talks (at the moment) from this tutorial at: http://www.r3.nu/~cass/gdc_slides

Hope this helps.

meuns
03-18-2003, 02:44 AM
Thanks