Measuring performance

Hi, I am very interested in measuring the performance of some of my Cg / GLSL shaders, but I have no idea where I should start my timer. Besides, I have heard there is some kind of measurement based on the generated shader assembly code. Any ideas from the OpenGL gurus here? Thanks in advance.

There are no fixed rules for measuring performance. On top of that, it depends on whether you are measuring vertex or fragment throughput. Add variations in hardware and drivers and you have quite a few parameters that can alter your performance (and hence the “measurement”).
For example, say you want to test the performance of a scene that uses Non-Power-Of-Two (NPOT) textures. You test it on R3xx hardware and get decent performance; testing it on NV3x, however, would give you bad performance. Inexperience can, quite often, leave you wandering in no-man’s land.
The bottom line is that measuring performance is not simple. Tools (gDEBugger perhaps, though I don’t know whether it can be used as a benchmarking tool, haven’t used it myself; there was a tool on nVidia’s website for OpenGL as well) will only give you some assistance, but in the end you will have to figure things out yourself.

Another thing that is not so easy to figure out is that your CPU can become a bottleneck more often than you might imagine. Poor batch submission is one example of what can lead to a CPU bottleneck.
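If you just want a rough wall-clock number to start with, one simple approach is to bracket the work with glFinish() so the timer covers actual GPU execution rather than just command submission. Here is a minimal sketch in C; it assumes an already-current GL context, and drawScene() is a placeholder for your own rendering code (both are my assumptions, not something from this thread):

```c
#include <GL/gl.h>
#include <time.h>

/* Placeholder for your own rendering code (hypothetical). */
void drawScene(void);

/* Average wall-clock time per frame in milliseconds.
 * glFinish() blocks until the GPU has completed all queued commands,
 * so the measured interval covers execution, not just submission.
 * Assumes a GL context is current on this thread. */
double time_frames(int frames)
{
    struct timespec start, end;
    int i;

    glFinish();                              /* drain previously queued work */
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (i = 0; i < frames; ++i)
        drawScene();

    glFinish();                              /* wait for everything to complete */
    clock_gettime(CLOCK_MONOTONIC, &end);

    return ((end.tv_sec - start.tv_sec) * 1000.0 +
            (end.tv_nsec - start.tv_nsec) / 1.0e6) / frames;
}
```

To get a hint of whether you are fragment-limited, a common trick is to rerun the same test with the window shrunk to a handful of pixels: if the time barely changes, the fragment shader is probably not your bottleneck.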

Thanks. How about analysing the cycle count of the generated assembly shader? Any tools or references?

Try:
http://developer.nvidia.com/object/nvshaderperf_home.html
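For a back-of-the-envelope feel for what such a cycle count means, one very rough model (my own simplification, ignoring texture latency, bandwidth and vertex work, and not taken from the tool’s documentation) is to divide the combined throughput of the fragment pipelines by the cycles the shader needs per fragment:

```latex
\text{peak fragment rate} \approx
    \frac{N_{\text{pipes}} \times f_{\text{core}}}{\text{cycles per fragment}}
% Example: 8 pipes at 400 MHz with a 10-cycle shader gives
% 8 \times 400\,\text{MHz} / 10 \approx 320\ \text{Mfragments/s},
% i.e. roughly a 400 fps ceiling at 1024x768 (~0.79 Mfragments per pass).
```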
