performance metrics

Both NVIDIA and ATI have special performance counters.

One useful counter that NVIDIA exposes is the number of fragments killed by the zcull unit. Is it possible to read these counters, and how?

Yes, with NVPerfSDK.

I’ve used this on Linux and it *was* very useful. Unfortunately, NVidia hasn’t been updating it at all, and the current release is broken with modern kernels. This thread did have a response from an NVidia guy saying an NVPerfSDK update for GeForce 8 was coming around Nov '07 which would fix it, but it seems there’s some revisionist history going on at NVNews.net, as both his reply and my response to it are gone. One reason forums suck and mailing lists are better (don’t even get me started on the cgshaders.org/shadertech.com forums, or lack thereof, anymore).

Cool.
Unfortunately I only have my ATI 9600 right now.

You can use gDEBugger. It has support for ATI hardware counters.

I know gDEBugger, but since I have a background in DirectX with tools such as Microsoft PIX and NVPerfHUD, I am a bit reluctant to pay for such tools.

Doesn’t ATI expose these counters in an instrumented OpenGL driver?

I would also like to see some more “free” performance tools. Maybe it would be an idea for the SDK group to expose those measuring facilities, so that open-source tools such as GLIntercept could help us all. It’s somewhat sad how “second choice” developing with OpenGL feels, when there is lots of free stuff for DX, and gDEBugger feels extremely expensive. Of course developing such tools costs money, but exposing counters and so on would only be fair to “do it yourself” / open-source tools…

>> One useful counter that NVIDIA exposes is the number of fragments killed by the zcull unit. Is it possible to read these counters, and how?

You can implement this yourself with the stencil buffer, and draw the result on screen with a color palette.
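
Something like this rough sketch (legacy fixed-function GL; `draw_scene()`, `width` and `height` are placeholders, and it assumes the context was created with a stencil buffer):

```c
/* Count how many fragments fail the depth test by bumping the stencil
   value on depth-fail. GL_INCR clamps at the max stencil value
   (usually 255), so very deep overdraw saturates. */

glClearStencil(0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glEnable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, ~0u);        /* stencil test always passes   */
glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);  /* increment on depth-test fail */
/* use GL_KEEP, GL_KEEP, GL_INCR instead to count depth-test passes */

draw_scene();                            /* your normal scene (placeholder) */

/* Read the stencil buffer back and sum it on the CPU, or remap the
   values through a color palette and blit them for a heat map. */
GLubyte *counts = malloc(width * height);
glReadPixels(0, 0, width, height, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, counts);

unsigned long killed = 0;
for (int i = 0; i < width * height; ++i)
    killed += counts[i];
free(counts);
```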

And then read back the results and count culled fragments on the CPU side ;-)? Sounds like I am reinventing the wheel in a very bad manner. That’s reading back all pixels and effectively stalling the pipeline, instead of using an instrumented driver to query the zcull unit.

I have a background in DirectX, where there are free tools made by Microsoft and NVIDIA; it actually turns out ATI even has a few free debugging tools as well, but with NO support for OpenGL.

With OpenGL, am I stuck with reinventing the wheel once more?

I have thrown together a small sample that suits my needs, using NVPerfSDK with one of the existing examples.
But there are two problems with that:

  1. OpenGL is only supported in a limited form compared to the amount of info you can get from the DirectX driver.
  2. Seems like ATI doesn’t expose anything at all.

Are the ARB board members working on developing OpenGL tools?

Also, if gDEBugger can query ATI hardware under OpenGL, how can others do this themselves? Is this not exposed in the driver?
It’s fine with me if it’s Windows-only, just as long as it is exposed.

I believe it’s probably more flexible than what the tools offer,
e.g. the same scene with overdraw visualized with (a) depth-test fail and (b) depth-test pass.

Though I do agree with your complaint about the graphics companies.

I’ve been waiting for NVidia’s Perf Tools for Linux to be updated for the GeForce 8 series, perhaps some day. Should be great for OpenGL if it is ever finished.

You can use an OpenGL timer query on NVidia to get GPU time and profile that way (perhaps there is something similar for ATI?). However, given that the GPU is threaded, times can vary a little, but it works well enough to find trends to optimize with.
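
Roughly like this (a sketch assuming EXT_timer_query, which is what NVidia exposes; the newer ARB_timer_query version just drops the EXT suffixes; `draw_scene()` stands in for whatever you want to time, and an extension loader such as GLEW plus a current context are assumed):

```c
GLuint query;
GLuint64EXT elapsed_ns = 0;

glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED_EXT, query);
draw_scene();                          /* the work you want to time */
glEndQuery(GL_TIME_ELAPSED_EXT);

/* The result arrives asynchronously; this call blocks until the GPU
   has finished the timed commands. */
glGetQueryObjectui64vEXT(query, GL_QUERY_RESULT, &elapsed_ns);

printf("GPU time: %.3f ms\n", elapsed_ns / 1.0e6);
glDeleteQueries(1, &query);
```

Average the numbers over a bunch of frames, since individual readings jitter.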