OpenGL metrics

I started working on some metrics for my render engine, and I can't figure out how to count the actual number of triangles/vertices drawn. I have the total number of mesh vertices from the moment I load the OBJ model.

I found some answers, but most of them said that I have to count them myself. Yes, I understand that it is not an OpenGL API call, but how do I count them when I use culling, or when a mesh is behind a wall? It is obvious that I am not rendering every mesh I have on the screen.

Do I need to use something like atomic counters to count how many times the vertex shader runs?

NOTE: I am trying to avoid OpenGL 4 until I start learning tessellation.

Thanks

[QUOTE]I can't figure out how to count the actual number of triangles/vertices drawn.[/QUOTE]

What do you mean by “actual number” and “drawn”?

Are you interested in how many vertices and triangles your code sends to be rendered? If so, that’s something you can handle easily enough by just saving how many vertices you told the system to render.
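For instance, here is a minimal sketch of that bookkeeping, assuming a single wrapper around your indexed draws (FrameStats, g_stats, and drawIndexed are hypothetical names of mine, not a library API):

[code]
// Hypothetical per-frame statistics, reset at the start of every frame.
struct FrameStats {
    unsigned vertices  = 0;
    unsigned triangles = 0;
    unsigned drawCalls = 0;
} g_stats;

// Route every indexed draw through one wrapper; the tally is then exactly
// what you asked OpenGL to render.
void drawIndexed(GLenum mode, GLsizei indexCount)
{
    glDrawElements(mode, indexCount, GL_UNSIGNED_INT, nullptr);
    g_stats.vertices += indexCount;
    if (mode == GL_TRIANGLES)
        g_stats.triangles += indexCount / 3;
    ++g_stats.drawCalls;
}
[/code]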

It’s also not clear what you mean by “when I use culling”. Are you referring to your code culling meshes by not rendering them (if they’re outside of the viewport or whatever)? Or do you mean “culling” based on the triangle being outside of the viewport, which OpenGL will fully clip post-transformation?

Or are you referring to meshes being “culled” due to the depth test? Because the latter is not “culling” of triangles at all; it’s discarding [i]fragments[/i]. Part of a triangle can be visible and part not visible.
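To make the distinction concrete, here are the two GL states involved; this is plain OpenGL, nothing engine-specific:

[code]
// Back-face culling: whole triangles that face away from the camera are
// discarded at primitive assembly, before rasterization ever happens.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

// Depth testing: individual fragments are discarded after rasterization
// when something closer has already been drawn. A triangle can pass the
// depth test partially, so there is no per-triangle answer here.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
[/code]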

[QUOTE]Do I need to use something like atomic counters to count how many times the vertex shader runs?[/QUOTE]

That would be wrong for many reasons, not the least of which is that clipping happens after the VS. So vertices that pass the VS can still be clipped. And fragments generated by the rasterizer can still be depth-tested away.

Furthermore, the vertex shader’s invocation frequency can confound your count: with indexed rendering, the post-transform cache means the VS may run a different number of times than the number of vertices you submit. Lastly, there’s really no point. If you want to count the number of vertices sent to OpenGL, just keep a running tally yourself. No need to make the GPU do it for you.

Not unless you’re using [url=https://www.opengl.org/wiki/Conditional_Rendering]conditional rendering of some form[/url].
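In case it helps, a minimal sketch of what that looks like (core since GL 3.0, so it fits your “no GL 4” constraint); drawBoundingBox and drawRealMesh are hypothetical placeholders:

[code]
// Issue an occlusion query around a cheap proxy (e.g. a bounding box)...
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBox();                       // hypothetical helper
glEndQuery(GL_SAMPLES_PASSED);

// ...then let the GPU skip the expensive mesh if no samples passed.
glBeginConditionalRender(query, GL_QUERY_WAIT);
drawRealMesh();                          // hypothetical helper
glEndConditionalRender();
[/code]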

[QUOTE]I am trying to avoid OpenGL 4 until I start learning tessellation.[/QUOTE]

OpenGL 4 is about a lot more than mere tessellation. Indeed, tessellation is probably the second-least-useful thing in it (with shader subroutines being the least-useful).

OK, sorry for all the misunderstandings.

For now I send the whole mesh to the GPU to render, since it is very small, though in the future I might need to implement clipping before I send the data to be rendered. Let's skip that for now, since it is not a trivial problem to solve (http://www.gdcvault.com/play/1014234/Excerpt-Quake-Postmortem-Optimizing-Level, http://www.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-708.pdf). I can count those vertices since I know the number when I import the model, and since I am rendering the whole scene it is easy to count them with a loop.

What I would like to know as a metric is this: after I send the data to be rendered on the GPU (let's say, for now, the whole mesh), how can I know the number of triangles that are actually rendered after culling is applied to them on the GPU? (Sorry for misusing "depth test" instead of "culling"; that caused confusion.)

Something like this: when we benchmark a game like Crysis or Star Citizen, we get some metrics that tell us how many triangles/polygons/vertices are actually being rendered on screen, and that number varies.

Since I use culling, I know not all the triangles are rendered, so I was wondering how I can extract or calculate that number. I have attached a picture of Crysis with the metrics I am interested in circled; I believe that will make things much easier.

I am sorry if I messed up the explanation.

[ATTACH=CONFIG]986[/ATTACH]

[QUOTE=lummxx;1265490]Something like this: when we benchmark a game like Crysis or Star Citizen, we get some metrics that tell us how many triangles/polygons/vertices are actually being rendered on screen, and that number varies.[/QUOTE]
Such figures invariably reflect the number of primitives which the application is asking OpenGL (or DirectX) to render, which will vary based upon the application's visibility calculations and level-of-detail selection.

If the numbers were provided by the developer, there’s an incentive to make them appear as large as possible, so they might even include polygons which were “considered” but rejected by simple culling even before being passed to the rendering API.

If you wanted to count the number of distinct primitives for which fragments were written, you’d need to use something like a bit-vector indexed by gl_PrimitiveID, with the fragment shader setting the appropriate bit for each invocation. But that still wouldn’t count primitives which were processed right up to the depth test then discarded by early-depth optimisation, as the fragment shader would never get invoked for those. Also, such a method would have sufficient overhead that I can’t see anyone using it in a benchmark.
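For the record, a rough sketch of that bit-vector in a fragment shader via a shader storage buffer; note this needs GL 4.3 (or ARB_shader_storage_buffer_object), and the binding point and block name are my own choices:

[code]
// Fragment shader (GLSL 4.30). Marks every primitive that produced at
// least one fragment reaching this stage; fragments killed by early depth
// testing never get here, so fully occluded primitives stay unmarked.
#version 430
layout(std430, binding = 0) buffer PrimitiveBits {
    uint bits[];   // (maxPrimitives + 31) / 32 uints, zeroed each frame
};
out vec4 fragColor;

void main()
{
    atomicOr(bits[gl_PrimitiveID >> 5], 1u << (gl_PrimitiveID & 31));
    fragColor = vec4(1.0);   // placeholder shading
}
[/code]

Reading the buffer back and population-counting the set bits then gives the number of distinct primitives that wrote fragments.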

In short, observing what happens in the vertex and geometry shaders doesn’t tell you anything you can’t already determine from the counts passed to the draw calls, while observing what happens in the fragment shader only gives you the final result (at a significant cost), regardless of whether processing stopped right after primitive assembly or just before the fragment shader or at any point in between.

That sounds interesting, and it's more or less what I was expecting: either excessive overhead, or not possible to do after issuing the draw call.

I guess I have to wait until my engine needs a culling solution and other optimizations for those metrics to show more information and be more useful.

Thanks :smiley:

I’m sorry, but I have to interject a little bit…

There is a way to get information back from the driver about how many primitives are rendered, but you need a different API for that, not OpenGL.

For example, if you are using an NV graphics card, NVIDIA PerfKit can give you such information. Just use the “setup_primitive_count” hardware counter; it tells you how many primitives are rasterized. Earlier there was a “triangle_count” counter, but it is not supported on Fermi/Kepler/Maxwell cards; “setup_primitive_count” counts all rasterized primitives. There were also software counters, like “OGL batch count”, “OGL vertex count” and “OGL primitive count”. However, I have not been able to retrieve their values for a long time.

Thanks for the extra info. :D