Relative Cost of Function Calls

Where do you go to get information about the cost of library function calls (sqrt, sin, etc.) for your GPU? I have a GTX 280, but I am finding it difficult to find any of this information.

Thanks.

One way to do it is to write a very simple shader that does not use any of these functions. Draw a single point and measure the time of the frame that uses this shader. Then repeat the same steps, but this time with a shader that calls one of these functions exactly once.

The difference between the two times gives an approximate measure of the GPU time taken to execute that instruction.
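As a rough sketch of that idea (the shader sources and the choice of sqrt here are just illustrative), the two fragment shaders would differ by a single call:

```c
/* Baseline fragment shader: just passes the interpolated color through. */
static const char *frag_baseline =
    "void main() {\n"
    "    gl_FragColor = gl_Color;\n"
    "}\n";

/* Identical shader with a single sqrt() added. Feeding it a value that is
 * only known at run time (the interpolated gl_Color) keeps the compiler
 * from folding the call away. */
static const char *frag_with_sqrt =
    "void main() {\n"
    "    gl_FragColor = vec4(sqrt(gl_Color.rgb), gl_Color.a);\n"
    "}\n";
```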

“Draw a single point.” lol, and measure it with a granularity of billions of cycles?

Enable additive blending and draw a mesh of 60000 fullscreen triangles with each of the two aforementioned shaders. Tune that 60000 number so that the framerate drops below 60 fps.
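Something along these lines for the blend state and the heavy draw (fullscreen_vbo is a placeholder, the benchmark shader program is assumed to already be bound, and 60000 is just the starting value to tune):

```c
/* Additive blending, no depth test: every overlapping fullscreen triangle
 * still costs a full fragment shader invocation per pixel. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDisable(GL_DEPTH_TEST);

/* fullscreen_vbo is assumed to hold 60000 fullscreen triangles (2D positions).
 * Raise or lower the count until the frame rate falls below the 60 fps cap. */
glBindBuffer(GL_ARRAY_BUFFER, fullscreen_vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 60000 * 3);
```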

http://developer.download.nvidia.com/compute/cuda/1_1/NVIDIA_CUDA_Programming_Guide_1.1.pdf

I thought OpenGL has a GPU timing facility now…? That’s why I said ONE point :slight_smile:

That’s still with a mighty granularity, you know.

Yes, there’s the ARB_timer_query extension to time the execution of various GL commands on the GPU, and that’s definitely what you should be using for timing data in this case. Definitely do NOT use CPU-side timers: those will generally just tell you how long it took the CPU to write the relevant data into the GPU command buffer, which is likely to be entirely unrelated to your GLSL performance.

But yeah, you’re still better off drawing/timing lots of pixels rather than just one. In addition to avoiding problems with timer resolution and smoothing out the data, it’s probably a more accurate measurement overall given the ridiculously parallel nature of the GPU.
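For reference, a minimal sketch of wrapping the heavy draw in a GL_TIME_ELAPSED query from ARB_timer_query (the draw call itself is the one from the earlier example):

```c
/* Time the heavy draw on the GPU. The result is GPU execution time in
 * nanoseconds, not CPU submission time. */
GLuint query;
GLuint64 elapsed_ns;

glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
glDrawArrays(GL_TRIANGLES, 0, 60000 * 3);   /* the heavy draw from above */
glEndQuery(GL_TIME_ELAPSED);

/* This blocks until the result is available, which is fine for a one-off
 * benchmark but not something you'd do every frame in a real application. */
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
glDeleteQueries(1, &query);
```

Run it once per shader variant, subtract the baseline time, and divide by the number of fragments shaded to get a per-invocation estimate.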
