OpenGL.org
Thread: glDispatchCompute calling overhead

  1. #1
    Newbie
    Join Date: Apr 2014
    Posts: 1

    glDispatchCompute calling overhead

    Dear Gurus,

    I'm facing an annoying problem (or is it really one?): every call to glDispatchCompute ALWAYS comes with an overhead of about 0.2 ms.
    Just to make it clear: even executing a "glDispatchCompute(0, 0, 0)" call on an EMPTY shader costs 0.2 ms.

    Questions:
    1. Does this make sense?
    2. Is it an NVIDIA-only issue?
    3. Is there a way around this?


    *Note1: The 0.2 ms is measured using "glBeginQuery(GL_TIME_ELAPSED, ...)".
    *Note2: Platform is a GTX 560, Windows 7, latest NVIDIA drivers.
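    For reference, the measurement described in Note 1 can be sketched roughly like this — a minimal fragment, assuming a current OpenGL context and a compute program already bound (variable names are illustrative, not from the original post):

    ```c
    /* Assumes: a current GL context and a compute program bound with glUseProgram.
     * GL_TIME_ELAPSED measures GPU time between glBeginQuery and glEndQuery. */
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    glDispatchCompute(0, 0, 0);   /* zero work groups: no actual shader work */
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 elapsed_ns = 0;
    /* GL_QUERY_RESULT blocks until the GPU has produced the query result. */
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
    printf("glDispatchCompute took %.3f ms\n", elapsed_ns / 1.0e6);

    glDeleteQueries(1, &query);
    ```

    Note that reading GL_QUERY_RESULT immediately stalls the pipeline until the query completes, so for repeated measurements it is common to poll GL_QUERY_RESULT_AVAILABLE or read the result a frame later.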

    Thanks guys!
    Last edited by edoreshef; Yesterday at 11:56 AM.
