
Thread: glDispatchCompute calling overhead

  1. #1 Newbie (joined Apr 2014, posts: 1)

    glDispatchCompute calling overhead

    Dear Gurus,

    I'm facing an annoying problem (or is it really one?): every call to glDispatchCompute ALWAYS comes with an overhead of about 0.2 ms.
    Just to make it clear: even executing a glDispatchCompute(0, 0, 0) call with an EMPTY shader bound costs 0.2 ms.

    Questions:
    1. Does this make sense?
    2. Is it an NVIDIA-only issue?
    3. Is there a way around this?


    *Note 1: The 0.2 ms is measured using glBeginQuery(GL_TIME_ELAPSED, ...); a rough sketch of the measurement is below.
    *Note 2: Platform is a GTX 560 on Windows 7 with the latest NVIDIA drivers.
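
    For reference, the measurement looks roughly like this. This is a minimal sketch rather than my exact code; it assumes a GL 4.3 context with an empty compute shader already compiled and bound, and it omits error checking:

    Code:
        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        glDispatchCompute(0, 0, 0);   /* zero work groups: no actual work */
        glEndQuery(GL_TIME_ELAPSED);

        /* Read the result back; GL_TIME_ELAPSED reports nanoseconds.
           glGetQueryObjectui64v with GL_QUERY_RESULT blocks until the
           GPU has finished, so no explicit sync is needed here. */
        GLuint64 elapsed_ns = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
        printf("dispatch took %.3f ms\n", elapsed_ns / 1.0e6);

        glDeleteQueries(1, &query);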

    Thanks guys!
    Last edited by edoreshef; 04-23-2014 at 11:56 AM.

  2. #2 Newbie (joined May 2014, posts: 1)
    I am also having this issue: I get the exact same ~0.2 ms delay when measuring with glBeginQuery(GL_TIME_ELAPSED, ...).

    For reference, I am on Windows 7 with a GTX 470 on the latest drivers.

    Could someone shed some light on this?

    EDIT: I have just tried the same thing on a GTX 780 and the delay disappeared. I guess the delay is only present on the 560 and below?
    Last edited by inutard; 05-12-2014 at 05:35 AM.
