WARNING: glGetError is VERY expensive

Just in case there are people who, like me, assumed that glGetError simply checked some kind of static flag: it obviously doesn’t, because after sprinkling generous numbers of these calls throughout my renderer, I discovered (with the help of vtune) that it is actually a hugely expensive thing to do.
(Obviously I only do it in debug mode, but it’s so severe that debug became almost unusable!)
Consider yourselves warned.
I’m curious as to why it’s so costly, though - any thoughts, anyone?
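
(For anyone doing the same, one common way to keep the checks debug-only is a macro that compiles away in release builds. This is just a sketch; CHECK_GL_ERROR and the _DEBUG test are my own names, not anything from the posts here.)

#include <GL/gl.h>
#include <stdio.h>

#ifdef _DEBUG
#define CHECK_GL_ERROR()                                          \
    do {                                                          \
        GLenum err = glGetError();                                \
        if (err != GL_NO_ERROR)                                   \
            fprintf(stderr, "GL error 0x%04X at %s:%d\n",         \
                    err, __FILE__, __LINE__);                     \
    } while (0)
#else
#define CHECK_GL_ERROR() ((void)0)   /* compiles to nothing in release */
#endif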

Probably because, like a lot of the glGet*() function calls, the error state is held on the server side.

Mmm, but you would have thought that invalid enums etc. would be dealt with in the driver.
No, I think I know why - it’s because it must cause a flush of all pending gl commands, in order to determine if any have invalid arguments, etc. Is this correct?

I’d be surprised if GetError was expensive on our implementation. Our GetError implementation could hardly be any more trivial.

  • Matt

Now I’m worried. Knackered, what system are you running this on?

It’s independent of system, but all systems I’ve tried it on have geforce3/4’s in them.
Basically, I had a function that called glGetError(), then glDrawElements(), then glGetError() again, which was being called many hundreds of times, and glGetError was singled out by vtune as being the major hotspot - even above the glDrawElements call. [edit] and removing the glGetErrors gave a 4X frame rate increase.[/edit]
Watch out for those glGetErrors!

[This message has been edited by knackered (edited 09-19-2002).]
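
(For the record, the pattern being described is roughly the following; Batch and DrawBatch are made-up names, just for illustration.)

#include <GL/gl.h>

struct Batch { GLsizei indexCount; const GLuint* indices; };

void DrawBatch(const Batch& b)
{
    glGetError();                                 /* check/clear before the draw */
    glDrawElements(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_INT, b.indices);
    if (glGetError() != GL_NO_ERROR)
    {
        /* report which draw failed */
    }
}

Called hundreds of times a frame, that is two extra driver entries per draw; checking once per frame still detects errors, just without pinpointing the failing call.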

Originally posted by knackered:
It’s independent of system, but all systems I’ve tried it on have geforce3/4’s in them.
Basically, I had a function that called glGetError(), then glDrawElements(), then glGetError() again, which was being called many hundreds of times, and glGetError was singled out by vtune as being the major hotspot - even above the glDrawElements call. [edit] and removing the glGetErrors gave a 4X frame rate increase.[/edit]
Watch out for those glGetErrors!

This is how glGetError is implemented in Mesa:

/*
 * Execute a glGetError command
 */
GLenum
_mesa_GetError( void )
{
   GET_CURRENT_CONTEXT(ctx);
   GLenum e = ctx->ErrorValue;
   ASSERT_OUTSIDE_BEGIN_END_WITH_RETVAL(ctx, 0);

   if (MESA_VERBOSE & VERBOSE_API)
      _mesa_debug(ctx, "glGetError <- %s\n", _mesa_lookup_enum_by_nr(e));

   ctx->ErrorValue = (GLenum) GL_NO_ERROR;
   return e;
}

I doubt commercial implementations are more complex than this one.
If you want to be really sure, you could debug the assembly version of your implementation; I’m sure it won’t be more than a few instructions and a return.

Another thing is that even the most trivial function can appear in VTune, if called enough times (if you are CPU bound, that is).
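
(One way to tell "trivial but called a lot" from "genuinely expensive" is to time a burst of calls directly. A rough sketch, assuming a valid context is current; the call count and the clock() timing are arbitrary choices.)

#include <GL/gl.h>
#include <stdio.h>
#include <time.h>

void TimeGetError(void)
{
    const int N = 100000;
    clock_t t0 = clock();
    for (int i = 0; i < N; ++i)
        glGetError();                       /* the call being measured */
    clock_t t1 = clock();
    printf("%d calls took %.3f ms\n", N,
           1000.0 * (t1 - t0) / CLOCKS_PER_SEC);
}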

Originally posted by knackered:
Mmm, but you would have thought that invalid enums etc. would be dealt with in the driver.
No, I think I know why - it’s because it must cause a flush of all pending gl commands, in order to determine if any have invalid arguments, etc. Is this correct?

Any function call goes immediately to the driver, where an error may be raised or not. Flushing or finishing should not be necessary…

What about the vertex programs? Do they involve a server side check?

Wasn’t it you who said that this function can cause a large hit?

Whoaaa baby! Déjà vu!

V-man

perhaps you’re doing something like

printf("%s\n", gluString( glGetError() ) );

or whatever the correct syntax is,
true I can see that this line will be relatively slow

Originally posted by knackered:
It’s independent of system, but all systems I’ve tried it on have geforce3/4’s in them.
Basically, I had a function that called glGetError(), then glDrawElements(), then glGetError() again, which was being called many hundreds of times, and glGetError was singled out by vtune as being the major hotspot - even above the glDrawElements call. [edit] and removing the glGetErrors gave a 4X frame rate increase.[/edit]
Watch out for those glGetErrors!

[This message has been edited by knackered (edited 09-19-2002).]

I have literally dozens of glGetError() calls spread throughout my code (in particular, I have about 6 in my texture layer setup code, which is called each time a texture layer is prepared, up to four layers per material, several dozen materials for some models) and yes, while it does have an effect, it shouldn’t bother you TOO much.

A 4x effect seems harsh, especially if you were only getting 10FPS to start with… :)

But as you should only be using it in debug, it shouldn’t affect your release at all, and if you’re only using it in debug, you should probably look at scrapping some of the unnecessary calls once you know something works…

Leathal.

This is all very well and good, but removing the glGetError’s gave a dramatic speed increase. I can’t say much more than that, sorry.

Regardless of implementation, you are essentially dereferencing a function pointer every time you call glGetError. If you have enough of them, just the cost of calling and returning could be noticeable (especially with the added sanity checks in debug builds).

Function call overhead is a major reason why glDrawElements on a triangle list (in system memory) is so much faster than calling glVertex3f hundreds of times.
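
(To make that comparison concrete, here are the two submission paths side by side; the vertex and index arrays are placeholders.)

#include <GL/gl.h>

/* Immediate mode: one glVertex3f call per vertex, three per triangle. */
void DrawImmediate(const GLfloat* verts, const GLuint* idx, int triCount)
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < triCount * 3; ++i)
    {
        const GLfloat* v = &verts[idx[i] * 3];
        glVertex3f(v[0], v[1], v[2]);
    }
    glEnd();
}

/* Vertex arrays: one glDrawElements call submits the whole list from system memory. */
void DrawIndexed(const GLfloat* verts, const GLuint* idx, int triCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, triCount * 3, GL_UNSIGNED_INT, idx);
    glDisableClientState(GL_VERTEX_ARRAY);
}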

Originally posted by zed:
perhaps you’re doing something like

printf("%s\n", gluString( glGetError() ) );

or whatever the correct syntax is,
true I can see that this line will be relatively slow

Missed that one - don’t be daft!
This is my code:

void GLContext::GLError()
{
    static GLenum error=0;

    error = glGetError();

    if (error)
    {
        char string[4096];
        sprintf(string, "GLError: ");
        switch(error)
        {
        case GL_INVALID_ENUM:      strcat(string, "GL_INVALID_ENUM"); break;
        case GL_INVALID_VALUE:     strcat(string, "GL_INVALID_VALUE"); break;
        case GL_INVALID_OPERATION: strcat(string, "GL_INVALID_OPERATION"); break;
        case GL_STACK_OVERFLOW:    strcat(string, "GL_STACK_OVERFLOW"); break;
        case GL_STACK_UNDERFLOW:   strcat(string, "GL_STACK_UNDERFLOW"); break;
        case GL_OUT_OF_MEMORY:     strcat(string, "GL_OUT_OF_MEMORY"); break;
        default:                   strcat(string, "Unknown error"); break;
        }

        MYASSERT(0, string);
    }
}
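
(As an aside, GLU already provides a lookup for these codes; something along these lines would replace the switch, assuming GLU is linked. ReportGLError is a made-up name, not the code above.)

#include <GL/glu.h>
#include <stdio.h>

void ReportGLError(void)
{
    GLenum error = glGetError();
    if (error != GL_NO_ERROR)
        fprintf(stderr, "GLError: %s\n", (const char*)gluErrorString(error));
}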

Completely irrelevant remark …

Why

static GLenum error=0;
error = glGetError();

instead of

GLenum error = glGetError();

Can’t see the purpose of using a static variable. Will reduce performance slightly, probably not measurable though.

As a static variable, error doesn’t have to be allocated every time GLError() is called.

Oh, that was left in because a previous bit of code in there wanted to know what the previous error was…thanks for pointing it out humus.

Originally posted by vincoof:
As a static variable, error doesn’t have to be allocated everytime GLError() is called.

Yes, but it also prevents ‘error’ from being in a register - it will have to do a memory lookup to find the value every time…it’s a mistake that it’s still there.

Regardless of implementation, you are essentially dereferencing a function pointer every time you call glGetError. If you have enough of them, just the cost of calling and returning could be noticeable (especially with the added sanity checks in debug builds).

Come on, dereferencing a function pointer isn’t that expensive. On today’s processors, you can call millions of functions in 1/30th of a second. I don’t think he’s calling glGetError a million times per frame.

Function call overhead is a major reason why glDrawElements on a triangle list (in system memory) is so much faster than calling glVertex3f hundreds of times.

If you’re only drawing a few hundred triangles, the glVertex3f method is not slow. It’s when you start to draw thousands that it gets to be a problem. And unlike glGetError, glVertex3f takes three arguments. If Knack were calling glGetError every time he drew a triangle, I would suspect function call overhead.

I think the only people who can tell us why it is slow are the folks who wrote the drivers.

Have you considered doing some (possibly pointless) testing as to where/why the function is slow?

I mean, it wouldn’t take much time to write a dummy version of glGetError() that did nothing (or something small and silly, to try and stop the compiler optimising it away), and make a function pointer to it. Then put that in place of your glGetError() call and see if it’s still slow; then you’d know whether it’s function overhead, or whether it’s something slowing down inside the glGetError() function. You can also add random stuff to your dummy function to semi-mimic the true glGetError() routine and see how slow it gets. I dunno, just a thought.

-Mezz
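
(Something along these lines, perhaps; a sketch of the idea above, where RealGetError, DummyGetError and pGetError are all made-up names. Both functions are our own, so swapping the pointer involves no calling-convention mismatch.)

#include <GL/gl.h>

static volatile int g_dummyCalls = 0;   /* stops the compiler discarding the dummy */

static GLenum RealGetError(void)  { return glGetError(); }
static GLenum DummyGetError(void) { ++g_dummyCalls; return GL_NO_ERROR; }

/* Point at DummyGetError and compare frame rates: if it is still slow, the
   cost is call overhead; if it is fast, the cost is inside glGetError itself. */
static GLenum (*pGetError)(void) = &RealGetError;

Then every place that currently calls glGetError() calls pGetError() instead.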

Just curious; did vtune pick out glGetError or your GLError? Also, how many times does GLError get called per second (approximately)?