Strange performance issue

My new OGL app has a strange behaviour pattern.

For the first few seconds, the FPS of the drawing loop is horrible, and the animation on-screen is noticeably slow and jerky (it is frame-rate-independent animation).
Then it improves to what one would expect and it performs like that for the rest of the lifetime of the process.
The more vertices/faces I have on screen, the longer it takes for the “slowness” to go away.
(If you want numbers: roughly 30 seconds for 4 models of 13214 faces and 1014 vertices each, and roughly 10 seconds for 4 models of 4221 faces and 515 vertices each.)
The FPS jumps from 4 to 60 (the vertical refresh rate; I tried disabling vsync, but the issue remains).

Other OGL apps I have written do not have this behaviour, and it is not computer-specific either.

I’ve narrowed it down to the call to glDrawElements() (I’m using vertex/normal arrays).
I’ve tried disabling pretty much all features (depth, color, lighting), leaving only the vertex array enabled for the glDrawElements() call. I’ve also tried getting rid of animations/rotations/translations altogether, leaving nothing but the camera modelview, to no avail.
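
For reference, the stripped-down draw path looks roughly like this (just a sketch; vertices, indices and indexCount stand in for my actual data, and I’m assuming triangles with 32-bit indices):

    /* Stripped-down test path: only the vertex array enabled (placeholder names). */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);   /* 3 floats per vertex, tightly packed */

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);

    glDisableClientState(GL_VERTEX_ARRAY);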

I guess I’m not exactly looking for an answer from these message boards, but hopefully some suggestions as to what else I could try. The only thing I can think of is some strange difference in initialization between my other apps and this one, but I can’t really see why that would matter.

Either way, I’ll look into that, and hopefully with some suggestions from this board I will find the answer.

Follow-up:

Interestingly enough, replacing the call to glDrawElements() with a for-loop over the vertex pointer and calling glVertex3f() works just fine; I get maximum framerate as soon as the app starts.
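
In other words, something along these lines in place of the glDrawElements() call (same placeholder names and assumptions as in the sketch above):

    /* Immediate-mode fallback: walk the index list and submit each vertex by hand. */
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < indexCount; ++i) {
        unsigned int v = indices[i];
        glVertex3f(vertices[v * 3], vertices[v * 3 + 1], vertices[v * 3 + 2]);
    }
    glEnd();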

Initialization of OGL is the same as in my other apps.

Not computer-specific? Well, if you’re using a GeForce on a dual-core machine, check this and this.

CatDog

Interesting!
Will try that later today. I did try on several machines, but as it happens they were all dual-core with GeForces.
Still doesn’t explain why the other apps don’t have the issue, but it’s a starting point, and it’s exactly my issue.

Will update the thread if i find anything.

You can also try glDrawRangeElements() instead of glDrawElements() (see the sketch after the list below).

Using range elements is more efficient for two reasons:

-If the specified range fits into a 16-bit integer, the driver can optimize the format of the indices it passes to the GPU: it can turn a 32-bit integer index format into a 16-bit one, a 2× gain.

-The range is also valuable information for the VBO manager, which can use it to optimize its internal memory layout.
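
For example (a sketch; minIndex and maxIndex stand for the smallest and largest index actually referenced by the index array, and the other names are the same placeholders as before):

    /* Same draw call, but with the index range passed along so the driver can
       narrow 32-bit indices to 16-bit and optimize its memory layout. */
    glDrawRangeElements(GL_TRIANGLES,
                        minIndex,          /* lowest index used, typically 0 */
                        maxIndex,          /* highest index used */
                        indexCount,
                        GL_UNSIGNED_INT,
                        indices);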

N.

I’ll give that a try. I’ll be replacing the arrays with VBOs soon too. Glad to meet another Belgian here.
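
For the VBO switch I have in mind roughly this kind of setup (a sketch assuming GL 1.5-style buffer objects; all names are placeholders):

    /* One-time setup: upload vertex and index data into buffer objects. */
    GLuint vbo, ibo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLuint), indices, GL_STATIC_DRAW);

    /* Per frame: with the buffers bound, the pointer arguments become byte offsets. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);
    glDrawRangeElements(GL_TRIANGLES, 0, vertexCount - 1, indexCount, GL_UNSIGNED_INT, (const GLvoid*)0);
    glDisableClientState(GL_VERTEX_ARRAY);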

Anyway, I gave the registry edit mentioned in the linked post a go, and it worked. Changing the value back to 0 brought the performance issue back.

I guess I’ll have to read up on the NVIDIA website to check what can be done.

Good luck and keep us posted please.

I’ve given up on that topic. It’s a driver bug, it has been well known for two years, and nothing has changed.

Btw, in the latest GeForce drivers you can switch off threaded optimization in the application settings. So at least editing the registry by hand is not needed anymore.

CatDog