OpenGL application moving way too slowly

So I’m trying to debug this program. I didn’t write it and the code isn’t mine, so I can’t really post any of it up here. Sorry, guys.

Basically, it’s an extremely large program using a wxGLCanvas, and it renders extremely slowly. I created a small Qt program to display the same data: the whole rendering loop takes about 24.8 clock ticks (thousandths of a second) to render the scene, while the exact same code in the Wx application takes 534.6 seconds.

The code is of this form

 
// theList is a std::map of triangles; TriMap is a typedef for its type
glBegin(GL_TRIANGLES);
for (TriMap::iterator i = theList.begin(); i != theList.end(); ++i) {
    glVertex3f(i->second.node[0].x, i->second.node[0].y, i->second.node[0].z);
    glVertex3f(i->second.node[1].x, i->second.node[1].y, i->second.node[1].z);
    // The rest of the points here
    // Yes, I'm making sure they're actually triangles
}
glEnd();

A few things that are bizarre:
First, although the input routines in the two programs are slightly different (wxString vs. QString), I’ve output the data structures and diffed them: except for minor round-off differences, they’re identical. However, even if I take all the glVertex calls out of the loop (and drop the glBegin/glEnd), the loop still takes longer to execute in the wxWidgets program. Bizarre? I think so! It’s merely an STL iterator walking an STL map, and yes, I’ve checked: both programs go through the loop the same number of times. Yet on average it takes 0.4 seconds in my toy Qt app and 32.5 seconds in my large Wx app.

Also, I’ve output every glGet, and there is no difference (save for GL_STENCIL_BITS, but that has no effect), yet even the glVertex calls take longer in the Wx app. (I made a for loop to draw the same triangles over and over; it took 23.6 in the Wx app and 1.73 in the Qt app.)
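In case it matters, the timings come from something like this (a sketch; renderScene is a hypothetical stand-in name for the loop above):

#include <cstdio>
#include <ctime>

void renderScene();  // hypothetical stand-in for the glBegin/glEnd loop above

void timedRender()
{
    std::clock_t start = std::clock();
    renderScene();
    long ticks = (long)(std::clock() - start);
    // On MSVC, CLOCKS_PER_SEC is 1000, so one std::clock tick is a millisecond
    std::printf("render took %ld ticks\n", ticks);
}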

So, questions I have:

  1. Is there anything in OpenGL’s state that could affect rendering speed but that I can’t check with a glGet? Because I’ve checked everything in glGet.
  2. Can anyone think of any other reason this effect would happen?

I have a feeling the author of this code did something strange with wxWidgets that I haven’t done in my toy program, but I’m having a bugger of a time figuring out what.

Any ideas, planet earth?

25 ms vs. 534600 ms? 21,000 times slower! Wow!

Well, you seem to have established with your test that this is not a GL issue but a wxWidgets issue. You’re probably going to get the most help from a wxWidgets forum.

Try a CPU profiler like Valgrind (callgrind), VTune, or gprof and see where you’re spending all your time. With waste like that, it’s very likely CPU-side.

The code is of this form

 
// theList is a std::map of triangles; TriMap is a typedef for its type
glBegin(GL_TRIANGLES);
for (TriMap::iterator i = theList.begin(); i != theList.end(); ++i) {
    glVertex3f(i->second.node[0].x, i->second.node[0].y, i->second.node[0].z);
    glVertex3f(i->second.node[1].x, i->second.node[1].y, i->second.node[1].z);
    // The rest of the points here
    // Yes, I'm making sure they're actually triangles
}
glEnd();

Well, this could be sped up, but that’s not your question. Bouncing around a std::map can generate a lot of CPU cache misses (its nodes are individually heap-allocated, so iteration hops around memory), and immediate mode is probably the slowest way to submit triangles to the GPU. But with the timings you’ve got, these are really tiny fish by comparison.

If these triangles are static from frame to frame, then just for testing you can compile them into a display list and get rid of pretty much all the CPU-side inefficiency from the std::map walk and the immediate-mode calls; a sketch follows. But again, small fish…
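Something like this, as a rough sketch (the Tri struct and TriMap typedef are stand-ins for whatever your map actually holds):

#include <GL/gl.h>
#include <map>

struct Tri { struct Vec { float x, y, z; } node[3]; };  // stand-in element type
typedef std::map<int, Tri> TriMap;

// Build once, at load time (or whenever the triangles actually change)
GLuint buildTriangleList(const TriMap& theList)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    for (TriMap::const_iterator i = theList.begin(); i != theList.end(); ++i) {
        const Tri& t = i->second;
        glVertex3f(t.node[0].x, t.node[0].y, t.node[0].z);
        glVertex3f(t.node[1].x, t.node[1].y, t.node[1].z);
        glVertex3f(t.node[2].x, t.node[2].y, t.node[2].z);
    }
    glEnd();
    glEndList();
    return list;
}

// Then, per frame, the whole map walk collapses to:
//   glCallList(list);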

Break out your CPU profiler.

Don’t be so sure that GL_STENCIL_BITS is not responsible. On some hardware a stencil mismatch can force the driver into software emulation, and your timings look exactly like that’s what’s happening.

So the first questions to answer are:

  1. What graphics card do you use?
  2. What display mode? 16-bit, 32-bit? Do you use a depth buffer? What else is on? (The snippet below will dump what the driver actually gave you.)
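For example, once the context is current, something like this shows what you actually got (the helper name is just illustrative; a GL_RENDERER string like "GDI Generic" or "Mesa" usually betrays a software path):

#include <GL/gl.h>
#include <cstdio>

// Illustrative helper: dump the framebuffer config the driver actually gave you
void dumpPixelFormat()
{
    GLint r, g, b, a, depth, stencil;
    glGetIntegerv(GL_RED_BITS,     &r);
    glGetIntegerv(GL_GREEN_BITS,   &g);
    glGetIntegerv(GL_BLUE_BITS,    &b);
    glGetIntegerv(GL_ALPHA_BITS,   &a);
    glGetIntegerv(GL_DEPTH_BITS,   &depth);
    glGetIntegerv(GL_STENCIL_BITS, &stencil);

    // A software renderer often shows up here ("GDI Generic", "Mesa ...")
    std::printf("renderer: %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("rgba %d/%d/%d/%d  depth %d  stencil %d\n", r, g, b, a, depth, stencil);
}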

First… whoops, that was a typo! It should have been 534 ms, not 534 seconds. It was running slowly, but not that much slower. :smiley:

Funny you should tell me to go to the Wx forums; they told me to come over here O_o

As it turns out, the problem was neither Wx nor OpenGL, but merely my own stupidity: I was compiling in Visual Studio for Debug, not Release, and picking up all that overhead on the iterator operations, of all things.

Funny how Debug will do that, huh.
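For anyone who hits the same thing: the VS2005/2008-era STL turns on checked/debug iterators in Debug builds, which adds a validation step to every increment and dereference. These are the switches that control it (a sketch; verify against your compiler version’s docs before relying on them):

// Assumption: Visual C++ 2005/2008-era STL; check your compiler's docs.
// Define before any standard header (or in the project's preprocessor settings):
#define _HAS_ITERATOR_DEBUGGING 0   // turns off Debug-only iterator validation
#define _SECURE_SCL 0               // turns off checked iterators (affects Release too)
#include <map>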

Thanks for the advice, everyone.