Optimisation - A general question

Hey guys,

A general question for you regarding the efficiency of an application I’m running.

I have 40 houses lit in real time using Cg shaders.

Some runs of the application maintain a pretty consistent 60fps. Other runs average 30fps, without my changing a single line of code.

I’m at a total loss as to why this kind of discrepancy can occur.

Without knowing the ins and outs of my app, can anyone point me in the right direction as to where I should be looking for bottlenecks?

Thanks

If it’s either near 30 or 60fps then it’s probably vertical sync capping the framerate. With vsync enabled, a frame that misses the 60Hz refresh has to wait for the next one, so the rate drops straight from 60 to 30.
Try wglSwapIntervalEXT(0);
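For example, a minimal sketch, assuming the driver exposes the WGL_EXT_swap_control extension and a GL context is already current (the SetVSync wrapper is just illustrative):

#include <windows.h>

// Signature from wglext.h; the function must be fetched at runtime.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void SetVSync(int interval)   // 0 = off, 1 = sync to the display refresh
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)   // NULL if the driver lacks the extension
        wglSwapIntervalEXT(interval);
}

Call SetVSync(0) once after creating the rendering context, before your main loop.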

Hi k_szczech

Where exactly in my code should I implement this?

I tried it in the render method and didn’t get any joy.

As a side note, bizarrely, I’ve had a couple of runs of my program this evening, again without changing any code, where the frame rate has hit 700fps. :expressionless:

Well, you only need to call it once, wherever you like - it disables vsync.
Calling wglSwapIntervalEXT(1); will enable it again.

Since you mentioned 30/60fps, it seemed that it was indeed vsync. But now you mention 700fps, which makes me believe you have a bug in your fps measurement. You should review that code carefully.

k_szczech

You say that there is a bug in my fps measuring code?

Possibly, but what I can tell you for sure is that at times, with massive amounts of lights, the rendering is definitely running extremely fast, with slick transitions between frames and not the slightest hint of stutter.

Here is my frame-updating code:

INT64 freq = 0;
INT64 lastTime = 0;
INT64 currentTime = 0;
float elapsedTimeSec = 0.0f;
float timeScale = 0.0f;

// Cache the counter frequency (ticks per second) and the starting timestamp.
QueryPerformanceFrequency(reinterpret_cast<LARGE_INTEGER*>(&freq));
QueryPerformanceCounter(reinterpret_cast<LARGE_INTEGER*>(&lastTime));
timeScale = 1.0f / freq;   // seconds per tick

while (!done)
{
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT)
        {
            done = TRUE;
            break;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    else
    {
        // Per-frame timing: seconds elapsed since the previous frame.
        QueryPerformanceCounter(reinterpret_cast<LARGE_INTEGER*>(&currentTime));
        elapsedTimeSec = (currentTime - lastTime) * timeScale;
        lastTime = currentTime;
        fps = 1.0f / elapsedTimeSec;   // instantaneous FPS from a single frame - very sensitive to jitter

        UpdateFrame(elapsedTimeSec);

        if ((active && !DrawGLScene()) || keys[VK_ESCAPE])
        {
            done = TRUE;
        }
        else
        {
            SwapBuffers(hDC);
        }
    }
}

So as you can see, my fps code is solid and is taken from many examples on the web.

I got 98fps last night for 200 lights!

Just a quick note about your FPS measurement code. From what I see it is OK, but FPS is usually measured a little differently: either by counting how many frames you draw in some time period (for example, one second), or by measuring the time needed to draw multiple frames (for example, 50 frames) and averaging it. Computing FPS from a single frame’s elapsed time makes the readout very sensitive to one-off hitches (see the sketch below).
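A sketch of the first variant, counting frames over a one-second window with the same QueryPerformanceCounter timer your loop already uses (the UpdateFPS helper name is just illustrative):

#include <windows.h>

// Call once per rendered frame; returns the FPS averaged over the last
// completed one-second window, which smooths out per-frame jitter.
float UpdateFPS()
{
    static INT64 freq = 0, windowStart = 0;
    static int   frames = 0;
    static float fps = 0.0f;

    if (freq == 0)   // first call: cache the timer frequency and start time
    {
        QueryPerformanceFrequency(reinterpret_cast<LARGE_INTEGER*>(&freq));
        QueryPerformanceCounter(reinterpret_cast<LARGE_INTEGER*>(&windowStart));
    }

    ++frames;

    INT64 now = 0;
    QueryPerformanceCounter(reinterpret_cast<LARGE_INTEGER*>(&now));
    double elapsed = double(now - windowStart) / double(freq);

    if (elapsed >= 1.0)   // window complete: publish the average
    {
        fps = float(frames / elapsed);
        frames = 0;
        windowStart = now;
    }
    return fps;
}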

But back to your problem. I think nobody here can help you much unless you give us more information (ideally a link to your application so someone can try it).

If I understand you correctly, you are saying that your application runs at 30, 60, or even 100+ FPS when rendering with exactly the same settings and exactly the same scene (that is, the camera points at the same place and you are rendering the same objects). If this is true, then it really makes no sense to me.

Trahern - it makes no sense to me either hence the reason I’m posting here.

I did think initially that the problem was with the fps code. However, the difference is so clear at times. For example, the first time I ran the app with 100 houses and 100 lights, it crawled at approximately 10fps. Then I ran it again about 5 minutes later to get a screenshot, and the fps shot up. I hadn’t altered a single line of code apart from the one specifying the number of lights.

I was wondering if the load on the CPU at the time I run the application could have anything to do with it?

Screenshot here.

Originally posted by StevieDB:

I was wondering if the load on the CPU at the time I run the application could have anything to do with it?

Definitely. You should always benchmark your applications when there is nothing else running on your computer.

Hi StevieDB

You can use gDEBugger with its NVIDIA NVPerfKit integration to locate your graphics pipeline bottleneck.

Additional information is available at www.gremedy.com.

The gDEBugger team