Porting Linux App to Windows

I am new to OpenGL, but I have been given the task of making a Linux-based gtkglext project run on Windows. I got it converted and running with the GDK runtime, and it renders the screen fine. However, I am getting about 2 fps on Windows and better than 15 fps on Linux. I am thinking that it is not using hardware rendering, but I am not sure. Is there a way to verify whether a project is using hardware rendering?

Check these functions to get some information about your implementation.

glGetString(GL_VENDOR);
glGetString(GL_RENDERER);
glGetString(GL_VERSION);
glGetString(GL_EXTENSIONS);

I put those calls in the code, but the output doesn’t seem to indicate one way or the other whether hardware rendering is being used.

Those commands return strings that describe your renderer. You’d need to check their output to see what they report.

To get the string in C++ you’d do something like this:


std::string renderer(
    reinterpret_cast<const char*>(glGetString(GL_RENDERER)));

…and similarly for the others. If you have software rendering, the renderer string will probably be “GDI Generic”. Otherwise, if the vendor string contains NVIDIA or ATI, you have acceleration.
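
Put together, a check can look roughly like this (the helper name is just an example; it assumes a GL context is already current):

#include <windows.h>   // needed before GL/gl.h on Windows
#include <GL/gl.h>
#include <iostream>
#include <string>

// Rough example: print the strings and flag Windows' software renderer.
bool looksLikeSoftwareRendering()
{
    std::string vendor  (reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::string renderer(reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::string version (reinterpret_cast<const char*>(glGetString(GL_VERSION)));

    std::cout << "GL_VENDOR:   " << vendor   << "\n"
              << "GL_RENDERER: " << renderer << "\n"
              << "GL_VERSION:  " << version  << "\n";

    // "GDI Generic" is what Windows' built-in software implementation reports.
    return renderer.find("GDI Generic") != std::string::npos;
}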

I am getting “GDI Generic”, so software rendering it is. Now to figure out how to get hardware rendering to work.

Thank you very much for the help.

For that you have to know your OS version and the video accelerator model. Then download and install the appropriate graphics drivers (not the ones suggested by Windows autodetection).
That’s all.

It can be more tricky on laptops.

OK so I found some code that pointed me a bit further:

int pf = ChoosePixelFormatEx(hdc, &bpp, &depth, &dbl, &acc);
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;

// Try to set the pixel format.
if (!SetPixelFormat(hdc, pf, &pfd))
{
    TRACE0("Can't Set The PixelFormat for OPENGL !!!\n");
    ASSERT(0);
    return FALSE;
}

// See if we are indeed accelerated.
DescribePixelFormat(hdc, pf, pfd.nSize, &pfd);
// Driver type /////////////////////////////////
if (0 != (pfd.dwFlags & PFD_GENERIC_FORMAT))
{
    TRACE1(" PD: %d ************** Generic Software ******************\n", pf);
}
else
{
    if (0 != (pfd.dwFlags & PFD_GENERIC_ACCELERATED))
        TRACE1(" PD: %d ************************* MCD ACCELERATED **************\n", pf);
    else
        TRACE1(" PD: %d **************** ICD ACCELERATED ************************\n", pf);
}

My results were: ICD ACCELERATED

Must be something about how it is initialized, I am guessing. OpenGL Extensions Viewer is showing my ATI card.

What parameters do you use for ChoosePixelFormatEx?
This is quite old (“many cannot accelerate when the desktop is at 32bpp in high-resolution”) but may help:
http://www.wischik.com/lu/programmer/wingl.html#accelerated

That is where I started. I used the same parameters that are specified on that page.

Check GLenum to see what parameters need to be set on the PIXELFORMATDESCRIPTOR structure to get an accelerated pixel format.
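
If it helps, a rough sketch like this (the function name is mine) walks every format DescribePixelFormat reports for the DC and shows which ones are accelerated, using the same flag test as your code above:

#include <windows.h>
#include <cstdio>

// List every pixel format on the device context and report its driver type.
void ListPixelFormats(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    // DescribePixelFormat returns the highest pixel format index for this DC.
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);

    for (int i = 1; i <= count; ++i)
    {
        if (!DescribePixelFormat(hdc, i, sizeof(pfd), &pfd))
            continue;
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL) || !(pfd.dwFlags & PFD_DRAW_TO_WINDOW))
            continue;   // only formats usable for OpenGL window rendering

        const char* type = "ICD accelerated";
        if (pfd.dwFlags & PFD_GENERIC_FORMAT)
            type = (pfd.dwFlags & PFD_GENERIC_ACCELERATED) ? "MCD accelerated"
                                                           : "generic software";

        printf("format %2d: color %2d  depth %2d  %s\n",
               i, pfd.cColorBits, pfd.cDepthBits, type);
    }
}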

That link appears to be broken. Anyone have another link to it?

The program linked above just WorksForMe ™.

hmmm…

I get “page cannot be found”.

nice… I switched to my Linux box and the link worked.

You might want to check your <Windows folder>\system32\drivers\etc\hosts … maybe microsoft.com was redirected to something else …

So the pixel format that ChoosePixelFormatEx returns shows up as an ICD accelerated type. There are approximately 26 accelerated types. Should I just try all the types, filling the structure for SetPixelFormat?

Yes… You can try to initialise the structure before the Choose/SetPixelFormat calls… Example…


	PIXELFORMATDESCRIPTOR pfd =			// pfd Tells Windows How We Want Things To Be
	{
		sizeof (PIXELFORMATDESCRIPTOR),		// Size Of This Pixel Format Descriptor
		1,					// Version Number
		PFD_SWAP_EXCHANGE |			// Prefer Swapping Buffers By Exchange
		PFD_DRAW_TO_WINDOW |			// Format Must Support Window
		PFD_SUPPORT_OPENGL |			// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,			// Must Support Double Buffering
		PFD_TYPE_RGBA,				// Request An RGBA Format
		32,					// Select Our Color Depth
		0, 0, 0, 0, 0, 0,			// Color Bits Ignored
		8,					// 8-Bit Alpha Buffer
		24,					// Alpha Shift Bit Ignored
		0,					// No Accumulation Buffer
		0, 0, 0, 0,				// Accumulation Bits Ignored
		24,					// 24-Bit Z-Buffer (Depth Buffer)
		0,					// No Stencil Buffer
		0,					// No Auxiliary Buffer
		PFD_MAIN_PLANE,				// Main Drawing Layer
		0,					// Reserved
		0, 0, 0					// Layer Masks Ignored
	};
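
Then, roughly (just a sketch with the error handling trimmed; hdc is assumed to be your window’s device context), the descriptor feeds the usual Choose/SetPixelFormat and context creation:

int pf = ChoosePixelFormat(hdc, &pfd);          // ask Windows for the closest match
if (pf == 0 || !SetPixelFormat(hdc, pf, &pfd))
    return FALSE;

// Re-read the format we actually got and check the acceleration flags again.
DescribePixelFormat(hdc, pf, sizeof(pfd), &pfd);
if (pfd.dwFlags & PFD_GENERIC_FORMAT)
    TRACE0("Still on the generic (software) implementation\n");

HGLRC hrc = wglCreateContext(hdc);              // create and activate the GL context
wglMakeCurrent(hdc, hrc);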


Well, I generally gave up on trying to get hardware rendering using the Win32 version of the GTK OpenGL interface. I converted the implementation over to GLUT and I am getting hardware rendering. The frame rate problem ended up being related to the Unix gettimeofday function, which has no direct Win32 equivalent. Once I wrote a Win32 function that did the same thing, my frame rates jumped up to something similar to the Linux frame rates.
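
For anyone else hitting this, a Win32 stand-in for gettimeofday can be built on QueryPerformanceCounter; a rough sketch (not my exact code) looks like this:

#include <windows.h>

// Returns elapsed seconds since the first call, with sub-millisecond
// resolution from the high-frequency performance counter.
double ElapsedSeconds()
{
    static LARGE_INTEGER freq  = {};
    static LARGE_INTEGER start = {};
    if (freq.QuadPart == 0)
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return double(now.QuadPart - start.QuadPart) / double(freq.QuadPart);
}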

Thanks for all the help.
