Problem while using NVIDIA card

Hi,
My program loads a small model (.ms3d) in a GL window. I am using VC++ 6 to write my code. With the on-board graphics card I get a good frame rate, but when I install the Quadro FX 1300 card on my machine along with the proper driver, I get a lower frame rate. I am not able to find out the problem. Is there anything I have to add to my code to utilize the graphics card, or is it something else? Please help.

Perhaps there’s something unsupported about the framebuffer format or other features you’re using, forcing you to fall back to software rendering.

After you make the context current (wglMakeCurrent), what does the vendor string you query say?

Have you tried enumerating pixel formats?

Dear dorbie,
Thanks for your reply. I am sending the pixel format structure I have used in my program. Please go through it.

static PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),  // Size of this structure
    1,                              // Version number
    PFD_DRAW_TO_WINDOW |            // Format must support a window
    PFD_SUPPORT_OPENGL |            // Format must support OpenGL
    PFD_DOUBLEBUFFER,               // Format must support double buffering
    PFD_TYPE_RGBA,                  // Request an RGBA format
    32,                             // Color depth
    0, 0, 0, 0, 0, 0,               // Color bits ignored
    0,                              // No alpha buffer
    0,                              // Shift bit ignored
    0,                              // No accumulation buffer
    0, 0, 0, 0,                     // Accumulation bits ignored
    16,                             // 16-bit Z-buffer (depth buffer)
    0,                              // No stencil buffer
    0,                              // No auxiliary buffer
    PFD_MAIN_PLANE,                 // Main drawing layer
    0,                              // Reserved
    0, 0, 0                         // Layer masks ignored
};

Unfortunately, I didn’t understand the point you were trying to make with the line “After you make the context current (wglMakeCurrent), what does the vendor string you query say?”, so please explain it a little more.

I am sending you the code showing how I attach the device context to the rendering context.

(declared global)
HDC hDC = NULL;
HGLRC hRC = NULL;

In the window creation section:
hDC = GetDC(hWnd);                              // hWnd - handle to the newly created window
int PixelFormat = ChoosePixelFormat(hDC, &pfd); // ChoosePixelFormat returns an int index (0 on failure)
SetPixelFormat(hDC, PixelFormat, &pfd);
hRC = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRC);

Waiting for your valuable comments.
With warm regards,

What’s your onboard graphics card? Which one is more powerful?

Also, try to answer what Dorbie asked you; otherwise we’ll only be able to make assumptions.

What does glGetString(GL_VENDOR) return? (That’s what he asked.)
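
Something like this, right after wglMakeCurrent (just a sketch; if the strings come back as “Microsoft Corporation” / “GDI Generic”, you are on the software renderer rather than the Quadro):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

/* Call this once the rendering context is current (i.e. after wglMakeCurrent). */
void PrintGLInfo(void)
{
    printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));
}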

And if I’m not wrong, 16-bit z-buffers are out of date.

It might be the 16-bit z-buffer combined with 32-bit RGBA color forcing you into a software fallback.

Ask for a 1-bit z-buffer; it’s the secret sauce for requesting the largest supported z-buffer.

The better way is to enumerate all the pixel formats, query their parameters, and pick the one you like from the list.

Get the vendor string and see what it says for the visual you are getting.
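
Something along these lines, assuming you already have a valid device context (a rough sketch; DescribePixelFormat returns the highest pixel format index, and a format with PFD_GENERIC_FORMAT set is Microsoft’s generic implementation rather than the NVIDIA ICD):

#include <windows.h>
#include <stdio.h>

/* List every pixel format exposed for this DC and flag the hardware (ICD) ones. */
void ListPixelFormats(HDC hDC)
{
    PIXELFORMATDESCRIPTOR pfd;
    int count = DescribePixelFormat(hDC, 1, sizeof(pfd), &pfd); /* highest format index */
    int i;

    for (i = 1; i <= count; ++i)
    {
        DescribePixelFormat(hDC, i, sizeof(pfd), &pfd);

        /* skip formats that can't do OpenGL in a window at all */
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL) || !(pfd.dwFlags & PFD_DRAW_TO_WINDOW))
            continue;

        printf("format %3d: color %2d, depth %2d, stencil %d, %s\n",
               i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits,
               (pfd.dwFlags & PFD_GENERIC_FORMAT) ? "generic/software" : "ICD (hardware)");
    }
}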

tukitaki,
I wrote a very simple program for Windows to query OpenGL info. glinfo.zip
It writes useful info and the details about all supported modes into a file, glinfo.txt.

Please refer to “setPixelFormat()” and “findPixelFormat()” in my source code.

findPixelFormat() will look through all the supported modes and accumulate a score for each based on the expected pixel format, in order to find the best mode for your application.
(you may easily extend these two functions, for instance, by adding stencil buffer bits and depth buffer bits)
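
In rough outline the scoring works like this (a simplified sketch, not the exact code in glinfo.zip, and the weights here are arbitrary):

#include <windows.h>

/* Score every pixel format against what we want and return the best index (0 = none usable). */
int FindBestPixelFormat(HDC hDC, int wantColor, int wantDepth, int wantStencil)
{
    PIXELFORMATDESCRIPTOR pfd;
    int count = DescribePixelFormat(hDC, 1, sizeof(pfd), &pfd); /* highest format index */
    int best = 0, bestScore = -1, i;

    for (i = 1; i <= count; ++i)
    {
        int score = 0;
        DescribePixelFormat(hDC, i, sizeof(pfd), &pfd);

        /* hard requirements */
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL) ||
            !(pfd.dwFlags & PFD_DRAW_TO_WINDOW) ||
            !(pfd.dwFlags & PFD_DOUBLEBUFFER)   ||
            pfd.iPixelType != PFD_TYPE_RGBA)
            continue;

        /* accumulate a score: prefer hardware (ICD), then the requested bit depths */
        if (!(pfd.dwFlags & PFD_GENERIC_FORMAT)) score += 1000;
        if (pfd.cColorBits   >= wantColor)       score += 100;
        if (pfd.cDepthBits   >= wantDepth)       score += 100;
        if (pfd.cStencilBits >= wantStencil)     score += 10;

        if (score > bestScore) { bestScore = score; best = i; }
    }
    return best;
}

The index it returns can then be passed to SetPixelFormat() in place of the one from ChoosePixelFormat().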

BTW, any GL call, including the glGet*() functions, will fail if the OpenGL rendering context has NOT been created yet.

On Windows, the context is initialized by wglCreateContext() and wglMakeCurrent(), so OpenGL functions must be called only after that.
==song==

Dear friends
I have solved the problem from the NVIDIA settings. I switched off the vertical sync property in the advanced settings and now get an FPS almost 6 times that of my onboard graphics. Is there anything I can do from my code to solve this problem?

With regards,
tukitaki

Yes, this extension is used to switch SwapBuffers’ vsync behaviour on Windows:
http://oss.sgi.com/projects/ogl-sample/registry/EXT/wgl_swap_control.txt
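
For example (a sketch; the function pointer must be fetched after a context is current, and it will be NULL if WGL_EXT_swap_control is not exposed):

#include <windows.h>
#include <GL/gl.h>

/* Declared here so you don't need wglext.h. */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

/* interval 0 = vsync off, 1 = lock SwapBuffers to the monitor refresh. */
void SetVSync(int interval)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(interval);
}

Calling SetVSync(0) once after wglMakeCurrent should give you the same uncapped frame rate you got from the driver setting.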

He he, little did I realize that the frame rate went from refresh speed to ludicrous speed.

Vsync is desirable IMHO, and you should leave users the option of running your software locked to the refresh rate through their desktop settings.