Hello,
Thanks guys. I’m using an NVIDIA GeForce GTS 250 in TwinView mode on a newish (circa last summer) HP with 8GB RAM and generally ‘high end’ specs, and I’m running my program under Ubuntu 9.10. As for hardware acceleration, I’m not sure. The GTS is supposed to have good OpenGL support, and the first chunk of my glxinfo output suggests to me that things are pretty well enabled:
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control,
GLX_EXT_texture_from_pixmap, GLX_ARB_create_context, GLX_ARB_multisample,
GLX_NV_float_buffer, GLX_ARB_fbconfig_float, GLX_EXT_framebuffer_sRGB
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync,
GLX_NV_swap_group, GLX_NV_video_out, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGI_swap_control, GLX_ARB_create_context, GLX_NV_float_buffer,
GLX_ARB_fbconfig_float, GLX_EXT_fbconfig_packed_float,
GLX_EXT_texture_from_pixmap, GLX_EXT_framebuffer_sRGB,
GLX_NV_present_video, GLX_NV_multisample_coverage
GLX version: 1.3
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control,
GLX_EXT_texture_from_pixmap, GLX_ARB_create_context, GLX_ARB_multisample,
GLX_NV_float_buffer, GLX_ARB_fbconfig_float, GLX_EXT_framebuffer_sRGB,
GLX_ARB_get_proc_address
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTS 250/PCI/SSE2
OpenGL version string: 3.0.0 NVIDIA 185.18.36
OpenGL shading language version string: 1.30 NVIDIA via Cg compiler
About 100 OpenGL extensions are also listed. (Dark Photon, this answers your question.)
My viewport fills the entire screen at 1920x1200, on one of two same-sized monitors hooked up to the card. Civ 4, which is vastly more graphically demanding than what I’m doing, plays completely smoothly under Windows.
I was not using vsync when I made my original post, and when I tried enabling it via the NVIDIA settings panel it didn’t seem to make much difference (once I removed the glFinish() call). I only had a few frame drops running it just now, but that’s still more than zero, so I don’t think the problem is solved (and the next run might show more). I am double buffering.
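In case toggling vsync in the settings panel isn’t reliable, I’m thinking of requesting the swap interval from code as well. This is just an untested sketch on my part of the GLX_SGI_swap_control path (that extension shows up in my glxinfo output above); the helper name is made up:

#include <string.h>
#include <GL/glx.h>   /* pulls in Xlib and the GL types */

/* Hypothetical helper: ask for one vertical retrace per buffer swap.
   Assumes a GLX context is already current on this display/screen.
   Returns 0 on success, -1 if the extension or entry point is missing. */
typedef int (*swap_interval_sgi_fn)(int interval);

static int request_vsync(Display *dpy, int screen)
{
    const char *exts = glXQueryExtensionsString(dpy, screen);
    if (!exts || !strstr(exts, "GLX_SGI_swap_control"))
        return -1;                            /* extension not advertised */

    swap_interval_sgi_fn swap_interval =
        (swap_interval_sgi_fn)glXGetProcAddressARB(
            (const GLubyte *)"glXSwapIntervalSGI");
    if (!swap_interval)
        return -1;                            /* entry point not found */

    return swap_interval(1) == 0 ? 0 : -1;    /* 1 = sync to every retrace */
}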
As for my window manager, I’m using GNOME with all the desktop effects turned off. Interestingly, my system can’t seem to handle these effects: when I turn them on, the taskbars disappear and the effects revert to ‘none’ after a few seconds.
As a last note, putting these 40+ rings on the screen, which amounts to little more than 40+ calls to gluDisk, takes 6-7 ms, which seems like a really long time. On the other hand, I can render a full-screen texture, consisting of a single small 1D texture repeated ~50,000 times, in less than 5 ms per frame. I don’t understand this at all. Maybe there’s just some gross inefficiency in my code in the former case, but I don’t think so.
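One thing I’m going to try is compiling the disk geometry into a display list once and just calling the list each frame, so gluDisk isn’t re-tessellating every ring every frame. A rough sketch of what I have in mind (the radii and slice count are placeholders, not my actual values):

#include <GL/gl.h>
#include <GL/glu.h>

/* Build the ring geometry once at startup. */
static GLuint build_ring_list(double inner_radius, double outer_radius)
{
    GLUquadric *quad = gluNewQuadric();
    GLuint list = glGenLists(1);

    glNewList(list, GL_COMPILE);
    gluDisk(quad, inner_radius, outer_radius, 64 /* slices */, 1 /* loops */);
    glEndList();

    gluDeleteQuadric(quad);   /* the tessellation is baked into the list now */
    return list;
}

/* Then per frame, for each of the 40+ rings:
       glPushMatrix();
       glTranslated(x, y, 0.0);
       glCallList(ring_list);
       glPopMatrix();
*/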
Anyway, thanks for your consideration!
Matt