Hardware acceleration of OpenGL picking

Not exactly a programming question but… when is GL_SELECT hardware accelerated?

I have huge selection lag in Maya, the 3D modeling program, which uses OpenGL picking to select objects. I have tried a Radeon 4670, a FirePro V4800 and an Nvidia GTX 285.

The lag is several seconds even on the FirePro, which is a workstation card certified for Maya.

Are there things that can prevent the driver from using the hardware during GL_SELECT rendering?

I have tried every driver and setting I could… any help would be greatly appreciated.

Thanks.

Mike

I’ve never heard of any implementation of GL_SELECT being hardware accelerated.
That probably explains why it was dropped from the OpenGL 3.x/4.x core profile.

Thanks, but I’d be very surprised if that were the case, since many CAD applications, including the current Maya 2011, use it. It makes no sense that such an important part of these programs, interacting with what you are designing, should be doomed to lag because picking isn’t hardware accelerated at all. At the very least the workstation cards (like my FirePro) should be able to do it; it seems like such a basic requirement for CAD.

There is a simple way to test that: I know for a fact that some people get zero lag when interacting with a 50x50x50 polygon cube, whereas others like me get multi-second lag. It definitely seems driver-related, because several people have reported dramatic drops in GL_SELECT performance after a driver upgrade, e.g. http://communities.intel.com/thread/13212 “OpenGL selection picking (GL_SELECT) appears broken in latest Intel drivers”: they develop a CAD application and saw object selection slow down dramatically after updating.
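To make the test concrete, here is a minimal sketch of the kind of harness I mean (my own throwaway code, not Maya’s): it times a single GL_SELECT pass over a grid of cubes, using legacy OpenGL and GLUT. The grid size, camera and pick-region size are arbitrary choices; if the printed time is in the multi-second range on hardware that renders the same scene smoothly, that would point at a software fallback in the selection path.

/* Time one GL_SELECT pass over a grid of cubes (legacy OpenGL + GLUT). */
#include <GL/glut.h>
#include <GL/glu.h>
#include <stdio.h>

#define GRID 20                             /* 20*20*20 = 8000 pickable cubes */
static GLuint selectBuf[4 * GRID * GRID * GRID];

static void drawCubes(int selectMode)
{
    int i, j, k, name = 0;
    for (i = 0; i < GRID; ++i)
        for (j = 0; j < GRID; ++j)
            for (k = 0; k < GRID; ++k) {
                if (selectMode)
                    glLoadName(name++);     /* one name per cube */
                glPushMatrix();
                glTranslatef(i * 2.0f - GRID, j * 2.0f - GRID, k * 2.0f - GRID);
                glutSolidCube(1.0);
                glPopMatrix();
            }
}

static void pick(int x, int y)
{
    GLint viewport[4], hits;
    int t0, t1;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(sizeof(selectBuf) / sizeof(selectBuf[0]), selectBuf);
    glRenderMode(GL_SELECT);
    glInitNames();
    glPushName(0);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* 5x5 pixel pick region around the cursor (GL origin is bottom left) */
    gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), 5.0, 5.0, viewport);
    gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 1.0, 500.0);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -80.0f);

    t0 = glutGet(GLUT_ELAPSED_TIME);
    drawCubes(1);
    glFinish();                             /* make sure queued work is done */
    hits = glRenderMode(GL_RENDER);         /* end selection, get hit count */
    t1 = glutGet(GLUT_ELAPSED_TIME);

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    printf("GL_SELECT pass: %d hits in %d ms\n", (int)hits, t1 - t0);
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -80.0f);
    drawCubes(0);
    glutSwapBuffers();
}

static void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (GLdouble)w / (h ? h : 1), 1.0, 500.0);
    glMatrixMode(GL_MODELVIEW);
}

static void mouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        pick(x, y);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("GL_SELECT timing test");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMouseFunc(mouse);
    glutMainLoop();
    return 0;
}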

Also, if GL_SELECT is deprecated in OpenGL 3+, is it replaced with anything? Workstation applications are an important part of OpenGL’s market; how can they deprecate something as important as 3D object selection and not replace it with anything?

Thanks.

Mike

http://www.opengl.org/wiki/Common_Mistakes#Selection_and_Picking_and_Feedback_Mode
http://www.opengl.org/wiki/Specify_An_Exact_Color
http://www.opengl.org/wiki/Unique_Color_For_Every_Primitive
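Those pages boil down to colour-based picking: render each object once in a unique flat colour, read back the pixel under the cursor, and decode the colour back into an object ID. A rough sketch of the idea (an illustration only; drawObject() and objectCount are placeholders for whatever scene structures the application already has):

#include <GL/gl.h>

extern int objectCount;              /* hypothetical: number of pickable objects */
extern void drawObject(int id);      /* hypothetical: draws object 'id' only */

/* Returns the picked object ID, or -1 if only background is under the cursor. */
int colorPick(int x, int y, int windowHeight)
{
    GLubyte pixel[3];
    int id;

    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER);                    /* dithering would corrupt the ID */

    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);    /* white = "nothing picked" */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (id = 0; id < objectCount; ++id) {
        /* Encode the ID in the low 24 bits of the colour. */
        glColor3ub((GLubyte)(id & 0xFF),
                   (GLubyte)((id >> 8) & 0xFF),
                   (GLubyte)((id >> 16) & 0xFF));
        drawObject(id);
    }

    /* Read the one pixel under the cursor; GL's window origin is bottom left. */
    glReadPixels(x, windowHeight - 1 - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);

    if (pixel[0] == 255 && pixel[1] == 255 && pixel[2] == 255)
        return -1;                           /* hit the background */
    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}

The main caveat, as the “Specify An Exact Color” page explains, is that anything that can alter the written colour (lighting, texturing, dithering, fog, blending, multisampling) has to be disabled, otherwise the decoded ID will be garbage.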

I wouldn’t pay much attention to poor performance on Intel graphics, to be honest. These parts are widely known for universally poor OpenGL performance, and their latest models should be considered as “Vista desktop accelerators” rather than anything else. They do work well with D3D, but that’s mostly by accident (as a consequence of being intended to accelerate the Aero desktop) rather than by design.

That’s more or less one of the reasons why there are different profiles in current OpenGL implementations. It’s well known and well understood that there are different markets for OpenGL, and that the requirements of one market are not always the same as the requirements for another (sometimes they might even pull in opposite directions).

Thanks a lot. It seems really strange that Maya doesn’t follow basic OpenGL guidelines.

Is there a way to dump the OpenGL commands sent by an application to make sure that Maya uses glRenderMode(GL_SELECT)?

Mike

Sure, use GLIntercept: http://glintercept.nutty.org/

It hasn’t been updated for a few years, so it might not play nicely with an OpenGL 3+ context, but it certainly works with applications coded to older specs running on modern hardware.

You can also configure it to log how much time is spent inside each OpenGL function, which is great for spotting where you might be falling back to software emulation.

Wow, that’s great, thanks a lot, will try that.

I have tried GLView http://www.realtech-vr.com/glview/ and, even on the FirePro, when I run the rendering test with a “Compatibility” context/profile it doesn’t render anything, whereas with any of the forward contexts (3.0, 3.2, 3.3, 4.0) it works perfectly.

Does that mean that even the FirePro doesn’t support the compatibility profile? That would explain why GL_SELECT isn’t hardware accelerated: it’s deprecated, so it only exists in the compatibility profile.

I checked with GLIntercept: Maya definitely uses GL_SELECT, which is deprecated and officially discouraged. Hmmm…

glSelectBuffer(60000, …)
glRenderMode(GL_SELECT)

glInitNames()
glPushName(0)

glLoadName(0)
glBegin(GL_TRIANGLES)
glVertex3fv(…
glEnd()

etc…

glLoadName(1)
glBegin(GL_TRIANGLES)
glVertex3fv(…
glEnd()

etc…
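The part I’ve snipped presumably ends the way every GL_SELECT pass has to end: a call to glRenderMode(GL_RENDER), which returns the number of hits, followed by a walk over the hit records the driver wrote into the selection buffer. For reference, that read-back step looks roughly like this (my own sketch, with the buffer sized to match the 60000 entries Maya passes to glSelectBuffer):

#include <GL/gl.h>
#include <stdio.h>

static GLuint selectBuf[60000];      /* the buffer previously given to glSelectBuffer */

void finishPick(void)
{
    GLint  hits = glRenderMode(GL_RENDER);   /* leave selection mode, get hit count */
    GLuint *ptr = selectBuf;
    GLint  i;
    GLuint j;

    /* Each hit record is { name count, min depth, max depth, name, name, ... },
       with the depths scaled to the full 0..2^32-1 range. */
    for (i = 0; i < hits; ++i) {
        GLuint names = *ptr++;
        GLuint zmin  = *ptr++;
        GLuint zmax  = *ptr++;

        printf("hit %d: zmin=%g zmax=%g names:", (int)i,
               zmin / 4294967295.0, zmax / 4294967295.0);
        for (j = 0; j < names; ++j)
            printf(" %u", *ptr++);
        printf("\n");
    }
}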