selection methods

Some people say that selection doesn’t belong in GL, as I’m sure you all know.

The GL selection method returns all the hits in the viewport.

How can the same results be achieved with the color picking method (each object is painted with a unique color)?

Yes, draw each object in a different color (turning off dithering, lighting, etc.),
then glReadPixels + look up the matching object.
Note this is different from selection: selection will return ALL the objects within the region (not just the closest).
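
Roughly, in code (just a sketch - DrawObject and numObjects are placeholder names):

#include <GL/gl.h>

// hypothetical helper: draws one object’s geometry without making any color calls of its own
extern void DrawObject(GLuint id);

void RenderPickingPass(GLuint numObjects)
{
    glDisable(GL_DITHER);       // any of these states would perturb the ID colors
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
    glDisable(GL_FOG);

    for (GLuint id = 0; id < numObjects; ++id) {
        // pack the object ID into the low 24 bits of an RGB color
        glColor3ub((GLubyte)(id & 0xFF),
                   (GLubyte)((id >> 8) & 0xFF),
                   (GLubyte)((id >> 16) & 0xFF));
        DrawObject(id);
    }
}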

That’s my point: color selection is not as powerful as the OpenGL selection.
Or maybe someone has found some efficient way to get around this.
Plus I think that having high color (24 bpp and above) is a requirement. If your scene has 100,000 polygons and you want to give each a unique color …

I’m going to implement color picking anyway since it fits my needs.

V-man

V-Man, can you please teach me or show me where I can get a color picking tutorial? I am doing a CAD program and I would like to add a “snap to” function. I would like to know the fastest way to perform “snap to objects” like AutoCAD does. Thanks

[This message has been edited by chengseesin (edited 05-12-2003).]

I don’t know of any tutorial, but zed already described the method.

I do something like this after rendering the scene

GLuint objectID = 0;
// read the single pixel under the cursor (GL’s window origin is the
// bottom-left corner, so yPosition is usually windowHeight - mouseY - 1)
glReadPixels(xPosition, yPosition, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, &objectID);
objectID &= 0x00FFFFFF; // keep the low 24 bits (assumes little-endian byte order)

The last line masks out the alpha because I don’t always ask for an alpha bitplane.

“Snap to” is an algorithm that searches for the closest object to the mouse cursor.
That involves projecting your “stuff” using gluProject and then finding the closest match to the mouse cursor.
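
Something along these lines (a sketch - the packed vertex array and mouse coordinates are placeholders):

#include <GL/glu.h>
#include <cfloat>

// verts is a packed xyz array; returns the index of the vertex nearest
// the cursor in window space, or -1 if nothing projected successfully
int SnapToNearestVertex(const GLdouble *verts, int count,
                        double mouseX, double mouseY)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    int best = -1;
    double bestDist = DBL_MAX;
    for (int i = 0; i < count; ++i) {
        GLdouble wx, wy, wz;
        if (!gluProject(verts[3*i], verts[3*i+1], verts[3*i+2],
                        model, proj, view, &wx, &wy, &wz))
            continue;
        // GL window coordinates start at the bottom-left, so flip the mouse Y
        double dx = wx - mouseX;
        double dy = wy - (view[3] - mouseY);
        double d  = dx*dx + dy*dy;
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}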

Look! I’ve got this rasterization nail! Why don’t I use that for solving my scene graph problem?

You need a model of your geometry in-program. You need queries that quickly resolve whatever you are interested in knowing, such as “what triangles are hit by this ray” or “what vertices are near this cone” or whatever.

Rasterizing to get this information is inefficient and not likely to help you once you realize you also need to find the normal of the point you hit, because that’s the direction the user wants to move the object, or whatever.
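
For instance, “what triangles are hit by this ray” is one ray-triangle test per candidate triangle - the standard Möller-Trumbore test, roughly like this (all names here are placeholders):

#include <math.h>

static void  sub(float r[3], const float a[3], const float b[3])
{ r[0] = a[0]-b[0]; r[1] = a[1]-b[1]; r[2] = a[2]-b[2]; }
static void  cross(float r[3], const float a[3], const float b[3])
{ r[0] = a[1]*b[2]-a[2]*b[1]; r[1] = a[2]*b[0]-a[0]*b[2]; r[2] = a[0]*b[1]-a[1]*b[0]; }
static float dot(const float a[3], const float b[3])
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// returns the ray parameter t at the hit point, or a negative value on a miss
float RayTriangle(const float orig[3], const float dir[3],
                  const float v0[3], const float v1[3], const float v2[3])
{
    float e1[3], e2[3], p[3], t[3], q[3];
    sub(e1, v1, v0);
    sub(e2, v2, v0);
    cross(p, dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < 1e-8f) return -1.0f;   // ray parallel to the triangle
    float inv = 1.0f / det;
    sub(t, orig, v0);
    float u = dot(t, p) * inv;              // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return -1.0f;
    cross(q, t, e1);
    float v = dot(dir, q) * inv;            // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    return dot(e2, q) * inv;                // distance along the ray
}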

jwatte is right, but I find it’s still good when you are dealing with large screen resolutions. Suppose you want to get every object that is directly visible to you? Casting rays can be extremely slow at times, whereas grabbing the screen and looking up color values in that buffer is a lot faster. In general, I would say to use the color picking to find your object first. Once you have that object, you can get the normal through your regular routines, etc…

Originally posted by lxnyce:
In general, I would say to use the color picking to find your object first. Once you have that object, you can get the normal through your regular routines, etc…

Forgive me, but isn’t what you just said nonsense? i.e. “get the normal through your regular routines” - well, that involves a ray-to-triangle test for every triangle in the selected object, doesn’t it? Or are you colour coding every single triangle in every single object - which would involve using per-vertex colours, which in turn increases bandwidth usage and makes for very messy code?
My point is, if you’re going to have to do the ray-tri intersection tests for one object anyway, then why not add on the very cheap cost of a few ray-box intersection tests (sketched at the end of this post) to narrow down the potentially intersected set of objects, without this colour coded render cr*p?
I’ve seen colour coded selection and gl selection before in fellow workers’ projects, and it… well… disgusted me then, and disgusts me now (strong words, I know - but I’m a passionate orange). You already have the information you require in nice and fast system memory, hopefully structured properly - why oh why would you want to send it all through the render pipe to find something as simple as an intersection?
You will have to confront this problem sooner or later when you add collision detection…make it sooner rather than later.
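
For the record, the ray-box test I mean is just the standard “slab” test - something like this sketch (names are placeholders):

#include <math.h>

// orig/dir describe the ray, lo/hi the axis-aligned box corners
int RayHitsBox(const float orig[3], const float dir[3],
               const float lo[3], const float hi[3])
{
    float tmin = 0.0f, tmax = INFINITY;
    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / dir[axis];   // +/-infinity for a parallel ray is fine in IEEE maths
        float t0  = (lo[axis] - orig[axis]) * inv;
        float t1  = (hi[axis] - orig[axis]) * inv;
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmin > tmax) return 0;      // the slabs don’t overlap: a miss
    }
    return 1;                           // the ray passes through the box
}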

To each their own, I suppose. I just happen to think that being able to color code your rendering exactly as you have it is an ideal way to do selection on any type of geometry (especially since you get it for free if your framework supports it). Sometimes it’s not always about just finding the angle. I am not opposed to tracing rays, and I have code to do that built in also. I use color selection as a fast and easy way to grab exactly what is under the mouse and trigger its event.

Think of the case of a highly tessellated torus. The shape info is not known, as it’s stored as a mesh (for our example). Let’s say it lies directly in front of us so the hole is in the middle of the screen. We will always have to test the ray against every polygon in our torus mesh, because the ray will always hit the bounding box (but not necessarily the mesh). Now imagine a whole line of these objects lined up (so that the holes are still in the center of the screen). Granted, the calculation time might be okay to deal with for only one ray, but imagine if you had to cast a ray for every pixel on your screen. The time it would take would be ridiculous (my opinion). Especially if the torus is just one of many objects you have in that same scene.

Sometimes you just want to find out which models are being hit by your rays quickly. Grabbing the entire screen and processing the color values is way quicker than casting rays for each one of those objects (otherwise we would have more realtime HIGH-RES raytracers). But if you can get away with doing the calculations every frame, go ahead. I just think it’s handy to have a fast method of grabbing exactly what is visible in very complex scenes.

Come again?.. Why on god’s green earth would you be casting a ray for every pixel on the screen?.. I thought the objective was to find out what object was under a single pixel.

If a scene is so complex, surely it is slower to re-render the entire thing for the purposes of intersection testing than it is to perform those tests geometrically in main memory.

And as for shooting through the centre of several tori which are lined up… I wouldn’t think that was very representative of a general case.

[edit] Stopping to think about it some more, the only case I can see where raster-based collision detection may be better is when an object is transformed/animated by vertex programs, so that its exact shape is not easily known before render time.

[This message has been edited by BadMmonkey (edited 05-13-2003).]

A single pixel is only one case. There are many cases where you need to find out what is selected under a certain area on screen (a selection box), or find out exactly what is visible on screen. In my case, I run into screen grabs frequently. I don’t know what you’re rendering, but seeing as how you have already pushed everything to the hardware to render it, why sit and throw rays for every pixel on the screen (which will get really slow) when you have everything already rendered? I only use this method (sketched below) to determine which objects the user has selected under a 2D box on screen. Because I use a hierarchical approach, I can group together stuff (models) or even go down to their polygon level pretty easily. It’s as simple as throwing a flag, which handles all the coloring for me.
Have any of you done this type of selection with the ray casting method? I would just like to know how fast it is to cast a ray for every pixel on screen in a complex scene to determine what objects are visible. I am not talking about doing 1 or even 20 casts, which would seem quick (for games). Let’s say it only took 0.0001 secs for one ray cast. At a small resolution of 800x800, that is 640,000 casts, or ~64 seconds to do it for every pixel on screen. To grab the frame buffer on my GeForce4mx, it’s only ~2 secs. Please keep in mind there is no telling how complex the geometry can be. I use a lot of parametric objects and combinations of those to build new objects. Everything can use transforms, and almost nothing is static.
If you are still convinced that it would be best to do it with ray casting, I would just like to see what results you got for doing it for every pixel on screen. If you are writing an FPS or something similar, most likely you’re also using a BSP tree with a PVS and have few models visible. You might (weak possibility) be able to get away with casting rays for every pixel in this case, but now you are dealing with specialized cases. My environment is completely dynamic and can contain thousands of objects visible at once. I am not saying that ray casting is bad; I am just saying that color selection is not as bad as you are making it out to seem.
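
For what it’s worth, the readback I keep describing is roughly this (the function name is made up, and it assumes the scene was just rendered with flat ID colors):

#include <GL/gl.h>
#include <set>
#include <vector>

// returns the set of object IDs visible inside the selection rectangle
std::set<GLuint> ObjectsInRect(GLint x, GLint y, GLsizei w, GLsizei h)
{
    std::vector<GLuint> pixels((size_t)w * h);
    glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    std::set<GLuint> ids;
    for (GLuint p : pixels)
        ids.insert(p & 0x00FFFFFF);     // strip the alpha byte, as in V-man’s snippet
    return ids;
}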

Well, if you’re talking about pixels within a rectangle on the screen, then obviously you wouldn’t ray cast for every pixel, you would box-cast, which would involve one traversal of your scene. You’d get results way faster than 2 seconds for even a hugely complicated scene.

You would still have to cast a ray for every pixel within your “box” frustum to see which ones got hit. You only want the visible objects. If that “box” were the size of the screen, you would still have to throw a ray for every pixel on the screen. You can’t just create a new frustum and grab all the elements inside of it.

I don’t know if it’s the right assumption, but I assumed “box cast” meant creating a new 3D frustum from the rectangle coordinates. If this is not what you meant, I would like to know more, please.

[This message has been edited by lxnyce (edited 05-14-2003).]

What do you want to know?
What objects are inside a selected rectangular region?
Or what objects have been touched by the mouse pointer?

I’m a bit puzzled as to what you want to select in this hypothetical scenario you’ve set up…

The goal is not to find out which objects are inside the frustum, but which objects are directly visible (touched by the mouse within this frustum). A better way to put it: only the objects that the Z values in the z-buffer belong to.

With this, you can do a direct select of all the objects selected by the user. Take, for example, the case of a 3D editor. You would want to grab all the visible faces to do some calculation or apply some texture. To do so, you would have to shoot a ray for every pixel on the screen (depending on the size of the box). Editors are usually run at high resolutions, so you can easily reach 1000x1000 raycasts just for a particular region on your screen.

Another reason you would want to do this, is for accelerated first hit raytracing. I know some people are against the idea of using hardware for raytracing, but if you have it and can get away with it, you might as well.

I am also experimenting with this to automatically calculate hidden areas (a PVS) for geometry that’s not indoor based. You can decide which nodes in an octree are visible by rendering a structure as seen from outside, tagging each polygon with the color of whichever octree node it’s in. If you render the 6 panoramic views, you can read in all the color values and know which octree nodes are visible from the outside. You can later use this information as a sort of LOD by only rendering those nodes when you are outside of the structure. It’s not the greatest, but you should be able to gain a huge speed increase for complex indoor geometry mixed with an outdoor environment. All processed in realtime, thanks to color selection.
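
Roughly, as a sketch of the idea (RenderSceneColored and the visible[] array are made-up names):

#include <GL/gl.h>
#include <vector>

// hypothetical helper: draws every octree node in its ID color for one of
// the six axis-aligned views
extern void RenderSceneColored(int view);

void MarkVisibleNodes(int viewW, int viewH, bool *visible)
{
    std::vector<GLuint> buf((size_t)viewW * viewH);
    for (int view = 0; view < 6; ++view) {
        RenderSceneColored(view);
        glReadPixels(0, 0, viewW, viewH, GL_RGBA, GL_UNSIGNED_BYTE, buf.data());
        for (GLuint p : buf)
            visible[p & 0x00FFFFFF] = true;   // this node reached the screen
    }
}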

All I am saying is that raycasting has its benefits, but color selection has its own benefits too.

Originally posted by lxnyce:
The goal is not to find out which objects are inside the frustum, but which objects are directly visible (touched by the mouse within this frustum).

So depth sort your objects when you are doing your ray casting.

I’ll go for ray casting over pushing my entire geometry set, with a different colour per poly/tri to figure out what tri/vert/object is being clicked on.

If you want to do “Select the nearest vert” type functionality you are stuffed if you use colour selection.