Bundle Orientation library

Hello,

I’m trying to write an optical flow application, and I need to compute the relative orientation between images (webcam stream). Writing that from scratch is beyond my math skills, so I thought there should be some library that does it. After searching the web I didn’t find anything, but perhaps someone here knows a good one. If so, I’d really appreciate any suggestion.

Thanks.

Check OpenCV.

I already checked that. In fact, I’m using it for webcam access and optical flow calculations (later I’ll replace that part with a GPU-ized KLT algorithm), but OpenCV lacks a bundle adjustment method. What I really need, in the end, are the camera matrices, i.e. their rotations and translations. (The 3D feature positions are easy to compute, and I already have that part done.)
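For context, the tracking I mention is the standard OpenCV pyramidal KLT; a minimal sketch of that part (simplified, with made-up parameter values, in the style of OpenCV’s lkdemo sample):

```c
/* Track corners from a previous frame into the current one with
   pyramidal Lucas-Kanade (OpenCV C API). Frame grabbing is omitted;
   prev and curr are 8-bit grayscale images of the same size. */
#include <cv.h>

#define MAX_FEATURES 400

void track_frame_pair(IplImage* prev, IplImage* curr)
{
    CvPoint2D32f prev_pts[MAX_FEATURES], curr_pts[MAX_FEATURES];
    char status[MAX_FEATURES];
    int count = MAX_FEATURES;

    IplImage* eig      = cvCreateImage(cvGetSize(prev), IPL_DEPTH_32F, 1);
    IplImage* tmp      = cvCreateImage(cvGetSize(prev), IPL_DEPTH_32F, 1);
    IplImage* prev_pyr = cvCreateImage(cvGetSize(prev), 8, 1);
    IplImage* curr_pyr = cvCreateImage(cvGetSize(prev), 8, 1);

    /* Pick strong corners in the previous frame... */
    cvGoodFeaturesToTrack(prev, eig, tmp, prev_pts, &count,
                          0.01 /* quality */, 10.0 /* min distance */,
                          NULL, 3, 0, 0.04);

    /* ...and track them into the current frame. status[i] tells whether
       feature i was found; curr_pts holds the new positions. */
    cvCalcOpticalFlowPyrLK(prev, curr, prev_pyr, curr_pyr,
                           prev_pts, curr_pts, count,
                           cvSize(21, 21), 3, status, NULL,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS,
                                          30, 0.01),
                           0);

    cvReleaseImage(&eig);      cvReleaseImage(&tmp);
    cvReleaseImage(&prev_pyr); cvReleaseImage(&curr_pyr);
}
```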

I (almost) understand.
There are some interesting developments detailed here; you need to add some things to cvaux.h to get access to these features (still in dev):
http://opencvlibrary.sourceforge.net/EffortFor3dReconstruction

“All this functions are not declared extern so you can not use them. To make this work you have to put the definitions in the cvaux.h”
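If I understand correctly, that just means copying the prototypes from the implementation files into cvaux.h so they are visible to user code; something like this (the function name below is invented purely for illustration, the real ones are listed on that page):

```c
/* Added to cvaux.h: expose an internal 3D-reconstruction routine.
   "cvSome3DReconstructionFn" is a made-up name for illustration only;
   use the actual prototypes from the wiki page above. */
CVAPI(void) cvSome3DReconstructionFn(const CvMat* image_points,
                                     CvMat* camera_matrix);
```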

EDIT: oh, and by the way, I would be very interested too; please tell me if you find anything.

I just stumbled upon this GPL package for bundle adjustment (I have not tested it yet):
http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html

I’ll test it. If it works, it will serve me for prototyping. Many thanks :slight_smile:

Edit: After playing with it for a while, I can see that the lib depends on LAPACK, BLAS and Fortran. Well, I don’t like that very much, but it will be fine for prototyping, and that’s what I was looking for :slight_smile:

ZBuffer, since you pointed me to that lib, would you be interested in the sample application once I get my first orientation with this library? You deserve it :slight_smile:

Yes please! :slight_smile:

In fact, I have a pet project on the back burner right now: a small turtle robot with a wide-angle camera, controlled via the parallel port, and I am toying with doing SLAM using only monocular vision:

  • when moving forward, I track points using OpenCV, estimating distances by measuring their relative displacement (the basic relation is sketched below)… Currently the reconstructed world is not really recognizable (euphemism), as I assume a “perfectly linear movement with fixed speed”… The “bundle adjustment” techniques seem adequate for getting more robust results.
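For what it’s worth, the relation behind that distance estimate is plain triangulation; a minimal sketch (names are mine, and it assumes an ideal pinhole camera with the inter-frame displacement acting as a stereo baseline):

```c
/* Depth from tracked-feature displacement, assuming an ideal pinhole
   camera and that the motion between frames acts as a stereo baseline.
   Illustrative only, not code from the robot project discussed here. */
double depth_from_displacement(double focal_px,     /* focal length, pixels  */
                               double baseline_m,   /* camera travel, meters */
                               double disparity_px) /* feature shift, pixels */
{
    /* Z = f * B / d : an error in the assumed baseline B (i.e. a wrong
       "fixed speed" guess) scales every depth linearly, which would
       explain an unrecognizable reconstruction. */
    return focal_px * baseline_m / disparity_px;
}
```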

I can put some pictures online if you are interested.

EDIT: this guy has some impressive results on 3D SLAM, but no source code, alas: http://www.robots.ox.ac.uk/~gk/

That stuff is really impressive. I really liked the occlusion examples :slight_smile:

I now work at a digital photogrammetric company. We are developing an application for producing true orthophotos, but the process goes through the digital reconstruction of the building, so I am familiar with photogrammetry algorithms (RANSAC, KLT (on GPU), SIFT, bundle adjustment, etc.), though I’m not an expert in the area.

Now, as a personal project (perhaps for gaming, or for marketing), I’m developing an augmented reality system (without occlusions; that’s why I liked that part of your link :slight_smile: ) that should work with a mid-range webcam. What I want to achieve now is a very low-density 3D reconstruction of an object seen by the camera, plus the camera’s relative positions. My goal is a realistic composition of synthetic 3D objects with the video stream.

Right now I’m in a prototyping phase, where I test algorithms and try to figure out their performance and which ones must (and can) be GPU-ized (please, OpenCL, come soon…)

So, well, when I have a test project correctly oriented with this library, I’ll send it to you.

P.S.: It would be nice to see those pictures of your project :slight_smile:

I managed to compile dgap on my Debian system. It took me some days because the standard Debian LAPACK packages are broken and produce linker errors. Curiously, it compiled perfectly on my Yellow Dog 6 Linux (PS3), but on Debian I had to use the tuned version of LAPACK & BLAS (libatlas3) and change the Makefile accordingly (ah, and the Makefile that comes with dgap was incorrect; I had to fix it).

Now I’ll try to create and orient a project; I’ll send you (ZBuffer) my results when I get them.

That would be great, thanks!
And here are some pictures of my toy: http://www.chez.com/dedebuffer/robotics.html

Feel free to comment/ask more.

Of course, now having a really autonomous toddler eager to “slam” has prevented me from making much progress in the last months…

Hey, I like your robot :slight_smile:
I have a question about hardware; perhaps you can help me.
I’m going crazy searching for a consumer camera with enough quality. I tried several webcam models, from 12€ to 100€, but I had problems with all of them. Either they don’t have enough resolution (I need at least 640x480) or they don’t have enough refresh rate; I mean, the camera only achieves its specified fps with the proprietary application. I wasn’t able to get more than 5 fps at any resolution using the uvcvideo driver.
Can you perhaps tell me a camera model (webcam or not) that would be suitable for my purposes? My target is 640x480 at 30 fps.

Thanks :slight_smile:

I had the same needs, and when I bought it, this one was the best I could find with guaranteed Linux support:
http://www.unibrain.com/Products/VisionImg/tSpec_Fire_i_DC.htm

100% supported under Linux/Mac/Windows, and it lives up to its specs.
Plus I needed a wide FOV (90° horizontal), and could order a wide lens as well.
A bit costly: I paid around 200€ two years ago for the camera board + plastic case + wide lens + FireWire cable + small tripod with standard mount + shipping.
But it is hassle-free, and uses the standard DCAM interface:
http://en.wikipedia.org/wiki/FireWire_camera#Interface

If you want to track points, I suggest you get a pure B&W model; the images are less sexy, but you don’t have to struggle with Bayer RGB interpolation, which is noticeable even in “bw” mode. I got a color cam so that it could also serve as a regular see-distant-people webcam, but it is not the best for robot applications.
An infrared model would have been even better, with some IR LEDs to see in the dark…

By the way, a clarification: does bundle adjustment work by first tracking points of a known geometry? Or is it completely calibration-free?

EDIT: I thought the recent UVC Linux driver provided good support across a range of USB webcams:
http://linux-uvc.berlios.de/
Which webcams did you test?

For bundle adjustment you have to remove the lens distortion, so a minimal calibration is needed (I’m talking about lens calibration). You basically need the focal distance, principal point, CCD dimensions and distortion coefficients (K and P), but using field calibration you can automate and refine this process (look at the Multiple View Geometry book by Hartley & Zisserman; it is REALLY worth it).
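As a concrete illustration, once you have those parameters the undistortion itself is a single call in OpenCV (the numeric values below are placeholders, not a real calibration):

```c
/* Remove lens distortion given intrinsics and distortion coefficients
   (OpenCV C API). All numeric values are placeholders; use your own
   calibration results. */
#include <cv.h>

void undistort_frame(const IplImage* src, IplImage* dst)
{
    /* Intrinsic matrix: focal lengths fx, fy and principal point cx, cy. */
    double K[9] = { 700.0,   0.0, 320.0,
                      0.0, 700.0, 240.0,
                      0.0,   0.0,   1.0 };
    /* Radial (k1, k2) and tangential (p1, p2) distortion coefficients. */
    double D[4] = { -0.2, 0.05, 0.001, -0.001 };

    CvMat intrinsics  = cvMat(3, 3, CV_64FC1, K);
    CvMat dist_coeffs = cvMat(1, 4, CV_64FC1, D);

    cvUndistort2(src, dst, &intrinsics, &dist_coeffs);
}
```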

Bundle adjustment works with arbitrary points. At work we are using manually marked features, but this month we are going to code a semi-automatic way of detecting and pairing features, so they are totally arbitrary.

About bundle adjustment precision: we can reconstruct a complete object (in fact we use this method to reconstruct buildings and heavy industrial boilers) with a precision of 1:10000. For example, for an object that covers the whole photo and is 10 meters wide in reality, we can take measurements with an error of ±1 millimeter, using a 10 or 12 megapixel camera, for instance a Canon EOS 40D or 5D.

Of course, I’m speaking within my limited knowledge of digital photogrammetry. The algorithms I wrote about work well with long-baseline systems. For short baselines, well, I’d probably have to test them and investigate more on SLAM and Parallel Tracking :slight_smile:

About the cam: I already tried the UVC video driver. My camera is supported, but only in a very basic way. I had to write my own frame grabber to test the camera’s capabilities :stuck_out_tongue:

For now I simply did a manual calibration of my cam; I don’t need it to be automatic.
That book looks awesome, but at 65€ I will have to be sure I need it :slight_smile: At least it would teach me the background and vocabulary of the domain.

Bundle adjustment works with arbitrary points. At work we are using manually marked features, but this month we are going to code a semi-automatic way of detecting and pairing features, so they are totally arbitrary.

I did some simple reconstructions with an old demo of PhotoModeler; yes, manual marking is tedious, and sometimes that program refused to converge towards a plausible reconstruction…

we can reconstruct a complete object with a precision of 1:10000

That is very impressive!

The algorithms I wrote about work well with long-baseline systems. For short baselines, well, I’d probably have to test them and investigate more on SLAM and Parallel Tracking :slight_smile:

Indeed, a short baseline means easy automatic correspondence of features, but much less depth precision. Being precise with lots of imprecise data is possible, but good mathematical tools are needed.
Currently I aim at using “particle filters”: a very versatile tool, and even with my moderate math level I (think I) understand the idea sufficiently to start an implementation.
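To make the idea concrete, here is a bare-bones sketch of the filter loop in 1D (predict with a noisy motion model, weight each particle by the measurement likelihood, resample); everything in it is illustrative, not from my robot code:

```c
/* Bare-bones particle filter, 1D state (position along a line).
   Purely illustrative; values and models are made up. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000                     /* number of particles */

static double frand(void) { return rand() / (double)RAND_MAX; }

/* Box-Muller transform: zero-mean Gaussian sample with std-dev sigma. */
static double gauss(double sigma)
{
    double u1 = frand() + 1e-12, u2 = frand();
    return sigma * sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
}

int main(void)
{
    double x[N], w[N], nx[N];
    int i, step;

    for (i = 0; i < N; i++) x[i] = 10.0 * frand(); /* uniform prior on [0,10] */

    double true_pos = 2.0;
    for (step = 0; step < 20; step++) {
        true_pos += 0.5;                      /* robot moves 0.5 m per step */
        double z = true_pos + gauss(0.1);     /* noisy position measurement */

        /* 1. Predict: move each particle by the motion model + noise. */
        for (i = 0; i < N; i++) x[i] += 0.5 + gauss(0.05);

        /* 2. Weight: likelihood of the measurement given the particle. */
        double sum = 0.0;
        for (i = 0; i < N; i++) {
            double d = z - x[i];
            w[i] = exp(-d * d / (2.0 * 0.1 * 0.1));
            sum += w[i];
        }

        /* 3. Resample: draw N new particles with probability w/sum. */
        for (i = 0; i < N; i++) {
            double r = frand() * sum, acc = 0.0;
            int j = 0;
            while (j < N - 1 && (acc += w[j]) < r) j++;
            nx[i] = x[j];
        }
        for (i = 0; i < N; i++) x[i] = nx[i];

        double mean = 0.0;
        for (i = 0; i < N; i++) mean += x[i] / N;
        printf("step %2d: true=%.2f  estimate=%.2f\n", step, true_pos, mean);
    }
    return 0;
}
```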

Then, once all the points are fully determined in 3D, the question is: how to connect them to get surfaces/solids?
Currently I capture 3D “keyframes” every second, apply a 2D Delaunay triangulation on each from the webcam’s point of view, and finally texture the surface with the captured image. After this, I still have the problem of merging all these noisy 3D snapshots together to get something precise.
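In case it helps, the Delaunay step can be done with OpenCV’s planar subdivisions; a minimal sketch of the insertion part (extracting the triangles for texturing, by walking subdiv->edges, is left out):

```c
/* 2D Delaunay triangulation of tracked image points (OpenCV C API).
   Only subdivision setup and point insertion are shown; texturing would
   then iterate over the triangles via subdiv->edges. */
#include <cv.h>

CvSubdiv2D* triangulate_points(const CvPoint2D32f* pts, int count,
                               int width, int height)
{
    /* The subdivision needs a bounding rectangle and backing storage;
       the caller must keep the storage alive while using the result. */
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSubdiv2D* subdiv = cvCreateSubdivDelaunay2D(cvRect(0, 0, width, height),
                                                  storage);
    int i;
    for (i = 0; i < count; i++)
        cvSubdivDelaunay2DInsert(subdiv, pts[i]); /* incremental insertion */

    return subdiv;
}
```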

I worked with PhotoModeler too. It’s a “difficult” application to work with (let’s leave it at that…), but I don’t know a better (or even similar) tool for doing the same thing.

By the way, I found an amazing library for AR (perhaps it could help you, unless you want to code all the math stuff yourself, but it is very good for prototyping). The library is ARToolkit and you can check it here: ARToolkit
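The core loop, from what I remember of its simpleTest sample, looks roughly like this (camera setup and OpenGL drawing are omitted; treat names and values as approximate, and “patt.hiro” is just the sample pattern shipped with the library):

```c
/* Core ARToolKit marker-detection step, in the style of its samples.
   Initialization (arParamLoad/arInitCparam, arLoadPatt) and rendering
   are omitted; patt_id would come from arLoadPatt("Data/patt.hiro"). */
#include <AR/ar.h>

static int    patt_id;                        /* set by arLoadPatt()        */
static double patt_width = 80.0;              /* marker size in millimeters */
static double patt_center[2] = { 0.0, 0.0 };
static double patt_trans[3][4];               /* camera-to-marker transform */

void process_frame(ARUint8* image)
{
    ARMarkerInfo* marker_info;
    int marker_num, i, k;

    /* Detect all marker candidates in the frame (threshold = 100). */
    if (arDetectMarker(image, 100, &marker_info, &marker_num) < 0)
        return;

    /* Keep the highest-confidence match for our pattern. */
    k = -1;
    for (i = 0; i < marker_num; i++)
        if (marker_info[i].id == patt_id &&
            (k == -1 || marker_info[i].cf > marker_info[k].cf))
            k = i;
    if (k == -1)
        return;                               /* marker not visible */

    /* Recover the 3x4 rigid transform from the camera to the marker. */
    arGetTransMat(&marker_info[k], patt_center, patt_width, patt_trans);
}
```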

About DGAP, I’m still doing some tests, but I’m having trouble converting my project data (photos, features) to its format. That’s why this is taking so much time (and I haven’t had much free time these last days…)

About ARToolkit, I did some tests with it, but they were not really conclusive.
I don’t remember if I could not get all the deps on Linux and compile the project, or if the tools did not suit my needs.
I guess I have to give it another try :slight_smile:

I’ve actually had decent results with ARToolkit. I based a good amount of my graduate work on it, including my master’s project, and got it working on various Linux distros as well as IRIX. This was, however, four years ago, so things might have changed in the interim.

CrazyBillyO, can you share some info about what you have done with ARToolkit? Do you have a website, a paper, or source code for it? It looks very interesting.

I believe I do have some papers on some of the things I did. However, I don’t have them with me right now, and I’m not entirely sure where they are located (or even whether I have non-physical copies).

My master’s project involved creating a polygonal approximation of a real-world object by placing markers that ART recognized on the object and panning the camera around it, accumulating and intersecting the surfaces represented by the markers. There was some reasonable success, but the results could have been better; applying filters and improving the camera resolution would have helped.

Another project I worked on attempted to create curved surfaces using the markers. The positions of the markers were used as the extents of a surface (think control points) and the directions of the marker axes were used as initial curve directions. These were funneled into parametric curve/surface equations to generate the inner points.

If I remember and have some spare time, I’ll see if I can scrounge up the papers/documents.

Hello,
After playing with DGAP, this is what I figured out. I tried it (it was difficult because its input data format doesn’t match mine at all), and after losing myself in all that heavy math, I contacted the library’s author. Basically, I now know that this library can be used to perform an orientation, but it requires an initial estimate of the camera positions as input. I mean, it can be used to refine initial guesses, but it doesn’t compute the total orientation.
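From what I understand, this is inherent to how bundle adjustment is defined: it is a nonlinear least-squares refinement of the reprojection error (solved iteratively, e.g. with Levenberg-Marquardt), so the solver needs a starting point reasonably close to the solution. In the Hartley & Zisserman notation, it minimizes over the cameras $P_j$ and 3D points $\mathbf{X}_i$:

$$\min_{\{P_j\},\,\{\mathbf{X}_i\}} \sum_{i,j} d\big(\mathbf{x}_{ij},\, P_j \mathbf{X}_i\big)^2$$

where $\mathbf{x}_{ij}$ is the measured image position of point $i$ in photo $j$ and $d$ is the geometric distance in the image.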

Well, it does less than I expected, but it does the most difficult part. It will serve me later for writing part of the total orientation algorithm (if, in the end, I decide to use this approach).