Mapping flat 2D coordinates onto a 3D-perspective image

Hi!

I have coordinates (x,y) on a “flat” soccer field. These are the coordinates as seen directly from top-down, and they have the lower-left corner as their origin.

Now, I have an image like the one below which is rotated in space. I don’t know exactly how it’s rotated as I did not make the image. I would like to map the flat coordinates onto the image below.

I have tried using linear interpolation to find the matrix that maps a coordinate on the 2D field into the image. However, this only seems to work partially.

Tips anyone?

Thanks in advance!

I imagine you could use calibration/warping techniques from computer vision to solve this problem. You might look into OpenCV, for example.
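To make that concrete: what you want is a planar homography, which you can estimate from four or more point correspondences between the flat field and the image. Here is a minimal NumPy sketch of the standard DLT (direct linear transform) estimation; the field dimensions and the pixel coordinates of the corners are made-up placeholders, so substitute points you can actually identify in your image (in OpenCV this is `cv2.getPerspectiveTransform` / `cv2.findHomography`).

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from 4+ point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the 9 entries of H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def map_point(H, x, y):
    """Map a flat field coordinate into the image; divide by w for perspective."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical correspondences: the four field corners in meters
# (lower-left origin) and where those corners appear in the image in pixels.
field = [(0, 0), (105, 0), (105, 68), (0, 68)]
pixels = [(40, 520), (900, 480), (700, 60), (120, 90)]
H = fit_homography(field, pixels)
```

Once you have H, `map_point` will place any flat field coordinate (the center spot, a player position, etc.) onto the image. This is exactly the “works only partially” gap in the linear-interpolation attempt: an affine map has no perspective divide, a homography does.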

You’re going to have to do perspective matching. First, display the image of the soccer field in the background of an OpenGL window; use glDrawPixels to do this. Next, set up a camera (using gluPerspective) that looks at a wireframe version of your flat soccer field from a point similar to wherever the image was photographed. Your wireframe field will now overlay the image of the field, but the two probably won’t line up very well, so you have to keep adjusting the position and field of view of the camera until the wireframe exactly matches the underlying image.

If I were doing it, I’d put in GUI elements that let me interactively change the camera’s X, Y, Z location, its pan, tilt, and FOV. That would help zero in on a solution faster. If you don’t want to mess with GUIs, you can just go into the code, tweak all those variables, recompile, and execute, over and over, until you converge on a solution. So you see, this is not a trivial thing to do using OpenGL.

Do you have any 3D graphics software you could use instead? An animation package like Maya, Max, or Lightwave, or perhaps a CAD package would probably be an easier way to go than writing code.

Good luck.

Just for the ‘fun’ of it, I tried to follow my own advice and implement the ‘perspective matching’ approach described in my previous post. The result appears below as a screen grab of an OpenGL program that splits a window into two viewports. The top viewport shows a 2D model of a soccer field. Note that soccer fields can have different widths and lengths, though the internal markings must have certain dimensions, so part of this problem was figuring out what field width and length would match the field in the poster’s image. The pink field in the top viewport is overlaid onto the poster’s image in the bottom viewport using my idea of ‘perspective matching’.

There are problems. No matter how I tweaked the viewing parameters, I could not get my internal field markings to match up with the markings in the poster’s image. Perhaps there’s some subtlety about perspective projections that I don’t understand, or maybe the poster’s image was somehow distorted? I put those yellow lines on the field to show clearly that the center circle in the poster’s image seems to be offset quite a bit from the true center of the field. If the poster’s image has been distorted, he’s going to have problems mapping points from 2D field coordinates onto his 3-space image.

Is it possible that the poster’s image was generated from a dimensionally correct model of the field, and that it differs from my overlay because a different sort of perspective projection was used? I used gluPerspective to do my overlay.

Looking at your image again, I don’t think there’s any doubt that the model is off. The goal area rectangles and the center circle are not centered between the sidelines. This is going to throw off any attempt to do what you are trying to do.