Computing an object's transformation matrix from image points

Can anyone help me solve this problem?

I have three known world-space xyz points located on a plane. All three of these points are visible to the camera.
The camera's frustum is known, and the pixel-space locations of each of the three points on
the plane are also known.

From this information, is it possible to calculate the transformation matrix of the plane? I can only think of an
iterative method, and that doesn't seem very robust.

Thanks in advance!

Are you talking about screen space?
From three axes you can construct a transformation (rotation) matrix by putting each axis into the corresponding column. The matrix needs to be orthonormal; in other words, each axis has to be at 90 degrees to the others and normalized. You get the axes by subtracting each point from the plane origin.
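If it helps, here's a minimal numpy sketch of that idea. It assumes the first point is taken as the plane origin and the direction to the second point as the first axis; both choices are arbitrary:

```python
import numpy as np

def plane_basis(p0, p1, p2):
    """Build an orthonormal rotation matrix from three non-collinear points.

    p0 is treated as the plane origin; the columns of the returned matrix
    are the plane's local x, y and z axes expressed in world space.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)              # first axis: along p0 -> p1
    z = np.cross(x, p2 - p0)
    z /= np.linalg.norm(z)              # plane normal
    y = np.cross(z, x)                  # completes a right-handed frame
    return np.column_stack((x, y, z))   # orthonormal rotation matrix

# Example: three points on the z = 0 plane give the identity rotation.
R = plane_basis([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(R)
```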

Thanks for the response!

However, I'm not sure you understood my question. That's my fault!

I’ll go into a little more detail to explain exactly what I’m trying to achieve.

Firstly, the camera that I mention is actually a real camera (a Canon G9 that I'm operating remotely).

I know the focal length of the camera, so I can work out its perspective matrix (there's also
a bit of lens distortion, but it's not too bad).
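In case it's useful, going from a focal length to a pinhole intrinsic (perspective) matrix usually looks something like the sketch below. The sensor-width and resolution numbers here are placeholders rather than actual G9 specs, the principal point is assumed to be at the image centre, and distortion is ignored:

```python
import numpy as np

def intrinsic_matrix(focal_mm, sensor_width_mm, image_width_px, image_height_px):
    """Rough pinhole intrinsic matrix; ignores lens distortion.

    Converts the focal length from millimetres to pixels using the sensor
    width, and assumes the principal point sits at the image centre.
    """
    f_px = focal_mm * image_width_px / sensor_width_mm
    return np.array([[f_px, 0.0,  image_width_px  / 2.0],
                     [0.0,  f_px, image_height_px / 2.0],
                     [0.0,  0.0,  1.0]])

# Placeholder numbers for illustration only (not actual G9 specs).
K = intrinsic_matrix(focal_mm=7.4, sensor_width_mm=7.6,
                     image_width_px=4000, image_height_px=3000)
print(K)
```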

The camera is located anywhere in the room but is always facing the inside of a sphere. My problem is that I need
to calculate the location of the camera in relation to the centre of the sphere.

In order to calculate this, I figure that I need to physically mark three arbitrary points on the inside of
the sphere that the camera can see. I can measure the XYZ location of each of these points relative
to the sphere's centre.

The resulting camera position would be a best-fit solution.
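For what it's worth, this kind of problem is usually posed as perspective-n-point (PnP) pose estimation: given the world coordinates of some marked points and their pixel coordinates, recover the camera's rotation and translation. With exactly three points the classic P3P problem can have up to four valid solutions, so marking a fourth point (or disambiguating some other way) makes the best-fit answer more robust. A minimal sketch using OpenCV's solvePnP, with made-up coordinates:

```python
import numpy as np
import cv2

# World-space coordinates of the marked points, relative to the sphere centre
# (made-up values for illustration).
object_points = np.array([[ 0.0,  0.0,  1.00],
                          [ 0.2,  0.0,  0.98],
                          [ 0.0,  0.2,  0.98],
                          [-0.2,  0.0,  0.98]], dtype=np.float64)

# Where those points appear in the photo, in pixels (also made up).
image_points = np.array([[2000.0, 1500.0],
                         [2796.0, 1500.0],
                         [2000.0, 2296.0],
                         [1204.0, 1500.0]], dtype=np.float64)

K = np.array([[3900.0,    0.0, 2000.0],   # intrinsics, e.g. from the sketch above
              [   0.0, 3900.0, 1500.0],
              [   0.0,    0.0,    1.0]])
dist = np.zeros(5)                         # pretend distortion is negligible

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix from rotation vector

# solvePnP gives the world->camera transform; inverting it gives the camera's
# position in sphere-centred coordinates.
camera_position = -R.T @ tvec
print(camera_position.ravel())
```

The recovered rotation and translation map sphere-centred coordinates into the camera frame, so inverting them (as in the last lines) is what gives the camera's location relative to the sphere's centre.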

OK, that's a bit of a different approach than actually specifying the camera position with a matrix where you already have the position values. I don't know how you set the camera where it already is, but I'd guess some math on spheres would help you out.

"In order to calculate this, I figure that I need to physically mark three arbitrary points on the inside of
the sphere that the camera can see"
The camera can see anything in space without changing position; which points it sees is then simply a matter of the camera's orientation. Sorry, I can't help you more on that. Maybe someone else will post some ideas.

This reminds me of star tracker systems, but I am not well informed on them. I found this paper by googling; it might help you:
http://www.buildexact.com/PositionAndAttitudeSensing.pdf