I am working on the foundations of a concept that would allow an observer to view a virtual environment not on a 2D plane, but within a 3D object.
The concept is, in essence, very simple. To explain the idea easily, here is a possible user scenario:
Within a space is a white cube. Near the cube is a projector set up to project imagery over the cube. Connected to the projector is a computer, which is also connected to two tracking cameras that follow the observer's eyes: one camera tracks the vertical position of the eyes and the other tracks the horizontal position.
The system is designed to project onto the physical white cube a virtual cube of exactly the same dimensions, so as to overlay it perfectly. There is an example of this on my website here:
www.kitwebster.com.au/installation/wakingspaces
In the videos the illusion looks good, but only from one position: that of the projector itself. If the viewer changes position, the imagery becomes oblique and the effect is destroyed.
The idea is to transform the dimensions of the imagery in real time according to the shifting position of the observer, so the projected imagery always appears in the correct proportions. This way the observer can view the virtual environment, projected to appear within the cube, from any position in the space. For the observer witnessing this effect, an illusory sense of depth within the cube is created, making the virtual environment appear to be physically inside the cube.
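To illustrate the kind of maths involved (this is a sketch of one standard approach, not a statement of how the installation is actually built): each face of the cube can be treated as a fixed "screen" in space, and the tracked eye position used to compute an off-axis (asymmetric) perspective frustum for that face every frame. The function below, written in plain Python with hypothetical corner/eye coordinates, computes the frustum extents at the near plane from three screen corners and the eye position, in the style of the generalized perspective projection commonly used for head-tracked displays:

```python
import math

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                         a[2]*b[0]-a[0]*b[2],
                         a[0]*b[1]-a[1]*b[0])
def normalize(a):
    n = math.sqrt(dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

def off_axis_frustum(pa, pb, pc, pe, near=0.1):
    """Frustum extents (left, right, bottom, top) at the near plane
    for an eye at pe viewing a planar screen with corners
    pa (lower-left), pb (lower-right), pc (upper-left).
    As the eye moves, the frustum becomes asymmetric, which is
    what keeps the projected imagery in correct proportion."""
    vr = normalize(sub(pb, pa))        # screen right axis
    vu = normalize(sub(pc, pa))        # screen up axis
    vn = normalize(cross(vr, vu))      # screen normal, toward the eye
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)                   # eye-to-screen distance
    left   = dot(vr, va) * near / d
    right  = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top    = dot(vu, vc) * near / d
    return left, right, bottom, top
```

With the eye centred in front of the screen the extents come out symmetric; as the eye moves sideways the frustum skews, producing exactly the real-time distortion described above. In a real implementation these extents would feed a projection matrix (e.g. OpenGL's `glFrustum`), and a further corrective warp would account for the projector's own oblique position relative to each cube face.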
I have reached a stumbling block in the development of this concept, as I don't have the programming skills needed to distort the imagery correctly.
I am wondering if anyone can give me an idea of the best way to approach this. As many of you already work with 3D environments, can you visualize how the imagery would need to distort? Any help would be really appreciated. Thanks.