REAL fisheye view with OpenGL



Lyve
06-07-2003, 02:36 AM
Hello,

is it possible to get a REAL fisheye view with OpenGL? Is it possible to tell OpenGL to use the whole vertex position to calculate the distance to the camera, not just the z coordinate?

By REAL fisheye I mean the distortion that occurs with real cameras: move very close to a plane with a large FOV and the plane distorts into a sphere. In current 3D applications this doesn't happen, because only the z coordinate is used to compute the distance between a vertex and the camera position.

Is this possible, or do I have to compute it myself?
I would prefer doing it in hardware to save CPU time. I could use a vertex shader, but I want my app to run on computers with less than a GeForce 3 if possible.
Any chance to do it with hardware?

Lyve

Zengar
06-07-2003, 03:38 AM
The only way, I guess, is to use vertex programs. They are emulated in software on older cards (like the GeForce 1, 2, and MX series), so you should have no portability problems.

jwatte
06-07-2003, 06:53 AM
You can adjust the w to compensate for how the interpolator will use Z to determine distance.

BEWARE! If you do this, then the rules for interpolating straight lines are NOT linear, and you will get distortion in the middle of triangles and lines, so you have to tessellate very highly for this to look convincing.

epajarre
06-07-2003, 12:17 PM
Originally posted by jwatte:

BEWARE! If you do this, then the rules for interpolating straight lines are NOT linear, and you will get distortion in the middle of triangles and lines, so you have to tessellate very highly for this to look convincing.

Do you mean that straight lines will still be straight, even though they actually should be curved? If so, then I agree.

I think the original poster could get a reasonably good fisheye effect by first rendering the scene normally and then using the rendered image as a texture map, which is distorted into the final image. If a very wide-angle image is needed, it is possible to render multiple images for different view directions and use them as (a sort of) cube texture map.

Eero

dorbie
06-08-2003, 12:24 PM
Yes, but now you're talking about an image-based approach, rather than the vertex-program-based approach initially suggested. There is an NVIDIA demo in their SDK that takes the vertex-based approach.

I've implemented the image-based approach and it works very well. The problems are the readback overhead (glCopyTexImage or render to texture), the overhead of drawing the distorted image mesh to the screen, and finally sample quality over the distorted image: you're resampling an image that was rendered with a tan(theta) pixel distribution, so multiple views are important to get the quality right, even if a single view could give you the coverage you need. Some approaches like render to texture negatively affect antialiasing on some hardware.

A cube map probably won't do a good job for you but you can calculate the appropriate projection for a mesh over your original image and transform that to the new fisheye view.