real-time image warping

hi,
We are trying to develop an application that does real-time image warping. The goal is to be able to project any possible content via a video projector (beamer), not only onto planar screens but also onto spherically curved surfaces such as a dome.
So far we have tried to capture the content of the screen/desktop and use it as a texture mapped onto a warp geometry via OpenGL.
The problem is: all attempts to capture the screen were too slow for a real-time application (we get a frame rate of about 10 fps).

So either there is a faster way to capture the screen content, or we have to look for another way to get this working.

We would be very pleased to get any tips on this topic, since we need a working solution by the end of September.

An example of how the whole thing could work is a feature of the drivers for the new NVIDIA graphics cards, which can be found at:
http://www.nvidia.com/object/feature_nvkeystone.html

Any ideas what technology they used?

Thanks in advance, floww

This is what you need. I’m listing some patents, but first a reference I discovered more recently that is related but uncited prior art. You’d probably want to use render to texture these days.

Julie O’B. Dorsey, Francois X. Sillion, and Donald P. Greenberg. “Design and simulation of opera lighting and projection effects.” Computer Graphics (SIGGRAPH '91 Proceedings), 25(4):41–50, July 1991.
http://www.graphics.cornell.edu/pubs/1991/DSG91.html

U.S. Patent # 6,369,814

Transformation pipeline for computing distortion correction geometry for any design eye point, display surface geometry, and projector position

U.S. Patent # 6,249,289

Multi-purpose high resolution distortion correction

Both patents are probably assigned to Microsoft by now; mine was originally filed while I worked at Silicon Graphics but was subsequently sold to Microsoft, and I’m pretty sure Remi’s was too.

Links to the above patents (I don’t know if these links will expire):
http://patft.uspto.gov/netacgi/nph-Parse…RS=PN/6,369,814
http://patft.uspto.gov/netacgi/nph-Parse…RS=PN/6,249,289

P.S.

Definitely use render to texture or glCopyTexSubImage2D; if you’re reading back with glReadPixels you will be slow. In general, though, the copy to texture is the biggest performance overhead even using the fastest available method. Render to texture is the approach that may not be optimal now but promises to deliver improved performance in future on some platforms if you set up your pbuffers correctly; OTOH it may not, and the copy to texture is currently fast (allegedly). 10 Hz seems excessively slow if you have a decent graphics card. Reducing the resolution may help if you can, and remember that you can enable bilinear filtering and magnify the image using the texture filter: simply draw the image smaller before the copy to texture to reduce the copy time, at the expense of quality, without changing the video resolution.
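To make the copy-to-texture part concrete, here is a bare-bones sketch (my illustration only; it assumes a current OpenGL context, a window at least as large as the copy region, and 2003-era power-of-two texture sizes, and the names and sizes are placeholders):

/* Allocate the texture once, then copy the freshly rendered back buffer
 * into it every frame with glCopyTexSubImage2D; the copy stays on the
 * graphics card, so there is no slow glReadPixels round trip. */
#include <GL/gl.h>

#define TEX_W 512   /* render/copy smaller than the output and let the */
#define TEX_H 512   /* bilinear texture filter magnify it if needed    */

static GLuint warpTex;

void initWarpTexture(void)
{
    glGenTextures(1, &warpTex);
    glBindTexture(GL_TEXTURE_2D, warpTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEX_W, TEX_H, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);   /* allocate, no data yet */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void captureBackBuffer(void)
{
    glBindTexture(GL_TEXTURE_2D, warpTex);
    glReadBuffer(GL_BACK);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, TEX_W, TEX_H);
}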

FYI - These pages show this kind of thing running at 60Hz on an Infinite Reality simulator I helped integrate for SAAB and their supplier:

(both images) http://www.seos.com/prod_saab.html

(lower image) http://www.helicomp.net/node1434.asp

(lower image) http://www.saab.se/future/node2561.asp

(screenshots undistorted) http://www.saab.se/future/node806.asp

Most of the distortion correction was done in the projectors for the forward channels, but the LCD dome projector used the readback-to-texture distortion correction technique, although it was a seat-of-the-pants integration; at that time we didn’t have the fully automatic mathematical formulation of the transformation from “OpenGL screen space” to real-world 3D observer space.

Readback was on the low resolution side at 768x768 but each graphics pipe drew a high res channel AND a distortion corrected channel.

I know of a number of OpenSceneGraph users who have implemented image distortion correction on modern PC graphics hardware at real-time frame rates (i.e. 60 Hz). One of the examples in the distribution demonstrates the pre-rendering process.

The technique implements render to texture by first rendering the scene to the back buffer, then using glCopyTexSubImage2D to copy the pixels into a texture. This texture is then rendered on a mesh computed to correct for the projection system’s distortion. glCopyTexSubImage2D is well optimized on NVIDIA hardware, so it doesn’t turn out to be a bottleneck.
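A minimal sketch of the warp-mesh pass (this is not code from the OpenSceneGraph distribution; the grid resolution and the warpPoint() distortion function are placeholders you would replace with your own projector/dome calibration):

/* Second pass: draw a regular grid in an orthographic view with the
 * captured texture applied; each vertex position is pre-warped so the
 * projected result cancels the dome/projector distortion. */
#include <GL/gl.h>

#define GRID 32
extern GLuint warpTex;                      /* texture filled by glCopyTexSubImage2D */
extern void warpPoint(float s, float t,     /* hypothetical: maps (s,t) in 0..1      */
                      float *x, float *y);  /* to distorted screen coordinates       */

void drawWarpMesh(void)
{
    int i, j;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, warpTex);

    for (j = 0; j < GRID; ++j) {
        glBegin(GL_QUAD_STRIP);
        for (i = 0; i <= GRID; ++i) {
            float s = (float)i / GRID;
            float x, y;

            /* texture coordinates stay regular; only the positions are warped */
            warpPoint(s, (float)(j + 1) / GRID, &x, &y);
            glTexCoord2f(s, (float)(j + 1) / GRID);
            glVertex2f(x, y);

            warpPoint(s, (float)j / GRID, &x, &y);
            glTexCoord2f(s, (float)j / GRID);
            glVertex2f(x, y);
        }
        glEnd();
    }
    glDisable(GL_TEXTURE_2D);
}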

Robert.

Take a look at the software from Elumens (BTW, they have an image distortion correction patent, but I’m not sure which one…): http://www.elumens.com/cgi-bin/softloader.pl

This software is intended for use with Elumens dome displays: www.elumens.com

Internally, for our projects, we use a fisheye vertex shader to distort images. (I asked a question here some time ago about how to make a stack of different shaders… there seemed to be no solution, but we solved the problem.) So now in our projects we render the entire scene (some objects with their own shaders), then apply a scene-wide vertex shader which outputs the ‘fisheye’ view to the screen. For contacts visit www.vrtainment.com (I’m not the official person to contact).
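To give an idea of the math such a fisheye shader implements (written here as plain C for illustration rather than our actual shader code; fisheyeProject() and its parameters are made-up names), an equidistant fisheye maps each vertex by its angle off the view axis:

#include <math.h>

/* Map an eye-space vertex position (camera looking down -z) to fisheye
 * normalized device coordinates in -1..1, for a given fisheye field of view. */
void fisheyeProject(float ex, float ey, float ez, float fovRadians,
                    float *ndcX, float *ndcY)
{
    float r     = sqrtf(ex * ex + ey * ey);             /* distance off the view axis */
    float phi   = atan2f(r, -ez);                       /* angle from the view axis   */
    float scale = (r > 0.0f) ? (phi / (0.5f * fovRadians)) / r : 0.0f;

    *ndcX = ex * scale;   /* direction around the axis is preserved, */
    *ndcY = ey * scale;   /* radius is proportional to the angle     */
}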

[This message has been edited by SergeVrt (edited 08-20-2003).]

“glCopyTexSubImage2D is well optimized on NVIDIA hardware, so it doesn’t turn out to be a bottleneck.”

This part is no problem at all. Just receiving the screen data is too slow (actually 15 fps).

JFMI: how does the WinXP Remote Desktop work? Is there just meta-information going through the network?

Sorry to post OT.

OK, so your issue is that you want an external application to arbitrarily distort any image in the framebuffer (not your own). That’s a trick. There’s been some talk of getting a GDI buffer back to texture on this forum, but it’s out of my area of expertise. Now that the problem is clearer, let’s hope someone will pitch in and that “use DirectDraw with Direct3D” isn’t the only answer.

… Uh wait, you’re sending this image over a network to a different PC? Please sir, may I change my answer: 15 fps sounds incredibly fast. Actually, I don’t know how Remote Desktop works; I assume it isn’t image based, but obviously bandwidth will depend on the contents, and this kind of stuff (Windows-specific APIs) is really out of my ballpark. Why Remote Desktop? Can’t you use something like a dual-head card? Measure performance without sending to OpenGL; if that alone is slow, then you’re really OT asking here, unless you at least have a chance of binding the desktop to a texture on the local machine. NVIDIA have their hands on the driver code and can pretty well do things we may not have access to, like rendering the desktop to hidden buffers and sending the video display through some custom texture operation. That doesn’t mean you don’t have options, of course, but you may have some overheads they don’t.

[This message has been edited by dorbie (edited 08-20-2003).]

I just noticed this thread:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010237.html

It now seems relevant. Of course, it’s probably not possible to display a desktop and do continual readback from it at the same time unless you have a dual-head system of some sort, or driver-level modifications, but at least the posters there are in the right ballpark and may offer better advice.