Live Video. Augmented Reality

Hi,

I would like to map a 3D reconstruction of a brain tumor onto a live video image of the patient. I need to know which video capture boards OGL will support. If anyone has experience with AR techniques using OGL, please let me know. I have heard that OGL 2.1 will support live video texture-mapped onto a polygon. Is this true?

Thanks,

Abhilash

pandya@neurosurg.wayne.edu

Do you mean OpenGL 1.2 ?

OpenGL is an immediate-mode API for DRAWING. It doesn’t have anything to do with video capture directly, and never will.

That said, mapping live video onto a polygon is perfectly possible even with the current version of OpenGL. But it’s the application’s responsibility to upload each captured frame as a texture (probably fastest to use glTexSubImage2D, to avoid reallocating the texture every frame).

Bear in mind that current consumer hardware isn’t really optimized for this kind of thing. You’d probably be better off with an SGI.

Originally posted by paolom:
Do you mean OpenGL 1.2 ?

Yes…a little typo. I am basically trying to understand the hardware and software requirements for augmented reality for the PC so I can make an intelligent order as we begin this project.

Thanks,

Abhilash.

Originally posted by MikeC:
[b]OpenGL is an immediate-mode API for DRAWING. It doesn’t have anything to do with video capture directly, and never will.

That said, mapping live video onto a polygon is perfectly possible even with the current version of OpenGL. But it’s the application’s responsibility to upload each captured frame as a texture (probably fastest to use glTexSubImage2D to avoid having to reallocate every frame).

Bear in mind that current consumer hardware isn’t really optimized for this kind of thing. You’d probably be better off with an SGI.[/b]

Are you saying that, to your knowledge, there are no optimized video capture cards that allow programmatic control of frame capture and display for “standard” PCs? Also, by SGI I assume you mean the SGI 320 or 520 PCs (a little bit pricey!)?

Thanks for your response.

Abhilash.

[This message has been edited by pandya (edited 02-17-2000).]

Originally posted by pandya:
Are you saying that, to your knowledge, there are no optimized video capture cards that allow programmatic control of frame capture and display for “standard” PCs?

There may very well be lots of video capture cards which fit what you’re doing. All I’m saying is that this has nothing to do with OpenGL. OpenGL doesn’t “support” video capture cards, any more than it supports three-button mice or ODBC. You can certainly use those things in an application alongside OpenGL, but you’re going to have to provide the middleware.

While you can (presumably) get any video capture card to write images into main memory, and you can upload those images as textures into any OpenGL-accelerated 3D card, this is all going to take a fair amount of bus bandwidth if you want to do it in realtime; that’s the reason I suggested you might need to move to a workstation. But OpenGL is extremely portable, so you could always start off on a PC and see how it goes.

Originally posted by pandya:
Are you saying that, to your knowledge, there are no optimized video capture cards that allow programmatic control of frame capture and display for “standard” PCs?

I’ve never actually implemented this, but I’m fairly certain it would be a two-step process. You would most likely have to get the “texture” image from the camera using a DirectX interface (most likely DirectVideo, I think). I would assume most camera vendors would have DirectX drivers for their product, or you wouldn’t be able to do much with them :-)

After you’ve gotten the image with DirectX calls, you would then have to convert it into a format more appropriate for OpenGL. After that, just map it to a 2D polygon and draw it into the background. Then pop your 2D viewing matrix, load up a 3D view matrix, and clear the Z buffer. Then draw the brain :-)

Keep in mind that if you want your OpenGL app to run full screen, you’ll most likely have to use DirectX (DirectDraw) anyway to set the screen dimensions and depth.

I’ve done a little with Direct3D, but nothing with DirectVideo. I’m pretty sure it wouldn’t be that difficult. You could probably find a ton of books explaining the API.

Originally posted by pandya:
Also, by SGI I assume you mean the SGI 320 or 520 PCs (a little bit pricey!)?

We tested a couple of these and were really disappointed with their graphics throughput. Not much better than some PCs I’ve seen with NVIDIA GeForce boards (if at all).


I know it’s not much info, but it’s a start.

Terry

Originally posted by TerryMayes:
Keep in mind that if you want your OpenGL app to run full screen, you’ll most likely have to use DirectX (DirectDraw) anyway to set the screen dimensions and depth.

Really? I thought that just about any necessary mode changes could be done with ChangeDisplaySettings(). DirectDraw is notoriously unstable when used in conjunction with OpenGL; even if it works for you, it’ll probably break for somebody.
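For reference, a minimal sketch of the ChangeDisplaySettings() route (Win32-only; the 640×480×16 mode values are illustrative, not required):

```c
#include <windows.h>
#include <string.h>

/* Switch the desktop mode for a full-screen OpenGL window using
 * ChangeDisplaySettings() alone -- no DirectDraw involved. */
static BOOL set_fullscreen_mode(int width, int height, int bpp)
{
    DEVMODE dm;
    memset(&dm, 0, sizeof dm);
    dm.dmSize       = sizeof dm;
    dm.dmPelsWidth  = width;
    dm.dmPelsHeight = height;
    dm.dmBitsPerPel = bpp;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    /* CDS_FULLSCREEN tells Windows the change is temporary, so the
     * original desktop mode is restored when the app exits. */
    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN)
           == DISP_CHANGE_SUCCESSFUL;
}

/* To restore the original mode explicitly on shutdown:
 *   ChangeDisplaySettings(NULL, 0);
 */
```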

I am currently doing an Augmented Reality project for my Honours degree. What I have is a real scene (a model archway), captured by a camcorder, which is mixed with computer-generated graphics to produce an augmented scene (displayed on a small TV).

The computer-generated virtual object is a cube. This cube can be moved around the scene, but will not pass through the archway, thanks to collision detection routines. The cube can pass in front of the archway, where it occludes the archway, and behind the archway, where it is occluded by the archway.
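For anyone curious how that two-way occlusion is commonly done (this is a standard AR trick, not necessarily Andy’s exact method): render a registered “phantom” model of the real archway into the depth buffer only, then draw the virtual object normally. A sketch, with draw_archway_model() and draw_cube() as hypothetical helpers:

```c
#include <GL/gl.h>

void draw_archway_model(void);  /* registered model of the real archway */
void draw_cube(void);           /* the virtual object */

/* Phantom-object occlusion: the real archway is already visible in the
 * live video background, so its model only needs to fill the depth
 * buffer -- any virtual geometry behind it then fails the depth test
 * and the video shows through. */
void draw_augmented_frame(void)
{
    glClear(GL_DEPTH_BUFFER_BIT);

    /* 1. Phantom pass: depth writes only, no color. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_archway_model();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* 2. Draw the cube normally; the depth buffer now hides whatever
     *    part of it lies behind the real archway. */
    draw_cube();
}
```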

I also have a virtual light source positioned so that it corresponds with the real light source illuminating the scene. Shadows from the archway fall on the virtual cube, and the cube casts a shadow onto the archway.

I used OpenGL for rendering, and Mark Kilgard’s GLUT 3.7 for windowing on Win98.

Andy.