Stereoscopic 3D with frame packing.

Hi.

I’m looking into doing stereoscopic 3D rendering with OpenGL and was hoping to avoid the expensive hardware required for quad buffering.

The HDMI 1.4 standard supports S3D at 720p@60Hz with frame packing, which might be sufficient for my needs.

So, say I have an HDMI 1.4 capable screen. Does this mean I can get a 1280*1440 back buffer and just render my left/right frames into the top/bottom halves with calls to glViewport? Or do I still need quad buffers or some other hardware/driver magic in this case too?

This forum might not be the best place for this question, but I’ve found it hard to find any information about this. Any help or information on the topic is welcome, and feel free to direct me to other forums if you know a more suitable place. :)

Apparently HDMI 1.4 “frame packing” for stereo 3D is a low-level mechanism in the HDMI transfer protocol, which your card has to support:
http://www.avsforum.com/avs-vb/showthread.php?t=1301544

The top/bottom or side-by-side modes seem to be doable without any special support on the card side; however, your screen must support them.

Consumer 3D stereo could be much more developed if Nvidia had not intentionally crippled 3D Vision by making it D3D-only and not controllable by developers. Working glasses for the 3D TVs being demonstrated in stores would help too…

Frame packing formats, in theory, carry a special flag in the signal that identifies it as frame packed. In practice, most sets will pick up 1280x1440 as a 3D top/bottom format automatically. On all 3D TVs I’ve been exposed to, you can also select a mode manually, so you can treat any kind of signal as 3D - for instance if it’s not easy to get a 1280x1440 output.

No, you don’t need special hardware, but it’s up to you to split the screen and draw the scene twice, also dealing with aspect ratio if needed.
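For what it’s worth, a minimal sketch of that split-and-draw-twice approach might look something like the following. It assumes a 1280x1440 back buffer where each eye gets a 1280x720 half; drawScene() and eyeProjection() are placeholders for your own rendering and per-eye projection code, not real API calls.

```cpp
// Sketch only: assumes a GL context whose back buffer is the full
// 1280x1440 top/bottom signal discussed above. drawScene() and
// eyeProjection() are made-up names standing in for your own code.
enum Eye { LEFT_EYE, RIGHT_EYE };

void renderStereoFrame()
{
    const int width     = 1280;
    const int eyeHeight = 720;   // each eye occupies one half of the 1440-line buffer

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Top half (GL viewport origin is bottom-left, so the top half starts at y = 720).
    // Which eye belongs in which half depends on how the display interprets the signal.
    glViewport(0, eyeHeight, width, eyeHeight);
    drawScene(eyeProjection(LEFT_EYE));

    // Bottom half.
    glViewport(0, 0, width, eyeHeight);
    drawScene(eyeProjection(RIGHT_EYE));

    // Then swap buffers with whatever your windowing layer provides (GLUT, SDL, WGL, ...).
}
```

Since the display shows each half at the normal 16:9 size, the per-eye projection should normally use the usual 1280/720 aspect ratio rather than the aspect of the full 1280x1440 buffer.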

Bruce

While we are at it, how does a 3D TV sync the active glasses? Is the emitter controlled directly by the TV, or by something else?

With Nvidia’s 3D Vision glasses you get an emitter that connects via USB, so I guess it uses vsync or something else in the graphics driver to toggle the glasses. Some newer monitors seem to have built-in emitters as well, and I guess those can control the glasses directly.

Back to my question: it seems I might be able to do S3D this way. I guess the easiest route is to get some HDMI 1.4 capable hardware and try it out.

Thanks.

Crap. Non-CRT screens often have a delay before displaying frames, so that won’t work unless you can tune the transmitter delay to match the screen; each screen has its own built-in delay…
An emitter controlled by the TV itself, on the other hand, will not have this problem.

Good point. As I said, I’m not entirely sure how this stuff works; I’ll have to read up more and see if there are ways to sync correctly. In my case we will most probably want to use projectors in the future as well, and I’m not sure how those would handle this either. Built-in emitters probably wouldn’t work there, at least.

Edit: I remember reading something about HDMI 1.4 displays being able to report their display latency back to the source. So maybe it’s not impossible to sync correctly. Not sure about this either, though, so more research is needed. :)

NVidia corrects for this as much as they can. That’s why only some monitors will allow 3D Vision to activate.

Edit - using EDIDs.

Yeah, why use a low-tech, always-working solution such as a slider on the emitter to tune the delay, when you can have a high-tech, mostly non-working system based on EDID!