Mat, Cass ... TwinView on nvidia hardware?

With respect to nvidia TwinView cards, assuming you can accelerate both displays, how does the memory work on board the card? Does the driver divide, say, 64MB into two separate 32MB chunks? If not, wouldn’t memory fragmentation be a problem? And presumably, if you ran the same app on both monitors, wouldn’t the performance be half that of running just one instance of the app? Or does it only extend the desktop across two monitors, rather than treating them as separate card/monitor combinations?

Come on guys - too many people want a definitive answer … How does TwinView work? What are the scenarios for using multi-monitor displays? And what are the memory implications?

FFS - please answer this - this is important …

From my point of view - I want a secondary display that shows info for the current game - will this be accelerated or not? And what are the implications? Or am I better off using the MS GDI?

The MSDN is totally unclear about this: it says that multiple OpenGL apps CANNOT be accelerated - that’s why I want an nvidia viewpoint.
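One thing you can do regardless of the answer: at runtime, the pixel format itself will tell you whether rendering on a given DC is hardware accelerated. Here’s a minimal sketch using the standard Win32 PIXELFORMATDESCRIPTOR flags - the helper name is my own, it’s generic rather than nvidia-specific, and it assumes the pixel format has already been set on the DC:

    /* Sketch: ask the pixel format whether OpenGL on this DC is hardware
     * accelerated. PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED is
     * Microsoft's software renderer; PFD_GENERIC_FORMAT plus
     * PFD_GENERIC_ACCELERATED is an MCD; neither flag means the vendor's
     * ICD, i.e. the full hardware path. */
    #include <windows.h>

    int is_accelerated(HDC hdc)              /* hypothetical helper */
    {
        PIXELFORMATDESCRIPTOR pfd;
        int fmt = GetPixelFormat(hdc);       /* format already set on this DC */
        DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);

        if (!(pfd.dwFlags & PFD_GENERIC_FORMAT))
            return 1;                        /* ICD: full hardware */
        if (pfd.dwFlags & PFD_GENERIC_ACCELERATED)
            return 1;                        /* MCD: partial hardware */
        return 0;                            /* software rasterizer */
    }

That tells you whether a given window landed on the accelerated path, though not how the driver behaves when windows sit on different monitors - which is the question here.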

BTW - do I get a free card for dev purposes?

This goes for ATI too - I need test equipment!

BTW - how do I know which extensions are accelerated on which cards?

Not through glGetString … is there a list on the dev web site showing which cards accelerate which functions?
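For what it’s worth, glGetString at least tells you whether an extension is exported at all - just not whether it’s accelerated, which is the whole problem. A sketch of a correct presence check (the helper name is mine; a plain strstr isn’t enough, since one extension name can be a prefix of another, and a context must be current when this runs):

    #include <windows.h>
    #include <string.h>
    #include <GL/gl.h>

    int has_extension(const char *name)     /* hypothetical helper */
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        const char *p = exts ? strstr(exts, name) : NULL;
        size_t len = strlen(name);

        /* Must match a whole space-separated token, e.g. so that
         * "GL_EXT_texture" doesn't match inside "GL_EXT_texture3D". */
        while (p) {
            if ((p == exts || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
                return 1;
            p = strstr(p + 1, name);
        }
        return 0;
    }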


please? pretty please?

For the nVIDIA boards, at least on Win2k, the frame buffer is one large device as far as GDI is concerned. Thus, both monitors must use the same resolution. Contexts you create can straddle the monitors if you want, etc. Covering both monitors is just like creating two windows on the same monitor (or like creating one window that’s 3200 pixels wide and 1200 pixels tall).
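A minimal sketch of what that means in practice - one window and one context stretched over the entire virtual desktop, assuming two side-by-side 1600x1200 monitors as in the example above (error handling and the message loop omitted):

    #include <windows.h>
    #include <GL/gl.h>

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE prev, LPSTR cmd, int show)
    {
        WNDCLASS wc = {0};
        wc.style         = CS_OWNDC;
        wc.lpfnWndProc   = DefWindowProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = "SpanGL";
        RegisterClass(&wc);

        /* The virtual screen covers every attached monitor. */
        int x = GetSystemMetrics(SM_XVIRTUALSCREEN);
        int y = GetSystemMetrics(SM_YVIRTUALSCREEN);
        int w = GetSystemMetrics(SM_CXVIRTUALSCREEN);   /* e.g. 3200 */
        int h = GetSystemMetrics(SM_CYVIRTUALSCREEN);   /* e.g. 1200 */

        HWND hwnd = CreateWindow("SpanGL", "both monitors",
                                 WS_POPUP | WS_VISIBLE,
                                 x, y, w, h, NULL, NULL, hInst, NULL);
        HDC hdc = GetDC(hwnd);

        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL
                       | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

        /* One context, straddling both monitors. */
        HGLRC rc = wglCreateContext(hdc);
        wglMakeCurrent(hdc, rc);

        glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SwapBuffers(hdc);
        Sleep(3000);

        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(rc);
        return 0;
    }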

Why do both need the same resolution? I can’t see why.