
View Full Version : Which Current Hardware supports OpenGL 2.0



Xristos Gripeos
02-03-2005, 05:22 PM
I'm not sure if this is the right place to post this topic, but what the heck....

I'm looking into getting a Radeon 9700 card. I also want to program fragment shaders in OpenGL 2.0.

Has ATI announced anything concerning OpenGL 2.0 support on Radeon 9700 cards?

If not, what card should I get that's available now and comes the closest to supporting all the features of OpenGL 2.0? (The cheapest such card, that is.)

Any input will be greatly appreciated.

Cheers!

Lurker_pas
02-03-2005, 11:35 PM
A great deal of the GL2 functionality (including GLslang) is available now through extensions, so IMHO there shouldn't be too many problems writing a GL2 implementation (still, I don't do such things, so it's only a guess). The R9700 already supports GLslang and (at least according to ATI) multiple render targets.
I don't know the prices, but maybe you should compare the Radeon's features with the GF6600, for example - especially in terms of fragment programs. You can check the available extensions on www.delphi3d.net (http://www.delphi3d.net) with their hardware info.
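
For reference, a minimal sketch of checking at runtime what a driver exposes (assumes a current GL context; the plain strstr substring test is only a quick-and-dirty check):

/* Minimal sketch: query the GL version and look for the GLslang/MRT
   extensions mentioned above. Assumes a GL context is already current. */
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

static int has_ext(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;
}

void report_caps(void)
{
    printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
    printf("GLslang:     %s\n", has_ext("GL_ARB_shading_language_100") ? "yes" : "no");
    printf("MRT:         %s\n", has_ext("GL_ARB_draw_buffers") ? "yes" : "no");
}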

V-man
02-04-2005, 07:06 AM
This would make a great integrated solution:

http://www.extremetech.com/article2/0,1558,1753722,00.asp

Justin Couch
02-08-2005, 10:11 AM
Our experience with GLSL on the 9700-class cards has been extremely poor. The old ARB FP/VP programs are fine because you have a great deal more control. It's just the GLSL compilation and running that is terrible.

If you're looking to use the full shader language capabilities, then steer clear of this and the 9800. You'll need to get at least the X800 or the nVidia 6600-6800 class cards in order to be able to develop non-trivial code using GLSL.
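
For anyone who hasn't used that older path, a rough sketch of what loading an ARB fragment program looks like (a trivial texture-modulate program; assumes the ARB_fragment_program entry points and glext.h tokens are already available):

/* Rough sketch: create and enable a trivial ARB fragment program
   (texture * primary color). Needs <stdio.h>, <string.h> and the
   ARB_fragment_program function pointers fetched from the driver. */
static const char *fp_src =
    "!!ARBfp1.0\n"
    "TEMP tex;\n"
    "TEX tex, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, tex, fragment.color;\n"
    "END\n";

void setup_fragment_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    if (glGetError() != GL_NO_ERROR)
        printf("error: %s\n", glGetString(GL_PROGRAM_ERROR_STRING_ARB));
    else
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
}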

Zengar
02-08-2005, 01:10 PM
EDITED: Sorry, I didn't see the word "cheapest". Get a GeForce 6200 or, much better, a GeForce 6600 (about 150 euro, which is also a lot).

A 6200 won't perform very well, I fear.

V-man
02-09-2005, 04:36 AM
Originally posted by Zengar:
A 6200 won't perform very well, I fear.
Yes, but it is much better than a 5200, and it has SM 3.0.
It's good for light gaming and learning GLSL.
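
For the learning part, here is a minimal sketch of compiling and linking a GLSL fragment shader with the core GL 2.0 entry points (assumes a 2.0 context with the entry points loaded, e.g. via GLEW, plus <stdio.h>; older drivers expose the same thing through ARB_shader_objects):

/* Minimal sketch: compile and link a trivial GLSL fragment shader.
   Error reporting kept short on purpose. */
static const char *fs_src =
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);\n"
    "}\n";

GLuint build_program(void)
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLint ok = 0;
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(fs, sizeof(log), NULL, log);
        printf("compile failed: %s\n", log);
    }

    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;   /* call glUseProgram(prog) before drawing */
}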

N64Marin
03-07-2005, 03:48 AM
ATI cards don't support GL_EXT_blend_equation_separate,
GL_ARB_draw_buffers, or
GL_ARB_texture_non_power_of_two.

Try an nVidia GeForce 6x00 with the Release 75 ForceWare driver for full OpenGL 2.0 support.
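
To show why GL_ARB_draw_buffers matters: it is what lets a single fragment shader write several color outputs in one pass. A sketch (assuming the render targets, e.g. FBO color attachments, are already set up):

/* Sketch: with ARB_draw_buffers (core in GL 2.0) a fragment shader writes
   several outputs via gl_FragData[i]. The attachments themselves must
   already exist; the GL_COLOR_ATTACHMENTn_EXT tokens come from
   EXT_framebuffer_object. */
static const char *mrt_fs =
    "void main() {\n"
    "    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);\n"   /* e.g. diffuse */
    "    gl_FragData[1] = vec4(0.0, 0.0, 1.0, 1.0);\n"   /* e.g. normals */
    "}\n";

void enable_two_targets(void)
{
    GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
    glDrawBuffers(2, bufs);   /* glDrawBuffersARB on pre-2.0 drivers */
}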

eyebex
03-08-2005, 08:15 AM
@N64Marin: And NV still lacks GL_ARB_draw_buffers for full OpenGL 2.0 support. However, I'd still go for an NV card as I've both heard about and experienced many more GLSL issues with ATI cards (read: drivers) than with NV.

yooyo
03-08-2005, 10:50 AM
Originally posted by eyebex:
@N64Marin: And NV still lacks GL_ARB_draw_buffers for full OpenGL 2.0 support. However, I'd still go for an NV card as I've both heard about and experienced many more GLSL issues with ATI cards (read: drivers) than with NV.
ARB_draw_buffers will be available in ForceWare release 75. Even now, if you install beta 75.90, the driver returns an OpenGL 2.0 version string with ALL GL2 extensions.

yooyo

Humus
03-09-2005, 09:32 PM
For ATI you don't even need a beta. :p
The latest Catalyst has OpenGL 2.0 support.

stanlylee
03-10-2005, 08:57 PM
If you want to learn GLSL, I think you should buy an NV4x (GeForce 6200/6600/6800). ATI's OpenGL driver performance is very, very poor...
I have an ATI R9700 Pro at work and tried the Catalyst 5.3 driver. It's not very good (e.g. it doesn't support FBO, which is very convenient for implementing RTT).

Also, ATI's products are not cheaper than NVidia's, so I will not buy any ATI graphics card until they make their OpenGL driver better.
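
Since FBO came up: a rough sketch of render-to-texture with EXT_framebuffer_object, for anyone who hasn't tried it (assumes the extension is exposed and its entry points are loaded, plus <stdio.h>; status handling shortened):

/* Rough sketch: render-to-texture via EXT_framebuffer_object. */
GLuint tex, fbo;

void setup_rtt(void)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT)
        printf("FBO incomplete\n");

    /* ... render here; the result lands in 'tex' ... */

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   /* back to the window */
}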

stanlylee
03-10-2005, 09:07 PM
For ATI you don't even need a beta
For most of ATI's driver version numbers, I think they should add the postfix "beta".

eyebex
03-11-2005, 01:09 AM
For most of ATI's driver version numbers, I think they should add the postfix "beta".
True ... and finally add support for EXT_framebuffer_object.

Humus
03-12-2005, 05:44 PM
Originally posted by stanlylee:
For most of ATI's driver version numbers, I think they should add the postfix "beta".
I could rant a bit about the poor quality of nVidia's drivers too if I felt like it. Most of my SDK samples didn't work and/or crashed on exit on the 6800 with the latest official drivers.

stanlylee
03-13-2005, 01:32 AM
A driver is software too. Every piece of software has bugs.

But in general, NVidia's OpenGL driver is better.

With D3D, I think ATI's products are faster than NVidia's.

Silkut
03-13-2005, 06:32 AM
Originally posted by stanlylee:
...
With D3D, I think ATI's products are faster than NVidia's.
Hm, and with F-buffers and superbuffers, what about ATI...?

powerpad
03-28-2005, 10:14 AM
I plan on buying a Radeon 9600 (Pro). Would you recommend this? I don't have much money to spend and need a card that supports OpenGL 2.0.
Any other recommendations?

Korval
03-28-2005, 10:47 AM
I plan on buying a Radeon 9600 (Pro). Would you recommend this? I don't have much money to spend and need a card that supports OpenGL 2.0.
Define "much money". If you can spend $200, I'd get a GeForce 6600GT. Good performance, and it has the benefit of nVidia drivers. Plus it supports vertex texturing.

If you can't afford that, I imagine that 9800s are pretty cheap these days.

powerpad
03-28-2005, 11:14 AM
I read that the ATI drivers are less stable in terms of OpenGL. Well, $200 should be the maximum; I wanted to spend 100, so something like $100-150.

powerpad
03-28-2005, 11:20 AM
Well, in terms of cheap: I found the Radeon costs about $150-180, which is roughly equal to the price of a GeForce 6600GT.

I need a card that supports OpenGL 2.0 / GLSL and is not that pricey ($200 should be the maximum). I don't need the card for playing (since I don't have enough time to play), just for coding stuff.

powerpad
03-29-2005, 12:46 AM
I bought myself that 6600GT, so let's see :D

Obli
04-04-2005, 11:46 PM
Correct me if I'm wrong, I've been away from 3D programming for some time so I'm a bit out of date...

As far as I know, NV4x adds only two main pieces of functionality:
1- Half-float blending (I like this)
2- Vertex texturing (slooOOoow)

I think NV3x and Radeon 9500+ are rather the same in terms of functionality, with minor differences in supported ops per shader or stuff like that, which I don't really care too much about.
That's hardware-wise. Drivers, however, are a different game.

Is this correct?
Thank you

Relic
04-05-2005, 03:32 AM
You're wrong; read chapter 4:
http://developer.nvidia.com/object/gpu_programming_guide.html

Al1
04-08-2005, 05:45 AM
I am working on my senior project, which uses GLSL, and I needed for loops, which my 9700 Pro couldn't handle, so I bought a 6600GT in the hope that it might solve my problem. After popping it in and rebooting, I loaded up RenderMonkey and Visual Studio and, voila, I hit the ground running as far as for loops are concerned. So if anyone is hesitant because of transition concerns, here is a positive story.
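
For context, the kind of shader that trips up older hardware is a loop whose trip count isn't fixed at compile time. An illustrative sketch (not the actual project code; the 'taps' and 'tex' names are made up):

/* Illustrative sketch only: a fragment-shader loop driven by a uniform.
   SM2.0-class parts (R3xx) generally cannot run this; NV4x can. */
static const char *loop_fs =
    "uniform sampler2D tex;\n"
    "uniform int taps;\n"
    "void main() {\n"
    "    vec4 sum = vec4(0.0);\n"
    "    for (int i = 0; i < taps; ++i)\n"
    "        sum += texture2D(tex, gl_TexCoord[0].xy + vec2(float(i) * 0.01, 0.0));\n"
    "    gl_FragColor = sum / float(taps);\n"
    "}\n";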

michagl
04-08-2005, 09:27 AM
Can anyone offer any kind of forecast on how soon vertex texturing will be comparable to fragment texturing and support filtering and non-floating-point formats?

I really don't want to buy a new card until this is in the bag. Am I being realistic? Should this functionality be available in the next round of hardware? And when should the next round of nVidia hardware be expected?

If holding out is not realistic, then I may as well invest in a new system, because my old setup is quite frankly beyond dated performance-wise.

I'm also thinking about going with a dual PCIe 16x board. Is there any chance of mixing and matching relatively comparable cards on these boards... or do you have to go with identical twins?

sincerely,

michael

edit: just to reiterate, can someone please give me a best guess on vertex texturing? Is it going to be a year or more before it comes around? 6 months? Next quarter? Or what?

I'm really thinking about going with this system:

http://www.xpcgear.com/gigabytegv3d1.html

I could swap it in perfectly for the system I'm running right now. Can anyone see any flaws in this setup? The only issue I have with it is that it would not support vertex texturing, and if I wanted to add a card that did support vertex texturing, then the two cards could not run in parallel, I believe. I suppose parallel means they can draw to the same buffers. I guess it would be possible to put two different cards in the system, not using the SLI bridge, and have them draw to separate monitors with hardware acceleration -- correct?
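
For the record, vertex texturing on NV4x looks something like the sketch below in GLSL. The heightMap/displacement names are just illustrative, and on that generation only unfiltered 32-bit float texture formats are actually sampled by the vertex unit, which is part of why it is slow:

/* Sketch: displacement mapping via vertex texturing.
   GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS == 0 means no vertex texturing at all
   (NV3x, R3xx/R4xx); on NV4x expect unfiltered 32-bit float textures only. */
static const char *vtf_vs =
    "uniform sampler2D heightMap;\n"
    "void main() {\n"
    "    float h = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).x;\n"
    "    vec4 pos = gl_Vertex + vec4(gl_Normal * h, 0.0);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * pos;\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "}\n";

void check_vertex_texturing(void)
{
    GLint units = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &units);
    printf("vertex texture units: %d\n", units);
}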

def
04-10-2005, 02:10 AM
michael: SLI only works with two identical cards.
The system you mentioned is somewhat of a hack and (in my opinion) doesn't make a lot of sense, although most people agree it's a "cool" technical achievement.
Two 6600GT GPUs would get you about the performance of one 6800 Ultra, but the memory is not shared between GPUs, so you effectively only have 128 MB of GDDR3 memory.
And if you go for the 6800 Ultra card you don't need an expensive SLI mainboard, so pricing should be nearly even.

In a short while I will get my hands on a dual 6800 Ultra configuration with two 16x PCIe lanes... now that's a different story (money- and performance-wise). Can't wait to see those benchmarks!

michagl
04-10-2005, 10:33 AM
Originally posted by def:
michael: SLI only works with two identical cards.
The system you mentioned is somewhat of a hack and (in my opinion) doesn't make a lot of sense, although most people agree it's a "cool" technical achievement.
Two 6600GT GPUs would get you about the performance of one 6800 Ultra, but the memory is not shared between GPUs, so you effectively only have 128 MB of GDDR3 memory.
And if you go for the 6800 Ultra card you don't need an expensive SLI mainboard, so pricing should be nearly even.

In a short while I will get my hands on a dual 6800 Ultra configuration with two 16x PCIe lanes... now that's a different story (money- and performance-wise). Can't wait to see those benchmarks!
So you would have to program for parallel GPUs then? What is that like right now? I'm looking for a board with two 16x slots (best I can tell, the dual-GPU card only requires one slot and there is another)... I figure the 6600GT tech might be more bang for your buck than the 6800 Ultra... I like to try to find the card that is the sweet spot for money/performance.

The memory not being shared is a major issue. I had assumed that one of the reasons for putting the GPUs on the same board was so they could share memory. I figure it would be harder to synchronize them in software if they couldn't work from the same memory during the same phases if need be.

How much do 6600GTs/6800 Ultras run? I figure the Socket 939 64-bit / dual PCIe 16x board is going to come in at around at least $170. That leaves $330 for the card, which is about as much as I'm comfortable spending. I also like the idea that the board and card seem to be made for one another, and that they are both under a unified 3-year manufacturer's warranty.

If the memory isn't shared, though, to be honest that pretty much kills it for me. I can't have one CPU catering to two GPUs with separate memory banks, unless they are doing completely unrelated tasks, which means they may as well not be linked at all. Are you absolutely sure about this memory partitioning? I will look into it... I don't think there is any mention of that in the promo I linked.

Maybe it's not all that bad though... I guess you could render, say, a landscape on one card, while the other card is dedicated to actors. I would just like experience working with parallel GPUs as well. I figure this might be a cheap way to go about it. I'm accustomed to working with 128 MB of video memory. The general rule, I guess, is to just not try to do the same tasks across both GPUs.

How exactly do parallel cards work? I presume they must share the same frame buffers, etc. Do they both keep a copy of the buffers, kept up to date by the bridge? Does the bridge hold the buffers, freeing up the memory on the linked cards for other uses? If you are using parallel GPUs, where do you plug in the monitors? I assume the whole point of them being parallel is so you can draw to the same display? Is there an external linking adapter? Can both cards write across to one another's monitors? Is there a master/slave relationship?

I guess I really don't know what I'm getting into.

Any good docs out there?

sincerely,

michael

edit: the diagram here:

http://store1.yimg.com/I/extremepcgear_1840_9249054

points to the two GPUs separately, but only points to one memory bank. I presume that if it comes with two games, then the games must be able to use both GPUs, and they probably aren't programmed for parallel GPUs. And the specs make no mention of partitioned memory, or anything such as 128 MB.

Can someone offer up some incontrovertible proof that the GPUs cannot share memory, please?

michagl
04-10-2005, 11:31 AM
Does the card require both PCIe 16x slots? It looks like maybe the picture is cut away in the card shot, and it's tricky to tell from the board shot.

It looks to me like maybe the card uses both PCIe buses to double up its bus bandwidth. What does this mean:

"3D1 comes with 256MB of DDR III rated at 1.6ns which means a potential of 625MHz (1250MHz DDR). The card is rated with a core clock of 500Mhz and a memory clock of 560MHz (1120MHz DDR)."

What are the doubled-up numbers about?

I don't know... it looks a bit hairy to touch for me, but I would still like some input.

It looks like the card takes up both PCIe 16x slots... something I did not count on before. If this can move data across the bus twice as fast to a shared memory bank, I'm all for it. But if it means the driver has to issue commands to two different buses, it looks like a spoiler, and a lot of trouble to program for. I'm inclined, though, to think that it is moving memory across the bus twice as fast, because I can't see why they would include two games with it. I assume the games are not new games, because otherwise they wouldn't be free add-ons... not being new games, they must not be programmed for parallel GPUs. Not using parallel GPUs, it must mean that the card is indeed moving memory down a 2x-wide bus. In that case it would seem that the 256 MB of memory must be shared memory. Personally I can't see why the card would have been made if it isn't shared. Finally, assuming all of this is correct, it might actually be an awesome setup, depending on your needs.

I figure the GPUs can work in tandem without parallel programming (note: the game pack-ins). I also figure that they must be able to work in parallel if programmed to do so (otherwise the system may as well be unified, unless it is more affordable from the manufacturer's perspective not to unify the hardware).

Any thoughts/insight?

def
04-10-2005, 12:47 PM
It seems NVidia has not yet certified this card... has anyone actually seen this card for sale (and in stock)?
The memory is definitely not shared, so the card acts as a 128 MB card.
For my part, I consider this card a joke. :rolleyes:
DDR here stands for Double Data Rate: a technique that transfers data on both edges of the clock signal. That is what the doubled-up numbers are about. Since the RAM is rated at 1.6 ns, it has (overclocking) potential to go up to 625 MHz (1250 MHz DDR), but it ships at 560 MHz (1120 MHz DDR). The card does take up 16 PCIe lanes, but just one slot.

SLI is meant to work transparently, so you don't need to program for parallelism; you just need to follow a few rules for optimal performance. After activating SLI mode in the drivers and rebooting, the two cards act as one. Output is done via the first card; the second card generates no signal.

Check out the NVidia SLI page for more info:
http://www.slizone.com/content/slizone/learn.html

Take a look at the 6800GT. Some manufacturers sell these with higher clock rates, and they get close to the speed of the Ultra cards for less money.

michagl
04-10-2005, 03:32 PM
I thought DDR was something that started with 66 MHz memory (i.e. double 33 MHz) and then just stuck as memory went up in 33 MHz increments.

I always thought 100 MHz memory was technically "triple data rate" (and so on), but was just called DDR so as not to confuse consumers.

I guess that is incorrect then?

michagl
04-10-2005, 04:25 PM
Here is a telling article:

http://www.pcstats.com/articleview.cfm?articleID=1749

This card is really not being marketed honestly, according to this review. It appears the card effectively uses an 8x bus as well as effectively having only 128 MB of memory... and who knows what else, going by Gigabyte's numbers.

So, as I see it, SLI mode will always duplicate video memory and send it to both cards, halving your effective memory? Or does it somehow make decisions about which memory to send where? (I find this second option highly unlikely given SLI constraints)...

Can SLI do dual monitors, just out of curiosity?

Personally I'm not much of a gamer; if I had 2 video cards lying around I would probably prefer 4 monitors to SLI... I figure SLI is controlled at the system level and can't be entered by programs (or is it possible for a program to enter SLI mode??? -- that would be much more useful).

Is anyone making games for widescreen monitors? Widescreens, I think, would better approximate human vision. Is dual-screen (i.e. nView) the same as widescreen rendering? How come widescreen monitors only seem to come in the super-expensive range? Are they a status symbol? Why shouldn't there be widescreen LCD screens for double the price of normal-aspect LCD screens? At least that has been my experience thumbing through catalogs.

michagl
04-10-2005, 05:27 PM
What is the difference between the 6600 and the 6600GT, besides SLI support and the fact that the 6600GT never comes with more than 128 MB?

edit: OK, the GT has GDDR3 memory... I guess that is worth it then???

At this point I'm basically looking for a PCIe 6000-series card with dual monitor support, 256 MB, and a price tag below $350 (most bang for the buck), because I'm not really happy with any of these cards. I figure I will get a board with 2 PCIe 16x slots, and eventually get a better card and move the card I get now down to the other slot for non-SLI rendering.

Basically I'm just looking for a cheap PCIe card and hoping for at least a 2x performance boost over my current card, and more memory than the 128 MB on my current card.

Any new nVidia cards on the horizon?

sincerely,

michael

edit: nvidia.com says the 6800 series supports 'geometry instancing'... what is this in terms of hardware? I haven't checked the equivalent 6600 page to see if it supports it. Does this mean you could, say, draw more than one leaf (for instance) at a time with multiple transforms in a single render pass, hardware accelerated somehow?

Geometry instancing is listed under the 'vertex shader' heading along with displacement mapping... which I guess means 'vertex texturing'.

I still can't guess what geometry instancing means. It would be cool if you actually could use a single glDrawElements to render multiple instances with the same command... but I wouldn't guess this would fall under 'vertex shader(ing)'.

Why all the cryptic terminology? Beats me...
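
Geometry instancing means submitting many copies of the same mesh with one draw call and letting the vertex shader vary each copy by its instance index. At the time it is exposed through Direct3D 9 on SM 3.0 parts; the sketch below uses the call OpenGL added later (EXT_draw_instanced / glDrawElementsInstanced), purely as an illustration, and the leaf_* names are hypothetical:

/* Without instancing: one draw call per leaf, changing a transform each time. */
for (int i = 0; i < leaf_count; ++i) {
    glLoadMatrixf(leaf_matrix[i]);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, leaf_indices);
}

/* With geometry instancing (later exposed in GL as glDrawElementsInstanced):
   one call submits every copy, and the vertex shader picks its per-instance
   transform using the instance index. */
glDrawElementsInstanced(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT,
                        leaf_indices, leaf_count);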

michagl
04-10-2005, 07:45 PM
OK, I think I'm finished going insane over this stuff.

My final decision seems to swing on memory.

The 3D1 dual-6600GT benchmarks actually look pretty good:

http://www.pcstats.com/articleview.cfm?articleid=1749&page=4

The only intrinsic flaw it appears to have is that it is said to have a noisier fan (exactly how noisy, I have no idea)... noisy fans are a pet peeve of mine, but I'm not sure if I will let that be a deal breaker.

I don't think I will go for a normal 6600GT, because they only have 128 MB... so if I go 6600GT, I would probably go with the 3D1.

I really want 256 MB though. The 6600 comes with 256 MB (DDR)... the GT only comes with 128 MB (GDDR3). Something tells me that unless that GDDR memory is godly, I would rather go with the 256 MB.

The 6800s don't look impressive enough for their price tags.

Does the 6600GT have anything else over the 6600 besides GDDR memory and SLI support? That is what I would like to know.

The 3D1's memory is said to be 128 MB per GPU. I wonder if this means the memory banks would be a mirror image of one another... this just sounds crazy, because it seems like the memory could be shared, unless the card is just that much of a hack. I wonder if the memory might be scheduled somehow through SLI.

Then there is the issue of the 3D1's PCIe bus being split down the middle. I don't really see why this is necessary. I don't understand why the bus couldn't just be 16 lanes wide and share the memory on the card's side. The benchmarks I looked at didn't seem to reflect that this causes a performance issue... but the bus might not have been a limiting factor in the benchmarks.

So I think I've said my piece.

My options seem to be the 3D1, the 6600 (256 MB), or the 6600GT (128 MB).

The 3D1 saves me from having to make a decision on the motherboard as well, and it comes with a combined 3-year manufacturer's warranty.

If anyone has some input that might help with this, I would definitely be grateful.

Finally, I'm sorry about hijacking this thread so much. It looked like a discussion of card purchasing options, though, so maybe someone else will find this stuff useful.

Zeross
04-11-2005, 05:49 AM
Originally posted by michagl:
I don't think I will go for a normal 6600GT, because they only have 128 MB... so if I go 6600GT, I would probably go with the 3D1.
The 3D1 uses SLI technology, so even if the card has 256 MB it really is 128 MB per chip, because data has to be duplicated for each chip.

http://developer.nvidia.com/object/sli_faq.html#10


Is the frame-buffer memory shared, i.e., can I access a 512MB frame-buffer if I have two GPUs with 256MB each?

No, each GPU maintains its own frame-buffer. Rendering data, such as texture and geometry information, is duplicated.

michagl
04-11-2005, 08:43 AM
Yes, I realize the 3D1 also only has 128 MB, so it does no better in the memory department... what I was saying is, if I settle on accepting 128 MB, I would more likely go with the 3D1, because its benchmarks actually look fairly competitive, and I work the price out to be about $330 for the card, which is about what I am expecting to spend... otherwise it seems I would have to go for a $150 card or a $450 card.

It seems that the 6600GT hardware is about the most cost-efficient right now. The 3D1 is just two 6600GTs combined, so it gets that cost efficiency times two, and it still leaves room for me to put another card in a dual PCIe 16x board down the road.

Can someone confirm that it is possible, or at least planned to be possible, to have, say, two different PCIe 16x cards running on the same board with separate monitors and hardware acceleration on a per-monitor basis... that is, not running in SLI mode? That is, you couldn't have hardware acceleration for a program bridging both monitors, the way Win2k works (last I checked) if you have an AGP and a PCI card running in the same machine... only with dual PCIe 16x you could have two PCIe cards, and maybe another PCI card as well. Is this sort of setup doable? Or are these dual PCIe 16x boards just designed for SLI? I'm guessing, since the 3D1 board has two PCIe 16x slots, and I'm assuming two 3D1 cards cannot be SLI-linked, it must be conceivable to run multiple PCIe 16x cards not in SLI mode.

Thanks, BTW, for demonstrating that SLI does indeed mirror video memory.

bChambers
04-11-2005, 08:52 AM
A couple of notes:

ALL NVidia SLI cards duplicate their data (so two cards w/ 128 each will act as though they have 128 between them).
ALL NVidia SLI cards use PCIe x8 (even if they're plugged into an x16 slot).
SLI is handled by the driver, not by the app.
SLI is effectively useless for dependent texture reads (i.e. render to texture, pbuffer, FBO, etc.).
Whenever using SLI, you MUST use two identical (not just chipset, also manufacturer) cards.
SLI mode disables multiple-monitor output.

michagl
04-11-2005, 12:45 PM
ALL NVidia SLI cards duplicate their data (so two cards w/ 128 each will act as though they have 128 between them).
Check.



ALL NVidia SLI cards use PCIe x8 (even if they're plugged into an x16 slot).
So do no cards on the market right now use 16x? Or is this just during SLI mode? Is 8x equivalent to AGP 8x, or how do these numbers stack up?


SLI is handled by the driver, not by the app.
Yes, but I'm curious what sort of toll this takes on the driver?



SLI is effectively useless for dependent texture reads (i.e. render to texture, pbuffer, FBO, etc.).
Wouldn't that be dependent texture writes? Care to share a little more? So when you are writing to a pbuffer (FBO) via normal rendering dispatches, SLI falls out?


Whenever using SLI, you MUST use two identical (not just chipset, also manufacturer) cards.
Check.



SLI mode disables multiple-monitor output.
This one hurts the worst. SLI can still do dual view, right? Where the horizontal resolution is effectively doubled.

Is there any way for SLI to be enabled and disabled based on a running program? That is, even if you must enable SLI via the OS 'display properties' etc., can you set it up to recognize certain profiles... like when your computer is running a specific application... then when the app dies, SLI mode is exited?

Relic
04-11-2005, 10:26 PM
Originally posted by bChambers:
ALL NVidia SLI cards use PCIe x8 (even if they're plugged into an x16 slot).
This was not a limit of the graphics cards but of the motherboards. Most PCI-E chipsets have only 20 or 24 lanes; 2*8 lanes is the maximum configuration on those motherboards. The nForce Professional chipset has more lanes, to support 2*16.