(OT?) Kyro2 and OpenGL

Not sure where to place this question, but I would like some developers’ opinions, so I think this would be a good forum… (?)

Anyway, I am about to buy a new gfx card for my PC (I run Win2k, Win98 and Linux Mandrake). I have three criteria for the card:

  1. It should be cheap
  2. It should have decent OpenGL drivers (Linux drivers are a plus)
  3. It should have a TV-out connection

So, I started to compare prices and performances, and ended up with the following conclusion:

The Kyro2, Radeon 7500, GeForce2 Ti and GeForce4 MX440 perform roughly on par, so these are my contenders at the moment.

The Kyro2 is by far the cheapest of the four, and I believe the Radeon has the best set of OpenGL features, but it’s also the most expensive one. Personally I really like the tiling technology of the Kyro, and since it’s the cheapest, it will probably be my pick.

So, finally I get around to my question:

Does anyone have any experience with OpenGL on the Kyro2? Are the drivers stable? Are there any important features missing (not counting vertex/fragment shaders and HW T&L)? Since the Kyro can outperform the GeForce2 and Radeon 7500 in many tests, I suppose the SW T&L implementation is decent.

BTW. According to PowerVR the Kyro has eight multitexturing “units” (really one iterative unit), where “8” is determined by the maximum number of texturing units in DirectX. OpenGL supports 32 texturing units - does the Kyro support 32 units for OpenGL??? (just curious)
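For what it’s worth, you can at least ask the driver at runtime how many units it exposes to OpenGL. A minimal sketch (assumes GL_ARB_multitexture; the GL_MAX_TEXTURE_UNITS_ARB token may need glext.h with older headers):

```c
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_MAX_TEXTURE_UNITS_ARB on older headers */

/* Query how many multitexture units the current GL driver exposes.
   Must be called with a current GL context. */
void report_texture_units(void)
{
    GLint units = 1;  /* sensible default if the query isn't supported */
    glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &units);
    printf("Driver reports %d texture unit(s)\n", (int)units);
}
```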

Sell your car and get a geforce3 ti500

But seriously, I don’t know anything about the GL support on the Kyro, but you’ll miss out on all the vertex program/per-pixel lighting stuff everyone’s so crazy about.

I just bought a Kyro (original, not II) the other day to do some testing of my application. For the record, this was done with the current Kyro drivers (Build 1.04.14.0028) under Win2K.

The Kyro seemed pretty good except for two problems. The first is that it seems to have some strange problem with my orthographic projection. The same code looks perfect on GeForce2, TNT2, Rage Pro, Permedia, Voodoo2, Banshee, Riva 128, and i740. However, on the Kyro, in some places where I try to draw two quads butted up against each other, I get a one-pixel gap. I would like to think it’s my code, but since it looks perfect on every other card and under the Microsoft GDI driver…
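To make it concrete, the setup is essentially this (a stripped-down sketch, not my actual code); the two quads share the exact same x coordinate along the common edge, and the Kyro still leaves a one-pixel gap:

```c
#include <GL/gl.h>

/* Pixel-aligned orthographic projection plus two quads sharing an edge.
   Both quads use exactly x = 100.0 along the shared edge, so no card
   should leave a crack between them. */
void draw_two_quads(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, (double)width, 0.0, (double)height, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_QUADS);
        /* left quad */
        glVertex2f(  0.0f,  0.0f);
        glVertex2f(100.0f,  0.0f);
        glVertex2f(100.0f, 50.0f);
        glVertex2f(  0.0f, 50.0f);
        /* right quad, starting exactly where the left one ends */
        glVertex2f(100.0f,  0.0f);
        glVertex2f(200.0f,  0.0f);
        glVertex2f(200.0f, 50.0f);
        glVertex2f(100.0f, 50.0f);
    glEnd();
}
```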

The other thing I noticed was that when I run my app in the VC++ debugger, the app always hits a breakpoint (in the driver, I believe) just before terminating on shutdown. A small annoyance.

As far as specs go, yes, it’s nice that it supports 8 textures, but not supporting things like pixel shaders or register combiners makes 8 textures somewhat useless. However, I believe it does support dot product and maybe even EMBM, so it has its ups and downs.

Don’t buy a non-T&L card. Really. I’ve tested a Kyro2. Even though it achieved good fill rates, in polygon-heavy applications it could just barely beat a TNT2.

– Tom

I wouldn’t recommend an ATI card either. I keep testing the Radeon 8500 we have here every time they release a driver update, and even though the current drivers are much more stable, they still lock up the machines I test it on every so often (like after 30 minutes of running 3DMark2001). We just can’t use an 8500 in our simulators; even though it’s evidently a powerful card, it’s just not practical, and our customers would file lawsuits.
NVIDIA’s drivers are the state of the art at the moment, and there’s much more support (just look at all the NVIDIA-specific posts in this forum, for example).

Go with NVidia, you know you want to…

Out of those four cards, I’d choose the GF4MX, even though it’s really just a speed-bumped GF2. Follow the other advice: don’t get a non-HT&L card.

If I had a smidgen more money, I’d look at a Radeon 8500 LE – with the caveat of driver stability that has been noted elsewhere. At least they are improving at a pretty predictable rate, which is a good sign for the future.

Actually I’m quite impressed with the current OpenGL drivers for ATI cards. I’m not that interested in D3D, so I can’t speak for that.

I’ve found two circumstance-independent bugs (the non-crashing type) so far; otherwise the drivers handle all kinds of really weird stuff (my stuff) just fine. I can even get decent pixel path performance.

Keep in mind, though, that I don’t have access to an 8500-series card, but at least those are supposedly not affected by my two bugs.

Thank you all for the replies!

OK, I really couldn’t care less about vertex/fragment shaders, register combiners, etc., since I write code that should work on as many cards as possible. I could of course use vertex shaders for fancy, expensive cards (which not many consumers have anyway) and have fallbacks for other cards. But then I come to the conclusion that if I can make my app run smoothly and look nice on non-fancy cards without HW shaders, it will run at least as well on the top-of-the-line cards, even without HW shaders. Well, that’s my opinion anyway.
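In practice that just means probing the extension string at startup and picking a code path. A rough sketch of what I mean (GL_NV_vertex_program is only a stand-in for whichever shader extension the fancy path would use):

```c
#include <string.h>
#include <GL/gl.h>

/* Returns 1 if the space-separated GL extension string contains 'name'
   as a whole token. Call only with a current GL context. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    const char *p   = ext;
    size_t      len = strlen(name);

    while (p && (p = strstr(p, name)) != NULL)
    {
        /* accept only whole tokens, not prefixes of longer names */
        if ((p == ext || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

int g_use_vertex_programs = 0;

void choose_render_path(void)
{
    /* Fancy path only where the extension exists; everyone else gets
       the plain fixed-function fallback. */
    g_use_vertex_programs = has_extension("GL_NV_vertex_program");
}
```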

So buying a GeForce3/4 or Radeon 8500 just for the sake of the shaders is out of the question. Plus, I use a monitor with a max res of 1024x768, so I don’t really need the fill rate either.

I do care about vertex throughput though, so HW T&L is definitely a plus.

Originally posted by LordKronos:
As far as specs go, yes, it’s nice that it supports 8 textures, but not supporting things like pixel shaders or register combiners makes 8 textures somewhat useless. However, I believe it does support dot product and maybe even EMBM, so it has its ups and downs.

It does have a few nice features. It definitely has EMBM (see http://www.pvrdev.com/doc/f/Bump%20Mapping%20Comparison.htm ), and its blending/multitexturing features are also very nice (e.g. even in multipass rendering the framebuffer is only written once, never read). I don’t know what tex_env methods it supports though.
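For reference, the fixed-function “dot product” path that LordKronos mentions normally looks something like this; a minimal sketch using GL_ARB_texture_env_dot3 (whether the Kyro exposes exactly this extension is one of the things I’d like to find out):

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* combine/dot3 tokens on older headers */

/* Single-unit DOT3 setup: the bound texture holds a normal map and the
   per-vertex primary color holds the light vector, both packed into the
   0..1 range. The combiner then outputs N dot L per fragment. */
void setup_dot3_bump(GLuint normal_map)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, normal_map);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_DOT3_RGB_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PRIMARY_COLOR_ARB);
}
```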

Anyway, with the input from you all, I would think that the GF4 MX440 is the best choice (with the strengths being HW T&L and drivers).

Perhaps the next generation of PowerVR chips will prove more suitable for OpenGL? The current generation has the fill rate, and some other nice features. All it lacks (from my point of view) are good drivers and HW T&L.

Originally posted by marcus256:
Thank you all for the replies!

Perhaps the next generation of PowerVR chips will prove more suitable for OpenGL? The current generation has the fill rate, and some other nice features. All it lacks (from my point of view) are good drivers and HW T&L.

Heya Marcus! :wink:

Well, the Kyro III chip was expected for the last quarter of last year, and I’ve heard that PowerVR has encountered problems with chip production or something… Anyway, the Kyro III should support HW T&L, so it could be really interesting to see this technology running… (the combination of the tiling tech + HW transform could be terrific! :wink: )
BTW, I’ve got a Kyro II for my own testing, so don’t hesitate to ask about GLFW!! :wink:

Finally, buying a GeForce is a good choice; NVIDIA drivers are definitely the best! (ATI has made significant improvements with their own drivers anyway.) And it’ll be even better when you test Orky’s adventures!! ;))


I have to agree with kier. My experience with the Radeon 8500 has been pretty bad so far. With the latest public W2K drivers, the alpha buffer doesn’t even work properly, and the scale is bugged in the tex env combine extension when using blending! I know these bugs have been fixed in the beta drivers, but damn it, what am I supposed to say to people using my application? Switch to a GeForce?

Y.

Delphi3D.net has a list of the Kyro’s supported extensions:
http://www.delphi3d.net/hardware/viewreport.php?report=63

I assume those are from the Windows drivers though, so that doesn’t say anything about Linux support. I also don’t know how recent that list is, as I don’t own a Kyro.

One apparently missing feature which I consider important is cube maps.

One other important point I want to bring up about the GeForce drivers is their stability and error handling. When you develop an app, it is inevitably going to crash, and it’s going to do so without always cleaning up on its way down. One thing I have noticed is that no matter how often I crash my app, the GeForce always works fine on my next run. With other cards, after crashing a few times, the drivers fail to load correctly and I get thrown back to software rendering until I reboot. Developing on the GeForce saves me a heck of a lot of time compared to other cards, since I don’t need to keep rebooting.

Note that this is my experience under Win2K. I can’t say whether or not this holds true under 9x/ME/XP.

Just have to note that my GeForce does not work quite so well here, and I get some nice blue screens over time (the current driver gives fewer blue screens than the last one)… I dislike this…

Oh, and developing for NVIDIA boards without using their extensions means a major speed drop…

I’ve never had a blue screen with my GeForces. Are you checking glGetError() at all?
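For completeness, here is the kind of check being suggested; a minimal sketch (the function name check_gl_errors is just an example, not from anyone’s actual code):

```c
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glu.h>   /* gluErrorString */

/* glGetError reports one error flag per call, so loop until the queue
   is empty. Call this after suspect GL calls while debugging. */
void check_gl_errors(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error at %s: %s\n", where, gluErrorString(err));
}
```

During development, something like check_gl_errors("end of frame") once per frame is usually enough to catch the call that is silently failing.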