Very impressed so far with my Radeon...

I bought a Radeon 64MB over the weekend. I am very impressed with the card so far. I was close to buying a GeForce GTS, but the high-res 32-bit benchmarks just swayed it for me. I liked the HyperZ idea too.

The ‘mega demo’ on the driver CD showcases some pretty good stuff. Environmental bump mapping seems very nice (excellent wavy water effects) and the vertex skinning on the human character was most impressive, as was her large - err - polygon count.

The thing was, while looking impressive, it was just a little too slow.

Also the ATI developer website needs some major work - some of the demos don’t show the product in the best light, the fog_coord_EXT demo especially.

Some of the links don’t work and not all the card’s extensions are documented yet - I just hope they hurry up.

The 3d texture demos are very impressive though - especially the CSG marble ball/torus/cube demo.

They’ve made a “texture_env_dot3” extension to expose dot-product bump mapping - which is nice. (The demo for it isn’t downloadable, though.)

This ‘new graphics card thing’ is a bit of a revelation for me because I’ve had a Matrox G200 for years. Now I get to play with things like s3tc and multitexturing. (Welcome finally to the club, Paul)

Has anyone got a Radeon yet?

Can’t stop to chat - I’ve got some playing to do!

Sorry to hear you made the wrong choice…

  • Matt

I got a Radeon too. I was thinking of buying a GeForce MX and waiting for the next nVidia offering, but the way my life looks right now, money will be quite low by then, so I figured I’d buy a good one now since I have the money for it.

Technically, it should last a little longer than the GTS because it has a vertex shader in hardware + 3D textures in hardware + one-shot dot product in hardware (I heard you could only do it in something like 2 or 3 passes on the GTS - tell me if I am wrong).

Anyway, it is a really nice card. I would hope that ATI would release beta drivers a little faster than they are doing right now, but I have only had issues with one game so far and all my programs worked without a hitch. Playing SoF at 1280x1024 with FSAA on and full details is quite amazing!!!

I also can’t wait for their documentation to be up to date; I really would like to play with vertex skinning and the vertex streams. (I hope the ARB will put its foot down and make an OpenGL 1.3 so that we can have a common interface and not 3 dozen extensions - e.g. a vertex shader is ‘vertex streams’ for ATI and a ‘vertex shader’ for nVidia.)

Anyway, a lot of talk for nothing. For my final words I would like to say it is finally nice to be able to do stuff that looks good and doesn’t give me 1 fps!!!

Originally posted by Gorg:
Technically, it should last a little longer than the GTS because it has a vertex shader in hardware + 3D textures in hardware + one-shot dot product in hardware (I heard you could only do it in something like 2 or 3 passes on the GTS - tell me if I am wrong).

Definitely not true. GeForce can actually do 4 dot products per pixel (A dot B and C dot D in both combiner stages, for those familiar with register combiners), whereas Radeon can only do 3. Many of our bumpmapping demos do multiple dot products in a single rendering pass and perform texture combine operations that are simply impossible on Radeon. 3 texture units doesn’t do you much good when you can’t compute something as simple as T0.a*T1 + (1-T0.a)*T2 in a single pass…
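
Roughly, this is what “A dot B and C dot D” looks like in one general combiner stage - an untested sketch, assuming the NV_register_combiners entry points are already fetched via wglGetProcAddress, a normal map on texture unit 0 and a light vector on unit 1:

    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* spare0.rgb = expand(tex0) . expand(tex1), e.g. N . L */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    /* spare1.rgb = expand(tex0) . expand(constant color 0), e.g. N . H */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV,
                      GL_CONSTANT_COLOR0_NV, GL_EXPAND_NORMAL_NV, GL_RGB);

    /* Setting abDotProduct and cdDotProduct to GL_TRUE turns both products
     * into dot products; the second combiner stage can do two more. */
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_TRUE, GL_TRUE, GL_FALSE);

The spare registers can then be combined further in the next stage or in the final combiner.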

ATI’s also missing a number of other important extensions we support, ranging from the mundane (radial fog, blend color/minmax/subtract) to extremely important (vertex array range and fence, better texture environment operations than those in EXT_texture_env_combine).
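
For anyone who hasn’t used them, the ‘mundane’ ones are tiny to set up - a rough sketch, assuming the EXT_blend_minmax, EXT_blend_subtract and EXT_blend_color entry points are present (each glBlendEquationEXT call below is an alternative mode, shown in sequence purely for illustration):

    glEnable(GL_BLEND);

    /* EXT_blend_minmax: keep the darker (or, with GL_MAX_EXT, brighter) pixel. */
    glBlendEquationEXT(GL_MIN_EXT);
    /* EXT_blend_subtract: result = src*srcFactor - dst*dstFactor. */
    glBlendEquationEXT(GL_FUNC_SUBTRACT_EXT);

    /* EXT_blend_color: blend against a constant colour instead of src/dst alpha. */
    glBlendEquationEXT(GL_FUNC_ADD_EXT);
    glBlendColorEXT(0.25f, 0.25f, 0.25f, 1.0f);
    glBlendFunc(GL_CONSTANT_COLOR_EXT, GL_ONE_MINUS_CONSTANT_COLOR_EXT);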

And if you think we’re done with new extensions for the NV10 architecture, you’re wrong – we’ve got cool extensions coming down the pipe that will work on existing HW but that we haven’t gotten around to exposing just yet.

Take any claims of HW being ‘future-proof’ with a grain of salt. Especially funny are the claims that Radeon is somehow DX8-compliant, when it’s missing major DX8 features…

  • Matt

<Peeks from under the desk>

The way I see it, all these cards have their strong points and their weak points. That’s what makes life interesting, is it not?

As for the Radeon not being able to do ‘simple’ blending ops - apart from the flashy demos, it’s not going to matter until we get a standardised shader API on the PC for OpenGL/DirectX, is it? No game developer is going to take the time and trouble to program for every different IHV’s extension set, are they?

As for dot-product bump mapping, until ATI get their demo back up, I can’t tell whether it’s easier to use NV_register_combiners for the effect or ATIX_texture_env_dot3.

Vertex array range is supported, and ‘fence’ wasn’t exposed by you guys until a few months ago. HW T&L pickup by game developers has been slow to non-existent anyway.

So to summarise:

The GeForce products (which I’m not knocking, by the way) support a bunch of blend modes that the Radeon doesn’t.
The Radeon’s got 3D textures with user-definable H/W clip planes, 4-matrix vertex blending in hardware, and EMBM, which the GeForce hasn’t.

Big deal!
Nobody except us hobbyists and corporate demo writers are using these things anyway!

Let’s just chill out before it gets out of hand.

What’s radial fog, BTW?

Paul. <ducking back under the desk>

[This message has been edited by Pauly (edited 10-22-2000).]

Originally posted by Pauly:
As for the Radeon not being able to do ‘simple’ blending ops - apart from the flashy demos, it’s not going to matter until we get a standardised shader API on the PC for OpenGL/DirectX, is it? No game developer is going to take the time and trouble to program for every different IHV’s extension set, are they?

Most aren’t, but the chances of pixel shading functionality being standardized any time in the next year or two are pretty low. The only thing that would result from such an attempt today would be a least-common-denominator extension, which no one would really want.

Personally, I expect to see competing standards on pixel shading for another year minimum, probably longer.

Originally posted by Pauly:
As for dot-product bump mapping, until ATI get their demo back up, I can’t tell whether it’s easier to use NV_register_combiners for the effect or ATIX_texture_env_dot3.

The dot3 extension is easier to use, but it’s not anywhere near as powerful. We are planning to support EXT_texture_env_dot3, but we didn’t even know the final enumerants they chose until recently (basically, until they posted it on their web page), so we were put in a situation where we couldn’t ship it. Expect to see it soon in a driver update.
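
Usage is roughly like this - a quick, untested sketch assuming an ARB_multitexture setup with the normal map on unit 0 and the light vector passed in the primary colour, both range-compressed into [0,1] (normalMapTex is a stand-in name for a previously created texture):

    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, normalMapTex);

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT,  GL_DOT3_RGB_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT,  GL_TEXTURE);           /* N */
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_EXT, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT,  GL_PRIMARY_COLOR_EXT); /* L */
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);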

Originally posted by Pauly:
Vertex array range is supported, and ‘fence’ wasn’t exposed by you guys until a few months ago. HW T&L pickup by game developers has been slow to non-existent anyway.

If Radeon supports VAR, that’s new to me. It’s not listed on their web page, and in my (short) experience in playing with one, I didn’t see support for it…
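
For anyone who wants to double-check on their own card, a whole-token test against the extension string is all it takes - just a quick sketch, nothing driver-specific:

    #include <string.h>
    #include <GL/gl.h>

    int hasExtension(const char *name)
    {
        const char *all = (const char *)glGetString(GL_EXTENSIONS);
        const char *p = all;
        size_t len = strlen(name);

        while (p && (p = strstr(p, name)) != NULL) {
            /* Accept only whole, space-delimited tokens, so e.g. searching for
             * "GL_EXT_texture" doesn't match "GL_EXT_texture3D". */
            int startsOk = (p == all) || (p[-1] == ' ');
            int endsOk   = (p[len] == ' ' || p[len] == '\0');
            if (startsOk && endsOk)
                return 1;
            p += len;
        }
        return 0;
    }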

Originally posted by Pauly:
What’s radial fog, BTW?

There are different ways that you can calculate the fog distance based on the eye coordinates of a vertex. The easiest and fastest way is to use the Z directly. More accurate is a distance computation: sqrt(x*x + y*y + z*z). Unfortunately, this is more computation and therefore slower.

D3D has let you enable or disable radial fog for some time, but unextended OGL has no such control. The NV_fog_distance extension provides for this by allowing one of three settings: eye Z, absolute value of eye Z, and radial.
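
Using it is a one-liner on top of the normal fog state - a rough sketch:

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_LINEAR);
    glFogf(GL_FOG_START, 10.0f);
    glFogf(GL_FOG_END,  100.0f);

    /* NV_fog_distance: ask for true radial (Euclidean) distance instead of
     * the usual eye-plane Z distance. */
    glFogi(GL_FOG_DISTANCE_MODE_NV, GL_EYE_RADIAL_NV);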

  • Matt

I feel there is some kind of subtle flame war going on.

I feel it is very stupid to fight over a card and its features.

Anyway, if you want my opinion, I don’t really give a damn. The only thing that matters to me is that I got a good card and the price fit right in my budget (which is tight as hell for personal reasons - I could have had a good salary, but no! I decided to do it the hard way, which I actually enjoy!!).

Just for the record, I got it for $200 less than the cheapest GTS2 card with TV-out, which I really like for watching DVD movies on a big TV.

I am not doing game programming, so I don’t need all those speed boosters (really, speed is not an issue) or cool-looking shading (again, it really doesn’t matter - right now you are probably asking yourself: is this guy really working with graphics??). No, I am not. Computer graphics are just a hobby!

So everything is fine in the world and I am absolutely satisfied with my Radeon!!

[This message has been edited by Gorg (edited 10-22-2000).]

‘Subtle flame war’, or as I prefer, ‘a sensible discussion’

The only thing that would result from such an attempt today would be a least-common-denominator extension, which no one would really want.
Yeah, I suppose this is where DirectX has the upper hand. The ARB meet (and agree on things) far too infrequently to push things along coherently - I imagine that’s why we get patchy extension coverage from different IHVs. DX8 has given hardware vendors a kind of checklist for them to measure their product against. OpenGL seems quite anarchic compared to MS’s new approach.

I figured that dot3 would be easier to use but naturally with less functionality…

(basically, until they posted it on their web page)
Well, it was an update - this is ATI we’re talking about, so I guess we should be thankful.

It supports EXT_vertex_array_range, which if I remember correctly is not the same as NV_vertex_array_range - the nVidia version does things with DMA, doesn’t it?

Finally, the Radeon supports something called ‘range-based fog’ in D3D, which I imagine is going to be radial. Maybe they’ll get round to an OGL extension for that.

While we’re swelling the OGL extension database beyond all recognition they could wrap up EMBM too - that’d be nice…

I’ve just got your vertex weighting demo working with ATI’s vertex_blend_EXT (after no end of pigging around - their glATI.h defines functions that don’t exist in the driver, nice touch), so I’m gonna have a proper play with that before I go to bed.

Before I go, is there any truth in the rumour that the latest nVidia drivers have some kind of hierarchical Z like the Radeon?

Paul.

[This message has been edited by Pauly (edited 10-22-2000).]

Originally posted by Pauly:
Yeah, I suppose this is where DirectX has the upper hand. The ARB meet (and agree on things) far too infrequently to push things along coherently - I imagine that’s why we get patchy extension coverage from different IHVs. DX8 has given hardware vendors a kind of checklist for them to measure their product against. OpenGL seems quite anarchic compared to MS’s new approach.

Without going into too much detail, yes, there are issues that make it difficult for the ARB to do this. D3D has it slightly better, but not much so. MS screws things up pretty frequently.

Originally posted by Pauly:
It supports EXT_vertex_array_range, which if I remember correctly is not the same as NV_vertex_array_range - the nVidia version does things with DMA, doesn’t it?

I’ve never heard of EXT_vertex_array_range. I really hope they didn’t just randomly rename the extension to prevent it from having “NV” in the name, since that just causes more pain for developers (more names for the same thing). The same thing happened in the past with NV_texgen_reflection. We don’t ship the EXT version of it and don’t plan to, as more names means more confusion. If EXT_v_a_r is the same as NV_v_a_r, I think it’s safe to say that we will not add EXT_v_a_r to our extension string.

Originally posted by Pauly:
Finally, the Radeon supports something called ‘range-based fog’ in D3D, which I imagine is going to be radial. Maybe they’ll get round to an OGL extension for that.

Range-based fog is the same as radial fog. There’s already an extension for radial fog in OGL, and it’s NV_fog_distance. Other people are free to implement it…

Originally posted by Pauly:
While we’re swelling the OGL extension database beyond all recognition they could wrap up EMBM too - that’d be nice…

The so-called “EMBM” (hereafter called DX6 bumpenv, since true EMBM is much more complicated, as I will explain) is not one of my favorite features…

If you consider the math behind it, it falls apart pretty quickly. Consider what would be required for true per-pixel environment-mapped bump mapping. You’d want to compute a reflection vector at each pixel, using the standard reflection equation and a normal map for the normals, and you’d want to look that up in some sort of environment map (spheremap, cubemap, whatever).
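
To be concrete, the reflection equation I have in mind is

    R = 2 * dot(N, E) * N - E

with N the per-pixel normal from the normal map and E the unit vector from the surface point toward the eye; R is what you would then look up in the environment map.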

First of all, DX6 bumpenv is limited to looking up your reflection value in a single 2D texture. That limits you to spheremapping almost right off the bat.

It allows you to use your first texture to perturb the texture coordinates in the second texture. It looks up a (du,dv) pair in the first texture, and it multiplies this pair by a 2x2 matrix. Note that the 2x2 matrix can only be specified per primitive (equivalent to outside of a Begin/End, and thus not per vertex).

If (du,dv) is zero, the surface is flat at that point and you get no offset. So, this means that you would want to compute your second texture coordinate (the one for the spheremap) at each vertex using standard spheremap texgen to get the right results. (It’s worth mentioning at this point that DirectX doesn’t support spheremap texgen, so this is slightly painful…)

Now, you still have to figure how to set up that 2x2 matrix to get accurate environment mapping. Basically, you’re doing a local linear approximation of a nonlinear function. The nonlinear function is:

perturbed spheremap coordinates = reflection_vector_to_spheremap_coords(reflection_vector(N, E))

reflection_vector is quadratic in N, and reflection_vector_to_spheremap_coords involves a square root and some other stuff.

If your surface is flat, N and the slopes of the surface, which are the values encoded in your du,dv texture, are closely related. In fact, if du,dv are scaled correctly, N = (-du, -dv, sqrt(1 - du*du - dv*dv)). If you compose all these functions, and if you are willing to assume that E is constant, i.e. the viewer is at infinity, you can express the perturbed spheremap coordinates (s,t) as a function of (du,dv). Let’s do a local linear approximation using some simple multivariable calculus:

[ s ]   [ ds/d(du)   ds/d(dv) ] [ du ]   [ s0 ]
[ t ] = [ dt/d(du)   dt/d(dv) ] [ dv ] + [ t0 ]

where (s0,t0) is simply the value of (s,t) when (du,dv) = (0,0).

Lo and behold, this is the bumpenv equation! Look up (du,dv), multiply by a 2x2 matrix, add in a base value for the texture coordinate, and use the result for your next texture coordinate lookup.
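
In code form, the whole per-texel operation is nothing more than this (a toy software model with made-up names, not how any driver actually implements it):

    /* (du,dv) come from the bump texture, m is the per-primitive 2x2 matrix,
     * and (s0,t0) is the unperturbed spheremap coordinate for this fragment. */
    typedef struct { float s, t; } TexCoord;

    TexCoord bumpenv_lookup(float du, float dv, const float m[2][2],
                            float s0, float t0)
    {
        TexCoord out;
        out.s = m[0][0] * du + m[0][1] * dv + s0;
        out.t = m[1][0] * du + m[1][1] * dv + t0;
        return out;   /* this coordinate then indexes the environment texture */
    }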

But in the process of deriving this equation, we’ve made the following simplifications:

  • The environment map is in a spheremap. (cubemaps are much nicer to use)
  • The surface is flat. (if you’ve noticed, most of the demos that use bumpenv use flat surfaces; not all, but most)
  • The reflection vector is locally linear. (an approximation and can fall apart easily)
  • The spheremap coordinates are locally linear w.r.t. the reflection vector. (another approximation)
  • The viewer is an infinite viewer. (another approximation; can sometimes work, sometimes causes trouble)

In the end, you’ve made enough approximations that, mathematically speaking, it is nothing other than a hack.

Now, here’s its saving grace: our brains are REALLY BAD at figuring out whether reflections are accurate!

But the approximations are bad enough that it restricts the cases in which you can actually use the technique, and it does produce lower-quality results. Most bumpenv hardware also has some pretty nasty restrictions on the resolution of the environment map; I think the G400 implementation requires that it must be 32x32 or smaller, or something along those lines.

Fortunately, there is hope in sight. First, although it’s slow, Cass did write a demo that can approximate true EMBM using a SW hack that involves reading back the framebuffer:
http://www.nvidia.com/marketing/developer/devrel.nsf/TechnicalDemosFrame?OpenPage

Also, with 3D hardware advancing quickly, someone’s bound to put the real per-pixel reflection vector calculation in hardware at some point. It wasn’t feasible back when bumpenv was first developed (Bitboys’ Pyramid3D part; popular web culture assigns credit for the invention to Matrox, but they only popularized it).

Originally posted by Pauly:
Before I go, is there any truth in the rumour that the latest nVidia drivers have some kind of hierarchical Z like the Radeon?

Oh, come on. You expect me to tell you what makes Detonator 3 fast?

  • Matt

Argh, stupid frames on our developer website. I posted the wrong URL.
http://www.nvidia.com/Marketing/Developer/DevRel.nsf/pages/18DFDEA7C06BD6738825694B000806A2

That should be better.

[Sure would be nice to be able to edit my messages right about now, but it keeps telling me I can’t…]

  • Matt

ACK!

It was EXT_draw_range_elements, not EXT_vertex_array_range. Sorry about the confusion, I guess I should have checked properly first.

Thanks for the clarification of the different EMBM methods too.

Paul (who can edit his posts with ease)

What I like in this case is the hobbyist/programmer (Pauly) saying that he thinks his investment in a Radeon was a good deal.

What I like even more is the zealous Nvidia people defending (and slamming) the opposition (no offense meant there Matt, I just think it’s funny).

I’ll personally take a GF2 over a Radeon especially after seeing the Radeon perform in Mercedes Benz Racing… YUK.

But anyways, let it be known to all: mcraighead is an Nvidia guy.

That should explain the “discussion”.

Oh yeah, Matt, you guys never sent me my free GF2. What’s up with that? Go bug Derek for me, would ya!

Siwko

Zealous, maybe, but there are those of us who live, eat, sleep, and breathe graphics. I am at the point now where I view an attack on our OpenGL driver as an attack on me.

And I was zealous even before I worked at NVIDIA…

  • Matt

Siwko, just an interesting remark here:

As you said, Matt works for nVidia… So does Cass and so does SDomine, who are also quite omnipresent in these forums… Sorry for the ones I missed if there are others…

The point is, how many 3DFX or ATI guys are answering our questions here??? When we have problems with an nVidia extension, we drop a line and one of the three above will pick it up… Who has tried to talk about specific 3DFX (no, just kidding…) or ATI extensions??? More than that, they even tell you “that was a bug in our driver, upgrade now!”, which can save precious time…

Of course, as they work for nVidia, they tend to defend nVidia… But, as a matter of fact, nVidia makes the best chips (I haven’t tried the Radeon yet!) so we can’t blame them for that…

I know you did not mean to offend anyone, but I thought it was worth mentioning that nVidia does quite a good job of helping developers in these forums… Now, if I missed someone from 3DFX or ATI who has tried to help here as well, I am sorry…

Regards.

Eric

Best chips?
You mean faster chips, I think, and a good quality/speed ratio.

Because as far as the technology goes, nVidia chips aren’t the best that I know of.
About the RadeOn: the reviews I’ve seen all say that you’d be better off taking a GeForce2 GTS instead, because quality is not as high with the RadeOn as with the GeForce2, and as for speed…

(That’s really something I’m not happy with, since I was willing to buy a RadeOn for its BeOS drivers.)

Originally posted by Eric:
Siwko, just an interesting remark here:

The point is, how many 3DFX or ATI guys are answering our questions here???

I know, I know… I was just poking fun at the wonderful and omnipotent (NOT impotent) people of nvidia, who actually seem to care about the livelihood of OpenGL. It was the vicious defense of their product that caught my eye…

And yes Matt, I know that every nvidia card sold means your paycheck will be there for the next pay period.

Oh, and yes, I agree that the nvidia cards are superior.

And finally, no, I haven’t explicitly seen anyone from any competing manufacturer around here. They must be scared to let people know who they work for…

In any event, what’s this I hear about nvidia considering buying out 3dfx? … hmmmmm???

Any stock tips I should know about here?

(Note: 3dfx was trading at around $4 a share, while nvidia was trading at around $70 a share.)

Siwko

PS - Bring on the nvidia goodness!

Well, that’s 3 emails I’ve sent to the ATI developer ‘relations’ in 2 days. No replies yet.

I remember I emailed nVidia on behalf of a friend with a Linux question - I had an answer in 6 hours… Yes, nVidia really do care about developers - the big fries and us small fries. That’s got to be commended.

Don’t these other companies realise that these little things have a negative effect on their company’s image?

Ho hum.

Paul
<Still happy with his Radeon>

No stock tips… giving those would be a nice way to get a one-way ticket to jail.

As for rumors, rumors are just that, rumors, and beyond that, you’ll have to ask NV PR or 3dfx PR.

Sometimes I feel we may not be as responsive as we could be to developers, though. I’ve heard stories of people emailing our developer support email address many times and never getting a reply…

In any case, if anyone discovers a real, genuine bug in our OpenGL, send it to me and I will make a real effort to try to get it fixed. I can’t make any promises, but I’ll try. For example, certain missing extensions from the extension string, as mentioned in another thread – I could have fixed that in about 5 minutes.

If I do fix something, the release schedule is beyond my control, but hopefully either (a) some moron will leak the drivers, which always seems to happen no matter what we do to prevent it, (b) we’ll release something in a timely fashion, (c) it’ll be posted on our developer web site in a timely fashion, or (d) I may be able to get authorization to send a build with the fix to someone, although this is subject to the caveat that we don’t want (a) to happen as a result of (d).

  • Matt

Okay, here’s what I’ve got to say about nvidia so far (and none of it is bad)… Enter testimonial:

I found the multimonitor OpenGL bug in the nvidia drivers. As a corporate developer roped into using industrial systems where you (for some idiotic reason) cannot disable the onboard i810 chipset piece of… anyways, I was forced into indirectly supporting multimonitor with our OpenGL app.

I saw Sebastian’s email floating around in the forums, and decided to post in the forums and email him as well.

Before the day was out, Matt had requested more info on the problem. Over the next few days Matt and Steve both helped me out, and Steve worked out a fix. Unfortunately, it didn’t make it into 6.31 (quote from Steve: 6.31 was released about 6 hours before the fix made it to them…). It’s promised in 6.45, which I’m awaiting patiently.

About 3 weeks later Sebastian replied to my email, not knowing the problem had already been fixed.

What I like about nvidia is that they actually care about having the best product out there, and people like Matt, Sebastian and Steve, who hang out here and help out the developers, prove it. You guys rock.

One final thing (no, no more requests for stock tips or anything): Matt, can you tell me how many branch offices nvidia has, and where? For example, if I were to seek employment but not want to move to California, does nvidia have any branch offices in the northeast anywhere?

Thanks!

Siwko

PS - Insider trading is only bad -IF- you get caught.

Originally posted by Siwko:
One final thing (no, no more requests for stock tips or anything): Matt, can you tell me how many branch offices nvidia has, and where? For example, if I were to seek employment but not want to move to California, does nvidia have any branch offices in the northeast anywhere?

Lots of them. I work in Boston, although that has something to do with the fact that I go to school and work full-time for NVIDIA at the same time. (How do I do it, you ask? I don’t know.)

We also have offices in North Carolina, Austin, and Arizona, plus a few people who are all over the map. And that’s not counting our European and Asian folks.

  • Matt