OpenGL vs Direct3D

I’m writing a persuasive paper for HS and I’m wondering what experiences people have had with OpenGL and Direct3D. Which do you personally prefer, and why? I only use OpenGL. Are there any drawbacks you have found in your preferred API? Is there a different preference when using different languages, such as Visual Basic or C? Are there any areas of improvement that you think will need to be addressed in future versions? Please tell me whatever you are willing to share. I really would value your input. Thanks.

Bryan

Hi! I’m wondering how to fuel a bush-fire. See, I’ve got several hundred drums of petrol, and this works, BUT I can only turn several hundred hectares of scrub into a smouldering mess. I was wondering… has anyone had much success with nuclear weapons?

cheers,
John

[This message has been edited by john (edited 03-13-2001).]

Here we go again!!

John, I think your nuclear weapon idea sounds like a good one. Not only do you get a lot of initial damage, but also the benefits of the whole mushroom cloud and nuclear fallout thing.

Skater_g, do you seriously think you’re going to get objective opinions on OpenGL vs. Direct3D on a board that is primarily for discussing OpenGL? Post the same thread on a Direct3D board and you’ll get opinions that are completely different.

As far as differences between the two go, many people find OpenGL easier to learn, but a lot of people claim D3D is easier…

OpenGL has the whole extension mechanism going for it, while D3D gets a new update every year (and in some cases those changes require a drastically different approach to the way you program).
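That extension mechanism is just a string you query at runtime; a minimal sketch of the usual check (the extension name here is only an example):

```cpp
// Sketch: probing the OpenGL extension string at runtime.
#include <cstring>
#include <GL/gl.h>

bool HasExtension(const char* name)
{
    // Note: a robust check matches whole space-separated tokens,
    // not substrings; this is the quick-and-dirty version.
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != 0 && std::strstr(ext, name) != 0;
}

// Usage: if (HasExtension("GL_NV_register_combiners")) { ... }
```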

Deiussum,

Thank you for your response. It is all I need for the 4th source for my paper. Hehe…I knew it wouldn’t be the best idea to post the message on an OpenGL board, but since I’m writing my paper to persuade people to use OpenGL, I figured this would be a good place to get some info…some serious info.

In case anyone is wondering, I cannot stand to use D3D. I dislike it with a passion, and I refuse to code with D3D anymore. Sorry though, I shouldn’t have even brought it up…

But if you dislike something, then you presumably know WHY you dislike it. I don’t like… oh, say, COBOL, for very well-defined reasons: the syntax structure is brain DEAD, the semantics are screwed in the head, and any compiler on a Unix system that is called “rmc” is just screaming for trouble (“rm” being the ReMove command… which is nice, but not when some of us press the space bar too early =)

But… I know why I don’t like COBOL, and I can cite reasons why it IMHO sucks… so I don’t need to ask people why it sucks when I already know why!! So, if you’ve used D3D and don’t like it (which you say you don’t…) then… why ask people for their opinions when you can form one yourself? “I think OpenGL is better than D3D for reasons X, Y and Z, with the following supporting evidence”…

feh!

cheers,
John

With OpenGL you can have per-pixel diffuse+specular lighting on GeForce2 family chipsets (GTS, MX, etc.) through the register combiners extension, as in Doom 3 (rough sketch below). I said “as in Doom 3,” but this doesn’t mean that Doom 3 will run on a GeForce2. The GeForce3 is indeed more powerful (it adds more combiners and constants, plus a new extension named texture shader), but that doesn’t mean a GeForce3 is a must (for now).
Unlike OGL, DX8 wants only the new advanced NSR from the GeForce3 (which is STUPID) and turns the GeForce2 into a GeForce1 with more fillrate.
In conclusion, in the future you may see games with per-pixel lighting running only on OGL (because of the number of GeForce2 family chipsets on the market).
One other important thing is the support for the fence/VAR extensions, which are MUCH better implemented than the DX8 vertex buffers, so you can have many more triangles rendered per frame (believe me, I’m first a DX programmer and second an OGL programmer).
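To give an idea, here is a minimal diffuse-only sketch of the register combiners (the packing of the normal map into texture unit 0 and the light vector into the vertex color is an assumption; see the NV_register_combiners spec for the full details):

```cpp
// Sketch: per-pixel diffuse (N.L) with NV_register_combiners.
// Assumes texture unit 0 holds a normal map and the per-vertex color
// carries the light vector, both range-compressed into [0,1].
// Tokens come from glext.h; the entry points must be fetched with
// wglGetProcAddress on Windows.
glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

// General combiner 0, RGB portion: A = expanded normal, B = expanded light.
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);

// abDotProduct = GL_TRUE turns A*B into dot(A, B), written to spare0.
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

// Final combiner: output = A*B with A = spare0 and B = 1, i.e. just N.L.
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
```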

P.S. The vertex programming extension (or DX8 vertex shader) is a plus of the GeForce3, but I’ve recently seen some benchmarks made with 3DMark 2001, and the emulation (which I think is done completely on the CPU, though I’m not sure about that) is almost as fast as on the GeForce3 (this extension is needed to set up the per-pixel lighting).
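For reference, a trivial vertex program under the OpenGL extension has roughly this shape (a sketch; it only transforms the position and copies the color through):

```cpp
// Sketch: loading a trivial program with NV_vertex_program.
#include <cstring>

const GLubyte prog[] =
    "!!VP1.0\n"
    "DP4 o[HPOS].x, c[0], v[OPOS];\n"   // c[0..3] track modelview-projection
    "DP4 o[HPOS].y, c[1], v[OPOS];\n"
    "DP4 o[HPOS].z, c[2], v[OPOS];\n"
    "DP4 o[HPOS].w, c[3], v[OPOS];\n"
    "MOV o[COL0], v[COL0];\n"           // pass the diffuse color through
    "END";

GLuint id;
glGenProgramsNV(1, &id);
glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                std::strlen(reinterpret_cast<const char*>(prog)), prog);

// Have c[0..3] follow the concatenated modelview/projection matrix.
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW_PROJECTION_NV,
                GL_IDENTITY_NV);
glEnable(GL_VERTEX_PROGRAM_NV);
```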

Originally posted by andreiga:
With OpenGL you can have per-pixel diffuse+specular lighting on GeForce2 family chipsets (GTS, MX, etc.) through the register combiners extension

Well, the drivers are supposed to have a way that allows D3D8 apps to get access to the register combiners. I can’t say for sure which functionality is available this way, but it’s something you should be aware of.

Originally posted by andreiga:
Unlike OGL, DX8 wants only the new advanced NSR from the GeForce3 (which is STUPID) and turns the GeForce2 into a GeForce1 with more fillrate.

Just so you know, the GeForce2 IS a GeForce1 with more fillrate. While there are architectural differences between the cards, there is no difference in functionality. Both cards support the exact same features. From a user/developer standpoint, the only difference is speed.

Originally posted by andreiga:
One other important thing is the support for the fence/VAR extensions, which are MUCH better implemented than the DX8 vertex buffers

I can’t argue this as fact, but if you use vertex buffers correctly (discard contents & no overwrite), I believe the D3D driver should be able to make use of the fences to get optimal results. Maybe Matt can confirm/refute this, but I wouldn’t be surprised if he never even comes in here (given the title of the thread).
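“Correctly” meaning a lock pattern something like this (a sketch; it assumes a buffer created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY in D3DPOOL_DEFAULT, and offset/bytes/vbSize are placeholder variables):

```cpp
// Sketch: appending to a dynamic DX8 vertex buffer without stalling.
// DISCARD hands you a fresh buffer; NOOVERWRITE promises not to touch
// data the card may still be reading, so the driver never has to sync.
BYTE* data = 0;
DWORD flags = (offset + bytes > vbSize) ? D3DLOCK_DISCARD
                                        : D3DLOCK_NOOVERWRITE;
if (flags == D3DLOCK_DISCARD)
    offset = 0;                        // wrap around into a fresh buffer
if (SUCCEEDED(vb->Lock(offset, bytes, &data, flags)))
{
    memcpy(data, vertices, bytes);     // write this batch's vertices
    vb->Unlock();
    // ... DrawPrimitive using vertices starting at 'offset' ...
    offset += bytes;
}
```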

I don’t know about DX8, but on DX7, VAR was definitely superior. I think DX8 fixes some but not all of the DX7 vertex buffer problems.
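For anyone curious, the basic VAR + fence pattern goes roughly like this (a sketch only; the allocation parameters are just the values commonly suggested for AGP memory):

```cpp
// Sketch: NV_vertex_array_range + NV_fence on Windows.
// (0, 0, 0.5) read/write frequency and priority is the usual recipe for
// AGP memory; a priority of 1.0 would request video memory instead.
void* mem = wglAllocateMemoryNV(size, 0.0f, 0.0f, 0.5f);
glVertexArrayRangeNV(size, mem);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

GLuint fence;
glGenFencesNV(1, &fence);
glSetFenceNV(fence, GL_ALL_COMPLETED_NV);  // so the first Finish returns at once

// Per frame:
glFinishFenceNV(fence);        // wait until the GPU is done reading 'mem'
// ... write new vertex data into mem, set pointers, issue draw calls ...
glSetFenceNV(fence, GL_ALL_COMPLETED_NV);  // mark the end of this frame's work
```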

  • Matt

Originally posted by LordKronos:
Just so you know, the GeForce2 IS a GeForce1 with more fillrate. While there are architectural differences between the cards, there is no difference in functionality. Both cards support the exact same features. From a user/developer standpoint, the only difference is speed.

Maybe what you say is right (since I have a GTS and never had a GeForce1 to see if the register combiners work), but don’t forget that the NSR is a new component in the GeForce2.
One more thing: in the NVIDIA OpenGL SDK they show the power of register combiners and how to use them, but ONLY on the GeForce2 and GeForce3 (the latter with an enhanced version of this extension).

And you can do per-pixel lighting on the GeForce1 and up under Direct3D by using the dot-product operation for the texture stage state… I don’t know the syntax at the moment, but it works (rough sketch below). You haven’t got the full freedom that the combiners give you, of course.
Or wasn’t there something??? I remember a hack for the TNT cards, where you could use their combiners by setting specific stages to specific states in the D3D pipeline.
Maybe you can do this with the GeForce too :)
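From memory, it’s something like this (an untested sketch; the texture contents, the light vector packed into the diffuse color, and dev being the device interface are assumptions):

```cpp
// Sketch: DOT3 diffuse with the fixed-function texture stages.
// Stage 0 dots the normal map against the light vector packed into the
// per-vertex diffuse color (both range-compressed into 0..1).
dev->SetTexture(0, normalMap);
dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

// Stage 1 modulates the N.L result with the base texture.
dev->SetTexture(1, baseMap);
dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
```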

Lars

Originally posted by andreiga:
but don’t forget that the NSR is a new component in the GeForce2.

What features does the NSR provide? I’ll tell you… NONE. It’s a marketing term. Everything (feature-wise) in the NSR was available on the GeForce 256. Now, certainly the 256 didn’t have 4 pixel pipelines (which is part of the “NSR” thing), but all the additional pipelines equate to is performance. Everything a GeForce2 can do (or at least everything NVIDIA has disclosed thus far) can be done on a GeForce 256.

Originally posted by Lars:
And you can do per-pixel lighting on the GeForce1 and up under Direct3D by using the dot-product operation for the texture stage state… I don’t know the syntax at the moment, but it works. You haven’t got the full freedom that the combiners give you, of course.
Or wasn’t there something??? I remember a hack for the TNT cards, where you could use their combiners by setting specific stages to specific states in the D3D pipeline.
Maybe you can do this with the GeForce too :)

Lars

As I said before, I don’t know how a GeForce1 works under OGL (maybe it works the same as the second generation), but the point is that under DX everything is set as render states (light vector, etc.) and you can only do diffuse per-pixel lighting (you can’t set the half-vector or specular power, which are necessary for specular lighting). I have to remember the name of the topic: OGL vs. D3D. Of course, if you have a GF3 the same lighting computations can be done under DX8 and OGL, but how many GF3s are on the market?

You can use EMBM in Direct3D to make specular highlights if you don’t have access to pixel shaders (rough sketch below).
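The general shape of the trick, loosely after the old SDK bump-mapping samples (a sketch; the highlight map, matrix values, and dev being the device interface are assumptions):

```cpp
// Sketch: EMBM "specular". Stage 0: base texture. Stage 1: du/dv bump
// map that perturbs the coordinates of stage 2. Stage 2: a pre-blurred
// highlight map standing in for the environment, added on top.
inline DWORD F2DW(float f) { return *reinterpret_cast<DWORD*>(&f); }

dev->SetTexture(0, baseMap);
dev->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);

dev->SetTexture(1, bumpMap);                  // D3DFMT_V8U8 du/dv map
dev->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_BUMPENVMAP);
dev->SetTextureStageState(1, D3DTSS_BUMPENVMAT00, F2DW(0.08f)); // 2x2
dev->SetTextureStageState(1, D3DTSS_BUMPENVMAT01, F2DW(0.0f));  // perturb
dev->SetTextureStageState(1, D3DTSS_BUMPENVMAT10, F2DW(0.0f));  // matrix
dev->SetTextureStageState(1, D3DTSS_BUMPENVMAT11, F2DW(0.08f));

dev->SetTexture(2, highlightMap);             // blurry specular blob
dev->SetTextureStageState(2, D3DTSS_COLOROP,   D3DTOP_ADD);
dev->SetTextureStageState(2, D3DTSS_COLORARG1, D3DTA_TEXTURE);
dev->SetTextureStageState(2, D3DTSS_COLORARG2, D3DTA_CURRENT);
```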

>…all the additional pipelines equate to is performance.
>Everything a GeForce2 can do (or at least everything NVIDIA
>has disclosed thus far) can be done on a GeForce 256.

marketing scam, hehe
Microsoft pasting game characters into their screenshots…

now that IS a marketing scam.
pfff.

XBox is a blunder;
yeah, I like the hardware, but personally I think it will be an enormous flop.
MS is going at this the wrong way,
and it’s going to cost them.
And I think the street is frowning on their actions as well.

-akbar A.

Originally posted by Gorg:
You can use EMBM in Direct3D to make specular highlights if you don’t have access to pixel shaders.

Yes, but EMBM is not supported on the GeForce1 and GeForce2, and again, how many GeForce3s are on the market? (I’m not considering the G400 because of its poor performance, or the ATI Radeon because of the very, very poor drivers.)

The KYRO and KYRO II (PowerVR Series 3) support EMBM.
They have the best quality/speed/price ratio on the market.

Originally posted by kaber0111:
marketing scam, hehe
Microsoft pasting game characters into their screenshots…

now that IS a marketing scam.
pfff.

Marketing scam? Well, calling it a scam is debatable. Certainly the features of the NSR weren’t new compared to a GeForce 256. However, there was a performance increase, and as someone from NVIDIA once said to me on the topic (I’m paraphrasing), “no, it’s not new, but often the increased performance can make the difference between these types of effects being feasible or not”. Then again, isn’t most marketing a scam in one form or another?

As for the XBOX screenshot “scam”, I would disagree with you, but that’s way off topic so I won’t bother.

Being in the marketing dept at NVIDIA, I would say it’s about “spin” and timing. There was a right time to push hard for NV_register_combiners as a “branded” feature, and that time was the GeForce2 launch. NSR was a much more suitable name for marketing.

NV_register_combiners was pushed to developers from the GeForce256 on, because for consumers to enjoy a feature like NSR, developers have to program to it.

Thanks -
Cass

I’m programming OpenGL now, but I’m following D3D, and might start programming in it at some point. At least in terms of simplicity, I think D3D is getting better with every version. I like the easy loading of textures (including video textures), and the idea of Effects. I’m glad that when I finally get into it, it’ll be considerably easier than when I started following it (which was with DX6).