
View Full Version : Well Gurus, how is OpenGL 2.0 coming along?



Robbo
12-10-2002, 05:48 AM
To be frank, I'm going to be moving over to DirectX. I don't want to argue about which is best - but I do feel that the ARB should have cracked this nut a long time ago. I am now in a position where I have to start a new project and, once again, decide whether to use DX or GL.

The issue for me is so simple it's amazing. I don't feel I have the time to write classes for different code paths (VAO/VAR, plain vertex arrays, etc.); with DX I can just use a vertex buffer and be done with it. Moreover, having to cover these different code paths surely increases the chances of bugs in my programs, basically because more code = more bugs.

This isn't the only issue I have with GL. Although there are lots of libraries around for doing all sorts of stuff, I think one of the most useful things about the DX SDK is its utility modules (.x file handlers, mesh objects, etc.) - these enable me to get up and running *much* quicker than would be the case with OpenGL.

Don't get me wrong, I loved coding with GL - but I see it more as a convenience for demo coders/SOTA game writers these days. I don't need bleeding-edge tech for my applications, just a nice comfortable API.

So, finally, will GL 2 be able to give me this? Not soon, for sure. It's a real shame.

Tom Nuydens
12-10-2002, 06:17 AM
I hope you remembered to wear your flameproof suit...

-- Tom

Robbo
12-10-2002, 06:30 AM
Not wearing one - I don't need one. The way I see it, if I need to be convinced I should be using OpenGL over D3D after 2 years' solid experience with GL (and one successful product launch), then something is wrong. That's why I've posted here; perhaps I'm missing the point?

I don't need to be cross-platform - that's the only thing I see as an advantage with GL. I've weighed up the pros and cons, used both APIs, and have come down in favour of DX for now.

fritzlang
12-10-2002, 06:42 AM
I am a guru. And I say: OpenGL code is so pleasant to write and read. It is so beautifully designed. GL2 code will be an art form of its own - go read the (preliminary) specs from 3Dlabs.
Those who go with MS & DX will get depressed in the long run just by looking at their own code. Ever compared a Unix interface to a Windows one?

Don't underestimate the power of aesthetics;
GL2 apps will look better because of this.

Robbo
12-10-2002, 06:48 AM
Well, that's true, but apart from some obscure hardware setup, GL2.0 is just a series of lecture slides.

I envy the D3D gurus' vertex buffers (sigh...)

PH
12-10-2002, 06:50 AM
The state of GL right now is pretty good. As soon as the ARB_VAO spec is completed, I see no reason to switch. With the release of ARB_vp and ARB_fp, things look promising. I do agree that multiple code paths are annoying. And fritzlang's point about the way GL code looks is important to me too. I've used DX5-7 and have nothing against DX/D3D, but I simply like GL too much to switch :). I like the way it looks and the way it works. GL2 is coming in one form or another (and hopefully sooner rather than later).

[This message has been edited by PH (edited 12-10-2002).]

Coconut
12-10-2002, 07:08 AM
I am confused. You said you are moving to D3D, and then you said you want to be convinced to stick with OpenGL. You don't even know what you want....

knackered
12-10-2002, 07:14 AM
Hey, it's quite possible to support both, y'know. It's a little tricky at first, but once you know both APIs, writing an abstract renderer interface is quite straightforward - you really should be doing this anyway (it saves you rewriting stuff)... you have to track render states, don't you? You have to do this no matter which API you use, so put a nice layer between you and the rendering API to keep things tidy. Clever use of function pointers will speed things up.
You'll always have a choice then - you won't be at the mercy of Microsoft, because you can just switch your dev time over to improving the GL version of your abstracted interface. There's not all that much to drawing stuff - everything does things much the same way.

Robbo
12-10-2002, 08:01 AM
I tried this, knackered - however, I ended up having to bend it to suit GL rather than D3D. The main problem was in the storage and handling of vertex arrays. I need to choose vertex formats at runtime - writing procedural geometry which fills the vertex buffers at runtime. So, I ended up with a "vertex buffer" for GL and just the plain old D3D object (with a few bells and whistles - virtual functions falling through to the actual method on this object). I also ended up wrapping the SetVertexShader functionality too. Basically, I was wrapping D3D but implementing methods for GL - all the while adding one more layer of abstraction and hence possible confusion for the future maintenance guys.

The point I am trying to make is that, yes, it is entirely possible for me to write a wrapper for both. However, it is quicker and easier just to use D3D. I think things used to be the other way around - it was quicker to use GL, and you just couldn't do certain things with D3D. Well, it's flipped right over now.

And, no, I'm not 100% sure about this, just as I wasn't 100% sure about using GL in the first place back in 2000 when I wrote the feasibility paper for the product I have just completed. GL just feels "messy" these days - from an aesthetic point of view - compared to D3D 8.1. I know all the extensions stuff is great for demos and SOTA - but I think GL has lost it on the Janet and John stuff. You know, the bread and butter ;)

davepermen
12-10-2002, 08:43 AM
I currently use OpenGL 1.4 + ARB_fp, and I can't wait for ARB_vao. My code is not messy, and it's very future-proof (at least, I think GL will survive for quite a while).

Extensions are not useful if you want easy, clean code, yes. But once you start really using DX, you see that there is nothing different: either limit yourself to a low level of DX, or cut off a huge number of potential users. Use pixel shaders and, voilà, it's GF3+/Radeon 8500+ only - even though most of the effects _could_ be done on a GF1.
Use real pixel shaders (ps2.0) and, voilà, you lose everyone except Radeon 9700 owners currently - the same situation in which I am. GL1.4, and especially GL1.5, will be about equal to DX9 in both usage and features, so I still see no need to move.

The dynamic vertex format in DX is crap, btw. I dislike having only vertex buffers, too, etc. ARB_vp and ARB_fp are much better than what DX provides (variables, the automagically bound states, etc.). GL provides much; DX has many gaps.

Both are great, imho. But I see no reason to code for DX9 currently (I would never go to DX8... forget it :D). GL1.4/1.5 provides/will provide me the same feature set, with_OUT_ the stupid extension mess. And I will still be able to additionally implement speed boosts, quality boosts, or extra features per extension if I want - but that's premature optimisation.

DX restricts me from that.

And my code is compilable on Linux, on Mac, etc. as well - no need to stay on Windows.

And no, I don't need that. But I don't know whether, one day, I'll win a Mac somewhere, or choose Linux. And then I can just continue coding as I did before...


GL2.0 will be great..

knackered
12-10-2002, 09:49 AM
No, I have to agree with Robbo on this - GL is a real mess from a "getting a project finished before a deadline" kind of perspective.
Davepermen, you say that DX has limits too, and that you constantly have to check for certain feature support...the big difference is, however, that if a feature is available in DX, the interface is the same no matter what the vendor. I've harped on about this before, I know...but it's pretty relevant.
Davep, am I right in thinking you're not currently employed in the graphics industry? In which case, I'm sure you don't mind that you can code directly down the arb_fp/vp path - but nobody in the real world has cards that support these extensions...so it's pretty pointless talking about them in this context.
Don't take that the wrong way, davep - I've heard you talk about various graphics topics, and you really do seem to know your stuff - but you're not the most practical person when it comes to these issues. :)

jwatte
12-10-2002, 01:59 PM
You're "starting a new project"? That means you "finished an old project"? If the old project already uses OpenGL, and has working code, then re-using that code (which presumably is already debugged) is surely easier than trying to write a DX wrapper from scratch?

Also, OpenGL gives you some semblance of portability. That may or may not be important to you.

Robbo
12-10-2002, 02:23 PM
No, jwatte - that's really part of my point - if I use D3D, I will have a lot less code in the engine, period, and I won't have to worry too much about specific code-path optimisations.

The kind of classes I would inherit for a new project from my current one include math, mesh, vertex buffers, materials, etc. Most of these things exist in one form or another in the D3D SDK by default, only their versions will have been tested by thousands of programmers on many different setups, rather than just by me in the office on whatever hardware I can grab hold of during a test iteration. The only thing I will miss is GL picking, but I only used that because I didn't have time to implement a proper ray-casting mechanism (which, incidentally, I have since done).

At the end of the day, I am just waiting for someone to come up with the killer argument in favour of GL, either now or in six months' time. Don't forget, I will have to answer the question in meetings at work as to why I want to use D3D now when I recommended GL a couple of years ago! I think I can justify using D3D today based partly on my experience of using GL over that time period. I couldn't before, of course.

31337
12-10-2002, 03:51 PM
Wait, so learning DirectX will be easier for you than just using GL in this project? Man, I wish I could learn graphics APIs as fast as you can...

henryj
12-10-2002, 04:03 PM
Tell you what to do...go use DirectX.

Close the door on your way out.

SirKnight
12-10-2002, 04:33 PM
OpenGL 0wn5 j00!

;)

-SirKnight

zed
12-10-2002, 06:04 PM
um, surely supporting VAR/VAO/immediate mode etc. is a 7-minute job to code up?
in the final development time for a project this equates to roughly 0.000034% (rounded down)

IT
12-10-2002, 06:34 PM
Rumor has it ... DX9 tomorrow.

pkaler
12-10-2002, 07:06 PM
Back to the original question: "How is GL2 coming along?" Can anyone answer this? The last ARB meeting notes say there would be a final spec at the end of 2002:



The GL2 working group has selected glslang as the starting point, and is actively identifying and reducing issues from both the Cg and glslang documents. Still on track for a final spec by end of CY 2002.


Has the December meeting taken place?

davepermen
12-10-2002, 07:34 PM
Originally posted by knackered:
No, I have to agree with Robbo on this - GL is a real mess from a "getting a project finished before a deadline" kind of perspective.
Davepermen, you say that DX has limits too, and that you constantly have to check for certain feature support...the big difference is, however, that if a feature is available in DX, the interface is the same no matter what the vendor. I've harped on about this before, I know...but it's pretty relevant.
Davep, am I right in thinking you're not currently employed in the graphics industry? In which case, I'm sure you don't mind that you can code directly down the arb_fp/vp path - but nobody in the real world has cards that support these extensions...so it's pretty pointless talking about them in this context.
Don't take that the wrong way, davep - I've heard you talk about various graphics topics, and you really do seem to know your stuff - but you're not the most practical person when it comes to these issues. :)

yes, exactly :D

but two things:
a) in GL, except for ARB_vao, it's all the same for the major features (DX does not provide more than the ARB does... GL _does_, if you want to use that additionally => I use GL, so I can code both in an easy, standard way, and use the additional advanced per-hardware features if wanted/needed)
b) yes, I currently don't work in the industry. My plan is to get my stuff working over the next 2-3 years, the same timeframe Robbo had for his last project. In 2-3 years, DX9/GL1.4/1.5-capable hardware will be the standard supported by the higher-end gaming audience, and they are my target. I want to use the power of a modern CPU and GPU/VPU to get something really sweet running. When Doom 3 was started, no planned hardware could run it at really acceptable fps either...

And I can code fallbacks to older hardware if needed. But that's premature optimisation. First I want to see my stuff working; then I want to see my stuff working everywhere...

But sure, for a project due _now_, I would use DX too. Starting a project today, though, I would use GL again, as it has evolved out of the mess of fancy proprietary extensions it accumulated from NVIDIA and co. over the last few years.

davepermen
12-10-2002, 09:59 PM
I just found one thing that DX is really superior in...

render targets.

That wgl/pbuffer/render-texture construct that got created is really crap. Rendering to an offscreen target has nothing to do with the OS that GL is running on - it's just a buffer for GL to draw into. Rendering to a texture, the same. It has nothing to do with the OS; otherwise we would need wglTexImage2D and wglCopyTexSubImage2D as well...

That's the main thing I prefer in DX.

Robbo
12-10-2002, 10:54 PM
"Wait, so learning DirectX will be easier for you than just using GL in this project? Man, I wish I could learn graphics APIs as fast as you can..."

Well yes. I worked for ******* Interactive a good few years ago and learnt D3D 6 there. Moving up to D3D 8, I can get up and running very quickly.

rgpc
12-10-2002, 11:14 PM
Originally posted by Robbo:
Well yes. I worked for ******* Interactive a good few years ago and learnt D3D 6 there. Moving up to D3D 8, I can get up and running very quickly.

My first look at Dx was 6 and I had a brief look at 7 & 8. I gave up because all I could remember from 6 was irrelevant in 8...

gaby
12-10-2002, 11:59 PM
I think that many people are missing the fact that the learning time of an API is a really important factor. It's true that in the gaming industry OpenGL is not the most efficient API, because it is not oriented around feature sets but around low-level programming, historically made for use with very old SGI hardware. For basic operations, OGL and DX are both efficient, but for abstracting complex optimised features in a more unified manner, DX seems to be really much simpler.

Every guru always says that his preferred API is the best, but it seems that to build an advanced renderer which simply provides the best vertex system, bump mapping + shadows, DX9 will be faster and easier than OGL.

Isn't it?

Gaby

Robbo
12-11-2002, 12:37 AM
An interesting reply. I think we can all have a grown-up debate about this, even though in the past this subject has always sparked a religious war.

Another aspect that has not been mentioned is that, without the GL ICD mechanism being updated by Microsoft, OpenGL 2 will just be extensions in any case. I don't particularly like using extensions - they feel like hacks (I don't know why, they just do) - and I always feel I have to write a fallback just in case the extension isn't present.

Asgard
12-11-2002, 12:52 AM
I have used both APIs intensively over the last couple of years. For me, OpenGL was and is always first choice when it comes to writing a quick demo, because it is easier to set up things and also somewhat easier to remember most of the function names and enumeration names.
Also I hated all DX versions before 8.0. I didn't really like the design that much. But with 8.0 and especially now with 9.0 my opinion has changed. D3D's object-oriented design with version 9.0 is really quite nice, I must say. And the D3DX library is a very useful lib for which OpenGL has no equivalent.
Other plus points for D3D are render-to-texture (one line of code that says SetRenderTarget as opposed to about 100 lines of code in OpenGL to setup a pbuffer, render to it, bind it...) and vertex buffers.

So if I had to make a product that is definitely only ever going to run on Windows, I'd use D3D nowadays.
I'm currently working on a platform- and API-independent engine http://xengine.sourceforge.net and it has renderer libraries for OpenGL 1.3 and Direct3D 8.1. They both do exactly the same, but the Direct3D renderer has about half the lines of code. Also it took me about a week to develop the Direct3D renderer as opposed to about a month for the OpenGL renderer where I had some troubles getting render-to-texture up and running.
During the development of the engine I encountered about 20 driver bugs (both NVIDIA and ATI). Only 2 of them were bugs in the Direct3D drivers, the rest in the OpenGL drivers. Admittedly that doesn't say very much, since OpenGL is more actively developed by the companies with new extensions and all that.
Still I like OpenGL a lot and hope it gets a more object-oriented design with version 2.0. So far I like the drafts for OpenGL 2.0 a lot. But I also like D3D :)

NitroSR
12-11-2002, 05:58 AM
Just use whatever is most convenient for you at the time. The OpenGL API won't be mad at you for not using it for a project or two. The reason so many people are torn over this issue is that there is very little difference in actual output. The only difference is in how the API calls are made; the results are indistinguishable.

When it comes down to it, the MAJORITY of code in a big application will not be making API calls. It will be performing the logic and serving its purpose. This is the real meat of the project; this is the part that will show your medal.

Coriolis
12-11-2002, 08:27 AM
NitroSR: I think you mean "mettle" ;)

Asgard: One of the things I dislike most about D3D is its object-oriented structure. I find it generally easier to read code that is along the lines of "function(data)" than it is to read code like "data->function()", especially if "data" is a non-trivial expression.

Julien Cayzac
12-11-2002, 08:53 AM
woah, 28 posts already! :D
Robbo> Canst Thou teach me, Ô Great Troll Master ? \o/

Julien.

knackered
12-11-2002, 09:33 AM
Originally posted by Coriolis:
Asgard: One of the things I dislike most about D3D is its object-oriented structure. I find it generally easier to read code that is along the lines of "function(data)" than it is to read code like "data->function()", especially if "data" is a non-trivial expression.
You're not a fan of C++ then?

Coriolis
12-11-2002, 10:05 AM
knackered:

I'm a big fan of C++. I like it a lot better than C. I think the problem is that most people confuse C++ with classes.

I use templates when appropriate, and constructors / destructors are quite useful for getting stuff to always happen when you leave a function. The inline options it gives are very nice. It is convenient to be able to make vectors into a class and do normal math on them. Inheritance (if used properly) is convenient.

However, I am not a fan of rampant C++, for lack of a better term. C++ has a nasty way of hiding what is really happening. I really dislike default arguments for this reason. Operator and function overloading are also obfuscatory, though frequently convenient.

Overly object-oriented C++ also has a way of making functions go into source files based on what data they manipulate rather than based on what function they perform, which tends to distribute related functionality and group unrelated functionality.

I like classes for game code, but not much else. Classes are not really useful for rendering or physics, and tend to be horrible offenders at spreading related code into many files according to the data that gets operated on instead of the functionality being performed.

painterb
12-11-2002, 10:19 AM
Originally posted by knackered:
Hey, it's quite possible to support both, y'know. It's a little tricky at first, but once you know both APIs, writing an abstract renderer interface is quite straightforward - you really should be doing this anyway (it saves you rewriting stuff)... you have to track render states, don't you? You have to do this no matter which API you use, so put a nice layer between you and the rendering API to keep things tidy. Clever use of function pointers will speed things up.

Do you (or anyone else) maintain any kind of short demo with source to illustrate how you abstract away the two implementations behind one interface? Even if it's just header files, that would be interesting to me to see.

I haven't ventured into supporting multiple renderers (DirectX) yet, but would certainly like to keep my options open. And I also agree that, in principle, this layer of abstraction could be useful even when supporting only one rendering API.

pkaler
12-11-2002, 12:02 PM
From Mark Kilgard's OpenGLforNV30 presentation:


OpenGL exposes all NV30 features
Well beyond even what DirectX 9 exposes


So there you go: more features. But that is definitely a double-edged sword, and you can cut yourself pretty easily.

I'd target GL1.4/ARB functionality. With that you can do VPs, FPs, cubemaps, texture compression and environment stuff, and pretty soon VAO. You can create an engine with per-pixel lighting, bump maps, and shadows using those extensions. Do skinned meshes on the CPU and you'll have a hell of an engine.

Then pick and choose other extensions you want your engine to support.

Everything is starting to converge pretty well. And I'll agree that the extension hell of the past little while was not fun.

knackered
12-11-2002, 01:08 PM
Originally posted by Coriolis:
knackered:

I'm a big fan of C++. I like it a lot better than C. I think the problem is that most people confuse C++ with classes.

I use templates when appropriate, and constructors / destructors are quite useful for getting stuff to always happen when you leave a function. The inline options it gives are very nice. It is convenient to be able to make vectors into a class and do normal math on them. Inheritance (if used properly) is convenient.

However, I am not a fan of rampant C++, for lack of a better term. C++ has a nasty way of hiding what is really happening. I really dislike default arguments for this reason. Operator and function overloading are also obfuscatory, though frequently convenient.

Overly object-oriented C++ also has a way of making functions go into source files based on what data they manipulate rather than based on what function they perform, which tends to distribute related functionality and group unrelated functionality.

I like classes for game code, but not much else. Classes are not really useful for rendering or physics, and tend to be horrible offenders at spreading related code into many files according to the data that gets operated on instead of the functionality being performed.

It's funny that you consider inheritance merely convenient. Inheritance is the *point* of C++. Why do you like classes? They're just structures, so why do you single them out for praise when inheritance is not important to you?
You're not making sense.
Inheritance is beautiful - get yourself a decent class browser if you suffer from the old complaint of not knowing what functionality a class possesses without knowing its parents.
How would you elegantly implement a scene graph using C? (Have you seen the C version of Performer? It's horrible.)
How could you create an abstracted renderer in C without messy function pointers?
C++ hides things, yes - but if you know C, and you know how C++ implements the things it hides, then you should not worry.
In other words, I think it has a big place in rendering. As for physics, well - ODE (the Open Dynamics Engine) is written in C++ (granted, it exposes functionality through C interfaces, but that's just lowering itself to the lowest common denominator).
Anyway, this isn't a C vs C++ argument, so I'll leave it now.

Zak McKrakem
12-11-2002, 01:37 PM
Originally posted by PK:
Has the December meeting taken place?
Has anybody any news about that?
Is the ARB_VAO spec finished?

Thanks

Coriolis
12-11-2002, 02:10 PM
knackered: I think you misinterpreted what I wrote. It's not that I dislike classes and inheritance - I like both of them when used appropriately, but I think that people tend to use them way too often. I used to suffer from the same object-oriented extremism myself. I didn't say that you can't use classes in renderers and physics engines, just that it doesn't seem appropriate to me. I also know that the only difference between a class and a struct in C++ is that one defaults to private members and the other to public.

I don't buy the "not knowing what functionality a class possesses" argument. It seems pretty weak, because the problem can be just as big with knowing what functions you can call on a structure.

I can find the hidden stuff in C++ as well as anybody; it just makes learning a codebase well a whole lot harder and slower. Even a simple statement like "a = b + c;" can hide about four function calls. C++ tempts you to make everything into operators; I saw one source base that used the bitwise-OR operator for vector dot products, which makes no sense and has the wrong precedence.

I don't think I do scene graphs in the generic sense, or at least not anything like the way Eberly describes in his book, which is the only explanation I've seen of it. It seems overly complicated for something so simple. I prefer to stick to more basic data structures than to cram everything into a scene graph. I still use hierarchical structures for representing the data I use to draw my scenes, of course, but I don't think they fit under the moniker "scene graph", except in the loosest sense of the term. I tend to prefer several simple, obvious, easily debugged data structures that each map well to their specific use rather than a universal data structure that has everything crammed into it. I find scene graphs are nice from a pure computer science standpoint, but end up being less nice from a software engineering standpoint ;).

fritzlang
12-11-2002, 02:15 PM
I agree.
If everything is considered to be an object, and all objects live in a parent object and so on, it's confusing. I like to have object-oriented machines living in a procedural world.

fritzlang

Nakoruru
12-11-2002, 02:53 PM
PainterB (RE: Could someone show me how to wrap DX and OGL)

No one should wrap OGL and DX in a low-level interface which is barely more functional than either library alone. That is rather pointless, because it introduces a layer of complexity without adding any functionality.

What you would do is create a higher-level interface which does what you need for the game or application you are programming. The functions would look more like 'Draw(ThisGeometry, ThisStateDescription)' than like 'SetStateX(); SetStateY(); SetState(); SetStream(); Draw();'.

I use the term 'StateDescription' instead of 'Shader' because I am thinking more in terms of Quake 3's high-level pass and state-description language (which it calls shaders), rather than low-level vertex and fragment shaders. A state-description language is a lot like CgFX or RenderMonkey.

Your 'wrapper' may even be as high level as 'DrawBSP' or 'DrawModel'. It definitely should have far more than 1 or 2 OGL or DX calls per call.

Basically you want to build something that is less than a whole graphics engine. I would call it a Renderer. The renderer would be able to read abstract state descriptions and geometry descriptions and then translate these into OpenGL or Direct3D calls. It would do caching and redundant state checking.

Even if you only used one API, it would **still be worth it** to abstract OpenGL or DirectX out of your engine code this way. Good for OpenGL so you can support future extensions, good for D3D because of future versions.

I still do not understand your comment about future maintainers having a problem with this. In this system, all the OpenGL code is in one place, instead of scattered throughout the engine.

It just makes sense to separate the low-level OpenGL calls from the higher-level graphics code. (E.g., what does visibility determination have to do with OpenGL?)

dorbie
12-11-2002, 03:17 PM
Inheritance is ONE of the points of C++; the other big one is encapsulation of data with methods, which is the other reason for liking classes. Abstraction features like overloading and virtual functions are also important. Most strong OO proponents make the mistake of thinking one solution fits all problems. Like all things, it has its place.

[This message has been edited by dorbie (edited 12-11-2002).]

pkaler
12-11-2002, 03:35 PM
C++ is philosophically agnostic. Pick your idiom: functional, procedural, object-oriented, generic. Pick your tool: functors, structs, classes, inheritance, polymorphism, templates, exceptions. TMTOWTDI. And you don't take a performance hit for the features that you don't use. That was the thought process when C++ was designed.

It is a huge language and you don't have to use all of it. This becomes even more evident when the creator of C++ doesn't consider himself an expert.

Now back to my original question about this month's ARB meeting. Any word on VAR????

Coriolis
12-11-2002, 03:51 PM
I forgot to mention that function pointers are no messier than virtual functions. They are essentially equivalent, with different syntax. Function pointers are more powerful, but virtual functions have a "cleaner" syntax according to some. The only real advantage of C++ is that the compiler automatically sets up the function pointer for you. The disadvantage, for the efficiency-minded, is that it takes two pointer lookups instead of one to find the function pointer; granted, this will be a negligible difference on modern machines, so I don't care too much about that.

You can do C++ multiple inheritance and virtual methods in C by using nested structs and function pointer tables. C++ just gives it a shorthand syntax and adds protected / private members. C++ also makes them use the same namespace, and C cannot do that; this is arguably a good thing, and arguably a bad thing.

C++ compilers are a lot better than they used to be, but on the other hand Carmack said at QuakeCon that the Doom renderer ran slightly slower when converted to using the vector class than it did with the vector macros you can see in the Quake source. He also said it ran worse when built with the GCC compilers than with the MSVC compiler, because Microsoft's compiler was better at optimizing C++.

pkaler
12-11-2002, 04:00 PM
Originally posted by Coriolis:

C++ compilers are a lot better than they used to be, but on the other hand Carmack said at QuakeCon that the Doom renderer ran slightly slower when converted to using the vector class than it did with the vector macros you can see in the Quake source. He also said it ran worse when built with the GCC compilers than with the MSVC compiler, because Microsoft's compiler was better at optimizing C++.

Part of the reason is probably that Carmack is better at coding macros that are game specific than the STL implementors are at coding up generic containers.

GCC's main goal is to be cross-platform. MSVC doesn't have to worry about that. I wonder why they don't use Intel's compiler?

Coriolis
12-11-2002, 05:06 PM
PK: By vector, I meant set of 3 numbers usually used to denote a point in space or a direction, not the STL vector class.

We tried using the Intel compiler on our last project, and it took about 10 times as long as MSVC and had a ton of new warning messages and some new errors, so we decided to stick with what had worked in the past. Our code compiled without a single warning at warning level 4 on MSVC, too.

pkaler
12-11-2002, 08:24 PM
More warnings and errors are a good thing, not a bad thing.

Assuming the Intel compiler is as compliant as MSVC, it means the Intel compiler will help you find more bugs earlier.

mcraighead
12-11-2002, 09:41 PM
Originally posted by PK:
Any word on VAR????

It'll be done when it's done.

- Matt

Humus
12-11-2002, 11:36 PM
Originally posted by PK:
More warnings and errors are a good thing, not a bad thing.

Assuming the Intel compiler is as compliant as MSVC, it means the Intel compiler will help you find more bugs earlier.

Certainly. I found lots of errors in my code when I ported it to Linux. I had lots of copy-paste errors from older code where I had changed the return value of functions and forgotten to change all the return statements, etc. Like this:
bool foo() {
return NULL;
}
MSVC won't complain, but gcc gives a warning about a conversion without a cast.

Tom Nuydens
12-11-2002, 11:55 PM
I'm impressed. Robbo managed to start a "C vs. C++" war and a "GL vs. D3D" war in a single thread! Respect!

-- Tom

dorbie
12-12-2002, 12:19 AM
PK, agreed, although my points are valid particularly w.r.t. C vs C++. I'm not the one boiling the language down to a single feature or claiming that it is inherently better because of it. Horses for courses.


[This message has been edited by dorbie (edited 12-12-2002).]

Robbo
12-12-2002, 01:51 AM
Depends how you read it, Tom N. This isn't a post-reply count competition and I certainly am not a troll. Lots of interesting ideas in there if you read all of the posts.

knackered
12-12-2002, 02:02 AM
Originally posted by dorbie:
PK, agreed, although my points are valid particularly w.r.t. C vs C++. I'm not the one boiling the language down to a single feature or claiming that it is inherently better because of it. Horses for courses.


[This message has been edited by dorbie (edited 12-12-2002).]

You really don't like me, do you dorbie? I'll take your undivided attention as a compliment.
My assertion that inheritance is the point of C++ was not specific enough - I meant polymorphism and inheritance are the point... and it is inherently better because of them.
Templates and operator overloading are just periphery.

GPSnoopy
12-12-2002, 02:48 AM
I've never programmed in DirectX before so I can't really compare it against OpenGL.

However, I do understand why someone might prefer the safety of DirectX when it comes to supporting multiple pieces of hardware.
OpenGL extensions can be a real mess - and they still are.

Some people here are talking about the new ARB extensions coming along that should solve these problems, but I think they tend to forget that by the time these ARB extensions are available new proprietary extensions will be out too for future hardware.

Of course extensions allow you to do things you can't currently do under Direct3D, but you pay a price for that (more code, more bugs, less unified programming, etc...)


About Carmack's comment: if his new C++ code is running slower than his old C code, then he is doing something wrong. (He makes me think of a guy who has never driven a car before but can tell you whether he likes automatic or manual gears.)

C++ was designed for safer and easier programming styles, while being as fast as or faster than C.
If you think something is more complicated or slower in C++ than in C, then you should seriously reconsider what you're doing and how you're doing it.

MickeyMouse
12-12-2002, 03:03 AM
Originally posted by GPSnoopy:
About Carmack's comment: if his new C++ code is running slower than his old C code, then he is doing something wrong. (He makes me think of a guy who has never driven a car before but can tell you whether he likes automatic or manual gears.)

C was faster for him in that case simply because macros are fastest for things like intensive math. Nothing can be faster than a combination like:
typedef float vector_2[2];
with "methods" for the vector_2 "class" like
#define vector_2_add(a, b) a[0]+=b[0]; a[1]+=b[1]

[This message has been edited by MickeyMouse (edited 12-12-2002).]

Humus
12-12-2002, 03:54 AM
I'm certain any decent compiler will be able to take a Vector class with an overloaded += operator down to the exact same code as that macro.

Ysaneya
12-12-2002, 04:26 AM
Aaah, that old myth. C faster than C++ for math operations? Why don't you test it yourself instead of taking a stranger's word for it? You might be very surprised..

Y.

Nakoruru
12-12-2002, 04:26 AM
I guess the fact that no one has commented on my advice for building an OpenGL/D3D wrapper must mean that it is good, common-sense advice which has no place in a flame war ^_^

jonasmr
12-12-2002, 06:25 AM
Originally posted by knackered:
Templates and operator overloading are just periphery.

Gibberish!
Templates are just as important as polymorphism and inheritance.
Without them, the standard library would suck just as much as the Java standard library does - resulting in needless overhead for any generic code.

GPSnoopy
12-12-2002, 07:12 AM
Originally posted by MickeyMouse:
Nothing can be faster than a combination like:
typedef float vector_2[2];
with "methods" for vector_2 "class" like
#define vector_2_add(a, b) a[0]+=b[0]; a[1]+=b[1]


try this:




typedef float v2[2];

inline void vector_2_add(v2 & a, const v2 & b)
{
a[0] += b[0];
a[1] += b[1];
}


With this code you'll avoid any mistakes that you might make using macros: it's type-safe, ensures that const variables aren't modified, etc...

But it'll also be just as fast as your version, and maybe even faster, because you're giving the compiler more detail about the variables' properties.

I've never seen a code in C that cannot be done more easily and faster in C++.

MZ
12-12-2002, 07:21 AM
Going slightly back on topic...

davepermen:
i just found one thing that dx is really superior in..
rendertarget. (...)
I agree, but it is not only problem of simplicity of usage, but also lack of certain feature:

PK:
From Mark Kilgard's OpenGLforNV30 presentation:
>> OpenGL exposes all NV30 features
>> Well beyond even what DirectX 9 exposes
This wasn't entirely true, as GL lacks DX's ability to share depth + stencil across several render-target color buffers. In DX8 you can:

pMyD3DDevice->SetRenderTarget(/*Color-buffer*/, /*Depth-and-Stencil-buffer*/);

So you can use it to change the color buffer and retain your current depth data.

I have always missed this in GL, so I have to cope with CopyTexImage (thereby making WGL_RTT & pbuffers useless).

This problem is not specific to nv30, but when you start to use float buffers on nv30/R300, it is going to become more painful than ever (CopyTexImage won't fix it anymore):
- you can't avoid creating a separate pbuffer & using WGL_render_texture.
- you can't use CopyTexImage to copy framebuffer depth to a GL_FLOAT_R texture.

This means you will have to render depth data of entire scene at least twice. Example:
- 1st depth buffer for standard framebuffer - depth still needed when blending
- 2nd depth buffer for GL_FLOAT_R pbuffer - needed to capture scene depth in texture
- 3rd depth buffer for GL_FLOAT_RGBA pbuffer - for hi-precision shaders

So you will have 3 depth buffers, wasting both space and time (as they have to be filled with the same data).

I realize this is an artificial, interface-only issue, but it actually leaves an important HW feature (a kind of flexibility in setting up buffers) unexposed in GL.

[This message has been edited by MZ (edited 12-12-2002).]

Coriolis
12-12-2002, 10:40 AM
Granted, you can write a C++ vector class so that each and every function is just as efficient as the equivalent macro. If you gave the vector class the same functional API as the macros, there would be no difference at all in performance. However, when you make a vector class, you overload the arithmetic operators and let the compiler manage your temporaries. That is less work and usually easier to read, but the code you write is different, so the code the compiler generates is different. The difference is not so much in the compiler as in the type of code the language encourages you to write.

OneSadCookie
12-12-2002, 10:51 AM
I can't speak to MSVC, but the code GCC generates for (for example) vector math using operator overloading is very bad. I doubt MSVC does much better.

The solution, of course, isn't C or macros, but breaking encapsulation. Whether you want to do that is up to you, but we've had to in a couple of performance-critical places.

pkaler
12-12-2002, 10:55 AM
MZ:


This wasn't entirely true, as GL lacks DX' ability to share depth + stencil for several render-target color buffers

I think this must be the first non-religious, well-thought-out shortcoming of GL (aside from the extension mess). I haven't used render_texture (hmm... can't even find a GLX_ARB_render_texture) but I've heard a lot of grumbling about this extension.

GPSnoopy:


About Carmack's comment: if his new C++ code is running slower than his old C code, then he is doing something wrong. (He makes me think of a guy who's never drive a car before but who can tell you whether he likes automatic gear or manual)

I'd highly doubt that. The tools for Doom were originally written on a NextStep machine using Objective-C. But this is turning into a poo-slinger so I'm gonna shut up now.

Elixer
12-12-2002, 10:57 AM
Originally posted by mcraighead:
It'll be done when it's done.

- Matt

Hey Matt, you working on Duke Nukem Forever? ;)

I don't see what the big deal is. If you have used OpenGL in the past and you are more comfy with it, fine, use it. If you want to try a new API, then fine, do it. The same goes for C and/or C++...

Guess this forum still needs a moderator! :)

Robbo
12-12-2002, 11:21 AM
Elixer, it's a big deal only because I have to do this at work and support it, or make it maintainable, for up to 10 years into the future! So I have to argue technology choices in meetings and papers - you know, there are limits to "suck it and see" when your boss has to justify the expense of development (via programmers' time) to his boss, etc.

I guess if I was just hacking some tech demos it wouldn't really be an issue at all.

sqrt[-1]
12-12-2002, 03:15 PM
Speaking as someone who works on an engine with both Direct3D and OpenGL backends, I would have to say that it is generally a case of "the grass is always greener on the other side of the fence"
(ignoring multi-platform issues).

In DirectX:

-Why is there no easy way to tell the DirectX version?

-No scissor support? (Fixed in DX9)

-Why do some things fail silently? (ie the texvspec ps instruction on ATI 8500-9000 cards. I only really found this when implementing the instruction in OpenGL and discovering I could not do it.)

-Why is there no "easy" multi-stream support, or more flexible vertex formats like in OpenGL?

- Why does the entire code base have to be revised for every new DirectX version? This is a great source of bugs, as you may skip over code you think should work properly, but the spec may have changed slightly, so you don't get a compile error - but if the code path is executed, "bad things happen".

-Why is it so much easier to lock/crash/freeze the computer in DirectX than in OpenGL? (Perhaps because there is an extra code layer between you and the driver (Direct3D) that can introduce bugs / not handle your bad code well.)

-Why are there so many card capabilities (or caps) to test? (ie I can get a DirectX9 interface, but that is no guarantee that I even have a z-buffer, depth tests, alpha blending, etc. If I want to be sure, I have to test all these caps (and many more) to ensure I am not running on an ancient card. In OpenGL I can just go "if version < 1.2.1, tell the user to update the card or drivers".)

-Why is windowed mode in DirectX so hard? (You lose your context, vertex buffers and un-managed textures on window resize. There are also problems with "input lag" and jerkiness issues in windowed mode.)

In OpenGL (as mentioned previously):
-Why is there no standard fast vertex buffer support? (soon to be fixed I hope)

-Why is render to texture so hard?

-Why is there no standard 1st-generation "pixel shader" support? (This is no real major problem, as I can see why it is not a standard - the hardware was not really mature enough - but at least we don't have the "caps say supported (but not really)" case from DirectX.)

-Why do I have to manually "load" entry points to functions not in OpenGL 1.1? (Yes, I know it is trivial and is the fault of MS, but it still has to be done.)

But in your case, Robbo, you say this is a project that has to exist and be maintained for 10 years! Faced with that, I would easily choose OpenGL, as I think it requires less maintenance in the long run. (Try running old DX7 games (that were not mainstream) in DX8 and watch them crash and burn. MS does a good job supporting the old interfaces, but it will never be perfect.) In 10 years, if MS is still around and making DX (~DX15-DX16), do you still think it will run a DX9 app?

(I am biased, however, as OpenGL was my first API - in the days of DX5 it was an easy choice.)

mcraighead
12-12-2002, 03:30 PM
Originally posted by Elixer:
Hey Matt, you working on Duke Nukem Forever? ;)

I'm surprised no one else even seems to be prodding me more about that answer...

I could say more, but I'm guessing ARB policy restricts me from speaking publicly about unfinished work products. I will say that the ARB meeting was Tuesday/Wednesday just this week. Beyond that, you'll have to wait for the meeting minutes to come out.

- Matt

zed
12-12-2002, 05:02 PM
>>I've never seen a code in C that cannot be done more easily and faster in C++.<<

try this http://www.bagley.org/~doug/shootout/craps.shtml
c 1st in speed (from the languages)
c++ come in 4th.
u can submit your own examples of the various programs thus u can try to improve c++'s ranking (as it stands now it doesnt even get a medal :( )

personally i use c++ + dont give a flying f about c vs c++ (though the other vs's i do care)

niko
12-12-2002, 08:20 PM
Originally posted by Robbo:

Elixer, its a big deal only because I have to do this at work and support it or make it maintainable for up to 10 years into the future!

Given that '10 years of maintainability' I would definitely take OpenGL. We chose ogl for our product four years ago and couldn't be happier. Because of customer requests (and a general dislike for M$) we are moving to Linux soon, and because of ogl (and Borland) the porting will be kind of a trivial task. You know, in a ten-year time frame you may actually want to port your app to another OS. Oh well, maybe dx will be available for Linux by then anyway...

/Niko

Halcyon
12-12-2002, 09:37 PM
Ok, I'm just wondering here... why do we have to use ONLY ONE API? I mean, why not use OpenGL for things like research and whatnot, and DirectX for things like games?

According to Carmack, OpenGL is much easier to experiment in, and things like that. Sure... but this is one man's opinion, no matter HOW AWESOME he is. I mean, if Carmack likes McDonalds over Burger King, are we all going to go running to McDonalds? I sure hope not!

Make your own decisions and let others have their opinions. I mean, OpenGL has its flaws and strengths. But so does DirectX. If something were perfect, then there would be no competition. It is a matter of taste, and not everyone is going to have the same taste.

- Halcyon

Robbo
12-12-2002, 11:24 PM
Originally posted by sqrt[-1]:
In 10 years if MS is still around and making DX, ~DX15-DX16, Do you still think it will run a DX9 app?

(I am biased however as OpenGL was my first API-in the days of DX5 it was an easy choice)

Ok, sqrt[-1], complex as your arguments are (that was a pun on sqrt[-1], by the way!), you have CONVINCED me to stick with OpenGL. One of the main reasons for this is the loss of device objects on window resize! None of our apps run in fullscreen mode. Also, I guess I'm just unhappy about the ****e vertex buffer support in GL. But, using the arguments you give above, I now have a good idea of the disadvantages of using D3D, which of course I would probably have discovered halfway through development once I had decided to use it.

Thanks.

davepermen
12-12-2002, 11:38 PM
robbo, i think if your development time for those apps is some months, you'll get ARB_vao by then as well, so better prepare to move over to what you like :D

oh, and dx vb's are not really near perfect imho, but that's another topic..

yeah, with gl you make yourself quite safe that everything still works years later. just don't use nvidia exts, they've already removed 2 of them again (or more? :D). (possibly ati did as well, but i just know of nvidia on this topic..)

but i'm sure you wouldn't anyways :D

GPSnoopy
12-13-2002, 12:46 AM
For a long-run project, OpenGL is certainly more maintainable than DirectX.

The thing is that newer versions of OpenGL don't break the interface of older ones, they just add new entry points.
I mean, you can always use OpenGL 1.0 calls under OpenGL 1.1/1.2/1.3/1.4/2.0/etc...

AFAIK this isn't the case with DirectX. A new version of DirectX means a completely new interface each time, and you can probably throw your old code in the trash each time you want to use the new DirectX version...

Note that if you use only one DirectX version and stick to it you won't have any problems, but I don't think you would want to isolate yourself like this.


Zed, C is a subset of C++, which means that everything that can be done in C can be done in C++; thus C++ code can't be slower than C code, because you can use the exact same code if you want to.
Now C++ offers a lot (lot lot lot) more, and most of it is generally faster than its C counterpart (ex: std::sort() vs qsort()).

I've seen this website before and I think they're dangerous comparisons.
It is the perfect example of what you shouldn't do in C++.
Compared to the C versions of the tests, the C++ versions aren't really the greatest. It seems they force themselves not to choose the best solution in C++, and force themselves to use about every feature of C++ in 2 lines of code.

But anyway, those kind of tests don't make any sense: they're not even using the same methods in the different language versions.

Take their Sieve of Eratosthenes test, their C version has the top place in it.
On my P3-700, their C version takes 5.6 seconds to count all the primes between 0 and 16*1024*1024 and uses about 16MB of RAM.
A C++ version of the Sieve of Eratosthenes I wrote some time ago takes 0.8 seconds to do the same thing and only uses 1.8MB of RAM.
(and no, I'm not going to submit it to them)

I still maintain my point: if efficiency is the main concern and the C++ version is slower and/or more complex than the C version, then someone should rethink the C++ version.

Not that I'm trying to convince you, Zed, 'cause you already seem convinced. But I am trying to break this myth about C++ being slower.


[This message has been edited by GPSnoopy (edited 12-13-2002).]

Robert Osfield
12-13-2002, 01:02 AM
Originally posted by zed:
>>I've never seen a code in C that cannot be done more easily and faster in C++.<<

try this http://www.bagley.org/~doug/shootout/craps.shtml
c 1st in speed (from the languages)
c++ come in 4th.
u can submit your own examples of the various programs thus u can try to improve c++'s ranking (as it stands now it doesnt even get a medal :( )


C++ did come 4th in this test, no medal, but the scores are 752 for C and 743 for C++ - a 1.2% difference. I can't comment on the test codes themselves as I haven't looked at them. But you can be sure a little tweaking with the compiler and code style can change the ordering.

In this day and age of gigahertz processors, performance is much less of an issue than it used to be; one shouldn't waste time on a couple of percent here or there.

What remains an issue is tight deadlines: getting all the features done in time, and done robustly. Using your time productively is critical to hitting project milestones. Whether C++ or C is best for project success is a tough one; each to their own. Me, I prefer C++ :-)

Robert.

Tom Nuydens
12-13-2002, 01:48 AM
I really don't understand why anyone would choose one language over the other based on a potential small difference in application performance. Don't you have anything better to worry about? (Productivity comes to mind?)

-- Tom

dorbie
12-13-2002, 01:50 AM
What you're all missing is that C and C++ handling of arrays has always been difficult to optimize in compilers and you should really be using Fortran for performance critical applications.

Robo
12-13-2002, 02:33 AM
Decided to moved post to a separate thread.

[This message has been edited by Robo (edited 12-13-2002).]

arsil
12-13-2002, 02:35 AM
Oh, God. I do not know of any big company that prefers C over C++. Object-oriented programming is so much easier and cleaner... If you know what things kill performance in C++, you can write code that is much cleaner, more flexible and more reusable than the C version, at comparable speed.

About DirectX vs OpenGL: OpenGL has only one advantage: portability. OpenGL has an outdated API (no objects - because of portability at the language level). Vendors' extensions are even worse than the interface changes in DirectX - in DirectX all cards behave the same with a new interface, while in OpenGL you have to write different paths for all the major cards.

OpenGL 2 should be a revolution: a modern, object-oriented API based on high-level shaders. This is my dream, because I'm still an OpenGL man.


I hope that my English is understandable... :)

Zak McKrakem
12-13-2002, 02:37 AM
Originally posted by mcraighead:
I'm surprised no one else even seems to be prodding me more about that answer...

I could say more, but I'm guessing ARB policy restricts me from speaking publicly about unfinished work products. I will say that the ARB meeting was Tuesday/Wednesday just this week. Beyond that, you'll have to wait for the meeting minutes to come out.

- Matt

Does this mean that ARB_VAO is still unfinished work?
I can believe it.

In my opinion, one of the weak points of OpenGL is that it shows the inability of IHVs to create good, timely, common extensions without MS's whip directing them.

As I see it, most of the developers from those companies could do good things, but in the meetings they are like puppets of their marketing departments (and you know the inability of marketing departments to make intelligent decisions - even when selecting names for their GPUs :D)

How is it possible that, three years after the release of the GeForce GPU, OpenGL doesn't have a common and efficient way of transferring vertex data to the GPU?
It can be because:
- OpenGL doesn't have anybody worried about the evolution of this API into a real programming interface for current and future hardware - just a few companies working in different ways (via extensions) to solve their own small problems of accessing their hw for a few demos. This should be the ARB, but it seems to be inefficient (no api for VAO, no api for first-generation 'pixel shaders', no innovative extensions, just extensions to match D3D features...)
- IHVs don't see OpenGL as an API for developers creating mass-market content, just small demos that work on some hw but don't work on other hw with the same features.
- IHVs just see OpenGL as an extensible api where they can expose their hw features via exclusive extensions for doing exclusive demos that will never run on competitors' hw (even if it has the same or superior features).
- A mix of all the above.

For example, nVidia is a company that creates a big number of OpenGL extensions that they use in their demos, but they don't help in the evolution of the api - they just throw their extensions out via specs with no examples and wait until the arb decides on a new arb extension (with more opposition than help), and then implement it in their driver. (At least they implement them, with the exception of env_crossbar.) You only hear negative comments about GL2 because it is proposed by 3Dlabs.
ATI is a company with people working in MS offices, focused on D3D. I see them helping to create ARB extensions because without them, people were using nVidia extensions and their programs ran slower and with fewer features on ATI hw. But when you talk with them, they recommend D3D over OpenGL.

gaby
12-13-2002, 02:42 AM
Originally posted by dorbie:
What you're all missing is that C and C++ handling of arrays has always been difficult to optimize in compilers and you should really be using Fortran for performance critical applications.

Is it a joke ?

Regards,

Gaby

Robbo
12-13-2002, 03:03 AM
Originally posted by Zak McKrakem:

ATI is a company with people working in MS offices and focused in D3D. I see them helping in creating ARB extensions because without of them, people were using nVidia extensions and the programs runs slower and with less features in their hw. But when you talk with them, they recommends D3D over OpenGL.

Goddammit, you people, I just decided that sticking with GL was the thing to do, and now you go and sway me the other way again!

However, I think the killer argument is the interface mechanism of D3D - that with each new release a completely new interface is (usually) required, rather than just additions to the current interface.

Robert Osfield
12-13-2002, 04:30 AM
Originally posted by Robbo:

Elixer, it's a big deal only because I have to do this at work and support it, or make it maintainable, for up to 10 years into the future! So I have to argue technology choices in meetings and papers - you know, there are limits to "suck it and see" when your boss has to justify the expense of development (via programmers' time) to his boss, etc.

I guess if I was just hacking some tech demos it wouldn't really be an issue at all.

For me, justifying OpenGL over Direct-3D to managers is simple. You start with the word:

"Success"

The software we write, we write to be a "Success". The consequences of software that is successful are that:

1) Lots of people want to use it
2) Lots of additional features will be requested
3) People will want to use it over a long period

A consequence of 1) is that many of the people wanting to use it will be on platforms other than the one you originally developed it for. The customer is always right, so... you'll need to port one day if your software is to be as successful as your customers want it to be.

A consequence of 2) is that new hardware features will need to be handled, but without affecting compatibility with customers on older hardware. This means you need an extensible graphics API.

A consequence of 3) is that your APIs will still have to be supported over a long period by the latest hardware and OSs, yet at the same time you'll still have to support users with older systems.

A further consequence of 3) is that both 1) and 2) become more likely, and crucial to continued success.

So which graphics API can do all this for you? Well, I can only guess and hope where we'll be in 10 years' time, but look at the track record: OpenGL has been around for 10 years, has been extended successfully over those 10 years, and continues to be at the forefront (see the NV30 announcements w.r.t. OGL vs DX). It has been successfully ported to many, many platforms. It is supported by many different vendors.

From points 1-3, OpenGL itself seems like a pretty successful product. Reflect upon it: it has been a great API to depend upon over the past ten years. For me, a track record is an important indication of future success.


The only reason for the existence of Direct-3D is to make it easy, comfortable and irresistible to develop just for the latest Windows releases. Lock the developers in, and you lock the end users in.
It has a track record of not porting across to other platforms. It has a track record of incompatibility from version to version... It is developed by a company that has a track record of abusing its position. Direct-3D is just another tool to further its control and manipulation. Microsoft has no real interest in computer graphics.


We all have a role in the future success of OpenGL: use it, support it, promote the benefits. It provides choices for us as developers, and it provides choices for end users. It provides the vehicle for our success.

Robert.

Keermalec
12-13-2002, 05:02 AM
sqrt[-1], thank you for that list of technical reasons for sticking with OGL; it is rare to find sound, knowledgeable advice of this sort. Robbo, I think the choice of DX vs OGL goes a little beyond mere technicalities, however.

I was once complaining to an engineer about how everything I had learned about MS DOS (like setting up memory in autoexec.bat) was now perfectly obsolete in Windows, and he surprised me by saying that everything he had learned 10 years ago in Unix was still perfectly relevant.

It really made me think: Microsoft is constantly changing its systems (whether Windows or DirectX) with every other new version, and programmers have to learn a good part of it all over again. They do this for one of two possible reasons:

1. They are disorganised and are incapable of following a single programming line beyond a few years.
2. They do it purposefully to create business for themselves and those who pay to be a part of their developer network.

The actual reason is probably a combination of the two above. What reason 2 implies for us, however is this:

MS will make DX as good as it can as long as OpenGL is a valid competitor. Once OpenGL is out of the way, though (IF that ever happens), you can be sure DX will revert to the same marketing logic as other MS products, ie.-

1. It will have to be paid for in one way or another by developers.
2. It will change frequently and require constant re-training.
3. It will be efficient only in those domains which have marketing value for MS (ie gaming, to the detriment of non-marketable research, for example).

Even if DX is better than OGL at some point (and that still has to be proved), sticking with OGL and making it better is the only way to make sure you'll always have a choice.

[This message has been edited by Keermalec (edited 12-13-2002).]

*Aaron*
12-13-2002, 05:39 AM
Hey, Robert, successful software, such as, say, a scene graph, usually has documentation (hint, hint).

Why complain about nVidia's 1000 proprietary extensions? Just don't use them. What better way to encourage IHVs to support only ARB and EXT extensions than to use only ARB and EXT extensions in your software?

Robbo
12-13-2002, 05:42 AM
You are right Aaron, my current openGL implementation only uses ARB extensions (multitexture and CVAs) and I would never consider using VAR\VAO\RC etc. extensions at all.

knackered
12-13-2002, 06:24 AM
Why not?
C++ can help you out here. Simplified greatly, we have:-




class vertexbuffer
{
public:
virtual ~vertexbuffer() {}
virtual int lock() { /*do nothing*/ return 0; }
virtual int bind();
virtual int unlock() { /*do nothing*/ return 0; }

unsigned char* data;
unsigned int size;
};

class vertexbufferNV : public vertexbuffer
{
public:
virtual int lock() { /*allocate VAR and memcpy from system to agp/video if data dirty*/ return 0; }
virtual int bind();
virtual int unlock() { /*do nothing*/ return 0; }

unsigned char* VARdata;
int dirty;
};

class vertexbufferd3d : public vertexbuffer
{
public:
virtual int lock() { /*lock yer d3d vertbuffer object*/ return 0; }
virtual int bind();
virtual int unlock() { /*unlock yer d3d vertbuffer object*/ return 0; }

LPDIRECT3DVERTEXBUFFER8 d3dbuffer;
};

vertexbuffer* glrenderer::createvertexbuffer()
{
if (extsupported("NV_vertex_array_range"))
return new vertexbufferNV;
else
return new vertexbuffer;
}

vertexbuffer* d3drenderer::createvertexbuffer()
{
return new vertexbufferd3d;
}


That's reasonable, isn't it?
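To make the idea concrete, here is a minimal, self-contained sketch of how the calling code stays path-agnostic under this design. The extension check, the "fast memory" upload, and the buffer classes are all stand-ins (no real GL or D3D calls are made); it only illustrates the lock/bind/unlock pattern:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Base class: plain vertex arrays served straight from system memory.
class vertexbuffer {
public:
    virtual ~vertexbuffer() {}
    virtual int lock()   { return 0; }               // nothing to prepare
    virtual int bind()   { bound = true; return 0; } // stand-in for gl*Pointer calls
    virtual int unlock() { return 0; }

    std::vector<unsigned char> data;
    bool bound = false;
};

// Hypothetical VAR-style path: lock() uploads dirty data to "fast" memory.
class vertexbufferNV : public vertexbuffer {
public:
    int lock() override {
        if (dirty) { fastcopy = data; dirty = false; } // stand-in for the AGP memcpy
        return 0;
    }
    std::vector<unsigned char> fastcopy;
    bool dirty = true;
};

// Factory: the codepath is chosen once, at creation time.
std::unique_ptr<vertexbuffer> createvertexbuffer(bool hasVAR) {
    if (hasVAR) return std::make_unique<vertexbufferNV>();
    return std::make_unique<vertexbuffer>();
}

// The rendering code never knows (or cares) which path it got.
void draw(vertexbuffer& vb) {
    vb.lock();
    vb.bind();
    vb.unlock();
}
```

The point is that `draw()` is the only code the renderer proper ever sees; the VAR/VA decision is paid for once in the factory, not on every frame.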

Robbo
12-13-2002, 06:42 AM
Bit more complex than that I would say. Your vertex buffer really needs to be created by a "vertex buffer factory" - because in order to create a D3D vertex buffer, you need a pointer to the device interface. You also need to destroy\re-create the vertex buffers you have handed out when they are "lost" (as they will be if they are "default" access vertex buffers when, for example, a screen mode change occurs); you cannot rely on the device interface pointer staying the same throughout any given session.

Oh, I dunno!

[Edit] Ah sorry, I see your d3drenderer (or instance of a renderer) is in fact a kind of vertex buffer factory already!



[This message has been edited by Robbo (edited 12-13-2002).]

Robert Osfield
12-13-2002, 07:06 AM
Originally posted by *Aaron*:
Hey, Robert, successfull software, such as, say, a scene graph, usually has documentation (hint, hint).


Guilty as charged :-)

Still, we're getting there little by little. In a couple of years' time, the lack of thorough docs will just be an old-timer's memory.

Robert.

MZ
12-13-2002, 07:28 AM
sqrt[-1], I liked your "Whys", and I'd like to add a few:

- Why is the DX documentation always so imprecise and incomplete (in contrast to the GL extension specs) in every release, so that I have to rely on studying SDK examples or my own trial-and-error apps to test some subtle features?

- Why are DX texture parameters (like filtering modes) not part of the texture-object state? There is not much advantage to the alleged "object oriented" style of the API when you end up writing wrapper classes anyway.

- Why, since the DX7 SDK, has half of the API (D3DX + some global data of core DX) shipped in .libs (which are linkable only with MS compilers), instead of .h + .dll like all the rest?

- What is the benefit of using GUIDs in such a low-level API?

- Why are there 'reserved' function parameters and struct members when the API is redesigned every year?

- Why did the DX designers choose such a nasty naming convention for typedefs & constants?
(like LPLPD3DREADINGTHISISBADFOREYES)

knackered
12-13-2002, 07:54 AM
Originally posted by Robbo:
Bit more complex than that I would say. Your vertex buffer really needs to be created by a "vertex buffer factory" - because in order to create a D3D vertex buffer, you need a pointer to the device interface. You also need to destroy\re-create the vertex buffers you have handed out when they are "lost" (as they will be if they are "default" access vertex buffers when, for example, a screen mode change occurs); you cannot rely on the device interface pointer staying the same throughout any given session.

Oh, I dunno!

[Edit] Ah sorry, I see your d3drenderer (or instance of a renderer) is in fact a kind of vertex buffer factory already!

[This message has been edited by Robbo (edited 12-13-2002).]

You deal with lost vertex buffers in the 'lock()' method... do whatever you want - it's basically a way of telling your class that you're about to use the buffer.
It really isn't any more complicated than that.
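Folding lost-buffer recovery into lock() looks roughly like this. This is a self-contained toy of the pattern only: `device`, `reset()`, and the generation counter stand in for the real D3D8 machinery (TestCooperativeLevel, Reset, CreateVertexBuffer and a refill), none of which is called here:

```cpp
#include <cassert>

// Toy stand-in for a D3D device that can lose its resources on a mode switch.
struct device {
    bool lost = false;
    int generation = 0;                  // bumps every time resources are recreated
    void reset() { lost = false; ++generation; }
};

class vertexbufferd3d {
public:
    explicit vertexbufferd3d(device& d) : dev(d), generation(d.generation) {}

    // lock() is the "I'm about to use this buffer" hook: recover if lost.
    int lock() {
        if (dev.lost) dev.reset();           // stand-in for handling a device Reset()
        if (generation != dev.generation) {  // our buffer died with the old device
            generation = dev.generation;     // stand-in for CreateVertexBuffer + refill
            ++recreations;
        }
        return 0;
    }
    int unlock() { return 0; }

    device& dev;
    int generation;
    int recreations = 0;                     // instrumentation for the sketch only
};
```

Because every user of the buffer goes through lock() first, the recreate-on-loss logic lives in exactly one place, which is knackered's point.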

Elixer
12-13-2002, 12:10 PM
Originally posted by mcraighead:
I'm surprised no one else even seems to be prodding me more about that answer...

I could say more, but I'm guessing ARB policy restricts me from speaking publicly about unfinished work products. I will say that the ARB meeting was Tuesday/Wednesday just this week. Beyond that, you'll have to wait for the meeting minutes to come out.

- Matt

http://www.opengl.org/discussion_boards/ubb/smile.gif The waiting game continues.

Just wondering, when you go to these meetings, are they as *cough* exciting as they appear to be going by what the past minutes say, or do they edit out all the good flames? http://www.opengl.org/discussion_boards/ubb/smile.gif

OH, and to keep the trolls happy, I will get back on topic.

You can't really predict the future, and supporting something for 10 years (which is a VERY long time) means you have to list the +'s and -'s of each API. In the end, you will most likely find that supporting both is the best choice in the long run, even though it is more work in the short run. http://www.opengl.org/discussion_boards/ubb/smile.gif

mcraighead
12-13-2002, 12:27 PM
Originally posted by Zak McKrakem:
(At least they implement them, with the exception of env_crossbar).

I've told that story several times on this board. You can search to read it. Short answer is, we *can't* implement that extension.


Originally posted by Zak McKrakem:
You only hear negative comments about GL2 because it is proposed by 3Dlabs.

Oh, come on, please spare me the conspiracy theories...

My (personal) issues with the GL2 proposals have nothing at all to do with the company that put them together, and everything to do with their content.

- Matt

IT
12-14-2002, 04:22 AM
Originally posted by IT:
Rumor has it ... DX9 tomorrow.

Definitely a rumor... tomorrow never comes, heh?

zed
12-14-2002, 08:41 AM
>>I still maintain my point: if efficiency is the main concern and that the C++ version is slower or/and more complex than the C version, then someone should rethink the C++ version.<<

you can submit your own C++ (or other) versions (I think I mentioned that)

>>Not that I'm trying to convince you, Zed, 'cause you already seem convinced. But I'm trying to break this myth about C++ being slower.<<

the only thing I'm convinced about is that in theory C, C++ and (compiled) BASIC (i.e. any compiled language) should ultimately run at the same speed; after all, the machine code should ultimately be the same from all versions. Of course, in reality this usually doesn't happen.

knackered
12-14-2002, 08:53 AM
As has been pointed out, c++ code is precompiled into c code which is then compiled into machine code. So C++ is actually C by the time it hits the compiler. Amen.

GPSnoopy
12-14-2002, 09:38 AM
Zed, the reason I won't submit my version of the prime number test is that it uses a class I developed for a company, so I can't release it.

The other reason is that these comparisons are purely amateur stuff (even the author of the website says so).
Also, this website is now abandoned (so says the news).

However, I just looked at their Heapsort test.

Here is their C version (their C++ version is identical!), which is again at the top in this test: http://www.bagley.org/~doug/shootout/bench/heapsort/heapsort.gcc

and here is a C++ version I wrote in 2 minutes: http://users.pandora.be/tfautre/Files/Heapsort.cpp

Both sort 1 million doubles at exactly the same speed on my PC (using VC 7.0).
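For reference (this is not GPSnoopy's version, which isn't reproduced here), an idiomatic C++ heapsort can be written directly on top of the standard heap algorithms, which is a minimal illustration of why the C++ version needn't be slower than the C one:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Heapsort via the standard library: build a max-heap, then repeatedly pop
// the largest element to the back. The comparisons are inlined by the
// compiler, so this costs the same as a hand-written C loop.
void heapsort(std::vector<double>& v) {
    std::make_heap(v.begin(), v.end());
    std::sort_heap(v.begin(), v.end());  // leaves v in ascending order
}
```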


knackered, I really doubt modern C++ compilers still do that, even if some of them might.
C++-to-C conversion was mainly used by old systems, back when "pure" C++ compilers weren't too good.


PS: I wonder what version of GCC they are using that makes the C++ version, which is line for line identical to the C version, slower.

[This message has been edited by GPSnoopy (edited 12-14-2002).]

V-man
12-14-2002, 12:52 PM
Damn! What a mixed-up thread.

Everything from DX vs GL, to GL2, to C vs C++.
It's gonna take a while to sort through this crap!

V-man

JD
12-14-2002, 02:17 PM
Knackered, that was done with C++ front ends back when C compilers ruled the universe http://www.opengl.org/discussion_boards/ubb/smile.gif The front end took C++ code and turned it into C code; the back end (the C compiler) then turned the C code into asm. C compilers were faster than C++ compilers back then. Nowadays, MSVC++ 6/7 turns C++ code directly into asm. To get the fastest code from a C++ compiler you must 'hint' it.

I don't think the C++ language was designed for speed, but rather to handle large programs: it gives humans a structuring framework so they can make sense of their code and not produce spaghetti, as was common in C before people adopted the 'C with classes' approach.

On the D3D vs. GL issue: Robbo, I think in the future you should look for an unbiased message board for answers, because the majority here have OpenGL experience only, as the posts prove. You should also write down your app requirements, then look at both APIs, and then decide. This is what I did, and it worked for me.

zed
12-14-2002, 03:07 PM
GPSnoopy, your version doesn't compile, e.g.

main.cpp:24: ISO C++ forbids declaration of `doublem_max' with no type
main.cpp: In method `gen_random::gen_random (double)':
main.cpp:12: class `gen_random' does not have any field named `m_max'
..

anyways that's my last word on the topic; C vs C++ is boring (I haven't written anything in C since about 1997 anyway, so I ain't an expert; even in C++ I'm not an expert, and I use it every day!)
now d3d vs opengl on the other hand ...

OldMan
12-15-2002, 01:39 AM
For sure C++ is no slower than C, at least with a good compiler. I work in a lab where we develop dedicated high-performance operating systems... and we do it in C++, not a single cycle slower than our competitors who use C and asm.

GPSnoopy
12-15-2002, 04:29 AM
Zed, sounds like you didn't get the file correctly.
The error you're reporting sounds like the tab was lost in the following line:
"const double <tab> m_max;".

OldMan, I couldn't agree more. http://www.opengl.org/discussion_boards/ubb/smile.gif

knackered
12-15-2002, 05:24 AM
Bloody hell, I'm out of date. Knowledge gained from my unix days....

Halcyon
12-16-2002, 08:43 AM
In my opinion, C++ is way better just because of the added functionality (classes, templates, operator overloading, memory allocation via operator new), and IF there were a slowdown, the advantages of C++ would still outweigh the disadvantages.

JONSKI
12-16-2002, 11:00 PM
No one's twisting your arm, people. Just use the API, language, platform, etc. that YOU want to use.

Anyway, let's get back to the OGL2/DX9 discussion, like the subject indicates.

richardve
12-17-2002, 12:08 AM
Originally posted by JONSKI:
Anyways, let's get back on the OGL2/DX9 discussion, like the subject indicates.

I don't see any 'DX9' in the subject, do you?