OpenGL: high or low level?

OpenGL was designed to let applications be written easily and look the same on various hardware. It is thus a high-level library: it hides the hardware. An application has no knowledge of whether a feature is hardware accelerated or not.

Direct3D, on the other hand, is part of DirectX, which is an interface for accessing hardware. Its design is thus totally different: it is conceived as a low-level layer for programming 3D hardware.

While it is very convenient to write an application independently of the hardware platform it will run on, there is an important issue: performance. What if filtering is not hardware accelerated? The application will be very slow if bilinear filtering is used. Of course, bilinear filtering could be disabled in the application preferences. But wouldn’t it be easier if the application could do it automatically, by having access to hardware information?

Current automatic hardware management is great and should be kept. What I suggest is giving applications access to hardware information, so they can choose the configuration that achieves the best performance.

Another possibility is to let the application specify what may be automatically simplified if the hardware is not powerful enough (e.g. filtering, shaders,…).

Although OpenGL is supposed to be high-level, more and more low-level features are being added. Some examples: vertex buffer objects, vertex and fragment shaders,… Where are we going? Towards a high-level library designed with low-level features (like an object-oriented language where everything has to be done in assembly, but with another syntax)? For me, the more high-level the library, the more optimization it can do based on the hardware, instead of leaving that to the programmer. For an example of high-level shader design, see my post of 2006-06-20 called “Some OpenGL improvement suggestions”.

Hi Hugues,

Both OpenGL and Direct3D are HALs - Hardware Abstraction Layers. There are feature differences here and there, but principally they are at the same level. Performance is generally similar between OpenGL and Direct3D, and for certain types of usage OpenGL is actually faster; there certainly isn’t any performance penalty in OpenGL being cross-platform.

OpenGL is not designed to be high level, but its design is certainly clean enough to facilitate high-level libraries that leverage it. As you get higher level you generally get more domain specific, which OpenGL isn’t - it’s not a game API, it’s not a CAD API, it’s not a flight-sim API, it’s just a general-purpose but highly effective low-level graphics API.

If you want high-level features then just use a high-level library that sits on top of OpenGL.

Robert.

What I mean by high-level and low-level here is that with one, the hardware is almost invisible (how do you get the hardware texture memory size in OpenGL?), while with the other, the hardware is fully described to the application.

I see OpenGL more as a graphics library (that is hardware accelerated “behind your back”: the application is unaware of it) and Direct3D as a graphics hardware access library (that can also be used without looking at the hardware). The base designs are very different, in my view. See also Comparison of Direct3D and OpenGL on Wikipedia.

Apart from that, you are right: both are designed for hardware-accelerated, high-performance graphics, and their performance is similar. I prefer OpenGL, mainly because it’s an open standard and because it runs on platforms other than Windows, but I think Direct3D has some features that could be useful for OpenGL.

Originally posted by Hugues De Keyzer:
but I think Direct3D has some features that could be useful for OpenGL.
That would be instancing, and trust me, it’s coming to OpenGL soon, well, some day at least.

See also Comparison of Direct3D and OpenGL on Wikipedia.
I’ve read the article, and I must say I don’t really agree with many parts of it. OpenGL and Direct3D are much more similar than this article suggests, especially in the hardware abstraction level provided by both APIs.

I’ve read the article, and I must say I don’t really agree with many parts of it. OpenGL and Direct3D are much more similar than this article suggests, especially in the hardware abstraction level provided by both APIs.
That’s not entirely true.

I wrote certain bits of that article, particularly the historical perspectives. And the differences between GL and D3D were once much more pronounced.

Even now, there are some fairly significant differences between the two due to exactly what Hugues is referring to: differences in design priorities. It’s OK to have two different libraries designed around two different principles.

I would say that D3D has probably abandoned its principles (particularly D3D10, which abandoned caps flags) much more than OpenGL has. So D3D grew towards GL more than GL grew towards D3D.

So D3D grew towards GL more than GL grew towards D3D.
You can say that again.

Although OpenGL is supposed to be high-level, more and more low-level features are being added.
That’s because reasonable people realize that ultimately performance has to take priority. Abstraction is useless if it’s slow, and that’s unfortunately the razor’s edge (pretty or fast; you can’t always have both). The hideous morass of pre-D3D9 API called Direct3D is a great example of a confused patchwork of hurried abstraction and optimization.

The hideous morass of pre-D3D9 API called Direct3D is a great example of a confused patchwork of hurried abstraction and optimization.
I wouldn’t go that far. A lot of people seem to forget that the horror of execute buffers died with D3D 3.

D3D 5 was not a good API, but it was something that could be used, and it was fairly self-consistent. It was a lot like being limited to immediate mode in OpenGL. D3D 7 was a bit better, what with texture management and all.

And the difference between D3D8 and D3D9 wasn’t terribly large. There were some new functions and new shader functionality, but that was it. So I would say that Direct3D was usable after D3D5, and good past 8.

Heavens to Betsy, if you can’t badmouth Direct3D in an OpenGL forum, where can you? :smiley:

Actually, I would have said pre-D3D10, but it hasn’t been officially released yet :wink:

Wait until you have tasted D3D10 :wink:

I’ve been exposed to it for the last couple of weeks and I must say I begin to like it more and more. It is lean and mean and doesn’t have that old stuff around. Though the API is still changing here and there, it is a nice experience to work with.

Originally posted by ScottManDeath:
Wait until you have tasted D3D10 :wink:

Wait until you can only run your graphics application under Windows Vista…

No thank you.

What we really need is for the OpenGL ARB to add extensions to OpenGL that expose all this lovely new hardware on all platforms. I’d be happy with vendor-specific extensions in the interim.

Also has there been any progress on OpenGL ES/OpenGL 3.0?

Robert.

I wrote certain bits of that article, particularly the historical perspectives.
Strange, that’s exactly the part of the article I agree with :wink:

I mean, the facts in the article about the directions from which these two APIs evolved are correct. But I don’t agree with some of the conclusions in the article, especially the point about OpenGL being designed around user expectations instead of hardware features. That may have been true in the past, but today (legacy features aside) OpenGL is exactly what the hardware can do, and “user convenience features” are left to third-party libraries.

Originally posted by Robert Osfield:
Also has there been any progress on OpenGL ES/OpenGL 3.0?
Wasn’t it OpenGL 2.0 Pure?

OpenGL 2.0 Pure, as it was originally specified, is pretty much dead.

AFAIK, a few vendors are working on a set of extensions to fundamentally redesign the API. As I understand it, you’ll have the choice of using these extensions exclusively, instead of the existing core interface. This would be the equivalent of what used to be called “OpenGL 2.0 Pure”.

Then the further plan seems to be to split the OpenGL 3.0 spec into two parts: one part with this new “clean” interface, and the rest specified as being “layered” on top of it (but still part of core GL 3.0).

Then you can simply choose to not use the “layered” part, and you’ll have a nice, clean API, or you can use the “layered” part for backwards compatibility.