Regarding Spec files

Right now, I am developing OpenGL bindings for the C# language (see the Tao Framework and OpenTK libraries) and I would like to make some suggestions about the format of the spec files in the next OpenGL versions, in the interest of promoting OpenGL to programming languages other than strict C and C++.

The current enum and enumext spec files contain some very useful information regarding OpenGL enums: enums are given real names (e.g. the BeginMode enum, which contains the “POINTS”, “LINES”, “TRIANGLES” etc. values). While this information is not used by the current (perl) generators when creating the official C headers (and indeed, C cannot distinguish between a const int and an enum), it can be used in higher-level languages to generate type-safe bindings to the OpenGL functions. For example, in OpenTK the glBegin function is defined as “void GL.Begin(Enums.BeginMode mode)”, which will only accept the values found in the BeginMode enum (the check is done at compile time!).
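For reference, the BeginMode group in enum.spec/enumext.spec looks approximately like this (the token values are the real ones from the C headers, but the exact layout may vary slightly between spec revisions):

```
BeginMode enum:
	POINTS					= 0x0000
	LINES					= 0x0001
	LINE_LOOP				= 0x0002
	LINE_STRIP				= 0x0003
	TRIANGLES				= 0x0004
```

A generator can turn each such group directly into a language-level enum, which is exactly what makes the type-safe GL.Begin signature possible.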

This is extremely beneficial, not only because it protects from coding errors, but also because it tremendously increases programmer productivity, as well as the discoverability of OpenGL functions (in most C# IDEs, typing “Begin(” will automatically bring up the list of possible parameters).

What I would like to suggest is that future spec files (be they xml- or text-based) retain the information on distinct enumerations. The Mesa project has unfortunately dropped this information, and should the official specs drop it too, OpenGL in programming languages more advanced than C would suffer from C’s weaknesses.
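Concretely, what makes this possible is that gl.spec ties each parameter to a named enum group rather than a bare GLenum. A rough sketch of the relevant entry (layout approximated from the 2.1-era spec files):

```
Begin(mode)
	return		void
	param		mode		BeginMode in value
	category	drawing
	version		1.0
```

It is the “BeginMode” in the param line, not the C header, that carries the type information a higher-level binding generator needs.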

One more thing that would be beneficial (but not strictly needed) would be the introduction of an “overload” directive in future versions of the specs. OpenGL functions that actually are overloads of each other (e.g. Vertex2d, Vertex2fv, Vertex4i etc. are all overloads of the imaginary function “Vertex”) would be marked as “overload [base function name]”, where [base function name] is the function name stripped of all decorations (i.e. “Vertex” would be the base function name of Vertex3f). While this “stripping” can be done programmatically using the current specs (something I plan to try soon), it would be easier and less error-prone to have this “base name” included in the future specs.
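As an illustration of why this stripping is error-prone, here is a minimal sketch of such a heuristic (in Python for brevity; the suffix grammar is assumed from the 2.1 entry-point naming conventions, and the function name is mine). Note that real names like Indexd, whose type code has neither an arity digit nor a trailing “v”, already defeat it:

```python
import re

# Heuristic suffix stripper: maps decorated GL entry points such as
# "Vertex3f" or "Color4ub" onto an undecorated "base" name ("Vertex",
# "Color").  The assumed suffix grammar: optional arity digit 1-4, a
# type code (b/s/i/f/d, optionally unsigned), optional trailing "v"
# for pointer variants.  A bare type code with neither a digit nor a
# "v" is NOT stripped, otherwise names like "End" would lose their "d".
SUFFIX = re.compile(r"(?:[1-4](?:ub|us|ui|[bsifd])v?|(?:ub|us|ui|[bsifd])v)$")

def base_name(name: str) -> str:
    """Return the undecorated base name of a GL spec function name."""
    return SUFFIX.sub("", name)
```

For example, `base_name("Vertex2dv")` gives `"Vertex"` and `base_name("Materialfv")` gives `"Material"`, while `base_name("Indexd")` incorrectly stays `"Indexd"`, which is exactly why having the base name in the specs would be less error-prone.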

Last, it would be nice if we had the source code of the official generators. There is some perl source code in the SGI repositories, but it seems it may not be the latest version.

So what do you think of these issues?
a) Will the enumerations be retained in future specs, or will they be dumbed down to “const ints”?

b) Is it possible/feasible to add an overload directive to some future spec files?

c) Should the source code of the official generators be made available?

d) Should there be a set of “official” C# OpenGL bindings? I am willing to maintain a working, up-to-date version if required.

e) Is there someone specific I could/should contact in the Khronos group regarding these questions?

Edit:
f) Does anyone know where a current version of the gl.tm file can be found? The spec files are useless without knowing the mapping of the new types (e.g. BufferTargetARB). (see Edit2 at the end)

For anyone interested, the source code for the C# generator is available in the Tao Framework source code (which generates low-level C-like bindings), and the experimental OpenTK library (which generates higher-level type-safe OpenGL bindings).

Edit2: It seems that you have to be a member of the Khronos group to access the official OpenGL implementation. This is extremely disappointing, since two of the core tenets of OpenGL - freedom and openness - are not being followed.

At the very least, the source tree of the official implementation should be made publicly available. If this is not currently possible (and there is no compelling reason why it should not be), the up-to-date equivalents of the specification files found in the sample SGI implementation should be added to http://www.opengl.org/registry.

I hope this matter is given the attention it is due.

Funny, I did it two times already :slight_smile: My last binding is very object-oriented; I converted all core GL functions and tokens to XML and hand-edited the enums. In the end I had well-defined enums and functions that use those enums, which allowed me to write things like GL.Begin(GL.PrimitiveMode.TRIANGLES). If you wish, I can send it to you, together with my code. There are still no extensions present… I also looked into OpenTK, but they just use the original enums from the spec files, which stopped being maintained at around GL version 1.2.

My point is that we should create a well-defined XML database that describes the API, if not for current GL then at least for Longs Peak (I really expect it to be there), as not all people use C.

a) The enumerations have to be present; they make the design much better structured. Or at least a list of tokens each function may accept.

b) I am sure that the perl scripts to generate C headers from the specs are available somewhere; I’ve seen them once.

d) I couldn’t care less. “Most official” bindings are, without doubt, Tao.

f) Why do you need it? All handles are just 32-bit unsigned ints (GLenum).

The gl.spec, enum.spec, and enumext.spec files used by Tao and internal builds of OpenTK are up-to-date with OpenGL v2.1.

The answer to (f) is that while handles are GLenums, things like CharPointer or LineStipple are not. It is the gl.tm file that indicates CharPointer is actually a GLchar* and LineStipple is a GLushort. Without it, you have to find out by hand what the new types are - which is indeed what happened when I tried to update Tao to use the latest specs (what type is BufferTargetARB? BufferOffset?)
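For the curious, gl.tm is a small comma-separated table mapping the spec-file type names onto C types. The entries in question look roughly like this (format approximated from the registry’s typemap file; the intermediate columns are wildcards):

```
# spec-name , * , * , C type , * , *
CharPointer,*,*,GLchar*,*,*
LineStipple,*,*,GLushort,*,*
BufferTargetARB,*,*,GLenum,*,*
```

Without this table a generator has no way to know which C (or C#) type a parameter declared as, say, LineStipple should receive.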

In any case, please, do send the database and code (stapostol at gmail dot com). An xml based database would be very nice - maybe we can work together towards this?

a) Will the enumerations be retained in future specs, or will they be dumbed down to “const ints”?
Considering the open-ended nature of the Longs Peak API, I doubt that many functions will have a specific list of valid enumerants. Some will, but the glAttribTemplate*() functions almost certainly will not.

It seems that you have to be a member of the Khronos group to access the official OpenGL implementation. This is extremely disappointing, since two of the core tenets of OpenGL - freedom and openness - are not being followed.
I wasn’t aware that there was an official implementation at all. And the “core tenants” of OpenGL are not “freedom and openness”; you’re confusing OpenGL with a free or open-source project, which it is very much not.

So what do you think of these issues?
Wow, lots of stuff in there. I will answer a subset of those questions but the board editor is too painful to do it all inline. First, I’m the Khronos API Registrar, and any issues having to do with the registries and spec files can be directed to me. You can use the public Khronos Bugzilla (and I wish people would, because I don’t keep up with every post on the forums and can easily miss things).

I just added the typemap file to the public registry. We’re working on configuring our internal Subversion tree to export part of the tree in read-only form and when that’s figured out, will be able to expose the entire specfile/header build structure more easily.

In Longs Peak, there are still enumerants, but due to the way templates work, it may be trickier to check them. Attributes of objects are specified through a generic TemplateAttrib*(template, name, value) sort of call, so unless semantic knowledge of legal name-value combinations is pushed up into the compiler itself, it’s hard to see how to do the compile-time checks you can do in strongly typed languages with the existing OpenGL 2.1 APIs.

The overload directive is a nice idea. Once we move to Longs Peak it will also be time to move to a new XML schema for the specfiles - it’s a big job and the benefits for our internal use of the 2.1 specfiles are small so I’ve avoided it thus far.

C# bindings would be great. If that’s something you would be interested in adding as a project in the OpenGL SDK, that would likely be a good fit.

@Korval:

I wasn’t aware that there was an official implementation at all.
It seems the sample implementation by SGI used to be considered “official” (if only due to the history of SGI and OpenGL).

And the “core tenants” of OpenGL are not “freedom and openness”; you’re confusing OpenGL with a free or open-source project, which it is very much not.
Make that tenets - a spell checker can only do so much :wink:

Last time I checked, the Khronos Group motto was: “Open standards for Media Authoring and Acceleration”. And as for free, the licensing terms state that OpenGL is free for software developers, as well as for hardware developers on open-source platforms.

@Jon Leech:

I just added the typemap file to the public registry. We’re working on configuring our internal Subversion tree to export part of the tree in read-only form and when that’s figured out, will be able to expose the entire specfile/header build structure more easily.
Thanks for your prompt response, this is great news!

In Longs Peak, there are still enumerants, but due to the way templates work, it may be trickier to check them. Attributes of objects are specified through a generic TemplateAttrib*(template, name, value) sort of call, so unless semantic knowledge of legal name-value combinations is pushed up into the compiler itself, it’s hard to see how to do the compile-time checks you can do in strongly typed languages with the existing OpenGL 2.1 APIs.
I haven’t followed the development of Longs Peak, but from the little information I’ve been able to gather it does seem very interesting. In any case, if fewer compile-time checks are the price to pay for a cleaner and more flexible API, so be it (considering the vast majority of C and C++ programs would not be able to benefit from them). That said, well-defined enumerants would be beneficial for the functions that can use them, and unless enum.spec (or its equivalent) becomes one big list of constants, higher-level languages will be able to use them.

I often drink a core of tenants super while using OpenGL.

And Stefen, is there a reason why you want to use .NET? I myself also started coding some demos with it, but I soon found out that the Java VM is a bit faster. Using the gears demo from Brian Paul, I tested three versions: C#, Java and C. The results in FPS were as follows:

C#      Java    C
4300    4700    8700

In real-time applications I expect this margin to be even greater. A Quake2 implementation in Java runs at 80% of the speed of the original; that’s not bad, is it? For example, Java is much faster with math than .NET. Vector multiplication by a matrix was about 40% faster on Java than on .NET, and 100% faster than code produced with my freepascal compiler (WTF?). I will still have to see how an assembly-optimized library performs. Also, the Java VM is more portable, has lots of languages targeting it, and seems to score nicely on 64-bit platforms. To make it short, they have a superior JIT compiler. Maybe this will give you some food for thought. Sorry that I was OT. I will send you the XML database shortly, have patience please.

And Stefen, is there a reason why you want to use .NET?
Maybe because Java isn’t as nice a language as C#? Or maybe because calling out to native code in .NET is substantially cheaper (never mind easier to implement) than it is in Java?

There are any number of reasons to not use Java.

Vector multiplication by a matrix was about 40% faster on Java than on .NET
How was this test done? Was it in actual interpreted code, or were they shelling out to a runtime library?

To make it short, they have a superior JIT compiler.
Just about everything I’ve heard about .NET suggests that its JIT facilities are superior to Java. Which makes sense, as .NET was designed around JIT, and Java’s JIT is something of an afterthought.

just use c/c++.
I don’t get it, if we’re talking rendering code why would you use anything other than c/c++?
I can understand it at the application level, but not at the level that interacts with OpenGL. Complex scene traversals are expensive enough as it is even when you’ve got the level of control needed to trim down branch and leaf nodes to their bare minimum cache footprint, but doing something like this in c#/java???
You should write your renderers in the most efficient language possible (while still being maintainable), and then expose the high-level functionality of your renderer to other languages.

> just use c/c++.

Well, I don’t want to. I know it sounds stupid, but it is how it is. Your arguments are very strong; what’s more, I admit that you are right. But I program as a hobby, so I feel free to experiment. :slight_smile:

@Korval:
The routines are implemented in the host language, so the vector-by-matrix multiplication was written entirely in Java / C# / Pascal. I just threw away two hours fighting with SSE :slight_smile: and implemented a fairly optimal SSE routine, which is still slower (about 5%) than the Java version. I am pretty sure my tests are right; still, I can’t believe it myself. The benchmarking is done in the same way in every program. The Pascal version is much faster on smaller arrays, though; the results I observed were taken using 10000 vertices with random data and a single matrix with random data. The SSE version could be pushed even further by unrolling the loop, but I let it be.
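For anyone who wants to reproduce this, the timed kernel was presumably of this shape: a straight vector-by-matrix loop over 10000 random vertices. This is a sketch in Python (the posters used C#, Java, Pascal and SSE assembly; the function name and row-major layout are my assumptions):

```python
import random
import time

def transform(vertices, m):
    # Multiply each 4-component vertex by a 4x4 row-major matrix,
    # given as a flat list of 16 floats.
    out = []
    for x, y, z, w in vertices:
        out.append((
            m[0]  * x + m[1]  * y + m[2]  * z + m[3]  * w,
            m[4]  * x + m[5]  * y + m[6]  * z + m[7]  * w,
            m[8]  * x + m[9]  * y + m[10] * z + m[11] * w,
            m[12] * x + m[13] * y + m[14] * z + m[15] * w,
        ))
    return out

# 10000 vertices with random data and a single random matrix, as in the post.
verts = [tuple(random.random() for _ in range(4)) for _ in range(10000)]
mat = [random.random() for _ in range(16)]

t0 = time.perf_counter()
result = transform(verts, mat)
print(f"{time.perf_counter() - t0:.6f} s for {len(result)} vertices")
```

Comparing the same loop across runtimes is only meaningful if, as the poster says, the benchmarking is done in the same way in every program (same data sizes, warmed-up JIT, same timer).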

It is true that the call overhead should be higher for Java than for .NET, but still my Java version runs the OpenGL app 8% faster with exactly the same code.

All benchmarks on the internet I’ve seen suggest that a modern Java VM is superior to .NET in almost all parameters, plus it is often on par with C++ in many applications.

I agree that Java is not a nice language, but there are plenty of cool languages for the Java VM.

I would like to point out that I am NOT a java fanboy. As a matter of fact, I hate the language :slight_smile: But I believe that the VM and JIT paradigms are superior to the standard native code compilation.

I don’t get it, if we’re talking rendering code why would you use anything other than c/c++?
Rapid prototyping, usually.

Then again, if you’re really doing rapid prototyping, I’d suggest downloading Ogre and giving it a .NET interface. But there are times when you’re doing prototyping for which Ogre is either unsuited or just overkill.

In the general case, however, I agree with you.

I don’t get it, if we’re talking rendering code why would you use anything other than c/c++?
There are many reasons actually.
a) after many years, I decided I do not want to fight with these languages anymore; I still use C++ for my thesis, but I prefer C# or Python for almost everything else.

b) I can distribute one executable that runs on Windows/Linux/MacOS (32 or 64 bit) without recompilation.

c) I do not actually need the extra speed. As a hobbyist, I do not mind if my code runs 20% slower - even a 5 year old machine (with GLSL) should be fast enough to run my programs - and if it is absolutely necessary, I can always rewrite a bottleneck in C++ with inline assembly.

Why not Java? I had no compelling reason to use (what I consider) an inferior language, even if the JIT is faster/more mature.

Fair enough, you couldn’t get away with it in a commercial project of any significant complexity though.
As for (b), you’d just build it on all supported platforms and supply all the shared libs with your download.
As for (c), as hardware specs increase so do customer expectations. Running quake3 (a highly optimized big-batch bsp renderer) at 80% of its speed using java is quite bad - you’re 20% slower than the C version, which is a huge difference. Scale that up to something like a CAD editor or CAD visualisation and you’ll notice that 20% will increase to 60 or 70%, I’m sure.
So what’s the point? Start off scalable and you’ll have no problems in the future.

I thought Quake 3 was a small-batch renderer, often sending 2 triangles at a time. I hope they abandoned this micro-culling thing in their Doom 3 code.

Dunno, haven’t looked at the source - I presumed it culled with the BSP, constructing client-side index arrays, and batched them by lightmap/diffuse.

I think it does cull each pair of triangles, or 3 at a time. I used GLIntercept on some other game that was based on Quake 3. It caused the game to hang for minutes so I killed it, but the resulting files show all the calls to glDrawElements with only a few indices each.