Suggestions/feedback for the Unofficial OpenGL SDK.

A month or so ago, I released the Unofficial OpenGL SDK. It contains various components for getting started with OpenGL (function loading, FreeGLUT/GLFW, GLM, image loading, etc), all under a single, unified build. This project really grew out of my tutorials, as I needed to add more and more substantial code for things like image loading.

I haven’t really gotten much feedback on it, so I’m interested in what the community thinks. Does it work? What can be improved? Is the documentation clear and up to par? Were there any problems in using it?

I’m also interested in how to deal with meshes. Right now, the GL Mesh component consists of an immediate-mode replacement. I’m currently working on a generalized static mesh class and some functions to create various kinds of meshes. But I don’t really know what to do about mesh loading.

In my tutorials, I wrote a loader for a simple XML format I defined. I generate these files either from Lua scripts (for simple meshes like spheres, cubes, etc) or with a specialized converter from Collada. I really don’t feel like writing and supporting mesh loaders from a variety of formats. I don’t even really support all of what Collada can do with regard to meshes. All my converter supports is what Blender3D writes to Collada files.

So, what would be a good way to handle this? Is this something I should be trying to handle, or should I just skip mesh loading?

Have you seen Open Asset Import Library - assimp?

Could this be added to the list of external libraries that make up your OpenGL SDK? That way you wouldn’t have to write your own importer for general file formats.

An immediate-mode replacement is nice but what’s the use of it? It’s convenient to draw a triangle but it doesn’t really scale beyond this triangle. A mesh library is really interesting, but it would be more useful if it focused on how programmers work with meshes.

Have you considered using GLI? http://www.g-truc.net/project-0024.html#menu
The project is certainly not mature yet, so I am not saying you should.

In any case, such an SDK will be useful. It probably needs to reach a higher level of maturity, but only time will give it that.

Have you seen Open Asset Import Library - assimp?

I’ve seen it. I’ve been wanting to incorporate a portion of Boost in the SDK ever since I wrote the GL Image component. And Open Asset seems like a decent excuse. The documentation seems fairly solid overall.

An immediate-mode replacement is nice but what’s the use of it? It’s convenient to draw a triangle but it doesn’t really scale beyond this triangle.

Define “scale”. It’s the simplest way to handle things like text rendering, debug drawing (show me a vector, arrow pointing this way, etc), showing GUI elements, and so on. The setup work necessary to render a line between two arbitrary points is pretty significant without immediate mode. With it, it is trivial.

It’s not intended to scale to rendering large meshes. Not everything you want to render is a large mesh, after all.
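
To make that concrete, here’s roughly the kind of thing I mean. Note that the type and member names below are hypothetical placeholders, not the actual GL Mesh interface; the point is how little setup the caller needs:

[code]
// Hypothetical immediate-mode-style helper; "Immediate" and its member
// functions are placeholder names, not the real GL Mesh API.
#include <glm/glm.hpp>

void DrawDebugLine(Immediate &imm, const glm::vec3 &from, const glm::vec3 &to)
{
    imm.Begin(GL_LINES);                 // no VBO/VAO/shader plumbing for the caller
    imm.Color(1.0f, 0.0f, 0.0f);         // red debug line
    imm.Vertex(from.x, from.y, from.z);
    imm.Vertex(to.x, to.y, to.z);
    imm.End();                           // the helper streams the vertices and draws
}
[/code]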

Have you considered using GLI?

I saw it, but I wrote my own instead. I did so for several reasons:

1: GLI has too narrow of a focus. It centers around the texture2D. That’s great… unless you’re not working with 2D textures. I see no need to have that restriction. Why are 3D textures, cubemaps, arrays, etc forbidden? I know that GL Image currently only implements 1D and 2D textures, but I’m going to start cubemaps soon.

2: Relative difficulty of use. There is no quick and easy way of just telling it to upload a texture2D to OpenGL. My GL Image library takes all of 3 lines to go from a file to an OpenGL texture (see the sketch after this list). And the last one is just a deletion of the glimg::ImageSet object, so if you stick it in a smart pointer, it only takes 2. The SDK is intended for new users. And while I’m totally fine with new users seeing the details of how texture uploading actually works, I would rather they do so in a proper, structured learning environment. I.e., my tutorials.

3: Undocumented. This is a deal-breaker for me. The SDK is for a relatively new user, so clear and strong documentation is absolutely paramount.
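
To back up the “3 lines” claim in point 2 above, this is roughly the path from file to texture with GL Image. I’m writing it from memory, so check the GL Image documentation for the exact namespaces and signatures:

[code]
// Roughly what I mean by "3 lines"; verify exact names against the GL Image docs.
#include <glimg/glimg.h>

GLuint LoadTexture(const char *filename)
{
    glimg::ImageSet *pImgSet = glimg::loaders::stb::LoadFromFile(filename); // 1: load the file
    GLuint tex = glimg::CreateTexture(pImgSet, 0);                          // 2: create the GL texture
    delete pImgSet;                                                         // 3: or hold it in a smart pointer instead
    return tex;
}
[/code]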

The other issues are just personal pet-peeves of mine. I don’t get why you needed the GTX vs. core concept here. In GLM, that made sense, as you want a demarcation between what GLSL provides (GLM core) and what additional convenience functions you’ve added.

Also, it’s header-only. Again, in GLM, that made sense, as most of the functions were tiny and templated. texture2D is not a template. gli::load is not a template. I just don’t feel that I should have to compile the entire C++ source of a DDS loader, in every .cpp file that includes it, just to be able to load DDS files. This is what static libraries are for, after all. Not everything needs to be header-only.

Overall, I just felt that it wasn’t ready for the SDK. GLM is a much more mature tool, while GLI seems to be simply whatever you needed for your examples. Plus, writing an image loader wasn’t that hard.

[quote]An immediate-mode replacement is nice but what’s the use of it? It’s convenient to draw a triangle but it doesn’t really scale beyond this triangle.

Define “scale”. It’s the simplest way to handle things like text rendering, debug drawing (show me a vector, arrow pointing this way, etc), showing GUI elements, and so on. The setup work necessary to render a line between two arbitrary points is pretty significant without immediate mode. With it, it is trivial.

It’s not intended to scale to rendering large meshes. Not everything you want to render is a large mesh, after all.
[/QUOTE]

Erm, so your reason is that immediate mode is easier?
I disagree strongly, but that’s up to you.

[quote]Have you considered using GLI?

I saw it, but I wrote my own instead. I did so for several reasons:

1: GLI has too narrow of a focus. It centers around the texture2D. That’s great… unless you’re not working with 2D textures. I see no need to have that restriction. Why are 3D textures, cubemaps, arrays, etc forbidden? I know that GL Image currently only implements 1D and 2D textures, but I’m going to start cubemaps soon.

2: Relative difficulty of use. There is no quick and easy way of just telling it to upload a texture2D to OpenGL. My GL Image library takes all of 3 lines to go from a file to an OpenGL texture. And the last one is just a deletion of the glimg::ImageSet object, so if you stick it in a smart pointer, it only takes 2. The SDK is intended for new users. And while I’m totally fine with new users seeing the details of how texture uploading actually works, I would rather they do so in a proper, structured learning environment. I.e., my tutorials.

3: Undocumented. This is a deal-breaker for me. The SDK is for a relatively new user, so clear and strong documentation is absolutely paramount.

The other issues are just personal pet-peeves of mine. I don’t get why you needed the GTX vs. core concept here. In GLM, that made sense, as you want a demarcation between what GLSL provides (GLM core) and what additional convenience functions you’ve added.

Also, it’s header-only. Again, in GLM, that made sense, as most of the functions were tiny and templated. texture2D is not a template. gli::load is not a template. I just don’t feel that I should have to compile the entire C++ source of a DDS loader, in every .cpp file that includes it, just to be able to load DDS files. This is what static libraries are for, after all. Not everything needs to be header-only.

Overall, I just felt that it wasn’t ready for the SDK. GLM is a much more mature tool, while GLI seems to be simply whatever you needed for your examples. Plus, writing an image loader wasn’t that hard. [/QUOTE]

There are some good points and I agree with you. GLI actually does much more, but it’s true that it’s not mature at all. I have been working for a long time already on version 0.4.0, which supports 1D, 2D (including arrays), 3D, and cubemap textures.

Erm, so your reason is that immediate mode is easier?
I disagree strongly, but that’s up to you.

My reason for what? I think you misunderstand something. I don’t intend for immediate mode to be the only mesh rendering supported by the SDK’s GL Mesh component. It’s just all that’s there currently. As I said, I’m working on a static mesh class with some generators for commonly used objects (spheres, cubes, the usual).

I implemented immediate mode because it is useful for certain tasks. It’s a tool that is useful to have; I’m not saying it’s the only way you should render things.

Just to put gasoline on the fire: at the risk of looking like a hippie, if I were to make an SDK, I’d make it SDL-oriented and use SDL_image to load image data, getting an image pointer that is then fed to GL; similarly for making a window and making a GL context. From an educator’s point of view, I think it is oodles better to get a newbie exposed to the glTexImage calls ASAP rather than hiding those calls behind an image-loader API for GL. Another nice point of SDL is that it is reasonably well documented, and the same SDL source code works under MS-Windows, Mac OS X, X11/Linux, and also portable platforms: iOS, Maemo/MeeGo, etc.

Another reason why I like SDL_image more than others is that it does not require loading images from a file (though that is typically how it is used); it also allows loading images from “other” things via SDL_rwops (I use this coupled with libphysfs to load from zip files, etc).

Just my 2 cents.

at the risk of looking like a hippie, if I were to make an SDK, I’d make it SDL-oriented and use SDL_image to load image data, getting an image pointer that is then fed to GL…

SDL is pretty much a non-starter; it’s way too heavyweight for something like this. Also, SDL_image (and SDL itself) has a lot of dependencies. And one of the nice things about this SDK is that it uses a unified build system that works cross-platform.

It’s a “download-and-go” package; you don’t have to download SDL, SDL_Image, libpng, libjpg, zlib, and other stuff to make it work. It consists of 6 libraries, all of which can be built in two command-line commands. It’s neat, simple, and self-contained.

That’s why I use the STB image loader for loading PNGs and JPGs; it has no external dependencies and is entirely self-contained. It builds easily.

from an educator’s point of view, I think it is oodles better to get a newbie exposed to the glTexImage calls ASAP rather than hiding those calls behind an image-loader API for GL…

I agree, which is why I don’t use GL Image’s texture creation functions in my tutorials (yet). Loaded images (glimg::ImageSet) are not OpenGL textures yet. You need a second step to do that, which can be entirely manual if you so desire.

So while you can load an image and create a texture in 2 lines, you don’t have to; glimg::ImageSet has all the functions you need to be able to manually use glTexImage calls.
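
As an illustration, a manual upload of the base level might look something like the sketch below. The accessor names are from memory and the formats are hard-coded for brevity, so treat it as a sketch of the idea rather than copy-paste code:

[code]
// Sketch only: accessor names are from memory, and the internal/pixel formats
// are hard-coded; real code would derive them from the image's actual format.
glimg::SingleImage image = pImgSet->GetImage(0, 0, 0);   // mip level 0
glimg::Dimensions dims = image.GetDimensions();

glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dims.width, dims.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, image.GetImageData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0); // only one level uploaded here
[/code]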

But at the same time, I’m not going to force someone to make those calls if they don’t want to. It’d be a poor utility if it didn’t take away grunt work, after all. It’d be hard to call it an SDK if it was just SDL repackaged with an OpenGL function loader and GLM.

Another reason why I like SDL_image more than others is that it does not require loading images from a file (though that is typically how it is used); it also allows loading images from “other” things via SDL_rwops (I use this coupled with libphysfs to load from zip files, etc).

The SDK, in general, is not intended for “production use.” If you’re starting to bring things like PhysFS and such into it, then it’s really outside the scope of the project. It’s a useful utility for taking one’s first steps in OpenGL. It is not an optimal library for doing everything you might want. You shouldn’t actually ship something in it.

Most importantly, you can already load image data directly from memory. The STB and DDS loaders both have a “FromMemory” variant. All you need to do is load the file with PhysFS or whatever and pass the block of memory to the loader.
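
For example (again, check the docs for the exact FromMemory signature; this is from memory), feeding the loader a buffer you read yourself looks something like:

[code]
// Sketch: the buffer could just as easily come from PhysFS, a zip, or a socket.
// The exact LoadFromMemory signature should be checked against the GL Image docs.
#include <fstream>
#include <iterator>
#include <vector>

std::ifstream file("image.png", std::ios::binary);
std::vector<unsigned char> buffer((std::istreambuf_iterator<char>(file)),
                                  std::istreambuf_iterator<char>());

glimg::ImageSet *pImgSet =
    glimg::loaders::stb::LoadFromMemory(&buffer[0], buffer.size());
[/code]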

Between this statement and your earlier one about manually creating textures, I’m not sure you actually looked at the SDK’s documentation before commenting on it.

But the build system doesn’t produce .dlls. Nor does it place all the includes and libs in a centralized directory so that they’re easy to consume. There’s no equivalent to a “make install” target. It seems like I’m going to have to figure out how to incorporate the build into my own project builds, and do the manual work for that, so how has packaging the components into an “SDK” saved me any work at this point? Maybe you’ll get there eventually, but at present the build doesn’t seem ready for prime time.

FWIW, my use case is Code::Blocks, MinGW, and Windows Vista, for now. I think you did well to choose Premake for the build. I have a lot of CMake experience, but its scripting language is clunky. I’d rather try building with Lua à la Premake. I’m using Premake less for portability and more just to have a decent build-system language to work with.

I’m still looking at the glload code, which is the main issue I’m interested in. More on that later.

I agree that if you don’t keep the licenses to MIT/BSD/zlib, I wouldn’t touch the SDK with a 10-foot pole. Even SDL has figured this out; version 1.3 is zlib-licensed.

I didn’t understand this comment; hopefully it was more appropriate in context. Taken in isolation… what are you talking about? You’re working on a Software Development Kit; how is that not, a priori and ipso facto, targeted at production use? I hope this was just an unfortunate choice of words on your part. On the other hand, this project appears to have been going in some form or another since 2008 and is still in alpha, so I’m not sure. Also, you said:

You shouldn’t actually ship something in it.

That doesn’t make any sense. If the code isn’t shippable, why do I care about it? There’s a difference between doing “everything and the kitchen sink” and doing some small job that is shippable. Like extension wrangling on Windows, for instance. On the other hand if you really are only meaning this to be samples for an OpenGL beginner, I would suggest renaming the project accordingly. It would recondition people’s expectations, and hey, 3 years with no 1.0 release is a brand identity you don’t really need to keep around.

But the build system doesn’t produce .dlls.

It’s not supposed to. DLLs introduce a great number of issues: memory ownership, C++ standard library versions, ensuring that the version you built against is the version you’re using (DLL Hell), etc. All of these are easily sidestepped with static linking. Also, none of the components are big/important enough to need them.

The SDK is designed to be self-contained. So while it makes no changes outside of its directory, it also relies on nothing outside of that directory (other than the things it absolutely must, like a build system and OpenGL). So if you have a FreeGLUT .dll lying around, it is guaranteed not to interfere with the SDK’s version of FreeGLUT.

Nor does it place all the includes and libs in a centralized directory so that they’re easy to consume. There’s no equivalent to a “make install” target.

The problem with sticking everything in a “centralized directory” is the assumption that such a directory exists. Sure, GNU/Make has some place for libraries that works… in some way. But users of Visual Studio have no sanctioned centralized location for libraries (and no, the VC directories do not count. Never stick stuff in VC’s directories). Because of that, most “install”-style build systems require manual work for VC users.

The way to use the libraries is outlined here. In a Premake build, it’s absolutely trivial.

First, you call dofile("path/to/GLSDK/links.lua"). This sets up a special Lua function. Then, in your Lua script, under a project, call UseLibs {"glload", "freeglut"} to specify which SDK components you wish to use. And you’re done; the headers and libraries will be included as directed. You don’t have to hunt for .a files and headers; Premake has you covered.

You can put the SDK wherever you want; if you want to stick it in a centralized location, that’s fine. It’s designed to be self-contained, so that it can be moved from place to place as needed. It affects nothing outside of its directory, and will leave no lingering traces once you delete it.

That may be your personal philosophy of software development, but you’re making an SDK. That means offering all the usual suspects in all the standard places. It’s up to the SDK user whether they want to link statically or not. Or 32-bit or 64-bit. Or single- or multithreaded. It adds up to quite a pile of libs that most people would like the SDK to have already built.

So if you have a FreeGLUT .dll lying around, it is guaranteed not to interfere with the SDK’s version of FreeGLUT.

Why do you consider this a problem? Any Windows developer who cares about this just puts their own files in their own directories. Which brings up the issue of redistributables.

The problem with sticking everything in a “centralized directory” is the assumption that such a directory exists. Sure, GNU/Make has some place for libraries that works… in some way. But users of Visual Studio have no sanctioned centralized location for libraries (and no, the VC directories do not count.

The sanctioned place is “C:/Program Files/YourSDKName/include” and so forth. That’s how the Windows SDK does it nowadays. That’s where CMake is going to look, if your SDK becomes popular enough to get a Find script for it.

That may be your personal philosophy of software development, but you’re making an SDK.

An SDK that is for new users. The first goal of this project is to be as plug-and-play as possible. The more options you provide, the more confusing your instructions are going to be. And while the power users may be happy with it, new people will only be confused.

The GL SDK is intended for the same audience as the D3DX libraries. Those don’t exactly have a wide plethora of build options either.

Also, that’s not a “personal philosophy”; there are many legitimate reasons to avoid DLLs for various things. Not every library is the same, and not every library needs to have DLL versions.

C++ does not work well with DLLs. Exporting classes, passing C++ standard library types across module boundaries, type detection, exception handling: all of these can potentially have issues when DLLs are involved.

Why do you consider this a problem?

Because it can happen. And when it does, the person it is most likely to happen to will have absolutely no idea what is wrong or how to fix it.

Ease of use matters.

The sanctioned place is “C:/Program Files/YourSDKName/include” and so forth.

Which “Program Files” directory; the regular one, or “Program Files (x86)”? The one on the user’s C: drive or D:?

Libraries that self-install like this only work if you have a build system that knows about the self-install locations. Premake does not. Visual Studio does not. CMake only does if someone writes a script for it.

By giving the user the freedom to install it wherever they like, it allows them to handle the dependencies as they wish, in whatever manner works best for them. The Premake links.lua script makes it easy to connect to those dependencies, simply by pointing your Premake script to that file.

A proper build doesn’t present users with a bunch of options. A proper build just builds everything, and indeed has everything pre-built so that nobody has to do it. Sample code doesn’t have to link to dynamic libs, and having it pre-built would be reasonable.

Which “Program Files” directory; the regular one, or “Program Files (x86)”? The one on the user’s C: drive or D:?

Whatever the ProgramFiles environment variable says.

Libraries that self-install like this only work if you have a build system that knows about the self-install locations. Premake does not. Visual Studio does not. CMake only does if someone writes a script for it.

We don’t care what Visual Studio doesn’t know; that’s what build systems like Premake and CMake are for. Both can and should do Find scripts; CMake just has a standard library for them. “Look in ProgramFiles” is a good guess nowadays. If the guess fails, it’s perfectly acceptable to have the user manually configure it somehow.
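
To spell out the “guess, then let the user override” idea, a find script boils down to something like this (shown in C++ purely for illustration; a real one would be CMake or Premake logic, and the GLSDK_ROOT override is just a hypothetical name):

[code]
// Illustration of "guess ProgramFiles, fall back to manual configuration".
// GLSDK_ROOT is a hypothetical user-override variable, not an existing convention.
#include <cstdlib>
#include <string>

std::string GuessSdkRoot()
{
    if (const char *userOverride = std::getenv("GLSDK_ROOT"))  // explicit user configuration wins
        return userOverride;
    if (const char *pf = std::getenv("ProgramFiles"))          // resolves drive letter and x86/x64
        return std::string(pf) + "\\YourSDKName";
    return "";                                                 // not found: ask the user to configure it
}
[/code]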