Curious as to who is sticking with OpenGL now...

I am curious to see who here is planning on sticking with OpenGL or moving on to DX or some other means. I'd like to see who is planning on leaving the community and who is staying around. I'm trying to get a feel for what support is left for OpenGL as a hobbyist game coder.

I will certainly be staying; I have always used OpenGL in preference to the continuously changing DX.
With the rapidly growing support for Linux in Australia, I want all of my applications to be cross-platform.
Wine won't help, as it just translates DX into OpenGL anyway.

I don't agree that this is the disaster many people are claiming: vendors will still be able to write a separate high-performance driver for the forward-compatible context, and all of the missing DX10/DX11 features can easily be added as extensions.
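
For what it's worth, opting into that context is cheap on the application side. Here is a minimal sketch of requesting a forward-compatible 3.0 context through WGL_ARB_create_context (the constants come from the extension spec; it assumes windows.h is included and an ordinary context is already current so the entry point can be fetched, and it skips most error handling):

#define WGL_CONTEXT_MAJOR_VERSION_ARB          0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB          0x2092
#define WGL_CONTEXT_FLAGS_ARB                  0x2094
#define WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB 0x0002

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

HGLRC CreateForwardCompatibleContext(HDC hdc)
{
    // Fetch the new entry point; an old-style context must be current for this.
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL; // driver does not expose WGL_ARB_create_context

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0 // attribute list terminator
    };
    // The resulting context omits the deprecated functionality entirely.
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}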

I do agree that the Longs Peak object model should have been the FIRST thing they did, though; the planned incremental conversion can be nothing but a compatibility disaster.
The claim that it was to allow people with existing code to make a gradual conversion just doesn't make sense, as they introduced two separate mechanisms (2.1 upgrades and the profiles) that allow this to happen in parallel with a new API.

Even more worrying is the stuff that seems to have been included for the purpose of preventing anyone from writing a more efficient renderer for slightly older cards.
This was either plain DAFT or a ploy by the hardware vendors to get more people to buy new cards.

Anyway, I have my new 2.1 renderer half finished, so at least I don't need to write a separate one for 3.0, as nothing has really changed.

The most important thing for me at the moment is what new extensions ATI and NVIDIA are going to add in the near future;
I could really do with a GPU tessellator.

I am. I have no choice, because I cannot allow my application to become dependent upon the evil empire. Either people forget how many times they have been shafted by macroshaft, or they haven't been developing software long enough yet to repeatedly find out and get angry enough to abandon them on principle AND out of self-preservation. I must admit, it took several times to sink into my lame brain.

In other words, I am one of those people who chose OpenGL because it runs efficiently on Linux, at least with nvidia drivers (the only ones I have tried in the past several years).

A question. Is now the time to create a new 3D graphics/rendering API for realtime applications [and leave OpenGL for CAD, if recent threads are on-point]? Is this kind of project impossible (unless you are nvidia) because no access to the low-level hardware or drivers is provided by the GPU makers?

BTW, I have seen many people complain that “games” are being sacrificed for the “CAD companies”. I believe that is a very dangerous way to characterize the situation, because the concept “games” tends to trivialize the importance of realtime graphics/rendering applications - which have virtually identical requirements to “games”. I refer to presentation of simulations of physical [and fictional] processes (mechanical, chemical, atomic, optical, you-name-it), vision systems, robotic systems, and everything non-trivial that must be realtime or interactive. ALL of these applications will be thrown out with the bathwater if people can deem them “games” (unimportant).

I cannot allow my application to become dependent upon the evil empire.

You realize that characterizing Microsoft as “the evil empire” and calling them “macroshaft” only makes you sound like a GNUdiot, right?

vendors will still be able to write a separate high-performance driver for the forward-compatible context

Except that they will still have to write and maintain the “low performance” context. Do you really expect IHVs like ATi to write two GL implementations?

And it does nothing about the whole “bind-and-set” model, nor does “direct_state_access” (because the bind-and-set model will still exist and implementations will have to conform to it).
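
To make that concrete: with bind-and-set you have to disturb a global binding point just to edit an object, while a direct-state style names the object explicitly. A rough sketch of the difference (the second form is the EXT_direct_state_access entry point; tex is assumed to be an existing texture name):

// Bind-and-set: editing the texture clobbers whatever was bound to
// GL_TEXTURE_2D, so careful code has to save and restore the binding.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Direct state access: the object is named explicitly; no binding is touched.
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

But as long as the bind-and-set path also remains legal, the implementation has to track all of that binding state either way.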

Is this kind of project impossible (unless you are nvidia) because no access to the low-level hardware or drivers is provided by the GPU makers?

Believe me, if we could have just done an end-run around IHVs, we’d have done so long ago. Probably after the FBO fiasco.

We (at Unity) are obviously sticking with OpenGL on OS X. Well, we have no other choice here, right?

D3D9 and OpenGL on Windows (we keep the OpenGL path in there just because it happens to still work, but D3D9 is the default). Switching to D3D9 by default on Windows was one of the best decisions we ever made (at a cost of about 3-4 man-months of porting the renderer to it).

And other APIs on other platforms (iPhone, Wii).

That's a problem for me.

I'm trying to clear my mind and see what's the best way to continue.

Switching to DX10 costs a bit more: it probably means Vista + VS.NET, plus losing the cross-platform feature, which I don't like.

On the other hand, I have too many things to worry about with the new OpenGL 3 and its future.

Decisions, decisions, bloody decisions…

I'll switch to D3D10 for personal stuff, but stay with OpenGL for other stuff. I'll definitely visit these forums as often as before; it is the best and most interesting community that I have found so far.

Jan.

I'll continue using OpenGL. I really don't like the DX syntax, and it doesn't give me more features than OpenGL. I know that now, with GL 3.0, we have more burden in the API than we expected (we expected exactly the opposite), but that doesn't worry me at all. A long time ago I chose, for many GL operations, only one way of coding things and ignored the rest, so that burden doesn't disturb me at all. And one more important thing: I can't lose portability. I need code for Linux (in my job).

P.S.: I hope that after reading the GL community's reactions, Khronos will make GL 3.1 better, at least totally removing the deprecated stuff and promoting geometry shaders to core… Those are the only things I really care about in 3.0.

I will also continue using OpenGL; the reasons for me are that I like developing on the Linux platform and I want the ability to make my programs cross-platform.

If driver support comes quickly (nVidia and AMD is enough for me; I do not own Intel graphics hardware, so I don't care about them that much) and a spec is released that contains only the non-deprecated stuff, OpenGL 3.0 isn't too bad for my uses. Alright, I would also have liked a clean and modern API, but I can also develop for OpenGL 3 as it is now. On the other hand, the forward-compatible profile gives a clue about the fast path, and because many more extensions are added to the core, developing with new techniques will become easier.

A few wishes remain: add a few more extensions to the core, so that functionality is on par with DirectX 10.0 (or even 10.1). Come quickly with OpenGL 3.1 and remove all that is now deprecated. Release a clean spec, without the deprecated stuff. Anything more than that (a modern API) would be wonderful, but not expected and, for me, not absolutely necessary.

One reason I asked this was to evaluate how many people who know OpenGL well enough to be experts on it are going to be around to help answer questions for people. Obviously, fewer users means less chance of getting help or support.

I am not sure yet what to do; I will wait and see what ATI does for their driver support, as currently only Nvidia hardware will run my code because ATI doesn't have the extensions I need.

I think it is time to start looking at the 3D API as a tool and not as a religion, and use it accordingly. This means D3D on Windows, GL on Mac and Linux. When using Cg as the shading language, it shouldn't be too hard to maintain both GL and D3D versions, as the 3D setup differences are minimal.

I do one commercial project that uses GL, and although I suggested Direct3D a while ago to avoid the complications of GL, it didn't get accepted. This project doesn't use shaders or anything advanced.

My personal project, a gaming project, is pure GL 2.0. I wish to have a D3D9 renderer added, and D3D10 as well.
I don't ABSOLUTELY need anything higher than D3D9, but certain things can benefit from D3D10. That will come along eventually. So perhaps the worst case would be that I won't move beyond GL 2.0.
Yeah, I guess I'll hang around these forums, but I'm not likely to answer questions.

I'm using my own framework. I can port it quickly to GL 3.0, but I won't. If OpenGL 3.0 had the new object model and a cleaned-up API, then supporting it in my framework would be my top priority. I would even consider dropping support for OpenGL 2.0.
But I'm not interested in making the transition in two steps (2.0 => 3.0 => new object model). I'll stick with OpenGL 2.1 for now. Then we'll see.

For me, OpenGL 3.0 is a blank space between my footsteps.

I'm a hobbyist, and I rather despise Windows (not for ideological reasons; I use the closed-source NVIDIA driver, after all. I just find Windows to be buggy and feature-deficient, especially for development).

So I’m gonna stick it out with GL on Linux. I find the pains of OpenGL to be quite a bit better than the pains of developing in Windows.

I have begun researching DirectX 10. I can’t just switch all our technology overnight, but when it comes time to write something new, I would like to have a foundation with DirectX established. The tutorials are really nice, and it comes with lots of tools. It also feels good to be part of the mainstream.

I see no reason to make any changes for OpenGL 3.0. I think we’ll just stick with 2.1 until we are ready to port to DirectX.

Well, thus far, I'd say that the people who are sticking with OpenGL are doing so because they're sticking with Linux/Mac OS X.

Here's a more interesting question: of those sticking with OpenGL, if there were an alternative that was more D3D-like in its API, one that was supported across platforms and such, would you take it? That is, are you choosing OpenGL because you want to be cross-platform, or because you like it?

The things that put me off DirectX are the frequent rewrites and the OO API. But I am willing to try it for the first time now.

At home I use Linux, so I have to stick with OpenGL, and, well, in fact I like it. At work it's another issue, as we have two GL applications: one that is web-based with Java and another that is Windows-only and desktop-based.
I will take a look at DX11 when the specs are out, but meanwhile we will stay with OpenGL (and probably even switch to GL 3.0) and give them another chance (or perhaps two, if they release 3.1 in six months).

I want cross-platform. I don't mind the GL API, but I can agree the object model would have been nice to have. If GL is getting bloated and driver quality is suffering because of it, then yes, get rid of that old [censored] and move on. This "use old crap till the end of time" attitude drives me nuts. When Vista came out, it was "oh, I can't get Vista, my POS scanner I bought back when DOS came out won't work with it." [censored], man, just keep that old computer and use the scanner with it, get a new PC, and move on. These people who keep old crap around hinder progress.

I've decided I am sticking with GL 2.1 with DX10 extensions, or GL 3.0 if I decide to rework my code for GL 3.0's "future mode?". By the time my current project is done, DX11, Larrabee, and maybe XNA with DX11 features will have been out for a while, and I can reevaluate the situation then.

The thing that bugs the hell out of me about DirectX is the way structures are defined to hold vertex data.

Look at this code:

// Define the input layout: one element describing float3 positions.
// Fields: semantic name, semantic index, format, input slot, byte offset,
// input class, instance-data step rate. (Note: SemanticName is an LPCSTR,
// so it must be a narrow string, not L"POSITION".)
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};
UINT numElements = sizeof(layout)/sizeof(layout[0]);

That's about where I gave up on the D3D10 tutorials. Sure, I can understand it; I read the tuts. But that code is so ugly. Lots of opportunity for typos in there.

Once you've created that, you use it as an input to a function that creates a D3D vertex layout object (a function which uses a FAILED() macro instead of returning NULL for its created object). You then need to assign that new object as the input layout.
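
Roughly like this (a sketch; device stands for your ID3D10Device pointer, and vsBytecode/vsSize are illustrative names for the compiled vertex shader blob the layout is validated against):

// Create the layout object; failure comes back as an HRESULT checked
// with FAILED(), not as a NULL return value.
ID3D10InputLayout* inputLayout = NULL;
HRESULT hr = device->CreateInputLayout(layout, numElements,
                                       vsBytecode, vsSize, &inputLayout);
if (FAILED(hr))
    return hr;

// Bind it to the input-assembler stage for subsequent draw calls.
device->IASetInputLayout(inputLayout);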

Your actual drawing code could be somewhere else, and it has no idea what layout is assigned. This is exactly the same problem you have in GL with active objects possibly being changed out from under you.

In GL, the format specification is implied by the data specification, so when you set the data, the format is automatically correct for what you need.
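
For comparison, the GL side of the same thing is just this (a sketch; vbo is assumed to be a buffer object already filled with tightly packed float3 positions, using generic attribute 0):

// The format (3 floats per vertex) rides along with the data pointer call
// itself; there is no separate layout object to create, validate, and bind.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);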