scrap opengl32.dll



l_belev
12-01-2011, 12:41 PM
this is windows-specific

I was wondering: what do we need opengl32.dll for?
today it only serves as a standard rendezvous point for gl apps and the gl driver. other than that it is useless and only gets in the way.

Wouldn't it be nice if the ARB approved a standard name for a new dll (say arbopengl.dll) that the HW vendors would place in the windows system directory (system32) along with their ICDs.
Of course we will still need to support the old path through opengl32.dll for backward compatibility.

Using the new dll, we could have a new, cleaner and up-to-date way to set up opengl contexts under windows.
for example:

int glWGetNumDevices();
BOOL glWGetDeviceDescr(int dev, GLWDEVICEDESCR *descr);
GLWContext glWCreateContext(int device, int *attribs);
BOOL glWMakeCurrent(GLWContext ctx);
GLuint glWCreateRenderBufferForWindow(GLWContext ctx, HWND wnd, BOOL single_buffered);
BOOL glWSwapBuffers(GLWContext ctx, GLuint rbuf, int vsync);

glWCreateContext can receive a zero-terminated attribute list that specifies things like the minimal required opengl version, compatibility or core profile, etc.
These contexts will not have a default framebuffer.
Instead, drawing to a window will be enabled by the function glWCreateRenderBufferForWindow. It returns the name of a renderbuffer object (which can be attached to framebuffer objects, combined with depth buffers, etc.). glWSwapBuffers shows the current content of that renderbuffer object in its associated window, performing vertical synchronization as specified by the vsync parameter. Single-buffered ones may be left unsupported if they are a problem.
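
For illustration, here is a sketch of how an application might use this hypothetical api (none of these functions exist today; this only shows the intended flow):

int attribs[] = { 0 };  /* zero-terminated; could request min version, core profile, ... */
int n = glWGetNumDevices();                     /* gpus from all vendors */
GLWContext ctx = glWCreateContext(0, attribs);  /* context on device 0, no window involved */
glWMakeCurrent(ctx);
GLuint wbuf = glWCreateRenderBufferForWindow(ctx, hwnd, FALSE); /* double-buffered */
/* attach wbuf to a framebuffer object, render the frame, then: */
glWSwapBuffers(ctx, wbuf, 1);                   /* present with vsync */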

what do you think?

EDIT: ah yes, the format of the "window renderbuffers" cannot be specified by the app. it is chosen by the implementation in an unspecified way (e.g. it may be linked to the current windows video mode as set by ChangeDisplaySettings).
get rid of the pixel formats!

glfreak
12-01-2011, 01:28 PM
I completely agree on scrapping the ancient OpenGL32.DLL along with the window-system dependent context management stuff.

However I think the problem is that the mechanics of window and context management and interaction with the HW require licensing or a specification from the platform vendor, since they involve operating system internals, which the vendor may not give away easily.

Another issue is driver quality again: unless they come up with a reliable conformance test, drivers may screw things up at the context initialization stage, which is terrible.

I would suggest an SDK approach, and I don't mean the current OpenGL SDK (wrappers + tutorials :)). The OpenGL implementation would be layered on top of a minimal driver (provided by the IHV) and a software rendering path. The GL SDK should default to the HW-accelerated rendering path whenever possible; only when a feature is missing, because the driver/hw is not up to date with the current SDK version, would it fall back to the software renderer.

l_belev
12-01-2011, 01:58 PM
i doubt licensing is required, judging by the example of OpenCL.

on the question of driver quality, i dont see how a new dll could change what we have now for better or for worse.
it's not like microsoft requires the HW vendors to pass any conformance test when it comes to opengl. in all cases it's entirely up to their good will.

mhagain
12-01-2011, 02:16 PM
If you're going to suggest retaining a software fallback path, then PLEASE, and this is more important than scrapping a dll, PLEASE give developers a means of detecting in code when this path will be triggered. That's all I want.

l_belev
12-01-2011, 02:30 PM
do you mean retaining the old microsoft software opengl implementation?
of course that is excluded. the new dll will have nothing to do with microsoft (as is already the case with OpenCL).

Alfonse Reinheart
12-01-2011, 03:18 PM
OK, let's say you were to get IHV support for this. How would you implement it? Microsoft controls Win32, which is the basis of all window rendering. Direct3D and WGL are both built on top of it. The only reason they can drill through the normal window drawing systems is because Microsoft put specific hooks in Win32 to allow D3D and WGL to drill through it and render to a specific location of the screen.

IHVs can control a lot. But they can't control the OS itself (which is why WGL and GLX are provided by non-OpenGL code). You cannot force a Windows window to be rendered via direct GPU commands, not without Windows being aware of it and mediating the process. Why do you think OpenGL32.dll exists to begin with?

The absolute best you might do is layer OpenGL on top of D3D, but that's going to come with its own deficiencies and issues.

OpenCL can end-run around this because it doesn't draw anything. It doesn't need to associate with Win32 in order to work; it talks directly to the driver. So the only way for what you suggest to work would be to have no default framebuffer at all.

That might be useful for some GPGPU applications. But I'm guessing that most people wanting to render stuff want to actually see it on screen.

l_belev
12-01-2011, 04:50 PM
i don't think opengl32.dll really does anything important for the linking between opengl and the windowing system. a while ago i did some experimenting with the ICDs and found that they basically do everything on their own. All they need is some form of link to their kernel-space counterparts, which is not hard to accomplish at all.

i'm not very familiar with composition-enabled systems, but on XP, and on w7/vista with composition disabled, all that is needed is the client rect and the visible region of a window (the latter can be obtained with the GetRandomRgn function). then the driver just draws over those parts of the screen (the screen is ultimately under the HW driver's control anyway).
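
For reference, a sketch of querying that visible region (GetRandomRgn is a real but barely documented GDI function; SYSRGN is the well-known region index 4):

#include <windows.h>
#define SYSRGN 4  /* the "system" (visible) region; missing from older SDK headers */

HDC dc = GetDC(hwnd);
HRGN rgn = CreateRectRgn(0, 0, 0, 0);
if (GetRandomRgn(dc, rgn, SYSRGN) == 1) {
    /* on NT-based windows the region is in screen coordinates;
       a driver could clip its direct screen writes to it */
}
DeleteObject(rgn);
ReleaseDC(hwnd, dc);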

edit:
for the composition case maybe something like this would work: since the composition is done on the GPU, the contents of the windows must be kept in some kind of textures.
then if our driver can find the texture that corresponds to a given window, it can do its opengl drawing there. remember that the driver has the final word on any GPU resource, including this texture, and microsoft's code cannot possibly prevent the driver from drawing there (or even detect it, if the driver wants to hide it).
then the driver probably needs to "invalidate" that window with some windows api in order to force the window manager to redraw it (doing its composition) on the screen.

there is an "if" here: whether the driver can know which texture belongs to a given window. i don't know if it can obtain that info.

maybe it will not work for other reasons too. as i said im not familiar enough with how the composition works and what its relation to the driver is.

Hongwei Li
12-10-2011, 10:35 PM
opengl32.dll is light and does almost nothing. The reason it exists is that it is the common, stable library for all opengl programs. Once loaded, it loads the ICD and relies on the ICD for the rest.

I agree with removing opengl32.dll. My only concern is who will provide the replacement. NVidia or AMD? It is a difficult question.

l_belev
12-11-2011, 05:16 PM
opengl32.dll is light and does almost nothing. The reason it exists is that it is the common, stable library for all opengl programs. Once loaded, it loads the ICD and relies on the ICD for the rest.

It is light, that's true, but the obsolete interface it provides forces a very complicated and ugly opengl setup under windows.
here are 2 examples:
in order to use wglCreateContextAttribsARB we first have to create a dummy context with its dummy window, get proc addresses, and destroy the dummies. only then can we create a context the new way.
also, while some gl vendors provide ways to choose among their gpus, there is no reliable inter-vendor way to do that.
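
For anyone who hasn't suffered through it, the dummy dance looks roughly like this (standard wgl; error checks omitted, PFNWGLCREATECONTEXTATTRIBSARBPROC comes from wglext.h):

HWND dummy = CreateWindowA("STATIC", "", WS_POPUP, 0, 0, 1, 1, NULL, NULL, NULL, NULL);
HDC dc = GetDC(dummy);
PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER };
SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);
HGLRC tmp = wglCreateContext(dc);
wglMakeCurrent(dc, tmp);
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
wglMakeCurrent(NULL, NULL);   /* the dummies have served their purpose */
wglDeleteContext(tmp);
ReleaseDC(dummy, dc);
DestroyWindow(dummy);
/* only now can the real window and context be created with attributes */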



I agree with removing opengl32.dll. My only concern is who will provide the replacement. NVidia or AMD? It is a difficult question.

I think it should come with the ICDs. It should inquire about the installed ICDs (even the ones from other vendors) and should be able to load any and all of them at the same time. opengl32.dll can't load more than one ICD; it doesn't even let you choose which one.
The new dll should present to the application the collection of all gpus in the system, from all vendors, with a unified interface.
Of course only one current context per thread would still be allowed, but different threads in a process should be able to have current contexts from different vendors at the same time.
This is not hard to do at all. I was able to do it myself by directly loading the ICDs and using their exported functions (DrvCreateContext, DrvSetContext, etc.)

I imagine the new dll to be extremely tiny. Its only job is to find the ICDs (read from a standard location in the registry), load each of them to get info about its gpus, and probably unload the unused ones.

Maybe we can even do without a central dll. We (the app developers) can just read the registry ourselves and load the ICD dlls. In that case it would suffice for the ICDs to expect that they can be loaded directly (without opengl32.dll) and to provide better, up-to-date exported functions.
For example, one problem (though workaround-able) is that in order to get any of the newer functions you need to use DrvGetProcAddress, which returns NULL without a current context.
Another problem is that some of the new functions (like UINT wglGetContextGPUIDAMD(HGLRC hglrc);) receive as argument an opengl32.dll-ish context handle (instead of a driver handle).
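
A sketch of what that direct loading might look like, assuming the classic ICD registration under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\OpenGLDrivers (the exact registry layout differs between windows versions, so treat this as illustration only):

#include <windows.h>

typedef PROC (WINAPI *PFN_DrvGetProcAddress)(LPCSTR);

HKEY key;
if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
        "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\OpenGLDrivers",
        0, KEY_READ, &key) == ERROR_SUCCESS)
{
    char name[256]; DWORD i, len = sizeof(name);
    for (i = 0; RegEnumKeyExA(key, i, name, &len,
                              NULL, NULL, NULL, NULL) == ERROR_SUCCESS;
         i++, len = sizeof(name))
    {
        /* simplified: each entry names the ICD dll (e.g. atioglxx, nvoglv32) */
        HMODULE icd = LoadLibraryA(name);
        if (icd) {
            PFN_DrvGetProcAddress drvGetProc =
                (PFN_DrvGetProcAddress)GetProcAddress(icd, "DrvGetProcAddress");
            /* DrvCreateContext, DrvSetContext, DrvSwapBuffers etc. live here too */
        }
    }
    RegCloseKey(key);
}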

Alfonse Reinheart
12-11-2011, 09:18 PM
This is not hard to do at all.

Then why haven't you done it? It sounds like you could write this as a simple wrapper library over WGL. So if it's "not hard to do at all," then go do it.


Maybe we can even do without a central dll. We (the app developers) can just read the registry ourselves and load the ICD dlls.

Because reading through the Windows registry for certain specific registry keys and manually loading DLLs is far less obfuscated than creating two windows. I'm sure it'll be lots easier to teach people which registry bits to poke at and which DLLs and functions to load than to just use OpenGL32.dll.

l_belev
12-12-2011, 05:20 AM
oh not you again!
ok i will reply only this time


Then why haven't you done it? It sounds like you could write this as a simple wrapper library over WGL. So if it's "not hard to do at all," then go do it.

Yes, it is not hard. As i said, i was able to do it, but i used a certain amount of debugging, tracing into dlls, and then hacking, patching and hooking, because the ICDs are not written with the presumption that anything other than opengl32 will ever try to load them. This is not an appropriate way for every application developer to do it, but the driver vendors could just fix their ICDs to plainly work without opengl32.


Because reading through the Windows registry for certain specific registry keys and manually loading DLLs is far less obfuscated than creating two windows. I'm sure it'll be lots easier to teach people which registry bits to poke at and which DLLs and functions to load than to just use OpenGL32.dll.

Ok, i will state the obvious. Yes, it is less confusing and ugly than the dummy windows and contexts. How would you explain to some newcomer what the point of all that dummy mess is? You would tell him: well, that's just the way it's done, don't ask why.
If you instead read the dll names from the registry and then load them, you know what it is all about.
But the main point is that it would allow you to use all gpus (even from different vendors), which you can't with opengl32.

There are more benefits. For example we can get away from the device contexts and pixel formats.

By the way, did you ever wonder why opengl on windows works with device contexts instead of directly with windows? (the pixel formats are actually per-window even though you set them on a dc)
It's because back when the wgl api was being designed, microsoft had the ingenious idea of using opengl directly with printers (can you imagine!). Of course that never saw any practical use, but apparently they thought otherwise. Anyway.

Alfonse Reinheart
12-12-2011, 10:24 AM
How would you explain to some newcomer what the point of all that dummy mess is? You would tell him: well, that's just the way it's done, don't ask why.

I would explain it like this. (http://www.opengl.org/wiki/Creating_an_OpenGL_Context#Proper_Context_Creation) I would assume that the person is reasonably mature and simply tell them the truth.


For example we can get away from the device contexts and pixel formats.

DCs I can understand. But what's wrong with pixel formats? They're more or less identical to how glX does it with its GLXFBConfig and XVisualInfo objects.

l_belev
12-12-2011, 12:44 PM
since we have framebuffer objects and their flexibility, pixel formats are mostly a ghost of the past. nowadays one would typically choose some minimalistic pixel format (no depth buffers, etc.) just to get to the actual opengl, then use framebuffer objects, which let you dynamically allocate whatever buffers you want and combine them freely. at the end of the frame you just blit the frame image to the default framebuffer (0).
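
In code the per-frame part is just this (standard gl 3.x calls; the fbo setup with its own color/depth attachments is omitted):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);      /* our own color + depth attachments */
/* ... render the scene ... */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* 0 = the window's default framebuffer */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
SwapBuffers(dc);                             /* present via the minimalistic pixel format */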

There is only one problem.
Suppose that:
a) your application runs fullscreen, and
b) it would be sufficient to draw directly to the window (i.e. it does not need the frame image in a texture for post-processing or whatever).

Then of course it can still use a framebuffer object and then blit to the window, but otherwise the blit operation could be replaced with a much cheaper flip operation (swap buffers).

In this special case (special because nowadays who doesn't use some kind of post-processing) using pixel formats could give some (although not great) performance gain by eliminating one frame copy operation.

Now if we can resolve this little issue, the pixel format will become totally pointless.

The glX api is the same in this regard, yes. But (like wgl) it is older than framebuffer objects, so the above applies to it too. I consider GLXFBConfigs obsolete as well.

I would be very glad if we had new apis that somehow made it possible to draw to a window within the framework of the framebuffer objects.

For example
=================================
GLuint <prefix>CreateWindowbuffer(<HWND/Window> wnd); - creates a windowbuffer object that corresponds to the given window and can be used like a renderbuffer object (can be attached to framebuffers, etc.). The size of the new windowbuffer is the current window size (the client rect on windows). The internal format is decided by the implementation but can be queried later.

void <prefix>DrawWindowbuffer(<HWND/Window> wnd, int vsync); - causes the current image content of the windowbuffer corresponding to wnd to be displayed in wnd. This may result in a blit or a flip operation depending on outside factors. After this operation the contents of the windowbuffer are undefined. An automatic vertical sync may be performed as specified by the vsync param.

void <prefix>ReadWindowbuffer(<HWND/Window> wnd); - causes the current image shown in the window to be captured into the corresponding windowbuffer.

void <prefix>UpdateWindowbufferSize(<HWND/Window> wnd); - syncs the windowbuffer size to the window's. The windowbuffer contents are undefined after this operation. Windowbuffers don't track their window's size automatically. A DrawWindowbuffer or ReadWindowbuffer operation on a windowbuffer with a desynced size has an undefined result - it may copy just the intersection, or it may refuse to copy at all and generate an error, but it should in no case crash. This is intended to relieve the driver of the burden of automatically tracking the window size (e.g. intercepting window messages on windows).
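
Put together, a frame with this hypothetical api could look like this (glW names as proposed above; none of it exists today):

GLuint wbuf = glWCreateWindowbuffer(hwnd);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, wbuf);
/* ... attach a depth renderbuffer if needed, render the scene ... */
glWDrawWindowbuffer(hwnd, 1);     /* present with vsync; wbuf contents now undefined */
/* on a window resize: glWUpdateWindowbufferSize(hwnd); */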

does this look good?

Leith Bade
12-12-2011, 08:28 PM
I like this idea. D3D's DXGI system for choosing adapters and creating device contexts is much better than OpenGL's wacky system.

But if you are going to rewrite a fundamental part of OpenGL, we might as well clean up OpenGL, i.e. add direct state access and remove the old non-DSA functions.

One interesting side effect of the proposed system is that it may be necessary to drop some old GL extensions like pbuffers etc. Perhaps require a GL2 minimum, core profile only, or something like that.

It would be a good idea to update the GLX and Apple APIs to match this new spec as well. This would allow very easy porting.

l_belev
12-13-2011, 05:20 AM
better not, lest we scare the vendors with a too-ambitious suggestion, which may translate into too much work for them :)
still, i think the "windowbuffers" should be pretty easy to implement

Leith Bade
12-13-2011, 01:07 PM
If only it were possible to write our own GL drivers... (well, technically you can with AMD GPUs since they document their device control interface).

One day I will write a Linux 3D demo that directly talks to the GPU via register poking.

l_belev
12-17-2011, 09:21 AM
here is some proof-of-concept implementation of my idea for another dll.
it works directly with the icds and doesn't use opengl32.dll
it implements the example "glW" api that i suggested.
as it is only a "proof-of-concept" i didn't bother to do extensive error checks; if something goes wrong it will probably crash.

it attempts to find all usable gpus in the system no matter the driver/vendor.
for ATI drivers it uses some funcs from WGL_AMD_gpu_association to find the gpus and their corresponding displays. then, to create a context for a given gpu, it uses a window on that gpu's display. this is a feature of the ATI driver.
for nvidia, WGL_NV_gpu_affinity could be used, but since that extension is only present on quadros and i dont have a quadro, i left the support for it unfinished.
if neither extension is available, the dll assumes a single gpu per driver, as there is nothing better to do.

there are 2 msvc projects, the second is a test application that uses the dll.

Alfonse Reinheart
12-17-2011, 10:35 AM
The proof of concept shows some of the problems with your windowbuffer management code. Specifically, no vsync. Also, you're constantly reallocating the backbuffer, which will no doubt fragment video memory. ATI and NVIDIA both recommend creating rendering surfaces first and not reallocating them later.

Since this API will never be implemented by IHVs, there's really no point in having this buffer interface. Just use double-buffered contexts; they resize themselves properly and (probably) without excessive memory fragmentation. Plus, you get proper vsync.

The rest of it is actually reasonably worthwhile (though obviously glWCreateContext is missing key features, like aspects of the context, specifying the window's size, etc).

l_belev
12-17-2011, 10:56 AM
if the driver implemented that interface, it could implement the vsync too. obviously i cant do that without the driver's support.

im not reallocating backbuffers, where did you see such a thing?

glWCreateContext should not specify window size because it has nothing to do with any windows.
the idea is that the contexts are detached from the window-system.
only glWCreateWindowbuffer and friends work with windows.

by the way the concept in question that was to be proven is that we can get away from opengl32 - it is not needed for anything.

Alfonse Reinheart
12-17-2011, 01:17 PM
im not reallocating backbuffers, where did you see such a thing?

The glRenderbufferStorage call you make whenever the user calls glWUpdateWindowbufferSize. It reallocates the buffer that effectively works as the window's back-buffer, and it will have to be called after every window resize.
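
Presumably (a reconstruction, not the actual source) that update boils down to something like:

RECT rc;
GetClientRect(hwnd, &rc);
glBindRenderbuffer(GL_RENDERBUFFER, wbuf);
/* glRenderbufferStorage throws away the old storage and allocates anew */
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, rc.right, rc.bottom);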


by the way the concept in question that was to be proven is that we can get away from opengl32

Fair enough. So can you prove that this API is actually an improvement over the current one? Because thus far, it isn't.

Show me how `glCreateWindowBuffer` would allow me to render to an SRGB framebuffer. Show me how `glCreateWindowBuffer` would allow me to render to a multisample framebuffer. Or both.

Admittedly, what I do with OpenGL doesn't care about multiple GPUs and such, so from my perspective, all I'm avoiding with your API is the creation of a phony window/context. But what I do with OpenGL does care about having proper gamma correction and multisampling. And your API doesn't have provisions for that. So from my perspective, however slightly nicer your API is, it is not as useful.

Also, your API doesn't take into account the possibility of SLI/Crossfire-based rendering. Not everyone wants to manually manage GPUs and whatnot; some of us just want to draw stuff to the screen. And if there are two GPUs, we're fine with drivers doing behind-the-scenes work to make that happen. Your API requires selecting a GPU.

Most importantly of all, why do IHVs have to implement this? With WGL_AMD_gpu_association, you can wrap all of this in a library easily enough.

Yes, creating contexts in OpenGL on Windows is ugly. But you only have to write that code once. Stick it in a library with a nicer API, and it's all fine. You can even call it the "GLW" library.

The only part of your API that needs specific IHV support is the window-buffer stuff: the separation of framebuffers from rendering contexts. Everything else would work perfectly fine as a library layered on top of wgl.

And frankly, the window-buffer stuff is what makes the API bad. Simply allow the user to create window-less contexts or contexts with a window. They pick which one they want up-front for that context, and that's the end of it.

In short, why does this API need the window-buffer stuff? And if it doesn't, why does it need to be implemented by IHVs?

l_belev
12-17-2011, 01:48 PM
The glRenderbufferStorage call you make whenever the user calls glWUpdateWindowbufferSize. That reallocates the buffer that effectively works as the window's back-buffer. This will have to get called after every window resizing.


This one? This is hardly "always". glWUpdateWindowbufferSize was intended to be used only on window resize, which is a rare event. What do you think happens right now when an opengl window gets resized? Exactly the same: the driver reallocates its buffer because there is nothing better that can be done.

But anyway, bear in mind that what i do in this code is wrap the current driver interface. A driver that implemented it directly could do it differently.

Also bear in mind that this interface is only an example of what i think would be good. Of course, if the ARB were to standardize a new interface, i very much doubt it would be exactly this one. So dont focus too much on little details but try to see the overall idea.

Anyway, i dont need to prove anything to YOU in particular. I dont for a moment doubt that you will dislike anything i propose. Your forum-troll behavior is well known to me; i dont expect anything constructive from you and i dont care about your opinion at all.

l_belev
12-19-2011, 03:37 AM
here is a link to the sample i uploaded as it got buried under pointless posts
http://www.opengl.org/discussion_boards/...ilename=glw.zip (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=download&Number=421&filename=glw.zip)

Alfonse Reinheart
12-19-2011, 11:18 AM
glWUpdateWindowbufferSize was intended to be used only on window resize, which is a rare event.

That depends on what application you're writing, yes? I'm sure Blender3D windows are resized more frequently than, for example, full-screen videogames. And non-full-screen videogames often, as a matter of courtesy, allow themselves to be resized.

Not all applications work the same way.


What do you think happens right now when an opengl window gets resized? Exactly the same: the driver reallocates its buffer because there is nothing better that can be done.

That's up to the driver. It could allocate a bigger buffer internally than the current resolution requires. Indeed, it could allocate a buffer the size of the desktop, just in case. Or it could reserve the desktop resolution's worth of space for the framebuffer; if the application starts to need that empty space (allocating lots of textures), it can dip into it, but only as a last resort. It can play games with these things.

Yes, in the worst case, every size change reallocates the buffer and fragments memory. But drivers have a lot of leeway in how they allocate things.

To be fair, if this window-buffer stuff were implemented by the driver, the driver could still do all of these things. But the user can't: the user is providing the back-buffer, which is where the problem comes from. Having to directly manage the back buffer makes it impossible to handle this.


I dont for a moment doubt that you will dislike anything i propose.

Let me tell you something about me. I do not generally notice who people are. I don't really read names; I answer posts. I talk about what is said, not who said it. The only reasons I even know you from other posters on the forum are:

1: You consistently refuse to capitalize the word "I".

2: You consistently try to make things personal when I'm talking about the merits of your idea. You've gotten it into your head that I'm out to get you.

I'm out to get ideas I think are bad or non-productive. If I seem to be "trolling" you, it is only because, from my perspective, you are consistently posting ideas that I find to be bad or non-productive. I'm not out to get you; I'm out to get bad ideas, and if you post a lot of them, we will talk frequently.

If you posted an idea I found to be good and there was actually a chance that the ARB would implement it, I wouldn't argue against it. For example, this thread (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=58898&Number=305734). The idea has actual merit, fills a need, and is something that the ARB might actually implement (unlike replacing OpenGL32.DLL, which the ARB cannot and will not do). You will notice that my contribution to that thread consisted primarily of asking a question about the frequency of version updates.

So no: I don't dislike anything you propose. Just the bad or non-implementable stuff.

kRogue
12-19-2011, 01:34 PM
.... Just the bad or non-implementable stuff.


ROFL. I sometimes wonder what criteria he uses to determine whether something is not implementable or is bad... since it does not look like he implements GL for a living. [disclosure: I do not implement GL for a living either, I just use it for a living]

At any rate, just an FYI for people: the window-resize bits are really small fries. Indeed, some libraries that do the cross-platform thing for you (getting a window, making a context) do not trigger the actual resize until the user is done resizing... other apps may not resize on each resize event, only after several, or after several frames... an operation done once in a blue moon is not exactly anything to freak out about.

Bits that I really do not like about opengl32.dll are things like the craziness of making a GL context just to get a function pointer so you can make a GL context a different way (glX is guilty here too!). Other things I do not like: you can't make windowless and/or framebufferless contexts.

There is this great quote from an extension from NVIDIA written long ago:



Additions to the WGL Specification

First, close your eyes and pretend that a WGL specification actually
existed. Maybe if we all concentrate hard enough, one will magically
appear.


that bit of wisdom and beauty is from as long as a decade ago (the first version of that spec was from 2001).

So yeah, that just amazingly sucks: there is no spec for wgl, it is a Microsoft API, etc., etc.

For me all I really want is cross-process support for GL data and something like a pixmap and a native API to say "present this pixmap" for window contents.

oh well... EGL did not turn out well in my eyes at all...

l_belev
12-19-2011, 02:12 PM
If you posted an idea I found to be good and there was actually a chance that the ARB would implement it, I wouldn't argue against it.
You talk as if you were someone whose counsel is highly prized and sought after by the ARB. Don't be silly, dude. The fact that you flood this forum (half of the total posts are from you) and keep pestering people does not mean anyone actually gives a damn about your opinion - unlike me, most people just ignore you. In fact, from your posts, your lack of intuition about these matters is quite apparent to me. It surely is to anyone who bothers to read your posts carefully enough. That makes it impossible that the ARB would have anything to do with you.
I am sorry for getting personal, but still, im a human being, not a cold machine, and so im susceptible to anger and get annoyed sometimes. And you are doing your best to annoy people.

I wish the moderators could somehow limit your trolling activity in this forum. Unfortunately you are doing it too skillfully. You are very careful not to break the formal rules, yet you manage to do the job brilliantly.

Now that i think of it, it really looks like you are doing some paid job, given your unmatched activity in the forum for years and years. Whenever anyone comes up with a good idea you are always very fast to trash it at all costs, using absurd arguments when you cant find any good ones. Like someone is paying you to spoil the opengl community.

kRogue
12-20-2011, 01:41 AM
I wish the moderators could somehow limit your trolling activity in this forum. Unfortunately you are doing it too skillfully. You are very careful not to break the formal rules, yet you manage to do the job brilliantly.


...or just a "filter Alfonse" button.



Whenever anyone comes up with some good idea you are always very fast to trash it at all cost, using absurd arguments when you cant find any good. Like someone is paying you to spoil the opengl community.


*OUCH*. Once in a blue moon, his insistence on tearing something down finds an issue... which is, like, 99.99999% of the time correctable. There have been times when he pissed me off too. But oh well, such is life. Though in hindsight it can be funny to read... like this one: kRogue loves GL_NV_shader_buffer_load (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=268887&page=3).

What does concern me is that it is possible that good ideas and comments get ignored because of his excessive postings...

l_belev
12-20-2011, 03:16 AM
What does concern me is that it is possible that good ideas and comments get ignored because of his excessive postings...


This is exactly my concern too, nothing else.
And that is what i came to suspect to be his ultimate goal.

Very often his "arguments" are so absurd that i cant believe he really thinks he is right. Then i wonder what his motivation might be.

glfreak
12-20-2011, 02:21 PM
I agree on the need to trash opengl32.dll along with the antique wgl and glX. Time to have one and only one API for window and context management. For now, let the GL SDK provide only the unified context/windowing API, layered on the IHVs' implementations.

OpenGL rocks and is a five-star API; working with it is amazing vs. other headache APIs that bring projects to a dead end. Let's give it a gift for Christmas and make it more wonderful!

elFarto
12-21-2011, 04:55 AM
There was mention (http://www.khronos.org/assets/uploads/developers/library/2011_GDC_OpenGL/OpenGL-Ecosystem_GDC-Mar11.pdf) of a desktop EGL library. However, I haven't seen it mentioned since.

Regards
elFarto

V-man
12-21-2011, 05:34 AM
I prefer something like EGL because it exists on all the embedded systems. Why do they even call it egl? Just convert those functions to gl functions and put them in the same specification.

I'm not sure why glW is being proposed in this thread. Please make OpenGL cross platform and get rid of wgl/glX/agl and all that craziness.

l_belev
12-24-2011, 03:21 PM
I'm not sure why glW is being proposed in this thread. Please make OpenGL cross platform and get rid of wgl/glX/agl and all that craziness.


That would be best, but first we would have to get rid of all the different OS-es plus their different window-systems and leave only one :)
For the OS i would choose unix, but as for the windowing system i don't know. I don't like X-windows very much, especially its network transparency, which has no meaningful use and only adds overhead.

kRogue
12-24-2011, 05:59 PM
I'm not sure why glW is being proposed in this thread. Please make OpenGL cross platform and get rid of wgl/glX/agl and all that craziness.


I am not at all sure that a "cross-platform" thingamajig for glX/wgl/egl that makes users happy is really possible. Some will jump up and down and say that EGL is just that, except that I am not happy with it :whistle: Issues such as "window handle", "display handle" and "pixmap" crop up (in EGL these are left as typedefs in EGL.h, selected via #ifdef per platform), and EGL is strongly married to essentially double- vs single-buffered surfaces, which are not always the right thing to use. Additional ugly issues pop up all the freaking time in integration with compositing window managers, etc. Lastly, a generic API is likely to suffer from these bits:

- Shoots for the lowest common denominator
- Convoluted bits needed to be a cross-platform API, where certain concepts either do not make sense on some platforms or are very painful to expose in a cross-platform API

On a quasi-related note, oh yes, X11 just sucks.


The concept is usually simple: give me a surface to which to render, give me a context with which to render. The latter concept is pretty cross-platform, but the former is ugly, as it can be tightly coupled to the concepts of the system and hardware (for example the bit-depth pain of mapping X11 visuals to the bits of an EGLSurface), fullscreen vs windowed, and other bits tied to the hardware of a platform. Sometimes it can be shoe-horned in, but often not... the whole thing can get wicked hairy. Then the wicked hairy is cruel in that the contexts possible for a surface may depend on the surface...

l_belev
12-25-2011, 03:37 AM
by the way, my fictional "glW" interface partially solves the cross-platform issue in that at least the context creation is not tied to the concrete windowing system. And if you only need the gl context for offscreen rendering, you can keep total platform independence.
It's only when you need to show something in a window that you have to know about the OS/window-system you are on.

In that sense the functions CTX glWCreateContext(int device, CTX share, int *attribs), glWMakeCurrent(CTX ctx), ... could just be named glCreateContext, glMakeCurrent, ... and opengl's platform independence would increase considerably.