Quote Originally Posted by kRogue View Post
No; all of those use X to do exactly the following:
  1. Create -one- window
  2. Poll X for events


All the drawing is done to a -buffer- by the toolkit.
No, that's not how they work. You can run "nm -D" on a toolkit library to examine its imported symbols and confirm that it really does issue X/XRender/cairo drawing commands (not just "put image"). Or, if you want to be certain, you can read the source code.

Quote Originally Posted by kRogue View Post
OpenGL resides on the XServer. The OpenGL implementation is then required to be able to take commands from a remote device (the client).
The X server takes commands from the client; that's what X servers do. The OpenGL driver takes commands either from the X server (indirect rendering) or the client (direct rendering). To the driver, there's no difference; the X server is just another local client.

Quote Originally Posted by kRogue View Post
Huh?!! AMD has released the specs to the GPU's (outside of video decode); Intel's GL driver for Linux is entirely open source.
Those are relatively recent moves, considering the amount of effort involved. Video hardware is very complex.

Quote Originally Posted by kRogue View Post
Lets take a real look at why it is not there: the effort to make remote rendering just work is borderline heroic. The underlying framework (DRI2) does not work over a network.
DRI doesn't need to work over a network. The X server is running on the same system as the video hardware.

Quote Originally Posted by kRogue View Post
Regardless this proves my point: remote rendering is such a rarely used/wanted feature that it is not implemented really.
It is implemented, at least up to OpenGL 1.4. The main issue is that some of the newer features require extensions to the wire protocol, and that isn't just a matter of someone putting in the coding time, because wire-protocol changes affect interoperability between clients and servers.

The ones which are just adding new functions are relatively straightforward; mostly, you just need an opcode.

Where it gets more complex is for things like the buffer API itself, and for VBOs, where the wire protocol for existing functions such as glDrawArrays has to change depending upon whether a buffer is bound to the GL_ARRAY_BUFFER target: if one is, the request only needs to carry the offset; if not, the relevant portions of the client-side vertex arrays have to be included in the request.

In the long run, VBOs are simpler (bulk data transfer only occurs via the buffer API), but right now you have issues with the combination of old and new features. I wouldn't be that surprised (or bothered) if the developers decided to make life easy for themselves, stopped extending GLX, and just added a "GLX 2.0" for the core profile only (i.e. you get either 1.4 or 3.0+ core, not the kitchen-sink-included version).