OpenGL.org — Part of the Khronos Group

Thread: OpenGL and how the driver works?

  1. #21 — Member, Regular Contributor (joined Jun 2013, 474 posts)
    Quote Originally Posted by kRogue:
    No; all of those use X to do exactly the following:
    1. Create -one- window
    2. Poll X for events

    All the drawing is done to a -buffer- by the toolkit.
    No, that's not how they work. You can use "nm -D" to examine a library's import symbols and confirm that they really do use X/XRender/cairo drawing commands (beyond "put image"). Or, if you want to be certain, look at the source code.
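    To make the check concrete, here is a sketch of that "nm -D" inspection. libc stands in only because its path is easy to locate portably; point `lib` at libgtk-3 or libQt5Gui instead to see the toolkit's cairo/X imports. Lines whose symbol type is `U` are functions the library imports from elsewhere.

```shell
# Sketch only: list a few symbols a shared library imports ('U' entries).
# Substitute your toolkit library's path to see its cairo_*/X* imports.
lib=$(find /lib/ /usr/lib/ -name 'libc.so.6' 2>/dev/null | head -n 1)
nm -D "$lib" | awk '$(NF-1) == "U" {print $NF}' | head -n 5
```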

    Quote Originally Posted by kRogue:
    OpenGL resides on the XServer. The OpenGL implementation is then required to be able to take commands from a remote device (the client).
    The X server takes commands from the client; that's what X servers do. The OpenGL driver takes commands either from the X server (indirect rendering) or the client (direct rendering). To the driver, there's no difference; the X server is just another local client.

    Quote Originally Posted by kRogue:
    Huh?!! AMD has released the specs to the GPUs (outside of video decode); Intel's GL driver for Linux is entirely open source.
    Those are relatively recent moves, considering the amount of effort involved. Video hardware is very complex.

    Quote Originally Posted by kRogue:
    Let's take a real look at why it is not there: the effort to make remote rendering just work is borderline heroic. The underlying framework (DRI2) does not work over a network.
    DRI doesn't need to work over a network. The X server is running on the same system as the video hardware.

    Quote Originally Posted by kRogue:
    Regardless, this proves my point: remote rendering is such a rarely used/wanted feature that it is not really implemented.
    It is implemented, at least up to GLX 1.4. The main issue is that some of the newer features require extensions to the wire protocol, and that isn't just a matter of someone putting in the coding time, because it affects interoperability.

    The ones which are just adding new functions are relatively straightforward; mostly, you just need an opcode.

    Where it gets more complex is for things like the buffer API itself, as well as VBOs, where the wire protocol for existing functions such as glDrawArrays has to change depending upon whether there's a buffer bound to the GL_ARRAY_BUFFER target (in which case, it just needs to send the offset) or not (in which case, the relevant portions of the client-side vertex arrays have to be included in the request).
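    That branching can be sketched in code. The following is my own illustration of the client-side marshalling decision, not the real GLX encoder; the struct, function, and field names are all hypothetical.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the client-side decision for encoding a
 * DrawArrays request; names and request layout are illustrative.
 * Returns the number of payload bytes written, or 0 if req_cap is
 * too small. */
typedef struct {
    int         buffer_bound; /* nonzero if a VBO is bound to GL_ARRAY_BUFFER */
    size_t      offset;       /* attribute offset, used when a VBO is bound */
    const void *client_ptr;   /* client-side array, used when no VBO is bound */
} vertex_source;

size_t encode_draw_arrays(const vertex_source *src, size_t count,
                          size_t stride, unsigned char *req, size_t req_cap)
{
    if (src->buffer_bound) {
        /* VBO bound: the data already lives server-side, so the request
         * only needs to carry the offset into the buffer. */
        if (req_cap < sizeof src->offset)
            return 0;
        memcpy(req, &src->offset, sizeof src->offset);
        return sizeof src->offset;
    }
    /* Client-side arrays: the vertex data itself must ride in the request. */
    size_t bytes = count * stride;
    if (req_cap < bytes)
        return 0;
    memcpy(req, src->client_ptr, bytes);
    return bytes;
}
```

    The point of the sketch is that the request size for the same GL call differs by orders of magnitude depending on buffer bindings, which is exactly what makes extending the existing wire protocol delicate.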

    In the long run, VBOs are simpler (bulk data transfer only occurs via the buffer API), but right now you have the issues with the combination of old and new features. I wouldn't be that surprised (or bothered) if the developers decided to make life easy for themselves and stopped extending GLX and just added "GLX 2.0" for core profile only (i.e. you get either 1.4 or 3.0+ core, not the kitchen-sink-included version).

  2. #22 — Member, Regular Contributor (joined Jun 2013, 474 posts)
    Quote Originally Posted by Alfonse Reinheart:
    Mapping buffers for example would absolutely murder performance for a networked renderer compared to even a much slower client-side GPU.
    That depends upon whether you're relying upon it being effectively free to map the bits you don't use. If you map a buffer range then read or write the entire range, mapping isn't going to be any different to using gl[Get]BufferSubData(). If you map an entire buffer read-write and only touch a small portion of it then, yes, that's going to hurt networked performance.
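    The difference can be put in rough numbers. This is a toy cost model of my own, not anything from the GLX specification: a whole-buffer GL_READ_WRITE map forces a full download plus a full upload, while a write-only map with explicit flushing only ships the flushed range.

```c
#include <stddef.h>

/* Toy cost model (illustration only, not the real GLX protocol): bytes a
 * forwarding GLX implementation would have to move for each access pattern. */

/* glMapBuffer(GL_READ_WRITE) over the whole buffer: the client must fetch
 * all buf_len bytes up front and send them all back on unmap, because the
 * protocol cannot know which bytes the application actually touched. */
size_t wire_cost_full_map(size_t buf_len)
{
    return buf_len * 2; /* full download + full upload */
}

/* glMapBufferRange(GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT):
 * nothing is fetched, and only the explicitly flushed range is sent. */
size_t wire_cost_explicit_flush(size_t flushed)
{
    return flushed;
}
```

    Under this model, touching 64 bytes of a 1 MiB buffer costs 2 MiB of traffic with a full read-write map, versus 64 bytes with an explicit flush, which is the "absolutely murder performance" case versus the benign one.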

    Either way, if you assume that transferring data between the application and the video hardware is relatively cheap, that's not going to work remotely. But assuming that communication overhead can be ignored has never been wise, as processing power seems to consistently outstrip communication bandwidth. When video hardware could just about manage blits and logic ops, the communication channel was an 8 MHz ISA bus. Now we have PCIe x16, but the GPU is practically a supercomputer.

  3. #23 — Junior Member, Regular Contributor (joined Dec 2009, 197 posts)
    Quote Originally Posted by GClements:
    The ones which are just adding new functions are relatively straightforward; mostly, you just need an opcode.

    Where it gets more complex is for things like the buffer API itself, as well as VBOs, where the wire protocol for existing functions such as glDrawArrays has to change depending upon whether there's a buffer bound to the GL_ARRAY_BUFFER target (in which case, it just needs to send the offset) or not (in which case, the relevant portions of the client-side vertex arrays have to be included in the request).
    The funny thing is that the GLX protocol for buffer objects is fully defined by the ARB, including the MapBuffer and MapBufferRange calls, in the respective ARB extension docs. What is missing are the opcodes for most of the "easy" functions.
