Thread: "gl-streaming" - Lightweight OpenGL ES command streaming framework

  1. #1
    Junior Member Newbie
    Join Date
    Oct 2013
    Posts
    10

    "gl-streaming" - Lightweight OpenGL ES command streaming framework

    Hi,
    I wrote an OpenGL ES command streaming framework for embedded systems - "gl-streaming".

    It is intended to make it possible to run OpenGL programs on an embedded system which has no GPU.

    It uses a server-client execution model for OpenGL, similar in function to GLX, but it is completely independent of the X server and GLX, so it runs on embedded systems that support neither X nor GLX.

    The client runs an OpenGL program, but does not execute the OpenGL commands itself.
    It simply sends them to the server over the network, so the client system does not need a GPU.

    The server receives the OpenGL commands, executes them, and displays the graphics on the monitor connected to the server system.
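
    A rough sketch of the idea (the packet layout and function names below are hypothetical illustrations, not the actual gl-streaming wire format):
    Code :
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    /* one streamed command: an id plus its serialized arguments */
    typedef struct {
      uint32_t cmd;           /* command id, e.g. CMD_GLCLEAR */
      uint32_t size;          /* number of payload bytes that follow */
      uint8_t  payload[256];  /* serialized arguments */
    } gls_packet_t;

    enum { CMD_GLCLEAR = 1 };

    /* client side: serialize a glClear() call and send it to the server
       instead of executing it locally */
    static void stream_glClear(int sock, uint32_t mask)
    {
      gls_packet_t p;
      p.cmd = CMD_GLCLEAR;
      p.size = sizeof(mask);
      memcpy(p.payload, &mask, sizeof(mask));
      send(sock, &p, sizeof(p.cmd) + sizeof(p.size) + p.size, 0);
    }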

    gl-streaming is...

    "fast"
    It runs at 60 frames per second.

    "simple"
    The source tarball size is below 30KB !

    "lightweight"
    The gl_server consumes only 2MB RAM!

    "low latency"
    Its performance is suitable for gaming.

    source code & demo
    https://github.com/shodruky-rhyammer/gl-streaming

    If you are interested in this project, please post a comment.

    Thank you.

  2. #2
    Member Regular Contributor
    Join Date
    Oct 2006
    Posts
    352
    Looks extremely interesting!
    [The Open Toolkit library: C# OpenGL 4.4, OpenGL ES 3.1, OpenAL 1.1 for Mono/.Net]

  3. #3
    Member Regular Contributor
    Join Date
    Jan 2011
    Location
    Paris, France
    Posts
    250
    This looks really nice!

    What about adding the possibility of transferring the graphical output from the server side back to the client side, via something like a glse_SwapBuffers() call at the end of each frame?
    (That way the client, which has no GPU, could display each picture that the server's GPU computed.)

    And how about merging set_server_address/set_server_port and set_client_address/set_client_port into single calls?
    Code :
    void set_server_address_port(server_context_t *c, char *addr, uint16_t port)
    {
      /* strncpy() does not null-terminate when addr fills the buffer,
         so terminate explicitly */
      strncpy(c->server_thread_arg.addr, addr, sizeof(c->server_thread_arg.addr) - 1);
      c->server_thread_arg.addr[sizeof(c->server_thread_arg.addr) - 1] = '\0';
      c->server_thread_arg.port = port;
    }

    void set_client_address_port(server_context_t *c, char *addr, uint16_t port)
    {
      strncpy(c->popper_thread_arg.addr, addr, sizeof(c->popper_thread_arg.addr) - 1);
      c->popper_thread_arg.addr[sizeof(c->popper_thread_arg.addr) - 1] = '\0';
      c->popper_thread_arg.port = port;
    }
    Last edited by The Little Body; 10-23-2013 at 02:54 PM.
    @+
    Yannoo

  4. #4
    Member Regular Contributor
    Join Date
    Jan 2011
    Location
    Paris, France
    Posts
    250
    About glse_SwapBuffers(), I saw this in gl_client/main.c:

    Code :
    gls_cmd_get_context();
    gc.screen_width = glsc_global.screen_width;
    gc.screen_height = glsc_global.screen_height;
    printf("width:%d height:%d\n", glsc_global.screen_width, glsc_global.screen_height);
    init_gl(&gc);

    => perhaps we could add some minimalistic GLUT-like calls to set a screen_width and screen_height adapted to the size of the client's screen, for copying each GPU picture (computed on the server side) back to the client side?
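
    A minimal sketch of what such a call could look like (glse_SetWindowSize and the context type are hypothetical, following the style of the snippet above; a real implementation would also have to propagate the new size to the server):
    Code :
    #include <stdint.h>

    typedef struct {                    /* hypothetical client context */
      uint32_t screen_width;
      uint32_t screen_height;
    } graphics_context_t;

    /* hypothetical GLUT-like helper: let the client request a framebuffer
       size that matches its own screen instead of the server's default */
    void glse_SetWindowSize(graphics_context_t *gc, uint32_t width, uint32_t height)
    {
      gc->screen_width = width;
      gc->screen_height = height;
      /* the new size would still need to be sent to the server so it can
         resize its render target accordingly */
    }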
    @+
    Yannoo

  5. #5
    Junior Member Newbie
    Join Date
    Oct 2013
    Posts
    10
    Quote Originally Posted by Stephen A View Post
    Looks extremely interesting!
    Thanks!
    The gl_client program can also run on an ordinary PC, so please try it if you have a Raspberry Pi.

  6. #6
    Junior Member Newbie
    Join Date
    Oct 2013
    Posts
    10
    Quote Originally Posted by The Little Body View Post
    This looks really nice!

    What about adding the possibility of transferring the graphical output from the server side back to the client side, via something like a glse_SwapBuffers() call at the end of each frame?
    (That way the client, which has no GPU, could display each picture that the server's GPU computed.)

    And how about merging set_server_address/set_server_port and set_client_address/set_client_port into single calls?
    Code :
    void set_server_address_port(server_context_t *c, char *addr, uint16_t port)
    {
      strncpy(c->server_thread_arg.addr, addr, sizeof(c->server_thread_arg.addr) - 1);
      c->server_thread_arg.addr[sizeof(c->server_thread_arg.addr) - 1] = '\0';
      c->server_thread_arg.port = port;
    }

    void set_client_address_port(server_context_t *c, char *addr, uint16_t port)
    {
      strncpy(c->popper_thread_arg.addr, addr, sizeof(c->popper_thread_arg.addr) - 1);
      c->popper_thread_arg.addr[sizeof(c->popper_thread_arg.addr) - 1] = '\0';
      c->popper_thread_arg.port = port;
    }
    Thanks!
    Merging them like that is a good simplification. I'll improve it.

    Quote Originally Posted by The Little Body
    => perhaps we could add some minimalistic GLUT-like calls to set a screen_width and screen_height adapted to the size of the client's screen, for copying each GPU picture (computed on the server side) back to the client side?
    It's possible for gl_client to get the rendered image back, the same way glGenBuffers (in glclient.c) returns data from the server.
    But bandwidth may be a challenge.
    I'll try implementing glReadPixels and check the performance.
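
    A back-of-the-envelope sketch of the client-side readback (assuming RGBA output; the numbers show why bandwidth is the concern):
    Code :
    #include <stdlib.h>
    #include <GLES2/gl2.h>

    /* Read back one rendered frame (sketch). At 1280x720 RGBA this is
       ~3.7 MB per frame, ~221 MB/s at 60 fps - far beyond 100 Mbit
       Ethernet, hence the bandwidth concern. */
    static GLubyte *read_back_frame(GLsizei width, GLsizei height)
    {
      GLubyte *pixels = malloc((size_t)width * height * 4);
      if (pixels)
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
      return pixels;  /* caller would stream this buffer over the socket */
    }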

  7. #7
    Member Regular Contributor
    Join Date
    Jan 2011
    Location
    Paris, France
    Posts
    250
    Quote Originally Posted by shodruk View Post
    It's possible to get the rendered image by gl_client, like glGenBuffers(glclient.c) does.
    But, bandwidth may be a challenge.
    I'll try to implement glReadPixels and check the performance.
    To minimize the amount of data needed to transmit the rendered image over the network, we can:

    1) compress the rendered image on the server side
    2) transmit the compressed image over the network
    3) decompress the compressed image on the client side

    For example, each image could be compressed as JPEG if the client can decode it directly in hardware
    (wavelet/filtering + Huffman compression could be used instead if the client lacks the hardware to handle JPEG pictures).

    Or use MPEG, MJPEG or another motion-picture compression scheme if the client side has the hardware to decode one of them
    (on the PSP platform, for example, we can use the MJPEG hardware decompression engine).
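
    A skeleton of that three-step pipeline (jpeg_compress/jpeg_decompress are placeholders for whatever codec is available - libjpeg, a hardware engine, etc.):
    Code :
    #include <stddef.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* placeholder codec hooks - stand-ins for libjpeg or a hardware engine */
    size_t jpeg_compress(const unsigned char *rgb, int w, int h, int quality,
                         unsigned char *out, size_t out_cap);
    int jpeg_decompress(const unsigned char *in, size_t in_size,
                        unsigned char *rgb_out, int w, int h);

    /* steps 1 and 2, server side: compress the frame, then send it */
    void server_send_frame(int sock, const unsigned char *rgb, int w, int h,
                           unsigned char *scratch, size_t cap)
    {
      size_t n = jpeg_compress(rgb, w, h, 75, scratch, cap);
      send(sock, scratch, n, 0);
    }

    /* step 3, client side: receive the frame and decompress it for display */
    void client_recv_frame(int sock, unsigned char *rgb, int w, int h,
                           unsigned char *scratch, size_t cap)
    {
      ssize_t n = recv(sock, scratch, cap, 0);
      if (n > 0)
        jpeg_decompress(scratch, (size_t)n, rgb, w, h);
    }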
    Last edited by The Little Body; 10-24-2013 at 12:43 PM.
    @+
    Yannoo

  8. #8
    Junior Member Newbie
    Join Date
    Oct 2013
    Posts
    10
    Quote Originally Posted by The Little Body View Post
    To minimize the amount of data needed to transmit the rendered image over the network, we can:

    1) compress the rendered image on the server side
    2) transmit the compressed image over the network
    3) decompress the compressed image on the client side

    For example, each image could be compressed as JPEG if the client can decode it directly in hardware
    (wavelet/filtering + Huffman compression could be used instead if the client lacks the hardware to handle JPEG pictures).

    Or use MPEG, MJPEG or another motion-picture compression scheme if the client side has the hardware to decode one of them
    (on the PSP platform, for example, we can use the MJPEG hardware decompression engine).
    The Raspberry Pi has a hardware-accelerated H.264 decoder and encoder, so those could be useful for reducing transfer bandwidth.
    But an H.264 encoder usually adds significant latency, and reading data back from GPU memory is usually very slow.
    Furthermore, H.264 decoding is a very heavy task for clients without hardware acceleration.
    So MJPEG may actually be the more efficient choice in some cases.

  9. #9
    Member Regular Contributor
    Join Date
    Jun 2013
    Posts
    495
    Quote Originally Posted by The Little Body View Post
    For example, each image could be compressed as JPEG if the client can decode it directly in hardware
    JPEG compression is lossy, and something like gl-streaming can't automatically know whether this is acceptable. You'd need a glHint() (or a custom equivalent) to allow the application to control whether lossy compression can be used. Clearly, it shouldn't be used for depth or stencil buffers, or integer colour buffers. In general, JPEG compression is a poor fit for anything with hard edges (e.g. wireframe).
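
    A custom equivalent might look something like this (the GLS_* names, enum value and glsHint entry point are all hypothetical, modelled on the glHint() pattern):
    Code :
    /* hypothetical gl-streaming extension, modelled on glHint(): lets the
       application opt in to lossy compression for colour readback */
    #define GLS_READBACK_COMPRESSION_HINT 0x9001  /* made-up enum value */
    #define GLS_LOSSLESS_ONLY             0
    #define GLS_LOSSY_ALLOWED             1

    void glsHint(unsigned int target, int mode);  /* provided by the framework */

    void configure_readback(int is_shaded_colour_scene)
    {
      /* colour buffer of a shaded 3D scene: lossy JPEG is usually fine;
         depth, stencil, integer or wireframe output: force lossless */
      glsHint(GLS_READBACK_COMPRESSION_HINT,
              is_shaded_colour_scene ? GLS_LOSSY_ALLOWED : GLS_LOSSLESS_ONLY);
    }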

    Quote Originally Posted by The Little Body View Post
    Or use MPEG, MJPEG or another motion-picture compression scheme
    MJPEG is just a container for multiple distinct JPEG frames. It doesn't offer any additional compression over JPEG itself.
    MPEG offers significant compression, but is also lossy, and requires that the frames are samples of a single animated image (also, the relative timing must be known for motion estimation to be useful). Consecutive calls to glReadPixels() don't have to use the same source rectangle, nor can you be sure that they even refer to the same "scene" (the client is free to use a framebuffer as a "scratch" area for e.g. generating texture data). IOW, consecutive calls to glReadPixels() don't necessarily constitute a single video stream.

  10. #10
    Member Regular Contributor
    Join Date
    Jan 2011
    Location
    Paris, France
    Posts
    250
    We could limit the compression to just the Huffman stage, in order to keep it lossless.

    And I think that an RGB[A] to YCbCr[A] colour-space conversion, rather than compressing/decompressing directly in RGB[A], would slightly improve the compression ratio of the Huffman stage.

    Note that with a relatively gentle quantization step (one that only drops one or two bits from the standard 8-bit depth of each colour component, for example), the Huffman compression becomes much more efficient
    (this makes the result slightly lossy, but I don't think the difference would really be distinguishable).
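
    A sketch of that conversion plus a gentle quantization (standard BT.601 full-range coefficients; dropping the two low bits of each component is one possible quantization choice):
    Code :
    #include <stdint.h>

    /* RGB -> YCbCr (BT.601, JPEG-style full range), then drop the two low
       bits of each component: decorrelating the channels and shrinking the
       symbol alphabet both help a subsequent Huffman stage */
    static void rgb_to_ycbcr_quant(uint8_t r, uint8_t g, uint8_t b,
                                   uint8_t *y, uint8_t *cb, uint8_t *cr)
    {
      int Y  = (int)( 0.299    * r + 0.587    * g + 0.114    * b);
      int Cb = (int)(-0.168736 * r - 0.331264 * g + 0.5      * b) + 128;
      int Cr = (int)( 0.5      * r - 0.418688 * g - 0.081312 * b) + 128;
      *y  = (uint8_t)(Y  & ~3);  /* keep the top 6 of 8 bits */
      *cb = (uint8_t)(Cb & ~3);
      *cr = (uint8_t)(Cr & ~3);
    }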
    Last edited by The Little Body; 10-25-2013 at 01:50 PM.
    @+
    Yannoo
