Remote X Display using VBO and GLSL?

Hi all,

I’m a dev on a Windows/Linux app that uses VBOs and GLSL shaders. To my dismay, it seems that these do not work with remote display (i.e. X tunnelling over SSH).

Is there a way to enable them? (i.e. Some Xorg extension?)

Any help would be appreciated!

Thanks,
Jon

SSH tunnels GL via the GLX protocol over a virtual X connection it sets up between the client and the server. GLX protocol is a serialization of OpenGL calls over the wire. This is how an app communicates with the X server when it is using an indirect rendering context (as opposed to a direct rendering context, where the GPU is in the same box, X is local, and DRI is used on Linux). For more details, see the GLX protocol mentions here.

I guess the issue is whether some of the newer OpenGL features have been defined with new GLX protocol in tow (to serialize them for indirect rendering contexts). IIRC, they haven’t.
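One way to find out on a given setup, rather than guessing, is to ask at runtime: check whether the context you actually got is direct, and whether the extensions you need show up in the extension string. A rough sketch (just an illustration, assuming a GLX context has already been made current):

```c
/* Rough sketch: detect an indirect context and check whether the features
 * the app needs are exposed on it.  Assumes a GLX context has already been
 * made current; the substring test is naive but fine for an illustration. */
#include <stdio.h>
#include <string.h>
#include <GL/glx.h>

static int has_gl_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

void report_gl_capabilities(Display *dpy)
{
    GLXContext ctx = glXGetCurrentContext();
    if (ctx == NULL) {
        printf("no current GLX context\n");
        return;
    }

    printf("rendering context: %s\n",
           glXIsDirect(dpy, ctx) ? "direct (local GPU, DRI)"
                                 : "indirect (GL serialized as GLX protocol)");
    printf("VBO available:  %s\n",
           has_gl_extension("GL_ARB_vertex_buffer_object") ? "yes" : "no");
    printf("GLSL available: %s\n",
           has_gl_extension("GL_ARB_shading_language_100") ? "yes" : "no");
}
```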

Here’s a mention from Google:

ARB Meeting Notes, December 7-8, 2004:

WG has resolved ARB_draw_buffers, ARB_occlusion_query, and ARB_vertex_program / NV_vertex_program interactions. Remaining issues are protocol for shading language functions, and details of vertex buffer objects. Jon will assign new GLX opcodes for Ian out of the registry.

Problem with VBO protocol in the past was that the clientside GLX library converted them to immediate mode commands. We can’t do this when the data exists on the server side, however. Problems: array enables are not transferred to the server, because they’ve been considered client state only. DrawArrays protocol doesn’t encompass e.g. multitexture, and probably doesn’t encode enough information about enabled arrays. Also need support for the several additional array drawing commands that have been added to GL.

How to Create OpenGL Extensions:

If you want the extensions to work with the X windowing system (i.e., with GLX), then you must request GLX opcodes and define GLX protocol for it.

The meaning is a little unclear, but it appears that defining GLX protocol for an extension may be optional if the extension writer assumes a direct rendering context.

Also note that SSH tunnels the X and GLX protocol via a virtual X connection inside the SSH data connection (i.e. ssh -X, and your shell has $DISPLAY set to something odd like localhost:10.0). You can get similar results by routing your X display directly to the other box (outside SSH): manually set $DISPLAY to <machinename>:0.0 and enable external connections on your X server. I’m not recommending this, but I thought you might like to know a little more about what SSH is doing for you.
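In case it helps to see it from the application’s side, here is a tiny sketch of how the connection picks up $DISPLAY: XOpenDisplay(NULL) uses whatever ssh -X exported, while passing an explicit name like "machinename:0.0" is the direct-to-the-other-box variant above.

```c
/* Tiny sketch of the display connection itself: XOpenDisplay(NULL) uses
 * whatever $DISPLAY is set to (e.g. "localhost:10.0" under ssh -X), while
 * passing an explicit name like "machinename:0.0" is the direct-to-the-
 * other-box variant described above. */
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>

int main(void)
{
    const char *name = getenv("DISPLAY");
    printf("DISPLAY = %s\n", name ? name : "(unset)");

    Display *dpy = XOpenDisplay(NULL);      /* NULL means: use $DISPLAY */
    if (dpy == NULL) {
        fprintf(stderr, "could not open display\n");
        return 1;
    }
    printf("connected to %s\n", DisplayString(dpy));
    XCloseDisplay(dpy);
    return 0;
}
```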

I’m no expert though. Hopefully someone else who knows more will chime in here. If you don’t get a response, I’d advocate posting in the beginners forum, where you should be able to get the attention of someone who knows more about this than I do.

Thanks for the reply.

Based on another post on this forum, it looks like there aren’t GLX opcodes for those extensions. These certainly aren’t new features though.

So the question becomes, is there a reasonable way to work around these limitations? VBO is easy enough to work around, but GLSL…?
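For reference, the VBO workaround I have in mind is roughly the sketch below; use_vbo, vbo, verts and vert_count are just placeholders for our app’s own state. The same vertex-array setup works whether or not a buffer object is bound, so on an indirect context we can skip the buffer and pass a client-side pointer instead:

```c
/* Hedged sketch of the VBO fallback path; use_vbo, vbo, verts and vert_count
 * stand in for the app's own state.  GL_GLEXT_PROTOTYPES is assumed here so
 * the ARB entrypoint is declared; a real app would load glBindBufferARB via
 * glXGetProcAddressARB instead. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

void draw_mesh(int use_vbo, GLuint vbo, const GLfloat *verts, GLsizei vert_count)
{
    if (use_vbo) {
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0); /* offset into the VBO */
    } else {
        glVertexPointer(3, GL_FLOAT, 0, verts);             /* plain client-side array */
    }

    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, vert_count);
    glDisableClientState(GL_VERTEX_ARRAY);

    if (use_vbo)
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
}
```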

Should I be recommending to our clients that they use VNC? (or some other remote-display solution?)

Dark Photon has already said what matters most, but I’d like to mention some other aspects:

  • Not all extensions need a GLX protocol. Extensions that do not add new entrypoints don’t need it (typical examples are texture compression extensions).

  • IBM had started work on defining a GLX protocol for VBO, but AFAIK the attempt was abandoned. Since Nvidia hasn’t defined a GLX protocol for VBO or shaders, our only hope is Mesa. If someone wants to do the work of defining a GLX protocol and supporting it there, we might still see indirect OpenGL 2 some day.

  • Defining a GLX protocol for VBO is tricky since it shares entrypoints with vertex arrays. VBOs are stored on the server while vertex arrays are stored on the client, so there are commands that would have to be either sent to the server or broken down into pieces depending on their arguments (see the sketch after this list). AFAIK this problem exists for no other GL functionality.

  • Defining a GLX protocol for shaders seems like a large task due to the number of entrypoints, but less complicated than VBO.
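To make the second point concrete, here is a rough sketch (my own illustration, nothing from the actual protocol work; draw_indexed and its arguments are made up for the example). glDrawElements is one of those shared entrypoints: the meaning of its last argument flips depending on whether an element-array buffer is bound, so a GLX encoding would have to know the binding state just to decide whether the index data has to go over the wire at all.

```c
/* Illustration only (nothing from the actual protocol work): the same call
 * has two very different wire requirements depending on buffer state. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

void draw_indexed(int indices_in_vbo, GLuint index_vbo,
                  const GLushort *client_indices, GLsizei count)
{
    if (indices_in_vbo) {
        /* Indices live in a server-side buffer object: the last argument is
         * just a byte offset, so no index data needs to cross the wire. */
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_vbo);
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, (const GLvoid *)0);
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
    } else {
        /* No buffer bound: the last argument is a client pointer, and all of
         * the index data would have to be serialized to the server. */
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, client_indices);
    }
}
```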

Philipp

Never used this, but it looks like it can solve your problem: http://www.virtualgl.org/

EDIT: some explanations here:
http://forums.nvidia.com/index.php?showtopic=77110&st=0&p=441128&#entry441128

Interesting. Link: http://en.wikipedia.org/wiki/VirtualGL
Transparently redirects rendering to server-side pbuffers and then just pushes images across to the client. I’ll have to try that.

Thanks for the pointer to virtualgl - looks interesting - I’ll have to give it a shot!

Can both of you post feedback about it?

I do not have access to two Linux machines with hardware OpenGL, but I would like to know whether this VirtualGL is good.

I tried out VirtualGL this morning to good effect.

I installed the 32-bit version on our Fedora 6 “server”, and on a 64-bit Red Hat Workstation 5 “client”. (I’ve put these in quotes because the machine used as the client here is a much more powerful machine in all respects.)

The GLSL and VBO commands worked, since the server did the rendering, and the transmission speed of the rendered results seemed quite good (though there was a “hiccup” every few seconds).

Thanks again - I’ll be recommending this to others.
