GLX protocol

What the heck is the GLX protocol? It appears in all the extension specs, but I don't think it's in the PDF docs.
Does this have something to do with sending TCP/IP packets to do rendering over a network?

Seems useless to me. Everyone should have a video card in their machine by now.

GLX is the Unix/Linux protocol for OpenGL under the X Window System.

X typically works by client/server rendering (but there are direct rendering interfaces now?).

Originally posted by V-man:
Seems useless to me. Everyone should have a video card in their machine by now.
a simple example: you want to work with a particular tool which is not installed (or even cannot be installed) on your local computer.

one solution could be that you move yourself and your coffee-mug to another computer on which it is installed and spend the rest of the day there.

another solution would be that you stay where you are, log in to the remote computer, start the tool on the remote computer and send the graphical output over the network to your local computer.

you see, it does not necessarily mean that one of the computers does not have a graphics card; in fact they both have one (although maybe with different capabilities).

You can try reading the overview section of the GLX spec PDF, which you can easily find (on this site, I guess). Even though it's short, it might help you understand this better.

As others said, GLX is a layer between X (the Unix graphical layer) and GL. As you might know, on Unix almost every program is built on a client-server model, and this is the case with X: you have to connect to the display.

This probably comes from the time when CPUs and graphics cards were expensive, so terminals could 'rent' some CPU time from the server in order to do some computations.
But this design has also proven to have interesting properties.

Nowadays most machines run their own X server. But this design also allows us, for example, to run two graphical window managers on a single computer, and the connection need not go through a physical network.
This doesn't concern only TCP/IP: Unices can also create logical (local) networks.

A simple call to netstat will show you all these kinds of Unix network connections.

"another solution would be that you stay where you are, log in to the remote computer, start the tool on the remote computer and send the graphical output over the network to your local computer. "

There are many programs for getting the graphical output of another machine; rDesktop, for example. It's just a question of taking a screenshot and sending it over the network.
GLX need not be involved.

We have direct rendering now. Does this mean this GLX thing is bypassed?

I know you can run multiple X servers.

I would like to know if this GLX protocol thing is needed these days. Who uses it? What software uses it? And how can I use it? (to learn)

Originally posted by V-man:
I would like to know if this GLX protocol thing is needed these days. Who uses it? What software uses it? And how can I use it? (to learn)
GLX protocol is used in networked environments, as said above. There are probably not a lot of users these days, but the value of network transparency is significant and the cost of maintaining it is small. You can use it by getting two machines, one running a GLX-capable X server (almost any Linux distribution with OpenGL installed, and I think Cygwin and Exceed on Windows), and the other running an OpenGL app dynamically linked against a GLX-enabled GL library. Point $DISPLAY at the server machine and run the program on the client machine.

GLX protocol is mostly not needed in a direct rendering environment, though there are still some protocol exchanges between client and server happening in the GLX API implementation. That has no significant performance impact, since the GL calls themselves aren't using the protocol.
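
As an aside, a program can check which case it's in: glXIsDirect() reports whether a context bypasses the protocol. Here's a minimal sketch, not anyone's actual code (error handling omitted; compile with -lGL -lX11):

/* Minimal sketch: create a context and ask GLX whether it is direct.
   If not, every GL call is being encoded as GLX protocol and sent to
   the X server. */
#include <stdio.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* honours $DISPLAY */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True); /* True = request direct */

    printf("rendering is %s\n",
           glXIsDirect(dpy, ctx) ? "direct" : "indirect (GLX protocol)");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}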

Other solutions, such as the rDesktop approach you allude to, can be useful as well. They have different performance tradeoffs and may or may not be more appropriate for a given class of application. In general, indirect rendering is less appropriate if you are sending a lot of immediate-mode geometry or frequently loading textures / doing readbacks. OTOH, if your geometry can be put in display lists or VBOs, the protocol overhead may be quite small.

It might seem a little less confusing if you realise that GLX is really two things: an API and a protocol. Any OpenGL app running on X is going to use the GLX API to create contexts, connect OpenGL to a window, swap buffers, etc. (although most programmers won't use GLX directly, since toolkits like GLUT hide the gory details).

When doing indirect rendering, all API calls have to be encoded onto the network connection, and the protocol specifies that encoding. It's really only of interest to people writing drivers and X servers and the like, since everybody else just uses the API and doesn't need to worry about the underlying protocol.
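
To make the API half concrete, here's roughly what a bare-bones GLX program looks like. This is generic boilerplate, not from any particular source (error checks omitted; compile with -lGL -lX11):

/* Sketch of the GLX API calls every OpenGL-on-X program makes,
   whether rendering ends up direct or indirect. */
#include <GL/gl.h>
#include <GL/glx.h>
#include <unistd.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* Create a window using the visual GLX picked for us. */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.event_mask = ExposureMask;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                               0, 0, 300, 300, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);

    /* Connect OpenGL to the window. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);   /* show the frame */
    sleep(3);

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}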

Originally posted by V-man:
It's just a question of taking a screenshot and sending it over the network.
GLX need not be involved.

hm…ok. the application is a 3d model viewer, which allows me to zoom/rotate the model. of course, i want more than one frame per second; let's say 10 fps is the minimum. so if it uses screenshots, the remote computer needs to transfer about 40 MB/s over the network (1280x1024 pixels * 3 bytes * 10 frames/s ≈ 37.5 MB/s).

that's why, a long time ago (even before there was opengl), some people invented the x protocol. its advantage is that instead of megabytes of graphical output, the remote computer sends only the commands which create the output; the graphical output is created on the local computer after it has received the commands. and glx provides an extension to the x protocol, meaning that not only xlib commands (including, for example, 2d drawing capabilities like in the windows gdi) can be sent via the network, but also opengl commands.

here's an example: the viewer on the remote computer loads a model with 100000 quads. it uses a display list to draw the quads, which works well in this case since the model data does not change. remember that all gl commands are sent to my local computer- this means that the display list is created on the local computer, too. so the advantage is that the viewer has to send the vertex data only once, at program start. when i zoom or rotate the model, the remote computer has to send only some glRotate, glScale etc. and, of course, the glCallList.
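
something like this (just a sketch to illustrate- the quads/nquads names are made up, and error handling is left out):

#include <GL/gl.h>

GLuint model_list;

/* called once at program start; 'quads' holds nquads*4 vertices
   (x,y,z each).  the geometry crosses the network here, when the
   list is compiled on the x server side. */
void build_model(const float *quads, int nquads)
{
    model_list = glGenLists(1);
    glNewList(model_list, GL_COMPILE);
    glBegin(GL_QUADS);
    for (int i = 0; i < nquads * 4; i++)
        glVertex3fv(&quads[3 * i]);
    glEnd();
    glEndList();
}

/* called every frame; only these few small commands go over the
   network after that. */
void draw_frame(float angle, float zoom)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glScalef(zoom, zoom, zoom);
    glRotatef(angle, 0.0f, 1.0f, 0.0f);
    glCallList(model_list);
}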

and, to your question: you don't have to learn the glx protocol. the glx protocol has to be implemented in the x server (the program on the remote computer which sends the x/glx commands) and in the x client (running on the local computer, receiving the commands). that's the wonderful thing: when you program an opengl application, you do not have to care whether it runs on a local or on a remote machine. there are no extra commands necessary in the code to make it run on a remote machine. it is controlled by an environment variable (which can be changed on the command line) and managed by the x server.

So what does a program need to do to send its output to another PC? Is it by using the call to XOpenDisplay, or…?

That's done in the X and GLX libs & drivers; you don't have to do anything to enable this.

Originally posted by V-man:
So what does a program need to do to send its output to another PC? Is it by using the call to XOpenDisplay, or…?
exactly. you have two options:

  1. XOpenDisplay(NULL): the program connects to the server specified by the environment variable DISPLAY (by default, your local machine). if you change this variable on the command line (e.g. setenv DISPLAY 192.168.255.128:0.0), XOpenDisplay(NULL) will make the program connect to the computer with the address 192.168.255.128. this requires that 192.168.255.128 has an x server running, of course.

  2. XOpenDisplay("192.168.255.128:0.0") specifies the remote computer directly. using a static string like "192.168.255.128:0.0" probably doesn't make much sense, but you can pass a char*, too. (a short sketch of both options follows below the list.)
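
here's a minimal sketch of both options- the address is just the example value from above, and error handling is kept short (compile with -lX11):

#include <stdio.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)
{
    /* option 1: pass NULL and xlib uses $DISPLAY.
       option 2: pass an explicit display string on the command
       line, e.g. "192.168.255.128:0.0". */
    const char *name = (argc > 1) ? argv[1] : NULL;
    Display *dpy = XOpenDisplay(name);
    if (!dpy) {
        fprintf(stderr, "cannot connect to x server\n");
        return 1;
    }
    printf("connected to %s\n", DisplayString(dpy));
    XCloseDisplay(dpy);
    return 0;
}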

i once made a tic-tac-toe game for a linux network, which used both options. the program connects to the local machine (XOpenDisplay(NULL)) and to the remote machine, too (XOpenDisplay("whatever:0.0")). with these two connections, the program can open windows on both machines to display the game board, and can receive mouse events from both machines.

Originally posted by RigidBody:

i once made a tic-tac-toe game for a linux network, which used both options. the program connects to the local machine (XOpenDisplay(NULL)) and to the remote machine, too (XOpenDisplay("whatever:0.0")). with these two connections, the program can open windows on both machines to display the game board, and can receive mouse events from both machines.

So actually the X server and not the client is the renderer? You stated the opposite a few posts earlier.

oh right- it seems i mixed up something… the client is the application on the remote computer. it uses xlib, in which the x protocol is implemented, to send commands to the x server running on my local computer, which does the rendering.

the tic-tac-toe game connects to the local server and to any remote computer's server, so it is a client of two x servers. with the two connections, the program can create windows on both the local and the remote computer, and can receive mouse/keyboard events from both computers.
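
a rough sketch of that setup- not the actual game code. "whatever:0.0" is just the placeholder name from above, and a real program would block in select() on ConnectionNumber() of each display instead of polling:

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    /* one client, two server connections */
    Display *dpys[2];
    dpys[0] = XOpenDisplay(NULL);            /* local machine  */
    dpys[1] = XOpenDisplay("whatever:0.0");  /* remote machine */
    if (!dpys[0] || !dpys[1]) {
        fprintf(stderr, "could not open both displays\n");
        return 1;
    }

    /* open a game-board window on each machine */
    Window wins[2];
    for (int i = 0; i < 2; i++) {
        Display *d = dpys[i];
        wins[i] = XCreateSimpleWindow(d, DefaultRootWindow(d),
                                      0, 0, 300, 300, 1,
                                      BlackPixel(d, DefaultScreen(d)),
                                      WhitePixel(d, DefaultScreen(d)));
        XSelectInput(d, wins[i], ButtonPressMask);
        XMapWindow(d, wins[i]);
        XFlush(d);
    }

    /* poll both connections for mouse clicks */
    for (;;) {
        for (int i = 0; i < 2; i++) {
            while (XPending(dpys[i])) {
                XEvent ev;
                XNextEvent(dpys[i], &ev);
                if (ev.type == ButtonPress)
                    printf("click on display %d\n", i);
            }
        }
    }
}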