Strictly speaking, this isn't specific to GLX. The same issues would apply to using a graphics card in a system whose CPU has a different byte order to the GPU.
Actually, no. The OpenGL standard requires that if the client writes a string of bytes as a "GLuint", the server must interpret those bytes as a proper "GLuint". So whatever byte swapping the server needs to do has to be built into whatever code the server uses to read that memory.
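To make that concrete, here is a minimal sketch of the kind of fix-up a server can do when the client declared the opposite byte order at connection setup. This is only an illustration of the idea; the names swap32 and read_card32 are hypothetical and not taken from any real X server source.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Swap the four bytes of a 32-bit value. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
           ((v << 8) & 0x00ff0000u) | (v << 24);
}

/* Read a 32-bit value out of the request buffer exactly as the client
 * wrote it, then swap only if the client's declared byte order differs
 * from the server's. */
static uint32_t read_card32(const unsigned char *buf, int client_order_differs)
{
    uint32_t v;
    memcpy(&v, buf, sizeof v);   /* raw bytes off the wire */
    return client_order_differs ? swap32(v) : v;
}

int main(void)
{
    const unsigned char wire[4] = { 0x12, 0x34, 0x56, 0x78 };
    printf("client matches server order: 0x%08" PRIx32 "\n", read_card32(wire, 0));
    printf("client order differs:        0x%08" PRIx32 "\n", read_card32(wire, 1));
    return 0;
}
```

The point is simply that the conversion lives in the server's read path, so the client can always write values in its own native byte order.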

FWIW, I have trouble understanding why there seems to be so little interest in exploiting one of the features that really sets OpenGL apart from DirectX.
Because:

1: It requires having more than one computer.

2: Doing so effectively restricts you to Linux (or, more precisely, to X11-based systems).

3: It relies on an asymmetric computing setup, where your local terminal is weak and a central server has all the processing power. That situation becomes less common every day. Between OpenGL ES 3.0-capable smartphones and Intel's OpenGL 4.1-class integrated GPUs, the chance of not being able to execute OpenGL code locally is very low.

It's very difficult to exploit this feature unless it's explicitly part of your application's design requirements. It may differentiate OpenGL from Direct3D, but it's such a niche capability that very few people ever have a bona fide need for it. It's nice when you do need it, but you can't say it's a pressing need for most OpenGL users.
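For anyone who does want to try the network-transparent path, here is a rough sketch of how a client asks for an indirect context, which is what routes GL commands over the X connection as GLX protocol. Assumptions: "othermachine:0" is a placeholder display name, the target X server still accepts indirect GLX, and error handling is kept minimal. Build with something like -lGL -lX11.

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    /* Open a display that may live on another machine
     * (pass NULL to use $DISPLAY instead). */
    Display *dpy = XOpenDisplay("othermachine:0");
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vis = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vis) { fprintf(stderr, "no suitable visual\n"); return 1; }

    /* Passing False for the last argument requests an indirect context,
     * i.e. one whose commands are encoded as GLX protocol requests. */
    GLXContext ctx = glXCreateContext(dpy, vis, NULL, False);
    if (!ctx) { fprintf(stderr, "indirect context refused\n"); return 1; }

    printf("context is %s\n",
           glXIsDirect(dpy, ctx) ? "direct" : "indirect (GLX protocol)");

    glXDestroyContext(dpy, ctx);
    XFree(vis);
    XCloseDisplay(dpy);
    return 0;
}
```

Note that recent X.Org servers ship with indirect GLX disabled by default, so it typically has to be re-enabled (e.g. by starting the server with +iglx), and the GLX protocol only encodes a fairly old subset of OpenGL anyway, which feeds back into point 3 above.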