OpenGL-3 and later: GLX Opcodes

One of the major tasks of recent OpenGL development was overcoming the bandwidth constraints between “client” (CPU) and “server” (GPU). Given modern techniques like Vertex Buffer Objects, Vertex Array Objects and Uniform Buffer Objects, it is possible to render very complex scenes with only very little data sent from CPU to GPU.
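To make that concrete, here is a minimal sketch in C (assuming a GL 3.x context and a function loader are already set up; the vertex layout and the per_frame uniform block are invented for the example) of how the heavy data crosses the client/server boundary only once, while each subsequent frame costs just a tiny uniform update plus a draw call:

```c
/* Minimal sketch: an OpenGL 3.x context is assumed, and the GL 3.x entry
 * points are assumed to be loaded (e.g. glcorearb.h plus a loader). */
#include <GL/glcorearb.h>

struct per_frame { GLfloat mvp[16]; };   /* illustrative 64-byte uniform block */

static GLuint ubo;

/* One-time setup: push vertices, indices and uniform-block storage into
 * GPU-side buffer objects. The VAO stays bound for the later draw calls. */
static void setup_scene(const GLfloat *vertices, GLsizeiptr vertex_bytes,
                        const GLuint *indices, GLsizeiptr index_bytes)
{
    GLuint vao, vbo, ibo;

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertex_bytes, vertices, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_bytes, indices, GL_STATIC_DRAW);

    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(struct per_frame), NULL, GL_STREAM_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
}

/* Per frame: a ~64-byte uniform update and one draw command. This is all
 * that has to travel from client to server once the setup is done. */
static void draw_frame(const struct per_frame *state, GLsizei index_count)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof *state, state);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);
}
```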

These savings in required bandwidth also mean that it was (and is) perfectly possible to do remote rendering with OpenGL. At the moment I’m developing a small library (for microcontrollers in the ARM7 range) that implements the X11 and GLX protocol for remote rendering.

However, I noticed that in gl.spec, glx.spec and glxext.spec much of the modern OpenGL functionality is missing (either “glxopcode ?” or not given at all). Where the glxopcode field in the spec has been left ‘?’, one can usually find the opcode in the extension specification. But this is kind of unsatisfactory.

Unfortunately this also means that Mesa’s indirect OpenGL dispatcher is missing those functions, even though some of them are technically specified for GLX.

And for those not specified for GLX, I don’t see any reason why that is so, especially now that we have those HUGE bandwidth savings available.

So what is the status on this? And please, don’t tell me that X11 is outdated, should be abandoned, etc. So far I have seen no widespread, “modern” graphics system that is on par with X11.

So what is the status on this? And please, don’t tell me that X11 is outdated, should be abandoned, etc. So far I have seen no widespread, “modern” graphics system that is on par with X11.

With regard to X11… it is old. That does not mean it is outdated, but one of the core issues of X11 is that it seeks to solve a problem that nowadays does not have a great deal of commercial demand: network-transparent display. In all brutal honesty it is ridiculous for a portable device (such as the N900, for example) to support X11. There is next to no commercial demand for an application to run somewhere off the device yet be rendered (not just drawn) on the device’s display.

Additionally, getting an X11 driver right is really hard work. There is an amusing post here: [Linux Hater hates Linux Graphics](http://linuxhaters.blogspot.com/2008/06/nitty-gritty-[censored]-on-open-source.html) which, although out of date about some bits of the Linux graphics stack (and hence wrong there), has a word on why the NVIDIA X11 driver works out so well while others often have issues (at least at the time of that posting). I worked on one project for a device where the number of developers dedicated to getting an X11 driver working well exceeded the number of developers for the rest of the project, which was the entire stack integration and apps. That was insane, and a waste of manpower that could have been used elsewhere.

Another ugly issue with X11 is that it does a lot of stuff when very often all that is wanted is a framework to get events and to do compositing (which really means a unified memory manager).

For what it is worth, there is no GLX protocol for OpenGL ES1 or ES2, and much of GL3 (indeed even much of GL2) has no official GLX protocol… VBOs, I think, have some experimental protocol in Mesa… but it has been experimental for a very, very long time now.

In the last few years there has been some craze about a new kind of paradigm, I don’t know if you’ve heard about it, but probably you have: it’s called Cloud Computing – seriously. I fully understand that X11, as it is, with its many roundtrips and bandwidth demands (which can be reduced a lot if you know what you are doing), isn’t so well suited for Cloud Computing.

However it can do things that web-browser based apps can only do with a huge bulk of overhead. For example: I own an oscilloscope with an RS232 interface. The original software for it is crap, so I wrote my own. But I got annoyed that I had to install it. I also wanted it to connect to a network. So I built an interface with an ATmega644 running the Ethersex firmware, to which I added some X11 support. Now I can control that scope via X11, without the overhead of HTTP (did you ever look at the complexity of HTTP, at what is allowed and must be supported by an HTTP server?).

You see, that is simply not true. There are so many details to care about when rendering that one can only get them right if one is very close to the device and knows its properties. Think of:

  • colour management
  • subpixel antialiasing
  • resolution dependent glyph hinting
  • multihead setups where each device has a different colour profile, subpixel arrangement, pixel pitch.

How about the following: say I have a device with an AMOLED display, whose subpixel arrangement is very different from what’s on TFTs: http://en.wikipedia.org/wiki/AMOLED
And I connect this device to some large screen for a presentation, say some plasma TV or a video projector, which has a completely different subpixel arrangement and colour profile – with the contents of both displays being the same.

A client which renders its output to some framebuffer that is then composited by a unified memory manager (and let’s just be honest, we’re talking about Wayland here) simply cannot meet both devices’ demands for optimal rendering at the same time. The different subpixel arrangements kill the opportunity for rendering crisp fonts (see this article for why: http://www.antigrain.com/research/font_rasterization/ ). Colours will not match, and due to the different pixel pitch the font hinting will get confused as well.

But the biggest drawback is that the client itself has to care about all those details. That is not how you do it properly. There is a reason why expensive, high-quality printers ship with their own PostScript/PDF processor, and why you can buy (for some substantial money) specially tailored RIPs (Raster Image Processors) for specific printer models used in professional media: those programs know about the intricacies of the device they are responsible for.

You can’t expect a client to care for this. Even if you implemented some modularized interface to plug this into clients, some will get it wrong or do it inefficiently.

Now I’m fully aware that X11 does none of what I stated above. But an abstract graphics system like X11 is far easier to get right than scattering the battlefield across all clients.

Personally, as an application developer, I’d prefer to just open a window of a given colour space (and not care about pixel/visual formats) and send it an abstract representation of the scene. Not something as complex as Display PostScript, but still scene based, so that the graphics system doesn’t have to bother my application for each and every update (by which I also mean pushing a window onto a device with different display properties). In addition to that one also wants some primitive-based rendering, like OpenGL or OpenVG.

Apropos windows: the way windows are (not) managed by X11 is clearly superior to anything else I know. Unfortunately ICCCM and EWMH got messed up – we can’t get rid of ICCCM, but one can easily ditch EWMH, which I did in one of my other projects (an experimental window manager, for which I also patch into the toolkits).

A shame. You know that we could get rid of this whole driver mess if vendors decided that one talks to their GPU via X11, and on the PCIe side just present some channel to send an X11 command stream over? Technically this is not so far off from what’s done these days anyway; the command stream just uses a different protocol, and some tasks, like shader compilation, are performed on the CPU. But I don’t see any reason why one couldn’t do all of this on the graphics card.

So anyway, neither the client nor the server has GLX opcodes for modern calls. Thus, you have two options:

  • try to create and implement those opcodes for both server and client
  • actually write a simple client/server combo: the client sends data, the server creates all draw calls and uses GL/whatever to display on a screen. Bandwidth savings that are orders of magnitude higher than with GLX :slight_smile:

if vendors decided that one talks to their GPU via X11, and on the PCIe side just present some channel to send an X11 command stream over? Technically this is not so far off

It’s so far off, it’s not even funny :). Memory-mapped registers, FIFOs, buffers/textures/queries. A standardized command stream is absolutely impossible, unless you fancy extreme slowdowns and an increased price of GPUs. Just to support some flavour of Linux.

In the last few years there has been some craze about a new kind of paradigm, I don’t know if you’ve heard about it, but probably you have: it’s called Cloud Computing – seriously. I fully understand that X11, as it is, with its many roundtrips and bandwidth demands (which can be reduced a lot if you know what you are doing), isn’t so well suited for Cloud Computing.

Let’s try to keep this civil, ok? Cloud computing is about crunching numbers. Now if you wish to present it all pretty-like, then there is an answer: WebGL. This makes a great deal of sense since the browser (and associated APIs) is often the interface for cloud computing.

(did you ever look at the complexity of HTTP, at what is allowed and must be supported by an HTTP server?).

Um, yes, but let’s keep this civil, ok? As a side note, every UPnP device has to have an HTTP server, but you can bet which of the two is much less work.

Now I’m fully aware that X11 does none of what I stated above. But an abstract graphics system like X11 is far easier to get right than scattering the battlefield across all clients.

As a side note, writing an HTTP server is orders of magnitude less work than getting a good X11 implementation. Getting an X11 implementation wrong is really easy; getting it right is horribly difficult. Additionally, many of the X11 extensions are grafted onto X11 in a most unholy way, which makes the implementation particularly nasty (XComposite is the one I am mostly bellyaching about here).

Here is a very important observation: very, very few toolkits (particularly the cross-platform ones) use X11 to do any drawing at all. They mostly use X11 to make windows and get events. Drawing is almost entirely handled by the toolkits themselves. Querying color management and resolution is not exactly brain surgery and does not require anything as massive as what X11 pulls in. Likewise, multi-head support is not rocket science; one can make an API and underlying implementation for each of these cases. Additionally, having a large monolithic API to do everything likely means it will never work well on any device, whereas having several very special-purpose ones means that for a fixed device only those that are needed would be supplied and supported.

Besides, in the area of color management: if you are drawing with GL, the GL implementation decides (often in conjunction with the display system) the color mapping, sub-pixel magic and dithering.

A client which renders its output to some framebuffer that is then composited by a unified memory manager (and let’s just be honest, we’re talking about Wayland here) simply cannot meet both devices’ demands for optimal rendering at the same time. The different subpixel arrangements kill the opportunity for rendering crisp fonts (see this article for why: http://www.antigrain.com/research/font_rasterization/ ). Colours will not match, and due to the different pixel pitch the font hinting will get confused as well.

If you need sub-pixel color control in your rendering, you don’t want to render with GL either. You do not get that control… so I guess you really want a system that provides a drawing interface well matched to the physics of the display, in particular for font rasterization. My knee-jerk, sloppy response is that the GL and the display controller work in tandem. As an aside, on embedded devices, if you are not rendering your UI through the GPU on a mid-to-high-end device, the competition will eat your lunch in terms of performance and power usage. Drawing fonts with sub-pixel positioning is “automatic” with a variety of text-rendering techniques in GL… I have spent way too much time on these techniques. For the color jazz to match the physics of the display, my opinion, as I likely already stated, is to have the windowing system and/or GL implementation aware of the physics of the device.

I cannot tell whether you know what Wayland really is. But to be clear: Wayland is NOT a display system. It is only a protocol to allocate offscreen buffers and to communicate events… Wayland can live on top of X11, and at the other side of the spectrum, X11 can be a Wayland client. What a compositor and/or application does with those buffers is up to the system integrator, which brings me back to what I said about the close-to-the-metal issues: it is essentially an implementation issue of the display controller (take bytes and produce voltages [or something else] for the physical display).

As a side note, most of the portable devices sold (be it phones or tablets) have an insane pixel density too…

At any rate, getting a high-quality, high-performance X11 implementation is insanely hard. Much harder than just fessing up to what GL (and most applications and APIs) do: write to buffers and let the system present the buffers. X11’s main pain comes from its network transparency; indeed that was almost its whole point, yet it is not a real need most of the time at all. As such, that functionality, although neat (very nerd-nifty), is a huge engineering burden that is rarely used. That cannot be a good thing.

Wow, I suppose USB3 is a miracle then: A standard protocol to talk to high speed devices, one driver fits all chipsets, etc. I understand that USB3 delivers only about AGP3 speed, but it does this bidirectionally.

Seriously, those “issues” you mention there are not real issues. There are other, widespread technologies that demonstrate the feasibility of standardized, high-bandwidth protocols.

I’m fully aware of that. Part of its duties is managing buffers, i.e. memory of a shared resource. Hence the relation to (and my mention of) unified memory managers.

Could you give some sources where this is either specified or at least mentioned in some documentation? Because as far as my knowledge of OpenGL goes, unless you explicitly request sRGB textures and framebuffers, the RGB values are simply passed through, and blending happens linearly, despite the fact that most textures come with images that don’t follow a linear transfer function. Personally I’d really like to be able to work in a linear connection colour space in OpenGL (preferably XYZ). Which means: one can actually do it, if one does all the colour transformations in the shaders.
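To make concrete what “explicitly request” means here, a minimal sketch (assuming a GL 3.x context and loaded entry points; the texture parameters are placeholders) of the sRGB switches I am referring to:

```c
/* Minimal sketch: GL 3.x context and loaded entry points assumed. */
#include <GL/glcorearb.h>

/* Explicitly requesting sRGB storage: the GL then linearizes texel values
 * when sampling, and re-encodes linear shader output on write/blend. */
static void enable_srgb_pipeline(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,   /* sRGB internal format */
                 w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Only has an effect if the target framebuffer is sRGB-capable, e.g. a
     * GLXFBConfig chosen with GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB. */
    glEnable(GL_FRAMEBUFFER_SRGB);
}

/* Without GL_SRGB8_ALPHA8 / GL_FRAMEBUFFER_SRGB the values pass through
 * untouched and blending operates on the stored (non-linear) numbers; any
 * other working space (e.g. XYZ) has to be converted manually in shaders. */
```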

USB3 is just a data-transfer standard. GPUs are not something so ridiculously simplistic, and they don’t work in similar ways. That’s why there are 20-100 MB drivers, which run on a fast CPU, to translate and schedule commands and data. If you want a standard protocol that goes all the way to the PCIe bus, you’re effectively forcing the GPU’s controller+firmware to contain those 20-100 MB drivers. You’re forcing IHVs to make that controller as capable as an i7 to match performance requirements. A useless waste of transistors, and an unnecessarily drastic increase in chipset price, size and heat.

So is GLX, and that’s what we’re talking about. And actually GLX is much simpler than USB (technically it’s a remote procedure call protocol). I wouldn’t be surprised if the complexity of the tasks needed to make USB3 work surpassed the complexity of a GPU. After all, a GPU performs the very same simple operations, just many times, for each pixel. OTOH, on a USB bus each device is different, has different latency requirements, and transfer sizes vary strongly. Heck, personally I’d rather write a GPU driver than a USB3 driver.

Those 20 to 100 MiB don’t stem from the complexity of one GPU model, but from the fact that those drivers are “unified drivers” which deliver support for a whole range of GPUs. NVidia’s current drivers support everything from the GeForce 6 upwards. That’s what makes those drivers so huge: the vast number of GPUs supported by a single driver package.

…and of course all those application-specific optimizations shipped with the drivers (modified shaders for each and every popular game out there, in multiple variants of course, one for each GPU model) – just to get a few FPS more in the benchmarks.

Now compare this with some Linux GPU drivers:
The Intel intel_drv.so DRM/DRI module is a mere 610 KiB.
The AMD/ATI radeon_drv.so is a not much different 944 KiB.
In addition to that you need the Mesa state trackers: those are about 3 MB for each driver. I can’t tell you the size of the kernel module, since I’ve compiled it into the kernel image (not as a module), but looking at the .o files in the build directory it’s less than 1 MiB.

So less than 4 MiB for the whole driver doesn’t sound nearly as bad as the 5- to 20-fold figure you offered.

However, the majority of that code is accounted for by LLVM, which is used for shader compilation, so it already includes a compiler toolkit. I think 4 MiB of code is a reasonable size for a firmware. Here are some microcontrollers you can buy that satisfy those demands:

http://focus.ti.com/mcu/docs/mculuminary…aramCriteria=no

However, ARM licenses its cores (and AMD actually went into a strategic partnership with ARM recently). There’s nothing that would prevent integrating an ARM core into a GPU specifically for compiling shaders and GPU management.

I strongly doubt that a driver will require a full i7’s computing capabilities. Heck, most graphics application developers would grab their pitchforks if the GPU’s driver hogged the CPU. Also, the most complex task a GPU driver has to perform these days is compiling GLSL, maybe making in-situ optimizations on that depending on the GL state. Since state switches (which mean exchanging the shader binary, and the like) are poison for a GPU’s performance, this also won’t happen every frame, but only when the shader is compiled, and maybe for the very first frames drawn to collect statistics for incremental shader optimization. I cannot imagine that this task requires a superscalar multi-GHz CPU. Probably some ARM7 running at 300 MHz would suffice.

And no, there is no single driver in the world that does on-line modifications of the geometry data sent to the GPU for rasterizing, as this would reduce the whole set of Vertex Buffer Objects, Instancing and MultiDraw calls ad absurdum. There is a reason GPU vendors provide guides describing how to structure the data uploaded to Buffer Objects for optimal performance: because the driver cannot, and should not, alter it.

I strongly recommend you head over to http://developer.amd.com/documentation/guides/pages/default.aspx#open_gpu – you can download the full developer documentation for the Radeon HD 2x00 series GPUs there; I think you’ll be surprised that those things are actually rather simple (at least I’ve seen my share of hardware that’s a lot more complex). I stated the size of the OSS radeon drivers above. The point is: I’m writing this on my laptop, which has Intel graphics; using Gentoo, I didn’t have the radeon drivers compiled/installed. To give you the figures above I just did so: downloading the sources, compiling and installing was done in well under a minute (on a system underclocked for battery lifetime); while the Radeon HD 2x00 series isn’t the top of the line, I think that says a lot about the actual complexity of a GPU driver.

I never really understood what the advantages are of the X11 way of doing GL locally for a remote application:
http://www.virtualgl.org/About/Background
The GL rendering should happen on the same machine as the application; that way ‘in the cloud’ GPU rendering is possible.

Can somebody enlighten me?

It strongly depends on the application, and on the availability of certain operations over GLX. Say you’re using Vertex Buffer Objects and the full shader pipeline (tessellation, geometry, vertex, fragment), and assume they are fully supported by GLX: then you can issue very complex renderings with only very little data transferred for each frame (a few glDrawElements calls with the indices in an element array buffer; textures are in GPU memory anyway). Let’s say maybe about 1k octets per frame. The result is a full-resolution picture without compression artifacts, and with much lower latency (the less data to transfer/compress/etc., the quicker the transmission over the net). Admittedly, things like Occlusion Queries introduce additional roundtrips, so such things should be done in LANs only. But nevertheless it has a few advantages.
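To put the “1k octets per frame” claim into perspective, a rough back-of-the-envelope comparison (assuming a 1920×1080, 24-bit framebuffer at 30 fps, and taking the figure above at face value):

$$
\begin{aligned}
\text{streaming finished images:}\quad & 1920 \times 1080 \times 3\ \text{B} \approx 5.9\ \text{MiB/frame} \;\approx\; 178\ \text{MiB/s at 30 fps}\\
\text{GLX command stream:}\quad & \approx 1\ \text{KiB/frame} \;\approx\; 30\ \text{KiB/s at 30 fps}
\end{aligned}
$$

Even allowing generous per-request protocol overhead, the command stream stays several orders of magnitude below sending pixels, which is where the latency advantage comes from.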

Take my university for an example: we offer so-called compute servers; scientific visualization programs also run on these. However, those servers don’t have GPUs (heck, they are virtualized machines); instead people connect from their desktops, which do have GPUs. Since this happens in the LAN, latencies are negligible.

To me this kind of setup should be done by hand, i.e. the CPU-heavy program runs on the compute server and communicates directly with a GPU-heavy rendering client on the desktop (maybe even using WebGL as mentioned above). That way you have control over the protocol’s efficiency and do not rely on GLX extensions being supported or not.

That requires maintaining two different programs, the second of which must be installed on the desktop machines (well, not if WebGL is used, but let’s ignore that for a moment). The beauty of X11 and GLX is that it works transparently.

The beauty of X11 and GLX is that it works transparently.

By “transparently,” I assume you mean “someone else has to maintain it.” :wink:

Why yes, of course somebody has to maintain it. However someone has to maintain the operating system kernel, somebody maintains the libc, etc.

I expect network-transparent graphics as one of the services a modern operating system provides, without me, in my role as a user of its services (i.e. application developer), doing all the grunt work. Forcing people to reinvent the wheel again and again just leads to inferior solutions.

That being said: IMHO anybody who considers contributing to a modern operating system’s graphics layer should have the following base knowledge:

  • Having worked with, and ideally developed for, Plan 9/Rio

  • Having worked through “Computers and Typesetting”. Just today I upgraded my system and found that fonts get rendered differently (and with worse quality) – again. EDIT: by that I mean the fonts used for GUI rendering or in the web browser. TeX is as crisp and accurate as ever. Only recently (well within a 12-month period) I got into a heated debate with a high-profile open source developer who dismissed TeX as ancient technology. To people like him I say: “TeX is the only program in the world that actually gets typography right. Learn from it!”

X11 may have become old, but I can still do things with it that I wouldn’t know how to do with other, more modern systems. Things that are actually rather cool and that I consider a vital part of tomorrow’s operating systems. X11 is not perfect; there’s a lot that could be done better. But any graphics system that stands up to replace it must be able to do, in some form, what X11 can do now.

I expect network-transparent graphics as one of the services a modern operating system provides, without me, in my role as a user of its services (i.e. application developer), doing all the grunt work. Forcing people to reinvent the wheel again and again just leads to inferior solutions.

There’s a lot of room between “OS feature” and “everyone writes their own stuff.” There is no reason that such a thing couldn’t be just a library that some entity maintains. That way, you don’t clutter up the lowest level of the graphics interface with stuff that it really has no business doing (when your graphics code has to talk to your networking code, you know something’s screwy in your design). And you can provide support for different kinds of devices.

You will never see X11 supported on mobile platforms. It’s just too heavyweight and places too much burden on the low-level code. But there’s no reason why you couldn’t have a much more reasonable, higher level rendering system transmit graphics commands to such a platform.

Such a networked rendering library could even support multiple kinds of renderers. For example, you could provide support for OpenGL renderers, WebGL renderers, Cairo renderers, even Win32 GDI calls, or whatever. Because it isn’t built into the graphics layer directly, it is much easier to pick and choose what you need, and it is much easier to have control over how everything works.
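To sketch what the lowest layer of such a library might look like (all names here are hypothetical, invented for illustration; this is not an existing API), each renderer backend could be reduced to a small table of function pointers, with the transport chosen independently of the renderer:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical backend interface for a networked rendering library.
 * Each renderer (GL, WebGL bridge, Cairo, GDI, ...) provides one of these;
 * the transport (local IPC or network) is selected independently. */
struct nr_renderer {
    const char *name;
    int  (*connect)(void *ctx, const char *target);        /* e.g. "tcp:host:port" */
    int  (*submit)(void *ctx, const void *cmds, size_t n);  /* serialized draw commands */
    int  (*present)(void *ctx);                             /* flush / swap */
    void (*disconnect)(void *ctx);
};

/* A client picks a renderer at run time, instead of the display server
 * having the network code baked in. */
const struct nr_renderer *nr_pick(const struct nr_renderer *table,
                                  size_t count, const char *wanted)
{
    for (size_t i = 0; i < count; i++)
        if (table[i].name && wanted && strcmp(table[i].name, wanted) == 0)
            return &table[i];
    return NULL;
}
```

The point of such a design is that the network path is just one possible transport behind submit(), rather than something wired into the display server itself.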

Only recently (well within a 12-month period) I got into a heated debate with a high-profile open source developer who dismissed TeX as ancient technology. To people like him I say: “TeX is the only program in the world that actually gets typography right. Learn from it!”

I’m afraid that I have some trouble accepting that statement, considering that TeX has zero support for “unconventional” languages (or even just Unicode combining characters). That’s not to say that there isn’t much one could learn from TeX. But to say that it “gets typography right” is a bit like saying ASCII is all we need.

Yes, I know that there are variations of TeX that do have Unicode support. And some even have various attempts to support unconventional languages. But Knuth’s TeX (i.e. the actual TeX) does not, which is not surprising, since it predates Unicode and internationalization efforts. And it is therefore “ancient technology.”

My smartphone (an OpenMoko Freerunner with SHR installed) uses X11 for its graphics. So I have to disagree. Of course it’s just some niche product, but you can do X11 on it. And more than once I have used remote X11 either to bring applications from the phone onto my desktop for debugging, or the other way round, to experiment with the GUI on the actual device without having to upload anything to the phone.

What is it with Unicode that it is the first thing that comes up with TeX (I could think of many, much more relevant things)? Unicode has absolutely nothing to do with typography. Unicode is an encoding scheme (with some typographic specialties scattered into it). Typography is the art of drawing text. BTW, you just played the very same weak card as the guy I was arguing with back then.

The encoding you supply the text in, be it ASCII, Latin-1, EBCDIC, Unicode or something totally obscure, is of zero relevance. Nothing Unicode does couldn’t be done in, well, ASCII plus some special command sequences that define text direction, glyph combination, glyph substitution and some other details. The lack of direct Unicode support in TeX only matters if you really insist on feeding your source directly to it. It is trivial, however, to write some filter that replaces Unicode codepoints with their matching TeX commands, possibly reinterpreting some of the semantics.
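For illustration, a minimal sketch of such a filter in C (the mapping table is deliberately tiny, the chosen macros are just examples, and the input is not validated; a real filter would cover far more codepoints):

```c
/* Minimal sketch of a Unicode-to-TeX filter: decode UTF-8 from stdin,
 * replace a handful of codepoints with TeX macros, pass ASCII through.
 * No input validation; illustrative only. */
#include <stdio.h>

struct mapping { unsigned long cp; const char *tex; };

static const struct mapping table[] = {
    { 0x00E4, "\\\"a" },   /* ä */
    { 0x00F6, "\\\"o" },   /* ö */
    { 0x00FC, "\\\"u" },   /* ü */
    { 0x00DF, "\\ss{}" },  /* ß */
    { 0x2013, "--" },      /* en dash */
    { 0x2014, "---" },     /* em dash */
};

/* Decode one UTF-8 sequence from stdin; returns the codepoint or -1 on EOF. */
static long next_codepoint(FILE *in)
{
    int c = fgetc(in);
    if (c == EOF) return -1;
    if (c < 0x80) return c;                         /* plain ASCII */
    int extra = (c & 0xE0) == 0xC0 ? 1 :
                (c & 0xF0) == 0xE0 ? 2 :
                (c & 0xF8) == 0xF0 ? 3 : 0;
    long cp = c & (0x3F >> extra);
    while (extra-- > 0) {
        int cont = fgetc(in);
        if (cont == EOF) return -1;
        cp = (cp << 6) | (cont & 0x3F);
    }
    return cp;
}

int main(void)
{
    long cp;
    while ((cp = next_codepoint(stdin)) != -1) {
        size_t i, n = sizeof table / sizeof table[0];
        for (i = 0; i < n; i++)
            if (table[i].cp == (unsigned long)cp) { fputs(table[i].tex, stdout); break; }
        if (i == n) {
            if (cp < 0x80)
                fputc((int)cp, stdout);
            else
                /* fallback: numeric \char, only meaningful for codepoints the
                 * current font encoding can actually address */
                printf("\\char\"%lX{}", (unsigned long)cp);
        }
    }
    return 0;
}
```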

The main problem one faces processing Unicode with TeX is that Unicode allows for several codepoint combinations which don’t translate too well into TeX’s semantics – been there, done that.

You know that TeX is very popular among linguists? It is very easy to extend TeX to “foreign” writing systems; TeX, after all, is a Turing-complete programming system. Packages for Klingon are among the less obscure ones I know of.

But simply claiming that the lack of Unicode support disqualifies a typesetting system from modern requirements completely misses the point; it confuses text encoding with typesetting, which are totally different things, though not unrelated.

Speaking of “ancient” technology: you do realize that the protocols you, I, we are using for accessing this forum, browsing the web and sending data over the Internet essentially predate the creation of TeX (1978)? I’m not speaking of HTTP, but of TCP and IP, whose design goes back to 1974. And IPv6 is not so different from IPv4 that it would count as something radically new.

The lack of direct Unicode support in TeX only matters if you really insist on feeding your source directly to it.

Yes, God forbid that I feed text through a text layout system. That’s clearly madness!

It is trivial, however, to write some filter that replaces Unicode codepoints with their matching TeX commands, possibly reinterpreting some of the semantics.

I want to make sure I understand your arguments thus far.

You are saying the OS’s graphics layer ought to be directly connected to the networking subsystem at an extremely low level, so as to support the small minority of people who just might happen to have a need to render to non-local screens. You’re perfectly fine with saying that OSes should have to do this. You state, “I expect network-transparent graphics as one of the services a modern operating system provides”.

But then you say that it’s perfectly fine for a text layout package to essentially be reduced to reading only the lowest-common-denominator of text. If you want to do anything more than that, like the plurality of the world’s population does, you have to build a translation layer between the layout package and yourself. That direct support for an international standard for encoding text, one that is supported by thousands of applications and virtually every modern operating system directly, is not something that a text layout package is expected to provide without layering and hackery.

Ah, irony. :wink:

Speaking of “ancient” technology

I defined it as such because it doesn’t directly support the things that modern applications need to do. TCP/IP still does (well, except for not enough address space, which is why we have IPv6). Therefore, while they certainly are old, they have not become semi-obsolete (or at least in need of an overhaul).

Would you feed a TIFF through a PNG decoder? Same situation.

If by OS we refer to the whole set of programs that form the foundation for end user applications, then yes. I’m totally okay with separating the networking code from the graphics code, as long as the client applications don’t see a difference between local rendering and remote rendering.

Having to use some library or additional logic for the special case of remote rendering, that follows a different code path, is not transparent.

You know what I’d like to be able to do (and with X11 can do, at least in part): put my smartphone/tablet beside my laptop, push some program from my laptop’s screen over to the tablet, and have it continue working there. With X11 I can, using DMX/Chromium (which is a multiplexer; it doesn’t create some virtual framebuffer or anything like that), and if devices participate in a DMX setup I can push windows between the devices’ screens (well, actually it’s some kind of big super/metascreen, but the net effect is the same). Now the really cool thing would be if this didn’t require some multiplexer but was directly supported by the core protocol. Also cool would be if a running program could be transitioned from one system to another (I realize the huge number of resources to keep track of and what may go wrong; pushing displays is far easier than pushing processes). You know that scene in “Avatar” when they link the protagonist for the first time? One of the operators pushes some diagnostic display from the main screen onto a handheld. Please show me any other graphics system like X11 where you can do something like this today already, without adding extra logic into the applications themselves.

I never said X11 was perfect. But it does things that I consider far ahead of anything else there currently is (with the exception of Rio, maybe).

Of course it would be great if TeX had seamless Unicode support. However, I didn’t bring up TeX for its support of some encoding, but for its typesetting capabilities and qualities. The rendering quality of text delivered by TeX is still unsurpassed. Of course this also means looking at the whole TeX ecosystem. But still, what I’m presented with by those “thousands of applications” you mentioned is, to say it in all bluntness, an insult to my sense of aesthetics. Maybe those applications do support Unicode, but what they make of it, more often than not, is ugly.

Alas, if we really want to argue about TeX commands vs. Unicode: that is about the same as discussing XRender vs. OpenVG.

What I demand of modern applications is text rendering quality that doesn’t make my eyes hurt. Right now, after updating part of my system, I am looking at blurry fonts with messed-up glyph hinting and uneven perceived gray levels. Yuck. It has been known for almost 30 years now how to render text properly (TeX); why don’t we see that in modern-day applications?

You know what I’d like to be able to do (and with X11 can do, at least in part): put my smartphone/tablet beside my laptop, push some program from my laptop’s screen over to the tablet, and have it continue working there. With X11 I can, using DMX/Chromium (which is a multiplexer; it doesn’t create some virtual framebuffer or anything like that), and if devices participate in a DMX setup I can push windows between the devices’ screens (well, actually it’s some kind of big super/metascreen, but the net effect is the same). Now the really cool thing would be if this didn’t require some multiplexer but was directly supported by the core protocol. Also cool would be if a running program could be transitioned from one system to another (I realize the huge number of resources to keep track of and what may go wrong; pushing displays is far easier than pushing processes). You know that scene in “Avatar” when they link the protagonist for the first time? One of the operators pushes some diagnostic display from the main screen onto a handheld. Please show me any other graphics system like X11 where you can do something like this today already, without adding extra logic into the applications themselves.

Sigh. It appears that you genuinely have no idea how nasty a network-transparent display system can be to implement. Additionally, from the above it appears you have no idea of the overhead it implies. The use case you are so fond of there, the use case that generates so much work, is used so, so rarely on a mobile device. That functionality, although cool to see, is rarely used, and it is also another surface to secure.

X11 is problematic; Apple realized this over ten years ago. On OS X, X11 is a tiny thing sitting on top of Apple’s display system.

For what it is worth, the N900 (and the N9) both have X11, but no over-the-wire hardware acceleration… and as one can see from Nokia’s lack of a success story, that geek feature really did not matter for the vast, vast majority of users.

For what it is worth, it is amazing that X11 is still used so much although it originated such a long, long time ago. The assumptions of X11’s design are no longer really true. When the Linux desktop kills off X11, it will be a good thing for Linux; X11 is one of the biggest issues on Linux, especially non-Android embedded Linux. Think about why Google did not use X11 for their own system, and why Apple did not either. The engineering-burden-to-performance ratio is way, way too high.

As a side note, the remote jazz of the future is the web browser. SoC GPUs and CPUs are getting faster, a lot faster, with every generation [there will be funny limits though as time progresses], and yet they are cheap. If you want the end-user functionality that remote display gives: it is the web browser. It is not “run the application in one place and display it in another”, but from the point of view of the end user, in terms of where the data resides etc., it is functionally the same.