At least that's the only reasonable explanation I can find.
So we're already 25 years into trying to replace X11, which is already far longer than the period in which it was being designed and actively developed on a foundational level. There's got to be some good lessons in all of this about software lifecycles, getting it right the first time, knowing when to move on, or OS development by way of mailing-list consensus. I totally get why it's still used after all these years, it performs an essential task, but I bet the original creators would be the ones most shocked that it's still a thing in 2019.
Couldn't we say the same thing about bash or basically most tools in your typical Unix-based OS? Old code is solid code. If they're to be shocked it's for doing something so right that it's persisted all this time.
But it's funny that you would bring up something as terrible as a shell scripting language like bash to compare to how terrible X-Windows is. Have you ever read through a gnu configure file yourself, or do you just close your eyes and type "./configure"? Who in their right mind would ever write a shell script, when there are so many real scripting languages that don't terribly suck universally available, that don't shit themselves when they encounter a file name with a space in it?
These quotes about X-Windows apply to bash as much as they do to X-Windows:
"Using these toolkits is like trying to make a bookshelf out of mashed potatoes." -Jamie Zawinski
"Programming X-Windows is like trying to find the square root of pi using roman numerals." -Unknown
https://medium.com/@donhopkins/the-x-windows-disaster-128d398ebd47
That's a strong opinion. I'm not going to argue for lack of time, but suffice to say that 99% of my interactions with my computer, and sometimes with my phone, are through a shell scripting language. Shell scripting is awesome.
> Have you ever read through a gnu configure file yourself,
Yes. Generated scripts make for a boring read.
> or do you just close your eyes and type "./configure"?
I do, from people I choose to trust, whether by running the package build scripts written by the package maintainers of my distribution or from github accounts that I judge as trustworthy.
There is a lot of trust involved in using a computer. I mean, if something nefarious might be in the ./configure script, it's more likely to also be in a precompiled program, since more people touched it.
> Who in their right mind would ever write a shell script,
I do.
> when there are so many real scripting languages that don't terribly suck universally available,
Each language is good for different reasons. Shell languages are meant primarily to be used interactively, as opposed to languages like python or ruby. The fact that you can put your everyday interactions in a file and run that is an added bonus.
> that don't shit themselves when they encounter a file name with a space in it?
I'd rather not have filenames with spaces if it means having a language that allows me to communicate with my machine in a terse manner, allowing for super-easy automation of all sorts of interactions.
I mean, are you really suggesting we mandate quoting of all strings in a shell language? The quotes are optional. That's good! In a shell language, files are basically your variables, so why would you want more syntax around your variables when working interactively?
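To make the terseness concrete, here's the kind of everyday interactive one-liner this argument has in mind (the log path is just an example):

grep -i error /var/log/syslog | sort | uniq -c | sort -rn | head

No quotes, no parentheses, no imports; in most general-purpose languages the same pipeline would take noticeably more ceremony.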
It really isn't though. (It certainly can be fun, however.) Switching away from shell scripts, and avoiding doing things manually (like over ssh), likely increases your success rate at managing *nix by orders of magnitude. All these "configuration management" tools like Puppet, Chef, SaltStack, Ansible etc. exist pretty much just to avoid shell scripts and interactive ssh.
That's only true if your only use of the shell is for configuration, which isn't really intended to be the case. The shell was meant for any use of the computer, not just configuration. For example, I use ssh/scp when I copy some music files from my computer to my phone, or mpv when I want to play some music files. Indeed, I use the shell for nearly everything.
* Significant whitespace in surprising ways (a=1 vs a = 1 or spaces following square brackets)
* Word splitting
* No data structures of note, nor any way to create them in any sort of non-hacky way
* No data types, really, for that matter
* Can't really deal with binary data
* Awful error handling
* Weak math
* Weird scoping rules
Honestly, as soon as I have to do anything that involves a conditional I abandon bash and use one of the many ubiquitous scripting languages that have great library support for doing all the system stuff you could do from the command line anyway.
Here's a great list of BASH pitfalls: https://mywiki.wooledge.org/BashPitfalls

I can't think of any language other than maybe Perl or C++ that comes close to that.
> * Significant whitespace in surprising ways (a=1 vs a = 1 or spaces following square brackets)
Variables in the shell are nicely coupled with environment variables. As a feature, you can do:
a=1 b=2 cmd
to concisely assign environment variables for a single command. How would you recommend that be redone? You'd need additional cumbersome syntax if you want whitespace to not be significant, and that sucks for a language meant to be used mostly interactively: a = 1, b = 2: cmd
Because shell languages are meant primarily to be used interactively, we want to be very light on syntax. We don't want to have to say `var` or something before our variable definitions. We don't want to have more syntax than we absolutely need for our calls. Nothing like `cmd(a,b)`. cmd can be any string. Commands are just executable files in some directory. We want to include as many of them as possible, and their arguments can be anything, including `=`. Commands get as much freedom as possible over how they're called, to fit as many needs as possible. So, how do you differentiate between calls and variable assignments? Under those criteria, the current situation (statements are words separated by whitespace, and statements whose first words have `=` in them are assignments) seems like the ideal solution.
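For instance (the command and file name are just illustrative):

LC_ALL=C sort words.txt   # first word contains `=`: one-shot environment assignment for this sort
a=1                       # assignment alone: sets a shell variable
sort words.txt            # no `=` in the first word: a plain call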
> * Word splitting
It makes it easier to build commands without wrangling syntax-heavy complex data structures. Here's an example where word splitting is useful:
sudo strace -f $(printf " -p %s" $(pgrep sshd))
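Roughly what the shell does with that, step by step (the PIDs are made up):

pgrep sshd                            # prints one PID per line: 812, 901, 902
printf " -p %s" 812 901 902           # prints: " -p 812 -p 901 -p 902"
sudo strace -f -p 812 -p 901 -p 902   # what actually runs, once word splitting breaks that string into arguments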
> * No data structures of note, nor any way to create them in any sort of non-hacky way

Complex data structures lead to more heavyweight syntax, and part of the appeal of shell languages is that everything is compatible with everything else because everything is text. If you add data structures then not everything is text.
> * No data types, really, for that matter
Same point as above. Everything being text leads to increased compatibility. I wouldn't want to have to convert my data to pass it around.
That said, you could say that there are weak-typing semantics, since you can do `$(( $(cmd) + 2 ))`, for example.
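E.g., with a made-up file name:

echo $(( $(wc -l < notes.txt) + 2 ))   # a command's text output treated as a number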
> * Can't really deal with binary data
Because everything is text to encourage easily inspectable data exchange and compatibility between programs.
That said, while it's not advisable to do it, binary data is workable if you really need to do that. Pipes don't care. I can pipe music from ssh to ffmpeg to mpv, if I want. One just needs to be careful about doing text-things with it, like trying to pass it as a command argument. $() will remove a terminating newline if present, for example. That makes sense with text, but not with binary data.
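A sketch of both points (host and file names are made up):

ssh somehost cat song.flac | mpv -      # fine: pipes pass the raw bytes through untouched
data=$(ssh somehost cat song.flac)      # risky: $() applies text semantics and strips trailing newlines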
> * Awful error handling
I don't get this. I think bash has very good error handling. Every command has a status code which is either no error or a specific error. Syntax like `while` and `if` works by looking at this status code. You can use `set -e` and subshells to get exception-like behavior. Warnings and error messages are, by default, excluded from being processed through pipes. What do you find lacking?
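For example (file names are made up):

set -e                              # abort on any unhandled non-zero status, exception-style
if grep -q root /etc/passwd; then   # `if` branches on grep's status code
    echo "found root"
fi
cp config config.bak || {           # handle one specific failure explicitly
    echo "backup failed" >&2        # goes to stderr, so it isn't swallowed by a pipe
    exit 1
}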
> * Weak math
Sure. I'll give you that bash doesn't support fractional numbers natively. zsh does support floating point.
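For instance:

echo $(( 10 / 4 ))     # bash: prints 2 (integer arithmetic only)
echo $(( 10 / 4.0 ))   # zsh: prints 2.5; bash rejects this with a syntax error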
> * Weird scoping rules
It's dynamic scoping, and it does have some advantages over the more commonly seen static scoping. You can use regular variables to set up execution environments of sorts. This somewhat relieves the need to pass around complex state between functions. It's kind of a different solution to the same problem that objects in OOP address.
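A minimal bash illustration (the function names are made up):

greet() { echo "$greeting, world"; }   # uses whatever `greeting` is in scope at call time

formal() {
    local greeting="Good day"          # dynamically scoped: visible to everything formal calls
    greet
}

greeting=hi
greet    # prints: hi, world
formal   # prints: Good day, world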
The only problem is that static scoping became so popular that people now are generally not even aware that dynamic scoping exists or how to use it, so using it is now discouraged, to avoid confusing people who don't know that part of the language they're using.
About that, I wish people would just learn more of the languages they use, and not expect every language to work the same, as they're not all meant for the same purposes or designed by the same criteria.
I also think that if a good proportion of a program's job is sourcing shell scripts to e.g. prepare an environment for them and/or manage their execution, it's also a good idea to write that program in the same shell language. As an example of this, I think Archlinux's `makepkg` was best done in bash.
On why make a program that's about sourcing shell scripts at all: the shell is the one language that all Unix-based OS users have in common, pretty much by definition, so it makes a good candidate for things like the language of package description files. Besides the fact that software packaging in any language involves the shell, you kind of expect people who call directly on `makepkg` to also want to be able to edit these files, so writing them in the language that they're most likely to know is good.
While UNIX was playing with sh, there were already platforms with REPLs, graphical displays and integrated debuggers.
In fact Jupyter Notebooks are an approximation of that experience.
So give me a REPL with function composition and structured data, over pipes and parsing text for the nth time.
Thankfully PowerShell now works across all platforms that matter to me.
If that's most of what a program is doing, I would 10000 times rather read a script written in bash than a python script that pulls in 800 dependencies just to get halfway to making each of these tasks take less than 5 lines, when each takes less than 1 line in bash.
That's not to say that bash is perfect, but it is very good at what it does.
Just use posix stuff. And move on.
How do you for example parse command line arguments in your shell scripts?
Do you regard sed, grep and awk as dependencies when bash scripting?
As for Python: the subprocess, os, sys etc. modules are standard library modules. There is no dependency overhead in using them. Most smaller Python scripts manage very nicely with the standard library.
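For what it's worth, the stock shell answer to the argument-parsing question is the getopts builtin; a minimal POSIX sketch (the option names are just an example):

#!/bin/sh
# Parse -v (a flag) and -o FILE (an option with an argument)
verbose=0 outfile=
while getopts vo: opt; do
    case $opt in
        v) verbose=1 ;;
        o) outfile=$OPTARG ;;
        *) echo "usage: $0 [-v] [-o file] args..." >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # "$@" now holds the remaining positional arguments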
And, yet, in ALL this time, not a single substitute arose?
We have multiple web browsers, GUI toolkits, IDEs, etc. all with far larger codebases, and yet X11 never got replaced properly. And some really brilliant minds worked on it.
So the question you need to ask is "If X is so bad, WHY hasn't it been replaced?"
By contrast, nvidia has generally provided support for a decade. For example, the latest release only days ago supports hardware as old as 2012, and the legacy drivers support hardware as old as 2003.
This is why I have bought nvidia hardware despite other issues; however, it looks like AMD open source support will be better going forward. This doesn't help anyone with old hardware.
You'll get plenty of results like this that are more recent than 1999.
Most user installs will not encounter either problem. New AMD hardware has great support out of the box without installing anything, and many distros support installing closed-source nvidia, or will work well enough for non-gaming applications with the open source drivers.
Please note that these challenges aren't a result of X; they are specifically the result of specific manufacturers' drivers, some of which have been more challenging in the past. For improvements in the future, look to the manufacturers and support the ones that provide the optimal experience.
Please note issues where users can't set the correct resolution for their hardware ALSO occurs on windows 10.
Link from Winter 2018 https://troubleshooter.xyz/wiki/fix-cant-change-screen-resolution-windows-10/
Though I still use the training I got in the accelerated 1-week crash course in unix we all did.
Yet your windows render and take input, life goes on, etc. I am pretty happy with it on systems where it runs. Some of the old criticisms like it being a resource hog - might have made sense in the specs of, say, 1993 or earlier, but even restricting the comparison to what most of us have loaded in Javascript at any moment it's pretty lightweight.
> Have you ever read through a gnu configure file yourself ... Who in their right mind would ever write a shell script
I really don't think it's fair to use machine-generated code as an example of why you shouldn't use a particular language. All of those alleged universally available, "real" scripting languages that don't terribly suck would also look pretty bad if you turned them into a target language for GNU autotools.
I quite enjoy writing fish scripts. Largely because I can actually remember the syntax for conditionals and loops.
otoh, there are many things that could, or even ideally should, be kept simple, such that a bash script is the best-practice, most simple, proven, reliable solution.
Can you spot the moniker with spaces in it in this sentence easily without having to go back and re-read? Without context, did I mean "moniker" or "moniker with spaces in it"?
Spaces are a terrible idea in monikers.
I've seen Vulkan called the successor to OpenGL, but reading the spec it seems more like the end game for raster graphics card programming. OpenGL 4.0 was released in 2010, and since then changes have been incremental. We more or less have figured out how to do raster graphics (ray tracing may be a different story), so it made sense to invest tens (hundreds?) of millions of dollars to develop the Vulkan spec, and then many millions more to implement it.
What other technologies are there where we are more or less at the end game? I know Qt5 widgets is considered feature complete for desktop apps.
The number of rough edges, missing bits and outright bugs mean that it's certainly not "finished"... just like all software really.
To this day if you want a common file dialog that works properly across all Qt deployment targets, you need to use QML, as the Widgets version is not adaptive and will display a tiny desktop common file dialog on an LCD display, for example.
And it's no surprise both Adobe and Microsoft have pushed people towards a subscription model for this software: nobody in their right mind would pay for upgrades otherwise. Arguably you need a new Office every ten years to ensure you have security updates, because of the amount of foreign content you process with it, but Adobe? Psh.
https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
>The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.
http://worrydream.com/Engelbart/
>Engelbart's vision, from the beginning, was collaborative. His vision was people working together in a shared intellectual space. His entire system was designed around that intent.
>From that perspective, separate pointers weren't a feature so much as a symptom. It was the only design that could have made any sense. It just fell out. The collaborators both have to point at information on the screen, in the same way that they would both point at information on a chalkboard. Obviously they need their own pointers.
>Likewise, for every aspect of Engelbart's system. The entire system was designed around a clear intent.
>Our screen sharing, on the other hand, is a bolted-on hack that doesn't alter the single-user design of our present computers. Our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn't magically transform them into collaborative environments.
>If you attempt to make sense of Engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart's intent. Engelbart hated our present-day systems.
And it's in the direction of multi-user collaboration that X-Windows falls woefully short. Just to take the first step, it would have to support separate multi-user cursors and multiple keyboards and other input devices, which is antithetical to its singleminded "input focus" pointer event driven model. Most X toolkits and applications will break or behave erratically when faced with multiple streams of input events from different users.
https://tronche.com/gui/x/xlib/input/XGrabPointer.html
For the multi-player X11/TCL/Tk version of SimCity, I had to fix bugs in TCL/Tk to support multiple users, add another layer of abstraction to support multi-user tracking, and emulate the multi-user features like separate cursors in "software".
Although the feature wasn't widely used at the time, TCL/Tk supported opening connections to multiple X11 servers at once. But since it was using global variables for tracking pop-up menus and widget tracking state, it never expected two menus to be popped up at once or two people dragging a slider or scrolling a window at once, so it would glitch and crash whenever that happened. All the tracking code (and some of the colormap related code) assumed there was only one X11 server connected.
So I had to rewrite all the menu and dialog tracking code to explicitly and carefully handle the case of multiple users interacting at once, and refactor the window creation and event handling code so everything's name was parameterized by the user's screen id (that's how you fake data structures in TCL and make pointers back and forth between windows, by using clever naming schemes for global variables and strings), and implement separate multi-user cursors in "software" by drawing them over the map.
Multi-Player X11 SimCityNet:
https://www.youtube.com/watch?v=_fVl4dGwUrA
X11 SimCity Pie Menus:
https://www.youtube.com/watch?v=Jvi98wVUmQA
Multi-user menu tracking (added "@$screen" parameterizations):
https://github.com/SimHacker/micropolis/blob/master/micropolis-activity/res/menu.tcl
Opening multiple X11 displays (multiple toplevel "head" windows per screen, each with a unique id, using $win parameterization):
https://github.com/SimHacker/micropolis/blob/master/micropolis-activity/res/whead.tcl
Funny enough, the screen recording functionality added to PowerPoint a few updates ago is as far as I can tell the best simple screen recorder available for Windows 10 and the closest thing to native screen recording outside the game bar. Not sure why that hasn't made it into the snipping tool yet.
https://en.wikipedia.org/wiki/Tab_(interface)#Patent_dispute
https://www.donhopkins.com/home/archive/emacs/to.jag.txt
https://en.wikipedia.org/wiki/Tab_(interface)#/media/File:HyperTIESAuthoring.jpg
https://medium.com/@donhopkins/the-shape-of-psiber-space-october-1989-19e2dfa4d91e
Around 1990, Glenn Reid wrote a delightful original "Font Appreciation" app for NeXT called TouchType, which decades later only recently somehow found its way into Illustrator. Adobe even CALLED it the "Touch Type Tool", but didn't give him any credit or royalty. The only difference in Adobe's version of TouchType is that there's a space between "Touch" and "Type" (which TouchType made really easy to do), and that it came decades later!
Illustrator tutorial: Using the Touch Type tool | lynda.com: https://www.youtube.com/watch?v=WUkE3XLw_EA
SUMMARY OF BaNG MEETING #4, July 18, 1990: https://ftp.nice.ch/peanuts/GeneralData/Usenet/news/1990/_CSN-90/comp-sys-next/1990/Jul/_BaNG-%234-meeting-review.html
TOUCHTYPE Glenn Reid, Independent NeXT Developer
The next talk was given by Glenn Reid, who previously worked at both NeXT and Adobe. He demonstrated the use of his TouchType application, which should prove to be an enormous boon to people with serious typesetting needs.
TouchType is unlike any other text-manipulation program to date. It takes the traditional "draw program" metaphor used by programs like TopDraw and Adobe Illustrator and extends it to encompass selective editing of individual characters of a text object. To TouchType, text objects are not grouped as sequences of characters, but as individually movable letters. For instance, the "a" in "BaNG" can be moved independently of the rest of the word, yet TouchType still remembers that the "a" is associated with the other three letters.
Perhaps the best feature of this program is the ability to do very accurate and precise kerning (the ability to place characters closer together to create a more natural effect). TouchType supports intelligent automatic kerning and very intuitive, manual kerning done with a horizontal slider or by direct character manipulation. It also incorporates useful features such as sliders to change font sizes, character leading, and character widths, and an option which returns characters to a single base line.
TouchType, only six weeks in development, should be available in early August, with a tentative price of $249. BaNG members were given the opportunity to purchase the software for $150.
And probably one huge philosophical change, given that it was originally designed for displaying grayscale images.
Not to Linux :(
(NB: I Am Not An Expert and these are my Uneducated Impressions.)
I don't need a "CPU API" to run code on the CPU in my machine, so why do I need to go through an API to run code on the GPU (hint: it's mostly about GPU makers protecting their IP).
* Screen tearing: both Intel and AMD have hardware-backed "TearFree" buffering to prevent tearing.
* Bad performance: citation required; in my experience, Xorg is way faster.
* Touchscreen support: in most cases it just works out of the box thanks to libinput.
* No HiDPI support for different scales per monitor: simply wrong; this is trivial with xrandr (see the sketch below).
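For that last point, a sketch of per-output scaling with xrandr (the output names and factors vary per machine):

# eDP-1 stays at native scale; HDMI-1 gets a 1.5x larger virtual resolution
xrandr --output eDP-1 --scale 1x1 --output HDMI-1 --scale 1.5x1.5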
I have a machine with a supported Radeon card (open-source driver), another machine with a supported nvidia card (binary blob driver), and another two machines using different intel onboard graphics chips (open-source driver)
the radeon and the intel (both drivers which work with it) have issues putting out a stable jitterless 60fps without tearing (with tearfree on, and various combinations of with/without compositing/glx/...)
I'm using the intel driver as well, and it's definitely not perfect. But it's pretty close, at least for me - I get hardware-accelerated video decoding with VAAPI, no tearing, and excellent input latency (~3ms).
sadly I can see the jitter/tearing, vs. on Windows where it's perfect
(with 4k or higher frame rates it's far more obvious)
Sure you can. Just pop the window into floating mode; I think the default is meta-space.
For example here is an Apple developer explaining why X11 wasn't chosen for Mac OS X: https://apple.stackexchange.com/questions/168980/if-os-x-doesnt-use-the-x-server-then-what-does-it-use
(By "Unix operating system" I mean an OS based on a Unix/Linux kernel.)
In conclusion, I am not defending the X11 architecture, but for the majority of users the features it has are good enough despite the architecture and its limitations.
That is not quite accurate, although it isn't far off. More accurately; the design mistakes that make/made X11 terrible were being enforced at the driver level.
Take the fact that for many years X was being run as root on linux. Horrific state of affairs for security. Everyone knows it is bad.
Some bright spark tries to write a new window system that runs as an unprivileged user, and runs smack-bang into the fact that the drivers live in the window system, because the kernel doesn't accept closed source modules and the graphics vendors only support X.
That eventually got fixed with the Intel/AMD graphics open sourcing of 2008-2018; at the moment X is becoming a very thin compatibility layer for most people, and a mandatory pain for Nvidia users as far as I know.
There were a lot of issues like that, and there still are with Nvidia. The point is that it isn't replacing X that is hard. The issue is that coordinating with Nvidia is hard.
As far as I can tell, Nvidia has two modes of operation with the Linux community, "hostile" and "inept"
Hostile is when they do things like require signed firmware for their GPUs, or try to force their will on things like EGLStreams while everyone else is using GBM.
Inept is their situation with things like the Tegra mobile platforms. They simply use nouveau there instead, I'm told, even though the two GPU lines, of course, share a ton of engineering. For some reason they just decided it's easier to use nouveau on that side, and only that side.
It's even better than that now. For me no part of X is running as root. Last I checked, the only reason anyone had X as root (other than driver issues) was if they were running a graphical login manager. Since the login manager has to be up and running before any users are logged in, and likely requires X, it makes sense that it's got to run in a privileged context, and works most easily as root. In my case I just use xinit to start my graphical sessions after I've logged directly into a (getty) terminal.
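That setup is typically just a few lines in a shell profile; a sketch (the VT number is just an example):

# ~/.bash_profile: start X after logging into the getty on tty1, no display manager
if [ -z "$DISPLAY" ] && [ "$(tty)" = /dev/tty1 ]; then
    exec startx
fi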
I think supposedly GDM can do rootless X, but I haven't tested that.
Every other driver on Linux has a kernel part and a userspace API accessed through /dev devices or syscalls. It was always unclear why X needed to be any different, but I think this was more about implementation than something fundamental to X. Of course "X" is a lot of things to a lot of people, and it depends whether we're talking about the server, the protocol, the client, the extensions, the window manager, the widget library, etc.
https://en.wikipedia.org/wiki/DESQview
>DESQview/X
>Quarterdeck eventually also released a product named DESQview/X (DVX), which was an X Window System server running under DOS and DESQview and thus provided a GUI to which X software (mostly Unix) could be ported.
>DESQview/X had three window managers that it launched with, X/Motif, OPEN LOOK, and twm. The default package contained only twm, the others were costly optional extras, as was the ability to interact on TCP/IP networks. Mosaic was ported to DVX.
>DVX itself could serve DOS programs and the 16-bit Windows environment across the network as X programs, which made it useful for those who wished to run DOS and Windows programs from their Unix workstations. The same functionality was once available with NCD Wincenter.
I suppose for most enterprise stuff what you say is true (how else will we commit to coding targets we won't hit for prices we can't afford on timelines that make dog races look slow? - I suppose I'm a little early in life to be jaded, but I am familiar with bureaucratic nightmares that value image over substance).
> The reference implementation of the protocol (Weston and it's associated libraries) is written in C. That means you could wrap the C code with Rust, which several people have done already [1] However, I get the impression that the results are not very 'rustic', meaning it's like you are coding C from Rust, instead of writing real Rust code.
> To address the problems of dealing with the existing native Wayland implementations, a couple of the Rust Wayland developers have joined together to build a new Wayland implementation in pure Rust called wlroots [2]
[1] https://github.com/Smithay/wayland-rs [2] https://github.com/swaywm/wlroots
wlroots is written in C, whereas wayland-rs, a Rust implementation of the wayland protocol (client and server), is written in Rust.
I'm not familiar with either project, but this just stood out immediately when looking at the Github pages.
But yeah, you would still need the C toolchain with this.
I've been running Wayland/Sway on NixOS for a while, and I really like it.
One question though - the article seems to say that wlroots is a Rust project, but it seems to very much be a pure C project? (https://github.com/swaywm/wlroots)
Yeah. It uses meson/ninja to build. No Rust.
For general use it seems as if it'll be something that makes no difference to my day-to-day usage of my computer other than a warm fuzzy feeling that the underlying protocol is "right".
Edit: I also like Sway quite a bit (over i3/X) - its configuration (outputs, input devices, etc) makes a lot more sense and is a lot easier for me than trying to change stuff in different places and in different ways with X.
https://github.com/Aishou/wayland-keylogger
At present Linux desktops aren't very secure against user installed malicious software. It is however fortunate that most software is installed from curated repos.
It's not clear that just switching to wayland is worth much at this point in time.
At best you are hoping that the malicious binary someone tricked you into running didn't also take advantage of an additional vulnerability to compromise everything. Keeping in mind that your adversary has every opportunity to test against the same environment you are running.
The only linux environment that I'm aware of that takes isolation really seriously is Qubes, and even that isolation could be violated in theory.
I want desktop applications to have features that right now require substantial permissions to effect. The primary defense is and will likely remain not to install malicious software in the first place by installing from curated sources.
In the real world, any secure desktop solution is going to require a reliable execution environment ("security is only as good as your weakest link"). If you don't trust the user to properly handle that, then you must ensure they don't do anything stupid or dangerous to themselves by restricting what they can do. For desktop applications this usually means executing them in a sandbox (such as Flatpak). Qubes OS tries to do something similar, but stumbles upon the inherently insecure design of the X Server, and has to work around it by running separate X server instances for each unreliable X client.
If you have a compositor that supports plugins, such as Wayfire, you can write a plugin: https://github.com/myfreeweb/numbernine/blob/master/wf-plugins/mod2key.cpp
You can also do things on the evdev level by listening to keys and emitting new keys from a virtual device. My tool for that: https://github.com/myfreeweb/evscript
These manufacturers should be developing proper GPU drivers for mainline with full KMS/DRM/mesa support before they even sell their boards to the public claiming Linux support.
Wayland and Xorg work just fine on Intel integrated graphics, Intel has been setting the standard here for over a decade now.
Don’t know about Linux desktop, but for my embedded use case where I build stuff directly on top of drm, kms and gles, it works fine driving 2 displays, one of them is 4k.
The rant should be directed at ARM itself for not providing documentation for Mali.
It's just the conservatism of distros like Raspbian. 32-bit, old kernel, proprietary blobs, old packages (stable debian).
>These manufacturers should be developing proper GPU drivers for mainline with full KMS/DRM/mesa support before they even sell their boards to the public claiming Linux support.
Broadcom or the Pi Foundation? One doesn't care and one doesn't have the resources.
Since when was an X-Windows extension a "competitor" to X? Display PostScript was simply a proprietary X-Windows extension that fell out of fashion, not a competitor to X-Windows.
NeWS was a competitor to X-Windows that died decades ago, but Display PostScript was never a "competitor" to X-Windows, just like the X Rendering Extension and PEX were never competitors, just extensions.
But at least the article gets credit for calling it X-Windows instead of X11, to annoy X fanatics. ;)
https://medium.com/@donhopkins/the-x-windows-disaster-128d398ebd47
https://en.wikipedia.org/wiki/Display_PostScript
In 1985, two years before DPS was started and four years before NeXTSTEP was released, James Gosling and David Rosenthal at Sun developed their own PostScript interpreter for NeWS, originally called SunDew, which was distinct from and quite different than DPS, and wasn't licensed from Adobe.
http://www.chilton-computing.org.uk/inf/literature/books/wm/p005.htm
https://en.wikipedia.org/wiki/NeWS
Then in 1993, NeXT and Sun developed OpenStep for X-Windows/DPS.
https://en.wikipedia.org/wiki/OpenStep
One of the biggest architectural differences between NeXTSTEP/DPS and NeWS is that NeXTSTEP didn't take advantage of the technique we now call "AJAX": implementing the user interface toolkit itself in PostScript running in the window server, to increase interactive response and reduce network messages and round trips.
https://news.ycombinator.com/item?id=13783967
NeXTSTEP wasn't trying to solve the remote desktop problem: the toolkit implemented in Objective C code just happened to be using local networking to talk to the DPS server, but in no way was optimized for slow network connections like NeWS was (and AJAX is).
You couldn't run NeXTSTEP applications over a 9600 baud Telebit TrailBlazer modem, but NeWS was great for that. I worked on the UniPress Emacs display driver for NeWS, which was quite usable over a modem, because stuff like text selection feedback, switching and dragging multiple tabbed windows around, and popping up and navigating pie menus was all implemented in PostScript running locally in the window server, without any network traffic!
https://www.donhopkins.com/home/code/emacs.ps.txt
https://www.youtube.com/watch?v=hhmU2B79EDU
NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:
* used PostScript code instead of JavaScript for programming.
* used PostScript graphics instead of DHTML and CSS for rendering.
* used PostScript data instead of XML and JSON for data representation.
So NeXTSTEP suffered from the same problems as X, with an inefficient, low-level, non-extensible network protocol, slicing the client and server apart at the wrong level, because it didn't leverage its Turing-complete extensibility (now called "AJAX"), and just squandered PostScript on drawing (now called "canvas").
So for example, you couldn't use DPS to implement a visual PostScript debugger the way you could with NeWS, in the same way the Chrome JavaScript debugger is implemented in the same language it's debugging (which makes it easier and higher fidelity).
https://medium.com/@donhopkins/the-shape-of-psiber-space-october-1989-19e2dfa4d91e
>The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989
>Abstract
>The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.
wayland-rs does not wrap libwayland, but instead offers a pure Rust implementation of the Wayland protocol.
Sway and wlroots are both written in C, but wlroots-rs is a project which wraps the wlroots C library in a Rust wrapper.
There are currently no mature Wayland compositors written in Rust.
Also, this article mainly focuses on the problems with Wayland on the Raspberry Pi, which stem mainly from the old proprietary drivers and the slow pace at which Raspbian gets up-to-date software. For most users, the experience is much better.
Originally it only wrapped libwayland. Now it offers both libwayland and a pure Rust implementation, togglable with the Cargo features client_native and server_native.
The result is that the CPU drawing on a screen-sized framebuffer is fast enough that Raspbian will probably never, ever make the jump to full bells-and-whistles mainline VC4 drawing and atomic DRM composition. It would just end up using a lot more memory and break a bunch of hacks in the proprietary drivers that people have come to rely on for essentially still garbage tier performance and a system that randomly freezes because you ran out of graphics memory (which is forever limited to 256 MiB, memory that you surrender on boot to the horrendous CMA system).
I like the Pi a lot as a PoC platform given the availability and the fully open-source supported driver we have for it. But the HW specifications of all the parts that Wayland cares about make it very clear that the thing was only ever meant to do 1080P when you are piping frames from the video decode engine straight to the compositor and out the pixel valves.
LXQt currently still uses Openbox, but you can replace it with KWin rather easily. I don't know what is then still missing to create a proper LXQt Wayland session, but it seems feasible.
I guess, one does not really need HiDPI on a Raspberry Pi, but yeah, Wayland would be nice.
With Wayland, the ecosystem of window managers will never be as rich, because a window manager has to implement too many things to be usable.
or, much better, use a compositor with a plugin system, so window managers can become plugins! :)
I've used it since its inception, however it has always kind of sucked.
I think it goes back to its basic philosophy: it did not enforce policy.
This let it survive a long time. It was whatever people wanted it to be. But because of this flexibility, it never became great.
It's like the old Lily Tomlin skit - "I always wanted to be somebody, but now I realize I should have been more specific."
The sway compositor has been standardizing protocols for screenshots, screen recording/streaming, composable desktop components and so on.
The end result is that it explodes your test matrix. I won't use it while that stays the case, because I don't have time to fight with basic graphics stuff being broken all the time, and it is ridiculous to expect things not to be broken if the test matrix stays like that.
Plus it is starting to be old yet is still missing half of the features, be it existing X features (ex: ssh -X), basic desktop GUI needs (ex: whatever is needed to implement Wine), or modern must-haves (ex: colorimetry). (At least that was the situation a few months ago; hopefully there has been some progress since.)
I love how easy it is to configure via config files, external monitors work great, and scaling on HiDPI displays has been totally painless. My days of fiddling with xorg config and xrandr are behind me.
I know the creator of sway hangs out here, so if you're reading this, thank you!
Edit for a little more detail: I used Ubuntu for a long time on my desktop and actually did quite a bit of gaming on it. Mostly stuck to games that have native Linux support but from what I hear the compatibility layer that Valve has released is actually really good. If gaming is your thing.
All this is to say, things that were once supposed to be impossible on Linux are now very much possible.
1) HW accelerated rendering in Firefox, which despite being unfortunately disabled by default on linux can, in my experience, be enabled (in about:config) without issues. This makes the experience of scrolling much smoother, so I usually do that. I don't think this has any observable effect on battery life for me. However, enabling this in Sway results in some very odd behavior that breaks certain things so I've had to disable it. I can go into more detail on that if you like.
2) Hardware accelerated video decoding. In contrast to the first one, this makes a HUGE difference in CPU usage and battery life. However this unfortunately cannot be used in any browser AT ALL in Linux, regardless of setup or configuration. The way I watch youtube videos on my laptop is usually with mpv, which does use hw decoding if it is configured to do so.
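For reference, enabling that in mpv looks like this (assuming your GPU and driver support VAAPI):

mpv --hwdec=vaapi somevideo.mkv    # one-off: use VAAPI hardware decoding

# or persistently, in ~/.config/mpv/mpv.conf:
#   hwdec=vaapi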
I've been using Firefox Nightly in Wayfire (also wlroots based) with GL (and even WebRender) for quite a long time now, it works very well, about the only issue left is popover placement is odd occasionally. What issues do you have?
I'm surprised you were able to get WebRender working as well, I think I recall Firefox instantly crashing when I tried that. This was a few weeks ago.
( Ubuntu 18.04 / Unity on a Dell XPS 13 from 2016 )
When the SparcStation I came out, I backported Suntools to its frame buffer on a lark and it was screaming fast. That was pretty fun.
Oddly enough, I still use X11 a lot. I have probably half a dozen ARM systems around my office/lab doing various things, and it is simpler to run an X11 client locally and just kick off an xterm to these machines than it is to try to do some sort of KVM nonsense. It is also more capable than running a web server on the Pi and trying to interact with it via web "application" pages.
Oddly enough, I recently (like about 6 months ago) became aware of "KVM over IP" which, believe it or not, you hook this piece of hardware to the display port (or HDMI) and the USB port, and it sends the video (and HDMI audio), out over the network to a purpose built application on a remote system. Wow, sounds just like X11 but without the willing participation of the OS vendor :-).
The point I'm trying to make is there absolutely is a need for both: a direct-to-the-metal way of cutting out layers and layers of abstraction so that you can just render to the screen in a timely way, with support for UI features (otherwise we'd just program to Unity or Vulkan and be done). But there also needs to be a standardized way of letting "well behaved" programs export their graphics and I/O across the network to a place more convenient for the user.
Arguing passionately for one use case at the expense of the other doesn't move the ball down the road. Instead it just pits one chunk of the user base against the other half.
https://news.ycombinator.com/item?id=19717416
>How many more times faster is a typical smartphone today (Raspberry Pi 3: 2,451 MIPS, ARM Cortex A73: 71,120 MIPS) that a 1990 SparcStation 2 pizzabox (28.5 MIPS, $15,000-$27,000)?
Remember "2^(year-1984)"?
On Nov 8, 2018, I sent Bill Joy a birthday greeting: "Happy 17,179,869,184 MIPS Birthday, Bill"! (2 to the (year - 1984))
I'd say remote X use has dwindled to a trickle. It's more common to use VNC. Or even a browser (ever checked out novnc?)
I stopped using X completely when I discovered tramp for emacs.
I never seem to be able to get hardware acceleration to work properly in Linux on my MacBook: whenever I open up a webpage, CPU usage goes through the roof, unlike in Mac OS. Wayland seems to help a bit, but there seem to be lots of bugs that can only be fixed by the window manager.
I mean, we've had choices for text-editors, for shells, for programming languages, for GUI toolkits, for desktop environments, for window managers, for remote-desktop servers and viewers, for ssl/tls implementations, for web browsers, and even for kernels.
The only thing everyone using Unix-based systems all had in common was that they had to use X for graphics. I think we had choices for implementations for X (Xf86?) before we ended up just using Xorg, but now we even have the choice to not use the X protocol.
Having choices is good. Having diversity is good. I hope we stop trying to see this one as everyone converging to using only one. We can have both, like we do now. I think that's the best.
The reason X-Windows sucks is that it's not extensible, and Wayland is only incrementally and quantitatively better than X11 (like X12 or Y), not radically and qualitatively better (like NeWS or AJAX).
Wayland has the exact same fundamental problem that X-Windows suffers from. It's not extensible, so it misses the mark, just like X-Windows did, for the same reason. So there's not a good enough reason to switch, because Wayland failed to apply the lessons of Emacs and NeWS and AJAX. It should have been designed from the ground up around an extension language.
Web browsers are extensible in JavaScript and WebAssembly. And they have a some nice 2D and 3D graphics libraries too. Problem solved. No need for un-extensible Wayland.
Years of configuring modules in XF86Config and matching things up between the server and the client, and that thankfully going away quite a bit with Xorg, would make my conclusion the opposite; and yet it also suggests it didn't really matter as much.
I've heard that Nvidia is fixing this in KDE Plasma and maybe Gnome too. (Fuck proprietary drivers though, and fuck Nvidia.)
> huge input lag
that's odd. Gnome's compositor is not the fastest, but it generally works okay for many many people.
> the reasons one should consider a switch nowadays
- No screen tearing ever, every frame is perfect
- Real HiDPI support, different scales on different monitors, many compositors support "Apple-style" fractional scaling (render at ceil(scale) and downscale on GPU)
- Proper touchscreen support, without dragging the mouse pointer along
- Touchpad gesture support (this miiiight have been bolted onto X with XInput2 as well)
I love nVidia's drivers. They "just work" and I don't need to muck about trying to understand why this version of this video driver doesn't play right with that version of drm or this kms setting.
Don't get me wrong, there are benefits to being a proper, native component of a modular display system. But what's the point if none of them work, only support random subsets of the hardware, and crash left and right?
Anecdote for anecdote, you just described my experience with NVidia's dreadful hardware / drivers.
Once I managed to make it work, and was able to select between the NVidia and Intel GPUs, I have to say that I do not see any difference in performance. However, the performance is below what I was used to on previous computers; it might be due to the high resolution (3840x2160).
Connecting external monitors is a nightmare. It produces a lot of heat. And it made me lose so much time! In this case I do not even care about proprietary or open source drivers - I just want it to work.
I have never had a more sluggish linux system since 1995. Even typing in the browser or in the terminal makes me make mistakes, there is so much lag.
I'd normally agree, but it's far easier for me to pick and choose software than hardware, and Nvidia makes the best GPUs by a huge margin (particularly if you care about power efficiency). Not to mention the issue of CUDA, which HPC and ML applications rely on pretty much exclusively and for which there's no support in open source drivers AFAIK.
As for the reasons to switch, I acknowledge your list as objective advantages. Unfortunately, I happen to be among the people who don't care for high DPI (1440p is more than enough for me), touchscreens (leave those to phones) or touchpads (TrackPoint forever), so I guess I'll be sticking to X11 for the foreseeable future. A guarantee against screen tearing is nice, but I've rarely seen it on the setups I run (admittedly, mostly on high end hardware), whereas low input lag, stability and driver support are things I am loathe to forgo.
I'm sorry, are you saying that drivers now match specific DEs? That's a sufficiently hideous layering violation to make me automatically dislike Wayland, iff true.
The reason it's becoming possible to run Wayland compositors on NVidia hardware just now is because the Gnome (and now KDE) teams have just given up and started implementing the NVidia-specific pieces. It's not really a Wayland problem, it's an NVidia problem.
You can run any Wayland compositor on AMD hardware without this issue.
> Even more ideally these memory chunks would just be textures in the GPU.
And that idea is very far from modern world of high-DPI monitors.
In high-DPI UI you cannot operate on textures anymore.
A window surface should be represented not by a bitmap (which is O(N) for the CPU to fill) but rather by command lists:
[opFillRect,0,0,100,100]
[opFillPath,...]
[opBlitBitmap,...]
The window compositor would pass such command lists to the GPU for rendering. This way, filling a rectangle on a window's surface becomes an O(1) operation: just send the [opFillRect,0,0,100,100] command to the window (and so to the GPU) for rendering.
There are two things to mention here that are of particular importance on mobile systems. Lots of chips now have compositor hardware - separate silicon from the GPU that can read textures, scale them, blend them and push the resulting pixels directly to the screen. And second, the word "texture" is used here to mean "any format that GPU, compositor and video decode engine can read or produce" - in stark contrast to the old framebuffer approach, it is essential that things stay in their "native" format for as much of the pipeline as possible and are never CPU read or modified, which would require conversion.
and
> Further slowing down progress is X-Windows. X is the graphical interface for essentially all Unix derived desktops (all other competitors died decades ago).
I'm confused. What's "X windows" and "X-Windows?" Is he talking about the X Window System?
I ported Wayland to VideoCore4 (the multimedia engine in the first Pi chip) back in 2011 - it was part of the Meltemi Nokia project that got cancelled the following year - a shame, as it was pretty cool and had half a chance IMO. We worked with the team in Oslo on Qt acceleration over GL ES and used EGL below Wayland (coupled with some magic APIs to get the VideoCore HVS to work well). Ported this to an ARM combo chip that had just the VideoCore GPU in it as well (no HVS) - it worked pretty well.
Prior to this however, I made a VideoCore demo that used a PCI bridge chip (from Broadcom): you could plug it into a Dell laptop running Ubuntu and get accelerated video decode and also X11 window scaling working at 60fps. We nearly sold this into Apple for accelerating their MacBooks, but IIRC, getting the MacBook into low power mode whilst the video was playing on the external chip was going to be so much work that they gave up.
And even further back, I remember validating the HVS hardware block, writing the initial scaler driver for it (IIRC, scaler.c...) and making a dispman2 port for the driver. Circa 2006!
Great team - one of the most enjoyable set of people to work with I've ever come across.
When we ported Android to the VC4 architecture the first time (~2010), the low memory killer in Linux was subverted to kill Android applications based on their use of VideoCore GPU memory, and it worked pretty well, yet it would still close the primary running app once in a while. Run monkey over Android and all hell broke loose - really tough situations to defensively code for. For example, for CX reasons, you had to ignore some GLES errors in the primary client due to low memory; then the system had to kill another application that was using the memory; then it would kill the EGL context for the primary application, so it would refresh the entire app lifecycle using an almost suspect code path inside Android. Good times! I imagine Wayland has very similar challenges for normal desktop use.
VMCS only comes into the picture if you use video decode, but I think Dave Stevenson from the foundation hacked the firmware side to support importing Linux allocated memory blocks into VMCS so that you can do zero-copy decode and import into EGL (or more likely HVS, the EGL support for the formats is pretty limited).
(I really liked the design of the HVS - having pretty much scripted planes is a fresh approach over similar hardware blocks that have a fixed number of planes, each with its own idiosyncrasies and limitations.)
The only issue is that the Raspbian downstream kernel and broadcom's proprietary userspace drivers are a pretty big mess and not compatible with anything.
When not using any of them, you get a much better experience with software compatibility.
I wouldn't mind doing the jump to Wayland if it were as flexible as X, but that doesn't seem to be the case. Correct me if I'm wrong.
It’s more like I’m logging into a remote server (using stuff like SSH), starting applications there and getting the GUI up on my local desktop, like any other local window/app, due to X-forwarding (back to my Xorg server).
It’s not perfect, but it sure is a lot more “natural” and integrated into your desktop than VNC or RDP.
I hope we can keep something similar with Wayland (using maybe XWayland or other compatibility kludges). I think it’s pretty nice.
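For anyone who hasn't tried it, the whole workflow is just (host name made up):

ssh -X user@devbox firefox &    # remote app shows up as a normal local window
ssh -X user@devbox              # or log in and launch GUI programs as needed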
Anyway, thanks for sharing that. I'll look into it.
Here's some info on RemoteApp for anybody else who's interested.
https://techcommunity.microsoft.com/t5/Enterprise-Mobility-Security/Introducing-RemoteApp-and-Desktop-Connections/ba-p/246803
https://social.technet.microsoft.com/wiki/contents/articles/2345.publish-a-remoteapp-application-on-remote-desktop-service.aspx
It'll be due to that.
For the most part it works really well, though still not flawless after all these years.
Reading those though, I’m kind of underwhelmed.
You have to have a dedicated Windows Terminal Server installation (which I’m sure has expensive and complicated licensing), on that you have to “publish apps” (which seems like a process in itself), on the client you need to subscribe to feeds.
And after about 100 such individual steps... magic.
With X11 I just forward a socket, launch a normal program normally, and everything just works.
That’s just so much simpler, so much easier to work with and easier to understand. I can do it casually, on demand, when I need it. No preparation needed.
That Windows thing... looks expensive and like something which takes planning.
https://en.wikipedia.org/wiki/Andrew_Project
>Initially the system was prototyped on Sun Microsystems machines, and then to IBM RT PC series computers running a special IBM Academic Operating System. People involved in the project included James H. Morris, Nathaniel Borenstein, James Gosling, and David S. H. Rosenthal.
>The Andrew Window Manager (WM), a tiled (non-overlapping windows) window system which allowed remote display of windows on a workstation display. It was one of the first network-oriented window managers to run on Unix as a graphical display. As part of the CMU's partnership with IBM, IBM retained the licensing rights to WM. WM was meant to be licensed under reasonable terms, which CMU thought would resemble a relatively cheap UNIX license, while IBM sought a more lucrative licensing scheme. WM was later replaced by X11 from MIT. Its developers, Gosling and Rosenthal, would next develop the NeWS (Network extensible Window System).
Andrew died! Andrew is dead!
https://www.youtube.com/watch?v=nZMuBIJxmnA&feature=youtu.be&t=2m49s
How often mankind has wished for a world as peaceful and secure as the one Andrew provided.
I thought it was pretty cool the first time I used it.
I know of no other method to forward individual programs GUIs to other machines. Abandoning X is abandoning a power we have.
You're right that it may not be used as much. Last time I used it was to debug why selenium tests running in chrome were failing in a server with a virtual display (Xvfb) when the tests would work in any developer's machine. I forwarded chrome's GUI from a docker container in a server in another room via my development machine. Its window was neatly tiled next to my other windows in my tiled window manager. You wouldn't be able to tell it wasn't running locally. I don't have to mess around with a desktop inside a desktop. Such neatness in UX is a comfort I'd like to keep.
I'm all for making a new, more efficient display server, but please don't take powers away.
A power that literally no one uses.
Well, no one that matters anyway. For values of "matter" equivalent to "uses a modern desktop".
Whatever the case, modern development heavily favors coding for the common case, not supporting a flexible framework that can accommodate fringe cases. And in 2019, your use case is fringe.
I just told how I used it.
> Well, no one that matters anyway.
I don't matter? That's kind of rude.
> Whatever the case, modern development heavily favors coding for the common case, not supporting a flexible framework that can accommodate fringe cases. And in 2019, your use case is fringe.
Are you going to say that accessibility features should also be discarded? Being deaf or blind are not the common case.
Like it or not, you will have to come to grips with the fact that the Linux desktop and graphics-stack discussion is dominated by developers from the age of GNOME, who haven't put much thought in beyond how they personally use their own MacBooks. These are the ones calling the shots, and they've decided that X is obsolete, that network transparency is cruft that should be eliminated, and that Wayland is suited to task. So Wayland will be the supported solution going forward.
> Are you going to say that accessibility features should also be discarded? Being deaf or blind are not the common case.
Given the shit state of accessibility under Linux, I'd say yes, it is fringe to the people building the Linux desktop. If you're disabled, it makes much more sense to get a Mac or Windows machine.
People with opinions like yours are why GNOME developers removed the GUI functionality for managing RAID arrays from GNOME Disks, with the explanation that people should just use btrfs or ZFS. This left you with an easy GUI for creating a RAID array that you won't be able to fix without learning CLI tools.
https://web.archive.org/web/20140327002450/http://worldofgnome.org/gnome-disks-3-12-adding-csds-removing-raid-support/
At the time (2014), as far as I'm aware, you couldn't create a ZFS or btrfs RAID in the installer. Reports of btrfs eating data were still disturbingly common, and ZFS wasn't in anyone's official repos.
Regarding the "modern" desktop:
"That might have been so if he had lived a few centuries earlier. At that time the humans still knew pretty well when a thing was proved and when it was not; and if it was proved they really believed it. They still connected thinking with doing and were prepared to alter their way of life as the result of a chain of reasoning. But what with the weekly press and other such weapons we have largely altered that. Your man has been accustomed, ever since he was a boy, to have a dozen incompatible philosophies dancing about together inside his head. He doesn't think of doctrines as primarily "true" of "false", but as "academic" or "practical", "outworn" or "contemporary", "conventional" or "ruthless". Jargon, not argument, is your best ally in keeping him from the Church. Don't waste time trying to make him think that materialism is true! Make him think it is strong, or stark, or courageous--that it is the philosophy of the future. That's the sort of thing he cares about."
C.S. Lewis, The Screwtape Letters.
With the fast networks we have nowadays, performance is quite good.
I dread the time (if it ever comes) when Wayland takes over to the point that X11 is no longer practical to use. And I use EXWM too, which would make losing it even more painful.
I use tty emacsclient plenty too (including from my phone) and I could certainly manage without remote X11 and perhaps even without EXWM (ouch) but not looking forward to the prospect of losing it, in exchange for... what? Smoother window transitions and scrolling? I don't want my windows to transition, I want jump scroll and things to pop in place instantly. I disable that shit even on Android.
So I guess I will stick with X11 as long as I can and reminisce about better days once it's gone.
(Emacs having its own remote display protocol would be a way to tackle the first part and maybe EXWM could be re-implemented for Wayland somehow too but neither of those things exist today)
oh my.
emacsclient only tells the Emacs session it's talking to to create a new frame, either on an X11 display or on the tty where emacsclient is running. After that, Emacs does all the work; emacsclient just waits for Emacs to tell it that it's done, and takes no active role in actually displaying anything.
I would love to be wrong on this, please tell me if I am! I would love it if Emacs actually had its own remote display protocol.
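You can see the division of labor in how it's invoked:

    # start a long-running Emacs server
    emacs --daemon

    # ask it for a new graphical frame on the current $DISPLAY
    emacsclient -c some-file.txt

    # or for a frame on the terminal emacsclient is running in
    emacsclient -t some-file.txt

In both cases the server-side Emacs draws the frame itself; emacsclient just hands over $DISPLAY (or its tty) and blocks until the server says it's done. So the "remote display protocol" in the graphical case is X11's, not Emacs's own.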
Under screen, I keep long running emacs sessions with multiple shell buffers running for months and sometimes years, with all the files I'm working on opened up. Each shell buffer might be configured for some branch of some code, with an interactive python or whatever shell running the code connected to the database, with a bunch of useful commands and context in its history. It's a lot of work to recreate all that state.
(Digression: Bash has no way to save and merge and recreate and manage parallel history threads, does it? Or does the last shell that exits just stomp on the one history?).
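(The closest I've found is some combination of histappend and per-window history files. A sketch, untested across bash versions:

    # in ~/.bashrc: append to the history file instead of overwriting it,
    # and flush each command as soon as it's entered
    shopt -s histappend
    export HISTSIZE=100000 HISTFILESIZE=200000
    export PROMPT_COMMAND='history -a'

    # or give each screen window its own history file
    # ($WINDOW is set by GNU screen)
    [ -n "$WINDOW" ] && export HISTFILE="$HOME/.bash_history.$WINDOW"

history -n can pull in lines other sessions have written since the last read, but that's appending, not merging parallel threads.)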
Back in my Evil Software Hoarder days, I worked on the NeWS display driver for UniPress Emacs 2.20, and later on the NeWS display driver for gnu emacs. Here's a brochure from February 1988 about UniPress Emacs 2.20 and "SoftWire" (NeWS without graphics).
https://www.donhopkins.com/home/ties/scans/WhatIsEmacs.pdf
It supported multiple display drivers (text, X11, NeWS, SunView), as well as multiple window frames on each of those displays (which gnu emacs didn't support at the time), and you could disconnect and reconnect to a long running emacs later. In effect it was a "multi user emacs" since different users could type into multiple displays at the same time (although weird stuff could still happen since the classic Emacs interface wasn't designed for that).
Emacs 2.20 Demo (NeWS, multiple frames, tabbed windows, pie menus, hypermedia authoring):
https://www.youtube.com/watch?v=hhmU2B79EDU
Here are some examples of where the rubber hits the road in NeWS client/server programming of an emacs NeWS display driver. They both download a PostScript file to NeWS that handles most of the user interface: window management, menus, input handling, font measurement, text drawing, etc. Each has a corresponding C driver on the emacs side. There's also a "cps" file that defines the protocol (which is sent in tokenized binary instead of plain text) and generates C headers and code stubs. Together they implement an optimized, high-level, application-specific "emacs protocol" that the client and server use to communicate:
Emacs 2.20 NeWS display driver (supporting multiple tabbed windows and pie menus in the NeWS Lite Toolkit):
https://www.donhopkins.com/home/code/emacs.ps.txt
https://www.donhopkins.com/home/code/TrmPS.c
Gnu Emacs 18 NeWS display driver (supporting a single tabbed window and pie menus in The NeWS Toolkit 2.0):
https://www.donhopkins.com/home/code/emacs18/src/tnt.ps
https://www.donhopkins.com/home/code/emacs18/src/tnt.c
https://www.donhopkins.com/home/code/emacs18/src/tnt_cps.cps
Everyone has a different use case though. I could just as well be using RemoteApp or something along those lines.
I don't use it for reason X, therefore I can project this reasoning onto those that disagree, and dismiss them by calling them outdated. So, obviously your use case is irrelevant.
That doesn't mean that I may not be forced to adopt it due to market forces, but let's not pretend that this isn't a glaring hole in Wayland for many people.
https://www.google.com/search?client=firefox-b-1-d&q=Dictionary#dobs=could%20care%20less
Those solutions don't do what X remote desktop does, though. Namely, those let you share a desktop rather than providing independent remote desktops.
It doesn't scale well if you're talking about dozens of simultaneous desktops, yes, but I rarely have more than three at a time. It's fine for that.
VNC/RDP work well enough, and SSH works well enough for when you don't need a GUI.
Going even further, the remote applications can each run on a different machine in the Citrix cluster, so it is possible to load-balance on a per-app basis.
I'd say Wayland is just a new thing that breaks everything in an attempt to break less.