The micro-kernel is mostly unchanged from its previous incarnations. It can be described as a hollowed-out Unix-like kernel design from the 80s, to the point where a fair number of its syscalls are direct equivalents of Unix ones. Basic features taken for granted nowadays, like 64-bit support, SMP, or kernel threads, are missing. Its code is especially old and difficult to work with.
The driver layer works best with a computer from 20 years ago. There's no support for USB (on x86), NVMe, display controllers, UEFI, and so on. I don't think booting on modern hardware would even be possible at this point without a major overhaul.
The service layer is similarly outdated. The most modern filesystem supported is ext2, and it wasn't very stable from what I remember. The native MINIX 3 filesystem implementation is solid, but its design is very similar to the System V file system from the early 80s. The most advanced isolation mechanism available is chroot, and there's no support for running multiple, isolated userspace instances, which is a shame for a micro-kernel, service-based operating system.
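To illustrate the ceiling: all chroot gives you is a different root directory for path lookups. The process table, users, network stack and devices are still shared with the host, which is why it can't carry isolated userspace instances. A minimal sketch (the jail path is made up):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* chroot() only remaps the root directory for path lookups
         * (and needs root privileges); everything else -- processes,
         * users, network, devices -- is still shared with the host. */
        if (chroot("/srv/jail") != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)          { perror("chdir");  return 1; }

        /* From here on, "/" resolves to /srv/jail, and that is
         * essentially all the confinement you get. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }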
Replacing the outdated MINIX userland with a source port of NetBSD's userland a decade ago was, in hindsight, a colossal mistake. The NetBSD source tree required a lot of modifications to make this work, and back-porting newer versions of NetBSD's source tree is extremely difficult. A NetBSD syscall translation layer providing binary compatibility would probably have been a far more maintainable solution, and pkgsrc support wouldn't have been a problem either.
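For the curious, the translation-layer idea is roughly a dispatch table keyed by the foreign syscall number. This is a hand-written sketch, not actual MINIX or NetBSD code; only three calls are shown, and the handlers just forward to the host's POSIX functions so it compiles on its own:

    #include <errno.h>
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    typedef int64_t (*handler_t)(uint64_t a[6]);

    /* Toy "native" handlers: unpack the foreign register arguments
     * and call the local implementation. */
    static int64_t do_read(uint64_t a[6])  { return read((int)a[0], (void *)(uintptr_t)a[1], (size_t)a[2]); }
    static int64_t do_write(uint64_t a[6]) { return write((int)a[0], (const void *)(uintptr_t)a[1], (size_t)a[2]); }
    static int64_t do_open(uint64_t a[6])  { return open((const char *)(uintptr_t)a[0], (int)a[1], (mode_t)a[2]); }

    /* NetBSD keeps the traditional numbering: read=3, write=4, open=5. */
    static handler_t netbsd_table[512] = {
        [3] = do_read,
        [4] = do_write,
        [5] = do_open,
    };

    int64_t emulate_netbsd_syscall(unsigned num, uint64_t a[6]) {
        if (num >= 512 || netbsd_table[num] == NULL)
            return -ENOSYS;               /* not emulated (yet) */
        /* A real layer would also translate struct layouts, flag bits
         * and errno values here, before and after the native call. */
        return netbsd_table[num](a);
    }

    int main(void) {
        uint64_t a[6] = { 1, (uint64_t)(uintptr_t)"hello from the shim\n", 20, 0, 0, 0 };
        return emulate_netbsd_syscall(4, a) == 20 ? 0 : 1;   /* NetBSD write(2) */
    }

The point being that the emulation lives at one well-defined boundary, instead of being smeared across a forked copy of someone else's source tree.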
Finally, I'm pretty sure no one used it as a daily driver back in the 2010's. While it was reasonably functional through SSH inside a VM, trying to use it on real hardware was an exercise in frustration because of all the sharp edges.
Don't get me wrong, MINIX 3 has extremely cool features, like the ability to transparently update system services on the fly, and extreme resiliency against device driver crashes. The presentation talks given by Andrew Tanenbaum [1] are, in my opinion, still extremely good to this day, and a fair number of the research papers on MINIX 3 are worth the read. I'm not trying to discourage anyone from trying it out or stepping up as a maintainer, but there's a reason why it became unmaintained.
[1] https://www.youtube.com/watch?v=MG29rUtvNXg (there are multiple versions of it done over the years)
Source: I'm a former contributor.
I can understand the xv6 codebase in a few days (obviously couldn't say the same thing about Linux), and it's very easy to rebuild, but for me it only stays inside QEMU.
Or perhaps take a more realistic approach: a production-grade kernel is complex, so you shouldn't be expected to understand the whole thing in a short time?
Even beyond teaching toys, there are huge projects now (SerenityOS, for example) where "runs on bare metal" is barely a consideration, or not one at all. I wonder if this causes stagnation in the niche of device driver development-- if all you have to worry about is a handful of "friendly" VM devices, who is learning to deal with the hassles of persnickety real hardware?
I don't know if I've ever seen much innovative device driver development going on in toy or learning operating systems before virtio-type interfaces existed. Before that, most of them targeted a couple of the simplest pieces of hardware available anyway.
There has long been (and still is) a bunch of emulated hardware that something like QEMU offers, which you can trivially add to your VM if you want to develop device drivers (and it's not too hard to add your own emulated hardware either). A number of real hardware companies nowadays actually contribute their hardware models to QEMU and even develop against them internally for pre-release development, CI, and regression testing.
So I would argue that running in VMs with capable device-model support, like QEMU's, has actually made learning and developing hardware device drivers more accessible rather than less.
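As a concrete example of how low the barrier is: inside a QEMU guest, a toy kernel can find the emulated virtio devices with nothing more than the legacy PCI config mechanism. A freestanding C sketch (GCC/Clang inline asm) follows; it assumes ring 0 inside a VM and a kernel-provided kprintf(), and it's an illustration of the enumeration step, not a driver:

    #include <stdint.h>

    static inline void outl(uint16_t port, uint32_t val) {
        __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint32_t inl(uint16_t port) {
        uint32_t val;
        __asm__ volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Config mechanism #1: bit 31 enable, bus in 23:16, device in 15:11,
     * function in 10:8, dword-aligned register offset in bits 7:2. */
    static uint32_t pci_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off) {
        uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                        ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC);
        outl(0xCF8, addr);
        return inl(0xCFC);
    }

    void kprintf(const char *fmt, ...);   /* assumed to be provided by the kernel */

    void scan_for_virtio(void) {
        for (int bus = 0; bus < 256; bus++) {
            for (int dev = 0; dev < 32; dev++) {
                uint32_t id = pci_read32((uint8_t)bus, (uint8_t)dev, 0, 0x00);
                uint16_t vendor = (uint16_t)(id & 0xFFFF);
                uint16_t device = (uint16_t)(id >> 16);
                if (vendor == 0x1AF4)      /* virtio vendor ID (Red Hat, Inc.) */
                    kprintf("virtio device %04x at %02x:%02x\n", device, bus, dev);
            }
        }
    }

From there the virtio spec tells you exactly what the device looks like, which is a far gentler on-ramp than reverse-engineering a real NIC.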
I imagine things like video and chipset drivers are full of "if we're on stepping A0 or A1 hardware, we have to do this magic wiggle dance to get feature XYZ to actually work" hacks, most of which are only documented within the manufacturer's own proprietary-driver team.
There are probably also a lot of compromises for external confounding variables-- "Our part works to spec 98% of the time, but if you install it in specific systems, they're at a specific corner of the compatibility envelope, and you have to do ABC to get it actually stable."
I'd suspect as a result, something like "Driver for VirtualBox SVGA" is going to be a lot simpler than a comparable level of functionality targeting real hardware. OTOH, we do have stuff like 86box which seems to be trying to be a closer simulation of real hardware.
After that, it depends on what you find interesting. Personally, I think Unix is a fossilized relic whose 50-year-old design has been pushed far beyond its use-by date. I'm a fan of Fuchsia OS and especially its Zircon kernel; its design documentation is well worth a read, just to shake off the idea that POSIX makes for a great syscall layer in the 21st century.
In the end, it doesn't matter too much; you'll always learn something no matter what operating system you look into. Try things out and discover what you like instead of relying on what others find interesting. I just wish people would stop treating the traditional, monolithic Unix system design as some sort of holy scripture that shall not be questioned.
OpenVMS doesn't have fork(), just vfork() and it can run LLVM and modified versions of bash and GCC. Windows NT used to have a POSIX subsystem. Heck, IBM even managed to certify z/OS to be POSIX compliant.
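The vfork()-only model is less limiting than it sounds, because the usual spawn pattern never needs a full copy of the parent's address space anyway. A minimal POSIX sketch of that pattern (nothing OpenVMS-specific, just the shape such ports rely on: the child does nothing except exec):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = vfork();
        if (pid < 0) {
            perror("vfork");
            return 1;
        }
        if (pid == 0) {
            /* Child: borrowed address space, so do nothing but exec. */
            execlp("echo", "echo", "spawned without fork()", (char *)NULL);
            _exit(127);               /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);     /* parent resumes once the child has exec'd */
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }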
Just because you happen to need some level of POSIX compatibility for interoperability doesn't mean that POSIX has to be the basis for your syscall layer.
UNIX had a couple of interesting design options and that is about it; time to move on.
It's inspired by BeOS. I'm not old enough to have used BeOS in its day, but at least from what I've read, it's not "yet another Unix".
https://archive.org/details/plan9designintro
> practical/real world
Definitions of that vary wildly.
Oberon is a personal favourite of mine: tiny, a few hundred KLOC for the entire OS, compiler, IDE, and UI, in a native-code-compiled, type-safe language.
There is quite comprehensive documentation on Wikibooks:
https://en.wikibooks.org/wiki/Oberon
You can run it in a browser:
https://schierlm.github.io/OberonEmulator/
There is an app in the Mac App Store:
https://apps.apple.com/us/app/oberon-workstation/id1057155516
You can run it on Linux:
http://oberon.wikidot.com/
It also runs atop Windows and macOS. Lots of choices, really. :-)
Book and code are available from Wirth's site: https://people.inf.ethz.ch/wirth/ProjectOberon/index.html
But the mirror is a little nicer looking (although older? It says, "The second (2013) edition of the book and source code are published on Prof. Wirth's website. We provide links to the original material here, and local zipped copies, with kind permission from the authors."): http://www.projectoberon.com/
Several emulators for the Oberon RISC CPU are available:
In C with SDL graphics: https://github.com/pdewacht/oberon-risc-emu
In JS and Java, from the live version above: http://schierlm.github.io/OberonEmulator/
In Go: https://github.com/fzipp/oberon
People have used Wirth's Verilog with FPGAs to make actual Oberon workstations.
(My own somewhat embarrassing Python emu: https://git.sr.ht/~sforman/PythonOberon )