Worst I ever did in a CS class. I did not deserve whatever grade he gave me, but I was the only person that stuck it out and didn't drop the class.
It could be because they are coming at it from the "experts bubble" (I'm sure someone will know the proper name), where they can't conceive of the kinds of problems someone with no experience will encounter.
Going from a tiny "hello world" OS you've made by following instructions and notes to being dropped into a totally different codebase that probably uses entirely different patterns and conventions seems a little unfair. It's like taking someone who has played around with some graphics API for drawing pixels and expecting them to make changes to the DirectX shader compiler.
I had a similar experience - a totally new professor gave us about 5 lectures on parsers, compilers, ASTs, BNF, etc., and then just expected us to design a whole language that would be compiled to C for the assignment, using classic tools like yacc. For reference, the average programming ability of the students in the class was "What's a switch statement?".
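For anyone who's never seen yacc input, here's roughly the shape of what we were supposed to scale up into a whole language - a toy calculator grammar, purely illustrative and nothing like the actual assignment:

    /* calc.y - minimal yacc sketch; build: yacc calc.y && cc y.tab.c -o calc */
    %{
    #include <stdio.h>
    int yylex(void);
    void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
    %}
    %token NUMBER
    %left '+' '-'
    %%
    input:                              /* empty */
         | input expr '\n'  { printf("= %d\n", $2); }
         ;
    expr : NUMBER
         | expr '+' expr    { $$ = $1 + $3; }
         | expr '-' expr    { $$ = $1 - $3; }
         ;
    %%
    int yylex(void) {
        int c = getchar();
        if (c == ' ') return yylex();                   /* skip blanks */
        if (c >= '0' && c <= '9') { yylval = c - '0'; return NUMBER; }
        return c == EOF ? 0 : c;                        /* '+', '-', '\n' */
    }
    int main(void) { return yyparse(); }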
Absolutely everybody failed to achieve anything, and the only reason I even passed was that I took an example language the professor provided, reviewed it, and stated how I would have liked the language to work instead.
It's all a bit absurd.
I believe you mean the curse of knowledge [0].
> I've been doing this so long that I can't remember which things are easy and hard to pick up for the first time, so you all will have to tell me— if you don't ask questions after each reading, I won't know what to reinforce and you will be lost.
I think that's how it goes for classes that are not frequently taught or haven't been taught before, in almost any subject area.
We did Pascal (probably a subset of the language, can't remember) and then C--. We did an interpreter + runtime for one, and compilation to some sort of bytecode for the other; don't remember which was which.
If it were GPL they'd have to give back their modifications to the community.
Microkernels are not really modern as such. Just different.
Minix is a lot older than 25 years. Your point stands, but you need to increase the age. Minix and Tanenbaum's textbook were part of my operating systems class in undergrad CS. I think I took that class in 1989 or 1990.
As an aside I enjoy the irony that we are trying to build Minix on Linux systems many years after the debate...
To add another layer - as I understand it, the processors we are mostly running those Linux systems on are secretly running a MINIX-based system for their management functions [1].
Reference: https://en.wikipedia.org/wiki/Intel_Management_Engine
Just to rant about this article though, I really dislike this statement by the author:
> Why on this green Earth is there a web server in a hidden part of my CPU? WHY?
> The only reason I can think of is if the makers of the CPU wanted a way to serve up content via the internet without you knowing about it.
That's absolutely not the use case here. Intel is not part of some secret conspiracy to collude with governments and serve content from your computer.
The management system is legitimately useful functionality for the support teams of large corporations, public entities (schools, local government), etc., enabling management of their fleet of user devices. Supporting a worldwide workforce, or a large base of non-computer-savvy users, is an unforgiving and thankless task.
Now, does the Intel Management Engine pose a security risk? Absolutely! We tech folk should absolutely know about this capability and be able to decide whether this functionality should be enabled for our fleet. Having it default "on" without a way to sensibly turn it off broadly, or update it when necessary, is a big issue. But this is nothing more than Intel giving the majority of its user base functionality that is desirable, not some deep-laid conspiracy.
I don't like the Intel Management Engine running on my personal device. But I sure appreciate it for its intended use case and audience.
If it can serve data from your computer, it _will_ serve data from your computer.
Not everyone across the globe is comfortable with a lack of privacy on their own devices.
I really wish that Intel had gone with the "opt-in" approach. Either opt-in with choices in architecture, or opt-in with the option disabled by default. It's the "always there, you don't see it, you can't disable it" thing that's the problem. Intel messed up here, for sure.
But the idea of the Intel Management Engine is sound and extremely useful. And it's the visceral (possibly unsubstantiated) attacks against it in discourse that I'm addressing.
I mean, truthfully, maybe the only way to change things anymore is to be overly loud and exaggerate issues: black & white arguments without any middle ground. Maybe social media has brought us to this point, where we can't see issues as both positive and negative. It starts to sound like our politics, in this way. So maybe the only thing that could possibly change how the IME is configured or deployed is to be a huge stinker about it and make a lot of noise. Sad, but that's probably the case.
Even a sensible opt-out approach would be better than what we have today; we're simply stuck with it whether we like it or not.
I think that's what makes most people disregard the fleet management aspect and leap towards the conspiracy angle.
For cloud workloads, with Kubernetes on top of hypervisors, the architecture is more akin to microservices than anything else.
Likewise on Android: since Project Treble, starting with Android 8, all newer drivers are userspace processes talking over Android IPC.
... is about the closest I know.
The 3.3.0-rc8 or whatever version has been sitting there for several years, despite all blockers already being fixed, just because there's nobody at the helm to push the release out.
There's also significant unmerged work that's just waiting for someone to merge it.
The micro-kernel is mostly unchanged from its previous incarnations. It can be described as a hollowed-out Unix-like kernel design from the '80s, to the point where a fair number of its syscalls are direct equivalents of Unix ones. Basic features taken for granted nowadays, like 64-bit support, SMP, or kernel threads, are missing. Its code is especially old and difficult to work with.
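To make that concrete: in the MINIX model, a "syscall" is one fixed-size message to a server process plus a blocking wait for the reply. Here's a toy user-space sketch of the shape of that idiom - the names (VFS_READ, do_vfs, sendrec's signature) are illustrative, not actual MINIX source:

    /* Toy sketch of the MINIX message-passing idiom. Not MINIX code. */
    #include <stdio.h>
    #include <string.h>

    struct message { int m_type; int m_fd; char *m_buf; size_t m_count; };
    enum { VFS_READ = 4 };                 /* hypothetical request code */

    /* stand-in for the file-system server process */
    static void do_vfs(struct message *m) {
        const char *data = "hello from the server\n";
        size_t n = strlen(data) < m->m_count ? strlen(data) : m->m_count;
        memcpy(m->m_buf, data, n);
        m->m_type = (int)n;                /* reply carries the byte count */
    }

    /* stand-in for the kernel IPC primitive; real MINIX does a context
       switch and a message copy here */
    static void sendrec(void (*server)(struct message *), struct message *m) {
        server(m);
    }

    static long my_read(int fd, char *buf, size_t count) {
        struct message m = { VFS_READ, fd, buf, count };
        sendrec(do_vfs, &m);               /* block until the server replies */
        return m.m_type;
    }

    int main(void) {
        char buf[64];
        long n = my_read(0, buf, sizeof buf);
        fwrite(buf, 1, (size_t)n, stdout);
        return 0;
    }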
The driver layer works best with a computer from 20 years ago. There's no support for USB (on x86), NVMe, display controllers, UEFI and so on. I don't think that booting on modern hardware would even be possible at this point without some major overhaul.
The service layer is similarly outdated. The most modern filesystem supported is ext2, and it wasn't very stable from what I remember. The native MINIX 3 file-system implementation is solid, but its design is very similar to the System V file system from the early '80s. The most advanced isolation mechanism available is chroot, and there's no support for running multiple isolated userspace instances, which is a shame for a micro-kernel, service-based operating system.
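For reference, chroot(2) is about as small as isolation primitives get - it just re-points the process's idea of "/". A minimal C usage (needs root; /var/jail is a made-up example path):

    /* Minimal chroot(2) jail: re-point "/" for this process, then drop
       into a shell. Requires root privileges. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        if (chroot("/var/jail") != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)          { perror("chdir");  return 1; }
        /* from here on, "/" means /var/jail for this process tree */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }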
Replacing the outdated MINIX userland with a source port of NetBSD's userland a decade ago was a colossal mistake in hindsight. The NetBSD source tree required a lot of modifications to make this work, and back-porting newer versions of NetBSD's source tree is extremely difficult. Instead, a NetBSD syscall translation layer to achieve binary compatibility would probably have been a far more maintainable solution, and pkgsrc support wouldn't have been a problem either.
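The translation-layer idea is essentially a dispatch table: trap the foreign binary's syscall, look its number up, and route it to a native handler. A hedged sketch (slots 3 and 4 happen to match the classic Unix read/write numbering, but treat the whole table as illustrative):

    /* Sketch of a syscall translation layer: map foreign (NetBSD-style)
       syscall numbers onto native handlers. Illustrative, not NetBSD ABI. */
    #include <stdio.h>

    typedef long (*handler_t)(long, long, long);

    static long native_read(long fd, long buf, long len) {
        (void)fd; (void)buf; (void)len;
        return 0;                          /* pretend: nothing to read */
    }
    static long native_write(long fd, long buf, long len) {
        (void)fd; (void)buf;
        return len;                        /* pretend: all bytes written */
    }

    static handler_t netbsd_table[] = {
        [3] = native_read,                 /* hypothetical SYS_read slot */
        [4] = native_write,                /* hypothetical SYS_write slot */
    };

    static long emulate_syscall(unsigned nr, long a, long b, long c) {
        if (nr < sizeof netbsd_table / sizeof *netbsd_table && netbsd_table[nr])
            return netbsd_table[nr](a, b, c);
        return -1;                          /* ENOSYS in a real layer */
    }

    int main(void) {
        printf("%ld\n", emulate_syscall(4, 1, 0, 5));  /* "write(1, ..., 5)" */
        return 0;
    }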
Finally, I'm pretty sure no one used it as a daily driver back in the 2010's. While it was reasonably functional through SSH inside a VM, trying to use it on real hardware was an exercise in frustration because of all the sharp edges.
Don't get me wrong, MINIX 3 has extremely cool features, like the ability to transparently update system services on the fly and extreme resiliency against device driver crashes (a toy version of that supervision loop is sketched below). The presentation talks by Andrew Tanenbaum [1] are, in my opinion, still extremely good to this day, and a fair number of the research papers on MINIX 3 are worth the read. I'm not trying to discourage anyone from trying it out or stepping up as a maintainer, but there's a reason why it became unmaintained.
[1] https://www.youtube.com/watch?v=MG29rUtvNXg (there are multiple versions of it done over the years)
Source: I'm a former contributor.
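The driver-resiliency trick rests on the "reincarnation server": a privileged service that watches the other system processes and restarts them when they die. A toy user-space analogue (the watched command is just a stand-in):

    /* Toy analogue of MINIX 3's reincarnation server: supervise a
       component and restart it whenever it exits or crashes. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {                       /* the "driver" process */
                execl("/bin/sleep", "sleep", "5", (char *)NULL);
                _exit(127);                        /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);              /* block until it dies */
            fprintf(stderr, "component died (status %d); restarting\n", status);
            sleep(1);                              /* back off, then reincarnate */
        }
    }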
I can understand the xv6 codebase in a few days (obviously couldn't say the same thing about Linux), and it's very easy to rebuild, but for me it only stays inside QEMU.
Or perhaps take a more realistic approach: a production-grade kernel is complex, so you shouldn't be expected to understand the whole thing in a short time?
Even beyond teaching toys, there are huge projects now (SerenityOS, for example) where "runs on bare metal" is barely a consideration, if one at all. I wonder if this causes stagnation in the niche of device driver development-- if all you have to worry about is a handful of "friendly" VM devices, who is learning to deal with the hassles of persnickety real hardware?
I don't know if I've ever seen much innovative device driver development going on in toy or learning operating systems, even before virtio-type interfaces. Before those, most of them drove only a couple of the simplest hardware devices available anyway.
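"Simplest hardware" here usually means something like the PC serial port: a handful of I/O ports you poll, no interrupts required. A freestanding, x86-specific sketch of roughly the entire "driver" many toy kernels ship (inb/outb are the usual inline-asm port helpers):

    /* Polled 16550 UART output at the legacy COM1 port. x86 freestanding
       kernel-side code, not a hosted program. */
    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    #define COM1 0x3F8

    void serial_putc(char c) {
        while ((inb(COM1 + 5) & 0x20) == 0)   /* LSR bit 5: transmitter ready */
            ;
        outb(COM1, (uint8_t)c);
    }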
QEMU and its kind have long offered (and still offer) a bunch of different emulated hardware that you can trivially add to your VM, and it's not too hard to add your own emulated devices. A number of real hardware companies nowadays actually contribute their hardware models to QEMU and even develop against them internally for pre-release development, CI, and regression testing.
So I would argue running in VMs with capable device model support like QEMU has actually made learning and developing hardware device drivers more accessible rather than less.
I imagine things like video and chipset drivers are full of "if we're on stepping A0 or A1 hardware, we have to do this magic wiggle dance to get feature XYZ to actually work" hacks, most of which are only documented within the manufacturer's own proprietary-driver team.
There's probably also a lot of compromise for external confounding variables-- "Our part works to spec 98% of the time, but if you install it in specific systems, they're at a specific corner of the compatibility envelope, and you have to do ABC to get it actually stable."
I'd suspect as a result, something like "Driver for VirtualBox SVGA" is going to be a lot simpler than a comparable level of functionality targeting real hardware. OTOH, we do have stuff like 86box which seems to be trying to be a closer simulation of real hardware.
After that, it depends on what you find interesting. Personally, I think Unix is a fossilized relic whose 50 year old design has been pushed far beyond its use-by date. I'm a fan of Fuchsia OS and especially its Zircon kernel; its design documentation is well worth a read, just to shake off the idea that POSIX makes for a great syscall layer in the 21st century.
In the end, it doesn't matter too much; you'll always learn something no matter which operating system you look into. Try things out and discover what you like instead of relying on what others find interesting. I just wish people would stop thinking that the traditional, monolithic Unix system design is some sort of holy scripture that shall not be questioned.
OpenVMS doesn't have fork(), just vfork(), and it can run LLVM and modified versions of bash and GCC. Windows NT used to have a POSIX subsystem. Heck, IBM even managed to certify z/OS as POSIX compliant.
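For context, the usual argument is that the classic fork()+exec pair can be replaced by vfork()+exec, which never needs to duplicate an address space - a quick sketch:

    /* vfork()+exec: the child borrows the parent's address space and must
       immediately exec or _exit, so no fork()-style duplication is needed. */
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = vfork();
        if (pid == 0) {
            execl("/bin/echo", "echo", "hello", (char *)NULL);
            _exit(127);           /* exec failed; only _exit is safe here */
        }
        int status;
        waitpid(pid, &status, 0);
        return 0;
    }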
Just because you happen to need some level of POSIX compatibility for interoperability doesn't mean that POSIX has to be the basis for your syscall layer.
UNIX had a couple of interesting design options, and that is about it; time to move on.
It's inspired by BeOS. I'm not old enough to have used BeOS in its day, but at least from what I've read, it's not "yet another Unix".
> practical/real world
Definitions of that vary wildly.
Oberon is a personal favourite of mine: tiny, a few hundred KLOC for the entire OS, compiler, IDE, and UI, in a native-code-compiled, type-safe language.
There is quite comprehensive documentation on wikibooks:
https://en.wikibooks.org/wiki/Oberon
You can run it in a browser:
https://schierlm.github.io/OberonEmulator/
There is an app in the Mac App Store:
https://apps.apple.com/us/app/oberon-workstation/id1057155516
You can run it on Linux, and it also runs atop Windows and macOS. Lots of choices, really. :-)
Book and code are available from Wirth's site: https://people.inf.ethz.ch/wirth/ProjectOberon/index.html
But the mirror is a little nicer looking (although older? It says, "The second (2013) edition of the book and source code are published on Prof. Wirth's website. We provide links to the original material here, and local zipped copies, with kind permission from the authors."): http://www.projectoberon.com/
Several emulators for the Oberon RISC CPU are available:
In C with SDL graphics: https://github.com/pdewacht/oberon-risc-emu
In JS and Java, from the live version above: http://schierlm.github.io/OberonEmulator/
In Go: https://github.com/fzipp/oberon
People have used Wirth's Verilog with FPGAs to make actual Oberon workstations.
(My own somewhat embarrassing Python emu: https://git.sr.ht/~sforman/PythonOberon )
But maybe that’s the answer for MINIX too - maybe one of the people who have authored all those unreviewed PRs might start a community-based fork. If all the activity moves to the fork, there is a chance the originators might officially bless it
Can you install gcc and X?
I don't think I've ever run an X server on it, but I can verify you can at least get it to pop up xterms on a remote machine.
gcc: I don't think any recent versions of gcc work, but the latest minix (3.4.0rc6) does have clang-3.6 as /usr/bin/cc.
It was fun, and I learned a lot about OSs in that class.
> Fiwix is an operating system kernel written from scratch, based on the UNIX architecture and fully focused on being POSIX compatible. It is designed and developed mainly as a hobby OS and, since it serves also for educational purposes, the kernel code is kept as simple as possible for the benefit of students and OS enthusiasts. It is small in size (less than 50K lines of code), runs on the i386 hardware platform and is compatible with a good base of existing GNU applications.
>"Until now, I have not been able to find a MINIX 3 project that allows you to compile the code that is referenced in the book Operating Systems: Design and Implementation (3e) (v3.1.0)."
For such a well-known and longstanding project I was surprised to read this. Are the sources available at https://git.minix3.org just too old or is this more to do with the book?
> if someone thinks they're going to be able to pick up the latest edition of Operating Systems: Design and Implementation in search of documentation for either MINIX-the-project or MINIX-the-software, they will be sorely disappointed. The fact that the book is now about a decade out of date is one reason for the latter. There are a number of reasons for the former.
<https://news.ycombinator.com/item?id=9894961>