If Microsoft states that they don't have any for a project like this, I would be wary of taking it too seriously.
grep 'name = ' ms-litebox-Cargo.lock | wc -l
238
edit: grep 'name = ' ms-litebox-Cargo.lock | sort -u | wc -l
221

-c, --count
    prefix lines by the number of occurrences

grep 'name = ' ms-litebox-Cargo.lock | sort | uniq -c | grep -v '1 name' | sort -n

Package windows-sys has the highest number of versions included, 3: 0.59.0, 0.60.2, and 0.61.2.

Edit: Also, beware of the unsorted uniq count:
cat <<EOF | uniq -c
> a
> a
> b
> a
> a
> EOF
2 a
1 b
2 a

* Many of them are part of families of crates maintained by the same people (e.g. rust-crypto, windows, rand or regex).
* Most of them are popular crates I'm familiar with.
* Several are only needed to support old compiler versions and can be removed once the MSRV is raised
So it's not as bad as it looks at first glance.
The "North" part seems to be what I think you'd traditionally think of as a library OS, and then the "South" part seems to be shims to use various userlands and TEEs as the host (rather than the bare hardware in your example).
I'm really confused by the complete lack of documentation and examples, though. I think the "runners" are the closest thing there is.
OP wasn't suggesting it was, just that the lack of quality in one significant area of the company's output leads to a lack of confidence in other products that they release.
It does sound hard, and might need homomorphic encryption with hardware help for any memory access, once the code has also been verified as unaltered through (uncompromised) hardware attestation.
This. A while ago a build of Win 11 was shared/leaked that was tailored for the Chinese government called "Windows G" and it had all the ads, games, telemetry, anti-malware and other bullshit removed and it flew on 4GB RAM. So Microsoft CAN DO IT, if they actually want to, they just don't want to for users.
You can get something similar yourself at home by running the debloat tools out there, but since they're not officially supported, either you'll break future Windows updates or the future Windows updates will break your setup, so it's not worth it.
So they are not incentivized to keep Windows WIN32_LEAN_AND_MEAN, but instead to put up artificial limits on how old hardware can be and still run W11.
I have no insider knowledge here; this is just a thing which gets talked about around major Windows releases historically.
Citation needed, since that makes no logical sense. You want to sell your SW product to the broadest installed base of hardware to increase your sales, not to a market of HW that people don't yet have. Sounds like FUD.
>but instead to put up artificial limits on how old of hardware can run W11
They're not artificial. POPCNT / SSE4.2 became a hard requirement starting with Windows 11 24H2 (2024), but that only affects much older CPUs; more importantly, only Intel 8th gen and up have well-functioning support for Virtualization-Based Security (VBS), HVCI (Hypervisor-protected Code Integrity), and MBEC (Mode-Based Execution Control). That's besides TPM 2.0, which isn't actually a hard requirement or a feature everyone uses; the other ones are far more important.
So at which point do we consider HW-based security a necessity instead of an artificial limit? With the ever-increasing number of vulnerabilities and attack vectors, you've got to rip the bandaid off at some point.
What is missing here that was present when this same computer was running Windows 10?
Yes, you can bypass the HW checks to install it on a Pentium 4 if you want; nothing new here.
>What is missing here that was present when this same computer was running Windows 10?
All the security features I listed in the comment above.
This computer had the security features that you listed while it was running Windows 10, and now that it is running Windows 11 it is lacking them?
(I'm not trying to be snarky. That's simply an astonishing concept to me.)
This was most evident back in the 90s when they shipped NT4: extremely stable, as opposed to Win95, which introduced the infamous BSOD. But Win95 supported everything, while NT4 had HW support on par with Linux (i.e. almost nothing from the cheap vendors).
I'm running 11 IoT Ent LTSC on a T420; it runs pretty okay.
> Example use cases include:
> * Running unmodified Linux programs on Windows
> * ...
That won't work if the unmodified Linux program assumes that mv replaces a file atomically; NTFS can't offer that.
If even MS internal teams would rather avoid it, it seems like it isn't a great offering. https://news.ycombinator.com/item?id=41085376#41086062
In their intended applications, which might or might not be the ones you need.
The slowness of the filesystem that necessitated a whole custom caching layer in Git for Windows, or the slowness of process creation that necessitated adding “picoprocesses” to the kernel so that WSL1 would perform acceptably and still wasn’t enough for it to survive, those are entirely due to the kernel’s architecture.
It’s not necessarily a huge deal that NT makes a bad substrate for Unix, even if POSIX support has been in the product requirements since before Win32 was conceived. I agree with the MSR paper[1] on fork(), for instance. But for a Unix-head, the “good” in your statement comes with important caveats. The filesystem is in particular so slow that Windows users will unironically claim that Ripgrep is slow and build their own NTFS parsers to sell as the fix[2].
[1] https://lwn.net/Articles/785430/
[2] https://nitter.net/CharlieMQV/status/1972647630653227054
Not true. There are increasingly more cases where Windows software, written with Windows in mind and only tested on Windows, performs better atop Wine.
Sure, there are interface incompatibilities that naturally create performance penalties, but a lot of stuff maps 1:1, and Windows was historically designed to support multiple user-space ABIs; Win32 calls are broken down into native kernel calls by kernel32, advapi32, etc., for example, similar to how libc works on Unix-like operating systems.
But there's another issue, and it's what cripples Windows for dev: NTFS has a terrible design flaw in that small files, under about 640 bytes, are stored resident in the MFT. The MFT ends up having serious lock contention, so lots of small-file changes are slow. This screws up anything Unixy, and git, horribly.
WSL1 was built on top of that problem which was one of the many reasons it was slow as molasses.
Also why ReFS and "dev drive" exist...
Also, as far as my (very limited) understanding goes, there are more architectural performance problems than just filters (and, to me, filters don’t necessarily sound like performance bankruptcy, provided the filter in question isn’t mandatory, un-removable Microsoft Defender). I seem to remember that path parsing is accomplished in NT by each handler chopping off the initial portion that it understands and passing the remaining suffix to the next one as an uninterpreted string (cf. COM monikers), unlike Unix where the slash-separated list is baked into the architecture, and the former design makes it much harder to have (what Unix calls) a “dentry cache” that would allow the kernel to look up meanings of popular names without going through the filesystem(s).
https://github.com/Microsoft/WSL/issues/873#issuecomment-425272829
Still, the fact that it's open source is a good thing. People can now take that code and make something better (ripping out the AI, for example) or just use bits and pieces for their own totally unrelated projects. I can't see that as anything but a win. I have no problem giving shitty companies credit where it's due, and they've done a good thing here.
Teams, Office (especially online), OneDrive, SharePoint, Azure, GitHub, LinkedIn: all have become very shitty and partially unusable, with an increasing number of weird bugs and problems lately.
A comment like yours is just like saying: "I know a buggy open-source software, why would I trust that other open-source project? The open-source community burned all possible goodwill".
That's the theory, but I don't know how far LiteBox is along to supporting that workflow.
> It focuses on easy interop of various "North" shims and "South" platforms.
For replacing Wine on Linux, the "North" would be the kernel32 API or similar, and the "South" would be the Linux syscall API.
However, this is meant as a library, and thus requires linking the Windows program against it; and Wine is more than the system interface, it has all the GUI parts etc. of the Win32 API.
Honestly, it's far less interesting to know I was wrong.
That's also what I thought this was, and came to the comments expecting to see something neat about why libraries might need bespoke operating systems.
Consumers and businesses deserve better. It's crazy to me that in 2026 Notepad++ being compromised means as much potential damage as it does, still.
There has to be a better way. I think Linux's Flatpak is a reasonable approach here, although the execution might be rather poor. I want a basic set of trusted tools that I can do anything with, and to run less trusted tools like GUI programs in sandboxes with limited filesystem access.
There is also sandboxing configuration via Intune for enterprises.
Linux excels over Windows in the area of security by a wide margin; I have no qualms about running an app on Linux versus Windows, any day of the week.
No, this is wrong, though it might be true if you are talking about a Linux package manager vs. a random Windows .exe from the internet. But if you are talking about Secure Boot, encrypted disk, sudo, etc., Windows is more secure, though it looks like https://amutable.com/ will make Linux more secure in the way Windows is.
Edit: Some insecure things on Linux: Dbus (kwallet etc.), sudo, fprint, "secure boot".
This is how most unikernels work; the "OS" is linked directly into the application's address space and the "external interface" becomes either hardware access or hypercalls.
Wine is also arguably a form of "library OS," for example (although it goes deeper than the most strict definition by also re-implementing a lot of the userland libraries).
So for example with this project, you could take a Linux application's codebase, recompile it linked to LiteBox, and run it on SEV-SNP. Or take an OP-TEE TA, link it to LiteBox, and run it on Linux.
The notable thing here is that it tries to cut the interface in the middle down to an intermediate representation that's supposed to be sandboxable; i.e., instead of auditing and limiting hundreds of POSIX syscalls like you might with a traditional kernel capability system, you're supposed to be able to control access to just the few primitives they're condensed down to in the middle.
If you have to recompile, you might as well choose to recompile to WASM+WASI. The sandboxing story here is excellent due to its web origins. I thought the point of LiteBox is that recompilation isn’t needed.
> If you have to recompile, you might as well choose to recompile to WASM+WASI.
I disagree here; this ignores the entire swath of functionality that an OS or runtime provides. Like, just as an example, I can't "just recompile" my OP-TEE TA into WASM when it uses the KDF function from the OP-TEE runtime?
What is unclear is whether it uses its own common ABI or the one of the host OS. I don't know why, but from the project description I have a bit of a feeling that this is another vibe-coded project.
Basically it lets your program run directly on a hypervisor VM, though this one will also run as a Linux/Windows/BSD process.
It sounds interesting and a step forward (never heard of a library OS till now), but why won't this run into hundreds of the same security bugs that plague Windows if it's not spec'd and verified?
I haven't used Copilot much, because people keep saying how bad it is, but generally if you add escape hatches like this without hard requirements for when the LLM can take them, it won't follow that rule in an intuitive way most of the time.
As an agent, or writing everything for me, not yet.
Use Linux or BSD, and ignore that approach of vendor lock-in into their “library OS”.
I wonder if they, the industry as a whole, eventually will make being able to freely use a PC a subscription, bastardizing "freedom" completely.
A library OS is an operating system design where traditional OS services are provided as application-linked libraries, rather than a single, shared kernel serving all the programs.
I'll play with this later today after work and see how mature it is and hopefully have something concrete and constructive to say. Hopefully others will, too.
"Microsoft bad, Linux good" kind of comments are all over the place. There is no more in depth discussions about projects anymore. Add the people linking their blogs only to sell you thier services for an imaginary problem, and you get HN 2026.
Maybe it's time to find another tech medium. If you know one, I would be glad to hear about it.
LibOS is lightweight, with extremely short startup time, and can be used to run Linux programs, making it a versatile option for various applications. It is designed to provide compatibility and sandboxing without the need for VMs, making it a lightweight alternative to containers and VMs.
The Library Operating System for Linux was announced on the Linux kernel mailing list, indicating its official recognition and support within the Linux community.
LiteBox is a sandboxing library OS that drastically cuts down the interface to the host, thereby reducing attack surface. It focuses on easy interop of various "North" shims and "South" platforms. LiteBox is designed for usage in both kernel and non-kernel scenarios.
LiteBox exposes a Rust-y nix/rustix-inspired "North" interface when it is provided a Platform interface at its "South". These interfaces allow for a wide variety of use-cases, easily allowing for connection between any of the North--South pairs.
Example use cases include:
Reddit discussion: https://www.reddit.com/r/linux/comments/1qw4r71/microsofts_new_opensource_project_litebox_as_a
Project lead James Morris announcing it on social.kernel.org: https://social.kernel.org/notice/B2xBkzWsBX0NerohSC