Assuming you're being serious: no. Linux distributions do not work this way because Linux was itself deliberately designed to be unlike Minix.
It's more accurate to say that Linux evolved to be very different from Minix because of Torvalds's focus on pragmatism.
1. https://groups.google.com/forum/?fromgroups=#!msg/comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ
http://oreilly.com/catalog/opensources/book/appa.html
http://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate
"Operating Systems: Design and Implementation" (which includes Minix) and "Modern Operating Systems" are great books, and I think it's because of them that Minix is relevant in OS teaching (especially OSDI, although I enjoyed MOS too).
EDIT: typo
Not to say that I think it's bad to have plenty of unixes floating around - just curious.
The main goal is to improve reliability. Bad drivers and bad hardware are the leading causes of lockups these days. If you isolate the drivers, file system, etc. from one another, you can prevent the whole system from dying. Minix 3 goes further than its predecessors by adding a "reincarnation server", which periodically pings the other servers and kills/restarts any that have become unresponsive.
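The reincarnation-server idea can be sketched in a few lines of Python. This is a toy simulation of the ping-and-restart loop, not Minix's actual message-passing implementation; the class and server names are made up for illustration:

```python
class Server:
    """Toy stand-in for an isolated Minix system server (driver, FS, etc.)."""
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.restarts = 0

    def ping(self):
        # In real Minix this would be an IPC ping with a timeout.
        return self.alive

    def restart(self):
        # Restart the failed server in a fresh address space.
        self.alive = True
        self.restarts += 1

def reincarnation_pass(servers):
    """One sweep of the reincarnation server: ping everyone,
    restart any server that fails to respond, and report who was revived."""
    restarted = []
    for s in servers:
        if not s.ping():
            s.restart()
            restarted.append(s.name)
    return restarted

servers = [Server("disk_driver"), Server("net_driver"), Server("vfs")]
servers[1].alive = False              # simulate a hung network driver
print(reincarnation_pass(servers))    # → ['net_driver']
```

The point is that a crash in one driver is contained and repaired without touching the rest of the system, whereas in a monolithic kernel the same fault could take everything down.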
Minix essentially does at a single-system level what cloud architectures try to do at a multi-system level: assume that the parts are unreliable, isolate faults and recover by restarting sub-units.
To me the more interesting possibility is that in a microkernel world you could solve a number of classic OS/system software bunfights. Databases, for example, tend to fight a bit with OSes about the best way to buffer data or flush to disks. In a microkernel system you can arrange matters so that the database provides its own memory and disk management. It's just another process; nothing special.
Here's the report that was written about the design: http://www.minix3.org/docs/jorrit-herder/osr-jul06.pdf
Andrew Tanenbaum explains it well: http://www.youtube.com/watch?v=bx3KuE7UjGA
The above is stating the obvious, so I presume you are asking about commercial/embedded use. The answer is probably similar - if you want a nice clean base to start from and want a general-purpose UNIX rather than a realtime system, and you don't need much from the userland beyond classic UNIX tools. Perhaps your work involves hacking at the system level, or you're keen on having a microkernel architecture. It's a bit niche but could be a very suitable base for an embedded project.
Can someone explain how Minix compares to Hurd, in terms of how far along each project is and the differences in their architectures and visions?