gnramires 41 minutes ago

I think we shouldn't[1] be making Operating Systems, per se, but something like Operating Environments.

An Operating Environment (OE) would be a new interface, maybe a shell and APIs to access file systems, devices, libraries and such -- possibly one that can be launched as just an application in your host OS. That way you can reuse all the facilities provided by the host OS and present them in new, maybe more convenient ways. I guess Emacs is a sort of Operating Environment, as are browsers. 'Fantasy computers' are also Operating Environments, like pico-8, mini micro[2], uxn, etc.

Of course, if you really have a great low-level reason to reinvent the way things are done (maybe to improve security, efficiency, DX, or all of that), then go ahead :)

The main reasons to avoid building a full OS are the difficulty of developing robust low-level systems like file systems, the large number of processors you may want to support, and the need to create or port a huge number of device drivers. At this point Linux, for example, supports a huge number of devices (of course you could use some sort of compatibility layer). Also, developing a new UX is very different from developing a new low-level architecture (and you can just plug the UX into existing OSes).

In most cases an OS shell (and an OE), from the user's point of view, is "just" a good way of finding and launching applications -- maybe also a way of finding and managing files, if you count the file manager as part of it. It shouldn't get too much in the way or be the center of attention, I guess. (This contrasts with its low-level design, which has a large number of functions, APIs, etc.) But it should also probably be (in different contexts) cozy, comfortable, beautiful, etc. (because why not?). A nice advanced feature is the ability to automate things and run commands programmatically, which command shells tend to have by default but which graphical shells often lack. And I'm sure there is still a lot to explore in OS UX...
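
To make the idea concrete, here is a toy sketch of an Operating Environment as just another host process: it re-presents host facilities (the file system, program launching) through its own little shell. The prompt and the command names ("files", "run", "quit") are invented purely for illustration; this is a sketch of the concept, not a proposal for a real OE design.

    /* toy-oe.c -- an "Operating Environment" running as an ordinary host process.
       Build: cc -o toy-oe toy-oe.c */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        printf("toy-oe> ");
        while (fgets(line, sizeof line, stdin)) {
            line[strcspn(line, "\n")] = '\0';

            if (strcmp(line, "files") == 0) {            /* reuse the host file system */
                DIR *d = opendir(".");
                struct dirent *e;
                while (d && (e = readdir(d)))
                    printf("  %s\n", e->d_name);
                if (d) closedir(d);
            } else if (strncmp(line, "run ", 4) == 0) {  /* reuse the host's process machinery */
                pid_t pid = fork();
                if (pid == 0) {
                    execlp(line + 4, line + 4, (char *)NULL);
                    perror("execlp");
                    _exit(127);
                }
                waitpid(pid, NULL, 0);
            } else if (strcmp(line, "quit") == 0) {
                break;
            } else if (line[0] != '\0') {
                printf("  unknown command\n");
            }
            printf("toy-oe> ");
        }
        return 0;
    }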

[1] I mean, unless you really have a reason with all caveats in mind of course.

[2] https://miniscript.org/MiniMicro/index.html#about

MomsAVoxell 6 hours ago

I had the privilege to work as a junior operator in the 80’s, and got exposed to some strange systems .. Tandem and Wang and so on .. and I always wondered if those weird Wang Imaging System things were out there, in an emulator somewhere, to play with, as it seemed like a very functional system for archive digitalization.

As a retro-computing enthusiast/zealot, for me personally it is often quite rewarding to revisit the 'high concept execution environments' of different computing eras. I have a nice, moderately sized retro computing collection, 40 machines or so, and I recently got my SGI systems re-installed and set up for playing. Revisiting Irix after decades away from it is a real blast.

  • Keyframe 3 hours ago

    as a fellow dinosaur and a hobbyist, I concur. Especially the SGIs. For those who didn't know, MAME (of all things) can run IRIX to an extent: https://sgi.neocities.org/

maxlin 6 hours ago

This list should include SerenityOS IMHO.

It might not be super unique, but it is a truly from-scratch "common" operating system built in public, which for me at least makes it a reference: an OS whose code one person can fully understand, if they want to understand the codebase of a whole, complete-looking OS.

  • Rochus 5 hours ago

    > This list should include...

    And a few dozen others as well.

alphazard 2 hours ago

Notably missing from this list are seL4 and Helios, which is based on it.

https://ares-os.org/docs/helios/

The cost of not having proper sandboxing is hard to overstate. Think of all the effort that has gone into Linux containers, or VMs just to run another Linux kernel, all because sandboxing was an afterthought.
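
To make that "effort" concrete, here is a rough user-space sketch of what the retrofitted isolation looks like on Linux today. The syscall and flags are real, but the program is only an illustration (not any container runtime's actual code), and a real container still needs cgroups, pivot_root, seccomp filters, capability drops and uid/gid maps on top of this:

    /* ns.c -- carve a process out of the host, one namespace flag at a time.
       Build: cc -o ns ns.c; needs unprivileged user namespaces enabled
       (otherwise run as root without CLONE_NEWUSER). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Each flag peels one more piece of shared state off the host. */
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWUTS |
                    CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNET) == -1) {
            perror("unshare");
            return EXIT_FAILURE;
        }

        /* CLONE_NEWPID only takes effect for children of the caller,
           so fork once to actually enter the new pid namespace. */
        pid_t child = fork();
        if (child == 0) {
            printf("pid inside: %d\n", (int)getpid());   /* prints 1 */
            execlp("sh", "sh", (char *)NULL);
            perror("execlp");
            _exit(127);
        }
        waitpid(child, NULL, 0);
        return 0;
    }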

Then there's the stagnation in filesystems and networking, which can be at least partially attributed to the development frictions associated with a monolithic kernel. Organizational politics is interfering with including a filesystem in the Linux kernel right now.

  • MYEUHD 2 hours ago

    It's not based on it, but inspired by it.

    Helios was written from scratch.

    • alphazard an hour ago

      I don't really understand or appreciate the distinction. The seL4 design was used as a starting point, and small changes were made mostly as a matter of API convenience. I consider the design of an operating system to be by far the most difficult part, and the typing to be less impressive/important.

      Helios hasn't done anything novel in terms of operating system design. It has taken an excellent design, reimplemented it in a more modern language, and built better tooling around it. I tend to point people towards the Helios project instead of seL4 because I think the tooling (especially around drivers) is so much better that it's not even a close comparison for productivity. It's where the open source OS community should be concentrating efforts.

Lerc 4 hours ago

Are there any operating systems designed from the ground up to support and fully utilize many processor systems?

I'm thinking of systems designed on the assumption that there are tens, hundreds, or even thousands of processors, where design decisions are made at every level to leverage that availability.

  • toast0 an hour ago

    I think you're reaching towards the concept of a Single System Image [1] system. Such a system is a cluster of many computers, but you can interact with it as if it was a single computer.

    But mainstream servers manage hundreds of processor cores these days. The Epyc 9965 has 192 cores, and you can put it in an off-the-shelf dual-socket board for 384 cores total (and two SMT threads per core, if you want to count that way). Thousands of cores would need exotic hardware; even a quad-socket Epyc wouldn't quite get you there and, afaik, nobody makes those. An 8-socket Epyc would be madness.

    [1] https://en.m.wikipedia.org/wiki/Single_system_image

  • GMoromisato an hour ago

    I'm working on GridWhale (https://gridwhale.com).

    It's not a true OS--but it's a platform on top of an arbitrary number of nodes that act as one.

    The cool thing is that from the program's perspective you don't have to worry about the distributed system running underneath--the program just thinks it's running on an arbitrarily large machine.

  • fiberhood 3 hours ago

    The RoarVM [1] is a research project that showed how to run Squeak Smalltalk on thousands of cores (at one point it ran on 10,000 cores).

    I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system. We are rewriting the STEPS research software and the Frank code [2] in a million-core environment [3]. On the M4 processor we try to use all types of cores: CPU, GPU, neural engine, video hardware, etc.

    We just applied for YC funding.

    [1] https://github.com/smarr/RoarVM

    [2] https://www.youtube.com/watch?v=f1605Zmwek8

    [3] https://www.youtube.com/watch?v=wDhnjEQyuDk

  • 0x0203 3 hours ago

    Yes, to a degree, but probably not quite like you're thinking. Supercomputers and HPC clusters are highly tuned for the hardware they use, which can have thousands of CPUs. But ultimately the "OS" that controls them takes on a bit of a different meaning in those contexts.

    Ultimately, the OS has to be designed for the hardware/architecture it's actually going to run on, and not strictly just a concept like "lots of CPUs". How the hardware does interprocess communication, cache and memory coherency, interrupt routing, etc... is ultimately going to be the limiting factor, not the theoretical design of the OS. Most of the major OSs already do a really good job of utilizing the available hardware for most typical workloads, and can be tuned pretty well for custom workloads.

    I added support for up to 254 CPUs on the kernel I work on, but we haven't taken advantage of NUMA yet, as we don't really need to: the performance hit for our workloads is negligible. But the Linuxes and BSDs do, and can already get as much performance out of the system as the hardware will allow.

    Modern OSs are already designed with parallelism and concurrency in mind, and with the move towards making as many of the subsystems as possible lockless, I'm not sure there's much to be gained by redesigning everything from the ground up. It would probably look a lot like it does now.
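
    As a small illustration of that lockless direction (my own sketch, not code from any particular kernel): per-CPU style statistics where each thread bumps its own cache-line-padded counter with C11 atomics and readers just sum the slots, so the hot path never takes a lock.

        /* percpu.c -- build: cc -pthread -o percpu percpu.c */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        #define NCPU 4

        /* One counter per "CPU", padded to its own cache line so updates on
           different cores never contend for the same line. */
        static struct { _Alignas(64) atomic_long n; } stats[NCPU];

        static void *worker(void *arg)
        {
            int cpu = *(int *)arg;
            for (long i = 0; i < 1000000; i++)
                atomic_fetch_add_explicit(&stats[cpu].n, 1, memory_order_relaxed);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[NCPU];
            int ids[NCPU];
            for (int i = 0; i < NCPU; i++) {
                ids[i] = i;
                pthread_create(&t[i], NULL, worker, &ids[i]);
            }
            long total = 0;
            for (int i = 0; i < NCPU; i++) {
                pthread_join(t[i], NULL);
                total += atomic_load(&stats[i].n);   /* readers just sum the slots */
            }
            printf("total = %ld\n", total);          /* 4000000, no lock taken */
            return 0;
        }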

  • Findecanor 3 hours ago

    There have certainly been research operating systems for large cache-coherent multiprocessors, for example IBM's K42 and ETH Zürich's Barrelfish. Both were designed to separate each core's kernel state from the others' by using message passing between cores instead of shared data structures.
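
    A minimal sketch of that multikernel idea (illustrative only, not K42 or Barrelfish code): two "cores", modelled here as threads, that share nothing except an explicit single-producer/single-consumer message ring, so each side's state stays private.

        /* ring.c -- build: cc -pthread -o ring ring.c */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        #define RING 64

        /* The only thing the two "cores" share is this explicit message ring;
           all other state stays local to one side. */
        static struct {
            long slot[RING];
            atomic_size_t head;   /* written only by the producer */
            atomic_size_t tail;   /* written only by the consumer */
        } chan;

        static void *core1(void *arg)             /* "core 1": sends requests */
        {
            (void)arg;
            for (long msg = 1; msg <= 10; msg++) {
                size_t h = atomic_load_explicit(&chan.head, memory_order_relaxed);
                while (h - atomic_load_explicit(&chan.tail, memory_order_acquire) == RING)
                    ;                              /* ring full: wait */
                chan.slot[h % RING] = msg;
                atomic_store_explicit(&chan.head, h + 1, memory_order_release);
            }
            return NULL;
        }

        int main(void)                            /* "core 0": owns its own state */
        {
            pthread_t t;
            pthread_create(&t, NULL, core1, NULL);

            long private_sum = 0;                 /* never touched by the other "core" */
            for (int received = 0; received < 10; ) {
                size_t tl = atomic_load_explicit(&chan.tail, memory_order_relaxed);
                if (atomic_load_explicit(&chan.head, memory_order_acquire) == tl)
                    continue;                      /* no message yet */
                private_sum += chan.slot[tl % RING];
                atomic_store_explicit(&chan.tail, tl + 1, memory_order_release);
                received++;
            }
            pthread_join(t, NULL);
            printf("core 0 private sum = %ld\n", private_sum);   /* 55 */
            return 0;
        }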

RetroTechie 2 hours ago

Why the "novel" qualifier?

There exist many OSes (and UI designs) based on non-mainstream concepts. Many were abandoned or forgotten: at design time suitable hardware didn't exist, there was no software to take advantage of them, etc.

A 'simple' retry at achieving such an alternate vision could be very successful today due to a changed environment, audience, or available hardware.

xattt 5 hours ago

I can’t help but notice that each of these stubs represents a not-insignificant amount of effort put in by one or more humans.

  • mrbluecoat 2 hours ago

    Indeed. Could have been retitled "Labor of Love OSes"

serhack_ 7 hours ago

I would love to see some examples outside of WIMP-based UIs.

  • WillAdams 4 hours ago

    Well, there were Momenta and PenPoint --- the latter in particular focused on Notebooks, which felt quite different, and Apple's Newton was even more so.

    Oberon looks/feels strikingly different (and is _tiny_), and can easily be tried out via quite low-level emulation (it just wants some drivers to be fully native, say on a Raspberry Pi).

  • amelius 6 hours ago

    Maybe a catalog of kernels?

  • wazzaps 5 hours ago

    MercuryOS towards the bottom is pretty cool

    • MonkeyClub 4 hours ago

      MercuryOS [1, 2] appears to be simply a "speculative vision" with no proof of concept implementation, a manifesto rather than an actual system.

      I read through its goals, and it seems that it is against current ideas and metaphors, but without actually suggesting any alternatives.

      Perhaps an OS for the AI era, where the user expresses an intent and the AI figures out its meaning and carries it out?

      [1] https://www.mercuryos.com/

      [2] https://news.ycombinator.com/item?id=35777804 (May 1, 2023, 161 comments)

rubitxxx3 5 hours ago

This list could be longer! I expected much more, given that CS students and hobbyists are doing this sort of thing often. Maybe the format is too verbose?

diego_moita 2 hours ago

As a kernel programmer I find it so lame that when people say "Operating Systems", what they're thinking of is just the superficial layer: GUIs, desktop managers and UX in general. As if the only things that could have an OS were desktop computers, laptops, tablets and smartphones.

What about more specialized devices: e-readers, wifi routers, smartwatches (hey, hello open-sourced PebbleOS), all sorts of RTOS-based things, etc.? Isn't anything interesting happening there?

m2f2 6 hours ago

[flagged]

  • padjo 6 hours ago

    Don’t try to force your values on other people. In the end your time spent with friends is just as meaningless as their time spent developing an obscure OS.

  • junon 5 hours ago

    No thanks :)