Mu: designing a safe computing stack from the ground up

Since this category is lying sad and empty I guess I’ll start spamming it with my projects.

Mu was my first attempt at something like a malleable system. The hypothesis was that software is hard to change for 2 reasons:

  • dependencies make it hard to build
  • even once you get it building for yourself, it’s very easy to break things with a change and not realize for weeks. This makes even experienced programmers skittish about modifying some strange new piece of software. As a result we live in a preliterate world when it comes to software, where programmers only read code they themselves wrote, or code they work on regularly, perhaps getting paid in the process.

Mu’s answer to these challenges was:

  • build everything up from machine code for maximum parsimony and ease of building.
  • make everything testable all the way up from machine code and include lots of tests with good names so that someone breaking something sees an evocative error message.
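
To make the second bullet concrete, here’s a rough sketch (in Lua rather than Mu syntax, with made-up names) of the convention: a test’s name describes the behavior it protects, so when someone breaks it the failure message tells them what they just lost.

```lua
-- A rough illustration (Lua, not Mu syntax; names are hypothetical)
-- of tests whose names double as evocative error messages.
local function insert_char(line, cursor, c)
  -- insert character c just before the cursor position
  return line:sub(1, cursor - 1) .. c .. line:sub(cursor), cursor + 1
end

local function test_insert_char_advances_cursor_past_new_character()
  local line, cursor = insert_char("ab", 2, "x")
  assert(line == "axb" and cursor == 3,
         "insert_char no longer advances the cursor past the new character")
end

test_insert_char_advances_cursor_past_new_character()
```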

This approach is a bit extreme compared to the stuff in the catalog here. :smile: I still believe that any malleability at the product level is inherently self-limiting. The way to support arbitrary recombination and reuse is to expose the naked code of an app. (Whether it’s textual or graphical, whatever the author uses to create it.) The challenge then is to keep the app simple enough that people of all levels can make changes to its code, even before they know much about programming. At the very least, experienced programmers should routinely be comprehending whole new email-sized programs written by others.

(I still don’t have an answer to the sheer intimidation experienced by non-programmers towards even a single screen of code. So I depend on the following copes: intrinsic curiosity, a burning desire to change something, and – ideally – access to a programmer who can patiently answer questions.)

Mu eventually failed in the face of the major challenge faced by every OS project ever: the combinatorial explosion of hardware that needs to be supported. I knew going in that OSes faced this challenge, but I’d hoped reducing performance requirements might allow me to focus on some lowest-common-denominator standards. Unfortunately, wide swathes of hardware categories have no such standards: disks, networks, touchscreens, GPUs. It took me a while to learn enough to realize this about disks and networks in particular.

I still flatter myself that Mu is useful for certain kinds of apps. One advantage its low dependency surface gives it is robustness and low maintenance. I hope Mu apps will be easy to run unmodified in 50 years (assuming x86 is still around). It makes for a good teaching environment in a 1:1 situation, as I validated with an earlier prototype over a couple of years. Some apps I built with it for my own needs:


Thanks for sharing! :slightly_smiling_face:

Given what you’ve learned with this effort, do you see ways to tweak the approach to avoid / route around the wall of hardware support, or would you keep the approach as it is (even though it’s a hard road to take)?


Switching to Lua and LÖVE was basically my attempt to avoid/route around the issues Mu ran into. I always knew I had to depend on some substrate layer. Instead of x86 it’s now LÖVE. Lua is already memory safe, something I spent a lot of effort adding to Mu.
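
To give a sense of what “LÖVE as substrate” looks like, here’s a minimal, hypothetical hello-world (not code from my actual projects): LÖVE supplies the window, the event loop and the drawing primitives, and everything above that layer is plain, memory-safe Lua.

```lua
-- main.lua: a complete LÖVE program (hypothetical example).
-- The framework provides the window, event loop and graphics;
-- the app is just a handful of callbacks in plain Lua.
function love.draw()
  love.graphics.print("hello from the LÖVE substrate", 10, 10)
end

function love.keypressed(key)
  if key == "escape" then
    love.event.quit()
  end
end
```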

The thing I compromised on was the zero dependence on C, but that was never a goal going in, just a consequence of having to build something approachable at the level of machine code.

The thing I’ve had to make my peace with over time is that I can’t avoid trusting something. My motivation for chasing machine code was the possibility that C compilers might be malicious. But then I found out about Intel’s Management Engine :roll_eyes:

I also spent a long time underestimating the reality that people have to trust me. Downloading a binary made by some unrelated third party that’s been around longer than my projects and has a track record of parsimoniously introducing features, retiring them in a responsible way, being tolerant of fragmentation – I think all that is amazing for trust.

I’m curious how this approach would work for malleable systems in practice, because even if you deliver the software in this state, it depends on downstream consumers continuing to add tests and to fix existing tests as they make changes, in order for the software to evolve and spread.


Yep! 90% of everything is shit. If you give people a malleable substrate, 90% of the things they’ll malleare it into will be shit. But the foundations will be good, so the messes they’ll get into will be proportionate to the work they put in. Many of them will learn from mistakes. The substrate, being open, will be a resource for learning from examples. Some people won’t use it. There will be ways to help more people learn. I think all this is unavoidable and good.


Thanks for sharing your hypothesis. I think it is an important practice to make explicit our motivations for building/exploring malleable systems. In my case, when exploring malleable systems during my research, my starting point was that you cannot change what you cannot understand, so my subsequent hypothesis was:

we need to make explicit the metasystem properties of digital tools/infrastructures, i.e. their ability to describe themselves via a continuum between the running app/tool and its source code, and to create explicit educative practices in a grassroots community where such apps/tools are introduced/used/modified.

As I wanted to go beyond developers, I introduced coding via data storytelling and visualization to a wider population (journalists, activists, students, teachers, researchers, artists, etc.), and that led me to create Grafoscopio to test my hypothesis and to bootstrap a community of practice around it and the research behind it. The community still exists (I wanted research that went beyond papers and academia and was closer to the grassroots communities outside), and I think that we’re still slowly exploring “How can we change the digital tools that change us?” (the question of my research).

What I have seen missing in most of the malleable tools research (except for the Dynabook, despite it being focused mostly/only on children) is the social side of malleability: learning, practice, and community appropriation. Who the people are is a silent assumption (mostly developers, computer scientists, and researchers, mostly in the Global North). I’m not saying that this is your case; on the contrary, making the hypothesis visible can be a path to seeing what other things we need to make visible (like the people and communities behind and in front of malleable systems).

So, thanks again for making explicit your hypothesis and starting this conversation here.


That’s a totally valid criticism. I never managed to work in the context of a community, and I still struggle with that in my current projects. Do you have any suggestions? I’m curious what it took to foster a community around Grafoscopio, and if there are lessons I can learn from your experience.

I wanted research that went beyond papers and academia and was closer to the grassroots communities outside

This 100% matches my priorities. Because of my difficulty bootstrapping communities, my focus is on building things that are useful even just to a single person working alone. But perhaps that focus perpetuates the problem.

we need to make explicit the metasystem properties of digital tools/infrastructures, i.e. their ability to describe themselves via a continuum between the running app/tool and its source code, and to create explicit educative practices in a grassroots community where such apps/tools are introduced/used/modified.

I don’t follow this at all. Could you rephrase or elaborate?