Collaborative work on malleable software

Indeed, and we also need hierarchical modularity, i.e. the ability to construct components with well-defined interfaces out of lower-level components whose interfaces don’t leak to the higher level. Support for that is surprisingly rare. I have been told that Rust crates can do it. Most component systems fail because of some flat name space: DLL hell, or package managers with a global name/version space, for example.
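
For what it’s worth, here is a minimal sketch of that idea using Rust crates (the crate and all item names are invented for illustration): anything not marked pub is invisible outside the crate, so lower-level interfaces cannot leak upward.

```rust
// lib.rs of a hypothetical `geometry` crate.
// `mesh` is an implementation detail: without `pub`, no code outside
// this crate can name the module or depend on its types.
mod mesh {
    pub struct Triangulation {
        pub triangles: usize,
    }

    pub fn triangulate(points: &[(f64, f64)]) -> Triangulation {
        Triangulation { triangles: points.len().saturating_sub(2) }
    }
}

// Only `area` is part of this crate's interface; the lower-level `mesh`
// vocabulary never leaks to crates that depend on `geometry`.
pub fn area(points: &[(f64, f64)]) -> f64 {
    let t = mesh::triangulate(points);
    // Placeholder: a real implementation would sum the triangle areas.
    t.triangles as f64
}
```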

Even though I appreciate the different approaches taken by all the different projects, I would personally like to have some guidelines or principles with which to review them.

“What are the basic methods that can enable malleability?”

Questions like these should have an evidence-based answer that is independent of the technology that could be used to achieve it.

For example, in biology, the functionality of a protein often remains intact even after multiple mutations, which allows the organism to survive them. How can we transfer this rule to software and malleability?

We need to allow people to change their software freely, without fear of catastrophic failure, and we need to give them guarantees that, after a change, the software will still work as expected.

Personally, I have tried to find such principles in order to guide my research on the subject. But I am not an expert on biology or complex systems, so I cannot provide the evidence, and a lot of principles that could have helped me elude me.

I am not aware of any research on this question. But I agree it’s something we should have.

Sometimes it does, sometimes it doesn’t. But proteins are definitely more robust than software is today. And we do have some (rather recent) insights into why proteins and DNA are robust under mutations. The genetic code itself is part of this mechanism. No programming language has anything similar, as far as I know. And I suspect that no Turing-complete language can ever promise robustness.

I don’t think we can imitate the mechanism (which, admittedly, I don’t actually know). I was thinking mostly of the effect of such a mechanism.
Multiple mutations are allowed to accumulate without a loss of functionality. This permits exploratory search into new protein structures, and thus increases biodiversity and the speed of evolution.

Now, I believe that it is useful to have this conceptual model for software as well. So malleability is not just the reduction of the cost of a mutation, but also the ability of that mutation to be effective and functional, and thus to spread to the rest of the software genome.

To put it another way, it is useful to look at the effects of a specific method in terms of evolution and population dynamics, and to let that guide the design of our tools.

Also, I believe that at this point in history, we don’t just have one user and her software. We have millions of users and a mediating entity that acts as a controller, e.g. Facebook or Mastodon. It would be useful to know the effects of the controller on its users. We need to model them as dynamical systems.

So we have malleability of software by a COMMUNITY, and the community needs to be able to understand the dynamical system it alters.

In this regard, I find the effort put in by Dynamicland highly promising.

I agree with just about everything you wrote here, but I am also realizing that we need to keep this vision of the future distinct from the more modest vision of malleability as “reducing the cost of a mutation”, which is valuable as well, and probably easier to achieve.

In other words, I think we need distinct terms for “software that users can adapt to solve their own problems”, where “users” are individuals or closely collaborating small teams, and “software that can evolve in loose collaboration by a potentially huge number of users working on their own variants”. And perhaps there’s a level in between that could be useful as well.

I think the distinction should be between:
a) applications that are used by a single user or a small team, and
b) applications that require interactions by millions of users.

This distinction mostly affects the methodology required to understand the dynamics of the software and the decision procedure required to perform global changes, which is communal in b).

I would argue, though, that the effectiveness of a mutation is useful in both cases, because even in case a) the software is free software, so anyone can take the changes that another person has made.

I believe that these two concepts (cost and effectiveness) can work in parallel; they are not mutually exclusive, and depending on the type of software and problem, one could lean towards one or the other.

Random overheard comment (about git):

“jj is slaughtering us on rebase speed”

“Millions” sounds a bit scary. I’d be happy to get started with something like hundreds, which may not require the same approaches as literally millions.

There’s also “requires” (interactions by x-s of users) vs. “enables”, “encourages” and other intermediates. The whole space remains unexplored. There’s a lot of work to do.

Wondering if anyone has taken a closer look at Radicle, which seems to be a P2P forge where each repository-plus-issue-tracker is identified by a hash and some discovery mechanism permits access to the repository given that hash. It isn’t clear to me from a quick glance at the site how the discovery mechanism works.

I recall looking at Radicle a few years back, and it seems like a nice design overall… but I wasn’t quite sure how to best try it out for real use, since I’d have to convince at least one other person to do the same in order to get much out of the P2P side. :sweat_smile:

And as for discovery, I am not quite sure either… There doesn’t seem to be an “approved” spec so far, but there was some discovery content in one of their proposals.

Thanks for the pointer, that helps a bit. And yes, trying out P2P stuff is really difficult unless you are surrounded by suitable nerds!

Maybe it helps. Building on top of this…

Just chiming in quickly to mention that I am coining two new concepts related to Moldable Development, in the context of Social experience design (SX). For now I’ll just give a brief summary with no further elaboration.

I find moldable development too generic a term, and the article summary definition of “a means to gather information from the system” too focused on the technical aspects.

Moldable Operational System (MOS)

This is a new type of application, or rather a social experience that is in production.

A moldable operational system (MOS) removes the artificial boundary between software creators, their projects, and the resulting software deliverables consumed by their clients. A MOS recognizes the entire software lifecycle, where client needs and business domains always evolve. A MOS anticipates this by making IT stakeholders and their needs intrinsic to the operational system, and adjacent to the client’s needs. Design, development, operation, and use are a collaborative process between client and creator, supported by the system while it is in production.

Moldable development process

No definition here as yet. But a moldable process is needed in a MOS in order to find the sweet spot in the development process where the client’s business problems are front and center. Processes mature and are themselves up for continual improvement; furthermore, each organization, team culture, and business domain requires different process variations. In other words, the process must be malleable, not just the software.

Interestingly, an HN submission today delves into the same topic: Oh my poor business logic.

I really like the Moldable Development concept (or the glimpses of it that I’ve seen embodied in Glamorous Toolkit), but I have to say that, judging by GT, it also terrifies me due to its lack of security thinking.

I don’t quite know how to square the circle of “all the software on my computer should definitely be moldable by me, the operator” with “but also any software that gets onto my computer, by any means, from anyone who is NOT me must definitely NOT have access to any of those moldable features until I’m sure I know what it does”.

One example of what terrifies me is Glamorous Toolkit / Lepiter’s “thisSnippet” object reference, available to all snippets of code typed into a Lepiter notebook.

With one method call (thisSnippet database), a snippet of code - which might have come from somewhere else, and I might have been tricked into clicking “execute” on it - can get root access to my database or my hard drive and then do what it likes.

I’d like things to not be quite so moldable. I don’t entirely know how. Microsoft’s Windows 7 answer of “constantly pop up dialogs asking the user if they really want to run this superuser-level action” certainly isn’t the way, but neither is “just have root lol”.

Perhaps what I want is a set of nested, visible interlocks, so that code snippets start out with zero access to the wider framework, and I have to grant rights to these features case by case, and can turn them off again afterwards.
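
A minimal Rust sketch of that interlock idea (all names invented; nothing like this exists in Lepiter): a snippet context starts with no capabilities at all, and each power is an explicit, revocable grant.

```rust
// Hypothetical capability token: holding it means "may touch the database".
struct DatabaseCap;

struct SnippetContext {
    db: Option<DatabaseCap>, // None until the operator grants it
}

impl SnippetContext {
    fn database(&self) -> Result<&DatabaseCap, &'static str> {
        self.db.as_ref().ok_or("no database capability granted")
    }
}

fn run_snippet(ctx: &SnippetContext) {
    // A freshly pasted snippet starts with zero access:
    match ctx.database() {
        Ok(_cap) => println!("snippet may query the database"),
        Err(e) => println!("blocked: {e}"),
    }
}

fn main() {
    run_snippet(&SnippetContext { db: None });              // default: denied
    run_snippet(&SnippetContext { db: Some(DatabaseCap) }); // after an explicit grant
}
```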

For another example: browsing a Lepiter notebook that’s not mine (like the default “Glamorous Toolkit Book”) has become a traumatic experience for me after I accidentally deleted a page, or nearly did, just by clicking. (Because in Lepiter, there’s a “-” icon which can mean either “minimise this window” or “delete this page”, which in user interface terms is much like a car having an accelerator pedal labelled “brake”.) To fix this, I really would like to be in a “read” mode by default, where I can’t accidentally modify a page. If I need modify rights, I should go through a highly visible, deliberate ceremony of enabling changes at multiple levels: at the document/database level, at the page level, and at the snippet level, I think. Also, all changes I make need to be logged (with timestamp and logged-in user) and need to be revertible, like in a wiki.
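
The logging-and-reverting part of that wish is straightforward to sketch; here is a minimal Rust rendering (types and names invented for illustration) of a page whose every edit is recorded with author and timestamp, and can be undone:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// A change record: who changed the page, when, and the text before the change.
struct Change {
    user: String,
    timestamp: u64,
    before: String,
}

struct Page {
    text: String,
    log: Vec<Change>,
}

impl Page {
    fn edit(&mut self, user: &str, new_text: &str) {
        let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
        // Log the change before applying it, so it can be reverted later.
        self.log.push(Change {
            user: user.to_string(),
            timestamp: ts,
            before: self.text.clone(),
        });
        self.text = new_text.to_string();
    }

    fn revert_last(&mut self) {
        if let Some(change) = self.log.pop() {
            self.text = change.before; // restore the pre-change text
        }
    }
}

fn main() {
    let mut page = Page { text: "original".to_string(), log: Vec::new() };
    page.edit("alice", "edited");
    page.revert_last();
    println!("{}", page.text); // prints "original"
}
```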

I think I’d like a similar series of visible locks for enabling potentially “foreign” code to have access to my system, too. Make sure that at every recursive level of objects in the system, the operator is clear about what they’re granting access to, that this access is highly visible and non-spoofable in the user interface, and that it can be tracked and removed as soon as it’s not needed. Even moldable systems absolutely need to be able to run with minimal rights when the moldability isn’t directly being invoked. But finding the balance in this process will be a dialog that’s quite tricky to work out.

I suspect even the classic Smalltalk method style might be leading us in totally the wrong direction, security-wise. For a third example, here’s a hypothetical Smalltalk-like language from Stéphane of combo.cc. And I mean no disrespect, but… this, this right here, again, it’s a security nightmare:

https://combo.cc/posts/in-search-of-the-holy-grail/

"Hello, " append: name, append: "!" println

This right here. Take a good look. Do you see what’s wrong with this style?

We’re creating a string, and then calling a println method on it.

So every string, everywhere in the system, has the ability to write to your standard output.

No string should have the inherent ability to create output! Strings shouldn’t even have a reference to the system they’re on! They’re data! You’re handing every single piece of data a live grenade!
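
For contrast, here is a small Rust sketch of the style this post is arguing for: the string stays pure data, and the right to produce output is a separate value the caller must explicitly hand in.

```rust
use std::io::Write;

// The string itself has no I/O powers; only the holder of `out` does.
fn greet(name: &str, out: &mut impl Write) -> std::io::Result<()> {
    writeln!(out, "Hello, {name}!")
}

fn main() -> std::io::Result<()> {
    let stdout = std::io::stdout();
    greet("world", &mut stdout.lock())?; // the caller explicitly grants stdout
    Ok(())
}
```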

If that idiom/style came from Smalltalk-80, then that’s the original sin of security, right there. We can’t blame C++ for this. If Microsoft had rewritten all of Windows 95 from scratch in Smalltalk, in this style, it would still have led inevitably to Office 98 macro viruses. There is no way a “grant every piece of data root access to my I/O port” philosophy could ever have given us secure desktops.

But I suspect that this style of method call is all over Smalltalk. If it is, then not only the Smalltalk/Squeak/Pharo codebase, but the whole software architecture, philosophy and pattern language that evolved around the Smalltalk community from the 1980s through 2020s, is going to need a lot of serious reevaluation and probably rebuilding from the ground up before it could be considered safe for use on an Internet-connected machine in 2023.

Very good observations, and I’m totally with you on those.

I have to delve deeper into all the research, but the focus is mostly on the innovative UX patterns, so it may be forgiven that a security-first mindset isn’t yet front and center. In terms of security, for the component-oriented (app-free computing) approach I am looking into Wasm/WASI for sandboxing, allowing only contract/interface/capability-based, strictly controlled access to any system resources.
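
The shape of that contract-based access might look like the following Rust sketch (the interface and all names are invented, not an actual WASI API): the host hands a sandboxed component only the narrow interfaces it was granted, and nothing else is reachable from inside.

```rust
use std::collections::HashMap;

// An invented contract for illustration; a real system might define it
// as a WIT interface for the Wasm component model.
trait KeyValueStore {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: String);
}

// A toy host-side implementation backed by a plain map.
impl KeyValueStore for HashMap<String, String> {
    fn get(&self, key: &str) -> Option<String> {
        HashMap::get(self, key).cloned()
    }
    fn set(&mut self, key: &str, value: String) {
        self.insert(key.to_string(), value);
    }
}

// The component's entry point receives only the contracts the host chose
// to grant; no ambient filesystem or network authority is reachable.
fn component_main(store: &mut dyn KeyValueStore) {
    let visits = store
        .get("visits")
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(0);
    store.set("visits", (visits + 1).to_string());
}

fn main() {
    let mut store: HashMap<String, String> = HashMap::new();
    component_main(&mut store);
    component_main(&mut store);
    println!("visits = {:?}", store.get("visits")); // Some("2")
}
```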

This would make the examples you give wholly impossible, unless such powers are explicitly bestowed.

There’s a “too moldable” too, I think. What one would want is “controlled moldability”, not anything-goes. I am thinking that this moldability should be domain-specific: what is possible at a particular location depends on use case and context. That should still allow ample flexibility and evolution of the domains by a diverse set of stakeholders, while possibly reserving more sensitive extension/modification for a smaller group of people who can be trusted with such responsibility.

Capabilities are good, but what Smalltalk shows me is that it’s not enough just to have capabilities (Smalltalk objects are capability boundaries; they just put the boundaries in the wrong places). We also have to have an architecture that clearly separates out what the capabilities are and where they flow. And we need to be able to re-engineer this architecture as we discover security faults in it, which is very hard, since changing an object architecture breaks your entire userbase’s workflows; if at all possible, it’s best to get it right before beginning. Is that possible, though? Hopefully we can at least learn something from the failure modes of previous systems.

I do like what Glamorous is doing with its UI experimentation. Despite the problems I’ve hit, I have enjoyed exploring it, and I quite like the “notebook” style based on sliding Miller columns, and the commitment to defining views for every object so that a “naked objects” modelling style can be built up interactively and visually. There’s some very interesting work being done there, and I’d like to steal ideas from it if I can.

(For example, I’m wondering if a text version of the GT UI would work - it might end up looking a bit like Norton Commander for documents, but that might not be a bad thing? Two panes, they slide left and right as you create child nodes… am I just reinventing Emacs buffers, or is there something genuinely new and interesting here?)

“Controlled moldability” is a good term, I like it. I’m not quite sure what it would mean for software architecture, but I suspect it might require the ability to interactively modify every object as we turn “debug/edit” features on or off. If it were Lua objects, for example, we might perhaps need to be able to swap out the metatable of an arbitrary running object in order to “turn on” features that were not normally there. Possibly not, but we might need a mechanism of that level of power, and we might need ways to separate out different levels of access to an object. These questions inform the design and selection of low-level VMs. Though simplicity should always be a priority too.
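
In Rust terms, a rough analogue of that metatable swap (all names invented) is an object whose extra powers live behind a replaceable behavior value:

```rust
// Sketch of "controlled moldability": an object's powers live behind a
// swappable behavior value, loosely analogous to replacing a Lua metatable.
trait Mode {
    fn describe(&self) -> &'static str;
}

struct ReadOnly;
impl Mode for ReadOnly {
    fn describe(&self) -> &'static str { "read-only: editing disabled" }
}

struct DebugEdit;
impl Mode for DebugEdit {
    fn describe(&self) -> &'static str { "debug/edit: full moldable access" }
}

struct Document {
    text: String,
    mode: Box<dyn Mode>, // swapped at runtime to turn features on or off
}

fn main() {
    let mut doc = Document { text: "notes".to_string(), mode: Box::new(ReadOnly) };
    println!("{}: {}", doc.text, doc.mode.describe());
    doc.mode = Box::new(DebugEdit); // the deliberate, visible ceremony
    println!("{}: {}", doc.text, doc.mode.describe());
}
```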

I also think that we should think very carefully about UIs, and find a way to make a UI that’s system-controlled (not created by each app/component) and that has non-spoofability built in. For example, it should be guaranteed impossible by design for any app or component to appear identical to another window, to go fullscreen, to move itself or raise itself and steal keystrokes, etc. Windows has never done this, and I think Linux, macOS/iOS, and Android also do a very poor job. But if our whole desktop were configurable and connectable, and windows weren’t siloed into apps, then we would need to know, much more than we do today, exactly what each visual component is, and be sure that we always have a secure path for interaction through known-good components.

Also, Stéphane is probably on here, so I have to say that I really like this post from August about VMs:

https://combo.cc/posts/what-i-would-like-to-see-in-a-modern-vm/

Thinking out-of-the-box about creative next-gen UX/UI is certainly an inspiring exercise with these concepts of malleability/moldability in mind. I once bumped into the UX conceptual design of MercuryOS, where the designer questioned why we still have the equivalent of a messy office desk full of papers in today’s windows-and-desktop abstractions. See MercuryOS and the introductory post:

Also, Bret Victor provides a lot of inspiration, e.g. in his video on why we always draw dead fish:

When I look at Glamorous Toolkit, I see an expert UI: one that is information-dense and probably quite productive, but which also looks very intimidating to the newcomer. An interface that has a learning curve before one truly loves it. It will be harder to introduce such UI designs to a broad audience. (But again, it is the UX concepts and patterns that matter, and GT is a great PoC of those.)

What appeals to me are Infinite Canvases, which, by removing artificial viewport boundaries, help free the mind and unleash creativity. I am intrigued by the idea of having something similar to Prezi presentations, except with the entirety of the collaborative social experience / application in one single workspace all of the time: fully interactive, and living at different zoomable/pannable areas of the canvas. Zoom out and get a full perspective on the state of everything. See what others are doing, get visual cues for where your attention should go, etc.

Obligatory link from me: Teliva. @natecull you’re right that text mode can simplify things. I chose it partly to guarantee that there’s some part of the screen (a menu bar) that user programs can never write to. (Well, they can, but I paint over it from the system on every “frame”.) And there’s a coarse set of predicates that all disk and network calls pass through, and that you can program: a meta-program that is always written by the user.
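
In spirit, that predicate idea looks like the sketch below (Teliva itself is built on Lua; this Rust rendering and its names are invented for illustration): every disk call is routed through one user-editable policy function before it is allowed to happen.

```rust
use std::fs::File;

struct Request<'a> {
    path: &'a str,
    write: bool,
}

// The "meta-program": the one function only the user edits.
fn user_policy(req: &Request) -> bool {
    req.path.starts_with("./sandbox/") && !req.write
}

// Every disk call goes through the policy before touching the filesystem.
fn open_gated(path: &str, write: bool) -> Result<File, String> {
    let req = Request { path, write };
    if !user_policy(&req) {
        return Err(format!("policy denied {} (write={})", path, write));
    }
    // Sketch only: a real version would also honor the `write` flag here.
    File::open(path).map_err(|e| e.to_string())
}

fn main() {
    if let Err(e) = open_gated("/etc/passwd", false) {
        println!("{e}"); // denied: outside the sandbox directory
    }
}
```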

It’s hell on usability, as you can imagine :smile: Any program you download can’t do squat initially. Which in turn means no third-party software vendor ever has an incentive to use it compared to competitors that are less secure :joy:

There is indeed no security in Smalltalk. But then, Smalltalk shares that feature with 99% of all programming languages and systems, most of which predate the Internet. Which is why security is such a big issue today.

Glamorous Toolkit is meant to be a developer tool, for use by experts. It’s certainly not meant to run code downloaded from the Internet. This includes Lepiter notebooks. They are meant to be shared among developers who trust each other, or check the code before running it.

Note that you can develop secure systems with a development tool that itself isn’t secure. There’s decent support for Rust development in Glamorous Toolkit (but I can’t say much about it since I don’t use Rust).

The question of who should be allowed / enabled / empowered to change what in a computing system is very important, but I am not aware of any comprehensive (i.e. system-level) solution.

I hear you. Nobody likes to be told what not to run on their computers. Especially not experts. I enjoy running Node.js for hacking small scripts, and I know it has full rights to access my entire filesystem, and I don’t care (as long as it’s me writing the scripts).

Thing is: I want to believe that developers are super-careful about the code running on their machines, I really want to believe that, but in my personal experience, developers are right now the people who are most cavalier about running code downloaded from the Internet, just curling all the githubs right into sudo bash. And then uploading the output of that to machines they don’t own, where someone else has root. That’s the Cloud devops culture in 2023, or at least the culture I’ve observed.

So my feeling is that if Glamorous/Lepiter ever hits the bigtime, developers will most definitely be sharing and importing their Lepiter notebooks all over the Internets, and they will definitely not be checking all of the notebooks they import, and lots of fun and exciting stuff happening inside corporate firewalls will follow, the kind of stuff that those of us who support Microsoft Office and try to put out cyber-fires have been dealing with for the last 20 years.

However, as you say, dealing with the consequences of their own actions is the responsibility of developers. I guess my personal interest is in something like a very much cut-down Lepiter for non-developer types who are interested in tracking just their personal thoughts and personal data and stuff, but who would also like to do a little bit of computation.

And I’m also wondering what would happen if someone built their own super-minimal operating system and tried to use something like Lepiter as the core of that OS. What would need to go into that OS, and what would need to stay out, for security’s sake? E.g., what if we built “a Commodore 64 for 2023”, and Pharo/Lepiter/Glamorous (or something like that but much smaller) was its BASIC?

The question of who should be allowed / enabled / empowered to change what in a computing system is very important, but I am not aware of any comprehensive (i.e. system-level) solution.

That’s the core of the problem, yeah. I guess we don’t have to solve that problem immediately at the OS level: we could try building a single multiwindow browser-like application and see if it is at all possible to solve the thorny permissions-delegation problem a little better and with less accidental complexity than Web browsers so far have.
