Collaborative work on malleable software

Indeed, and we also need hierarchical modularity, i.e. the possibility to construct components with well-defined interfaces using lower-level components whose interfaces don’t leak to the higher level. Support for that is surprisingly rare. I have been told that Rust crates can do that. Most component systems fail because of some flat name space. Examples: DLL hell, package managers with a global name/version space.
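For what it's worth, here is a minimal sketch of what a non-leaking hierarchy looks like in Rust, using nested modules to stand in for crates (a crate's dependencies are likewise private unless explicitly re-exported). The `parser` and `tokenizer` names are made up for illustration:

```rust
mod parser {
    // Internal helper module: its interface never leaks upward.
    mod tokenizer {
        pub fn tokenize(input: &str) -> Vec<String> {
            input.split_whitespace().map(String::from).collect()
        }
    }

    // Public interface of `parser`: callers see only this function;
    // `parser::tokenizer` is private and inaccessible from outside.
    pub fn word_count(input: &str) -> usize {
        tokenizer::tokenize(input).len()
    }
}

fn main() {
    println!("{}", parser::word_count("malleable software rocks"));
}
```

Swapping `tokenizer` for a different implementation changes nothing for users of `parser`, which is exactly the property a flat name space cannot guarantee.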

1 Like

Even though I appreciate all the different approaches taken by all the different projects, I would personally like to have some guides or principles with which to review them.

“What are the basic methods that can enable malleability?”

Questions like these should have an evidence-based answer that is independent of the technology used to achieve it.

For example, in biology, a protein's functionality often remains intact even after multiple mutations, which allows the organism to survive them. How can we transfer this principle to software and malleability?

We need to allow people to change their software freely, without fear of catastrophic failure. We need to give them guarantees that, after their change, it will still work as expected.

Personally, I have tried to find such guidelines in order to steer my research on the subject. But I am not an expert in biology or complex systems, so I cannot provide the evidence, and many principles that could have helped me elude me.

I am not aware of any research on this question. But I agree it’s something we should have.

Sometimes it does, sometimes it doesn’t. But proteins are definitely more robust than software is today. And we do have some (rather recent) insights into why proteins and DNA are robust under mutations. The genetic code itself is part of this mechanism. No programming language has anything similar, as far as I know. And I suspect that no Turing-complete language can ever promise robustness.
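The redundancy of the genetic code mentioned above can be sketched in a few lines: several codons map to the same amino acid, so many point mutations are "silent". The fragment below uses DNA codons and is of course a toy illustration of the mechanism, not a claim about how to make software robust:

```rust
use std::collections::HashMap;

// A tiny fragment of the standard genetic code: all four codons
// beginning with "GG" encode glycine (Gly).
fn amino_acid(codon: &str) -> Option<&'static str> {
    let code: HashMap<&str, &str> =
        [("GGA", "Gly"), ("GGC", "Gly"), ("GGG", "Gly"), ("GGT", "Gly")]
            .into_iter()
            .collect();
    code.get(codon).copied()
}

fn main() {
    // A point mutation in the third base (GGA -> GGC) is "silent":
    // the encoded amino acid, and hence the protein, is unchanged.
    assert_eq!(amino_acid("GGA"), amino_acid("GGC"));
    println!("GGA and GGC both encode {:?}", amino_acid("GGA"));
}
```

Nothing in a typical programming language plays this role: flipping one character in a program almost always changes (or breaks) its meaning.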

I don’t think we can imitate the mechanism (which, in fact, I don’t know). I was thinking mostly of the effect of such a mechanism: multiple mutations are allowed to accumulate without a loss of functionality. This permits exploratory search into new protein structures, which increases biodiversity and the speed of evolution.

Now, I believe it is useful to have this conceptual model for software as well. Malleability is then not just the reduction of the cost of a mutation, but also the ability of that mutation to be effective and functional, and thus to spread to the rest of the software “genome”.

To put it another way, it is useful to look at the effects of a specific method in terms of evolution and population dynamics, and let that guide the design of our tools.

Also, I believe that at this point in history, we don’t just have one user and her software. We have millions of users and a mediating entity that acts as a controller, e.g. Facebook or Mastodon. It would be useful to know the effects of the controller on its users. We need to model them as dynamical systems.

So we have malleability of software by a COMMUNITY and the community needs to be able to understand the dynamic system it alters.

In this regard, I find the effort put in by Dynamicland highly promising.

1 Like

I agree with just about everything you wrote here, but I am also realizing that we need to keep this vision of a future distinct from the more modest vision of malleability as “reducing the cost of a mutation”, which is valuable as well, and probably easier to achieve.

In other words, I think we need distinct terms for “software that users can adapt to solve their own problems”, where “users” are individuals or closely collaborating small teams, and “software that can evolve in loose collaboration among a potentially huge number of users working on their own variants”. And perhaps there’s a level in between that could be useful as well.

2 Likes

I think the distinction should be between
a) applications that are used by a single user or a small team.
b) applications which require interactions by millions of users.

This distinction mostly affects the methodology required to understand the dynamics of the software and the decision procedure required to perform global changes, which is communal in b).

I would argue, though, that the effectiveness of a mutation matters in both cases, because even in case a), the software is free software, so anyone can adopt the changes that another person has made.

I believe that these two concepts (cost/effectiveness) can work in parallel; they are not mutually exclusive, and depending on the type of software and problem, one could lean towards one or the other.

1 Like

Random overheard comment:

> jj is slaughtering us on rebase speed

— git

1 Like

“Millions” sounds a bit scary. I’d be happy to get started with something like hundreds. Which may not require the same approaches as literally millions.

There’s also “requires” (interactions by x-s of users) vs. “enables”, “encourages” and other intermediates. The whole space remains unexplored. There’s a lot of work to do.

1 Like

Wondering if anyone has taken a closer look at Radicle, which seems to be a P2P forge where each repository-plus-issue-tracker is identified by a hash and some discovery mechanism permits access to the repository given that hash. It isn’t clear to me from a quick glance at the site how the discovery mechanism works.

1 Like

I recall looking at Radicle a few years back, and it seems like a nice design overall… but I wasn’t quite sure how to best try it out for real use, since I’d have to convince at least one other person to do the same in order to get much out of the P2P side. :sweat_smile:

And as for discovery, I am not quite sure either… There doesn’t seem to be an “approved” spec so far, but there was some material on discovery in one of their proposals.

1 Like

Thanks for the pointer, that helps a bit. And yes, trying out P2P stuff is really difficult unless you are surrounded by suitable nerds!