Making networked software our own

I want to respond to @natecull’s recent provocative comment, but without hijacking that thread. There’s a pretty fundamental limit on the extent to which you can make a program your own when it talks to other programs, because it has to respect conventions (protocols, basically) to be understood. And it can be difficult (particularly for an outsider) to tell whether a given change to a program modifies the protocol in an externally visible way.

The style of programs I’ve been building lately has largely been network-less, but I’m starting to think about adding small amounts of networking. Thoughts so far:

  • The best kind of program from a malleability perspective is when you control both ends of every communication. Then it’s a small matter of testing to ensure your protocols are internally consistent in the ways you actually care about in practice.

  • It’s also useful to build clients that can talk to other servers. Without this you just turn solipsistic. Again, the testing burden is proportionate to your requests in practice.

  • The hardest category of program to build malleably – and the one to go to great lengths to avoid – is a server for external use. Here the protocol surface area will grow unboundedly, and it’s hardest to avoid bugs because the definition of “bug” is extremely expansive: it’s how any version of your server has ever worked.

Final point: the third category isn’t just networked servers; it also includes libraries that are packaged up in a self-contained manner for others. Which explains, at least to myself, why I’ve never been tempted to create one: I’m fundamentally lazy, and the cost-benefit trade-off just never made sense to me here.
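To make the first bullet concrete: when you control both ends, “protocol correctness” can shrink to a round-trip test over the messages you actually exchange. A minimal sketch (the codec and message shapes here are hypothetical, not from any real project):

```python
import json

# Hypothetical codec shared by my client and my server.
def encode(msg: dict) -> bytes:
    return json.dumps(msg, sort_keys=True).encode("utf-8")

def decode(raw: bytes) -> dict:
    return json.loads(raw.decode("utf-8"))

# Because both ends are mine, internal consistency is just:
# every message I send must survive an encode/decode round trip.
def check_round_trip(messages: list[dict]) -> None:
    for msg in messages:
        assert decode(encode(msg)) == msg

check_round_trip([
    {"op": "put", "key": "notes/today", "value": "buy milk"},
    {"op": "get", "key": "notes/today"},
])
```

The testing burden stays proportional to the messages you actually use, exactly because no outside party can send you anything else.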


I see two more general issues here: modularity and infrastructure. The client-server distinction is a form of modularity, and public servers, as well as reusable libraries, are particular forms of infrastructure.

For infrastructure, I see no place for malleability. The objectives are in conflict. Either you make a promise about your software, or you reserve the right to change whatever you like. The best you can do is to make promises carefully, and only as needed.

The corollary of this is that you should never advertise your prototypes as infrastructure, nor aim for massive adoption. Unfortunately, that’s what happens all the time in computing.

Modularity is a more interesting question. In a solo project, it’s just an aspect of architecture that you introduce for your own benefit, so it’s not in conflict with malleability. But another reason for modularity is development in some organization. Conway’s law then claims that it’s the organization that defines the modules of the technical architecture. This obviously puts constraints on malleability. I don’t remember having read anything interesting on this topic so far.

Let’s not forget the fourth category: p2p processes, where there is no distinction between server and client.

And we can’t avoid developing malleable networked software: networked software is entangled with all parts of our lives now. We have to think about malleability under the p2p constraints.

@Apostolis from my perspective in this thread, p2p makes things worse because it makes everything a server :laughing: P2p software is even more tightly constrained by shared protocols, which can complicate the experience for people trying to modify software for their needs.

@khinsen I’m not so sanguine as you. “Infrastructure” is just a word, and different people draw the line differently. I see our goal here as making “infrastructure” malleable, and moving to depend more on malleable “infrastructure”.

I find an evolutionary perspective useful here. Life is deeply interdependent and yet fiercely competitive. Adaptive advantages accrue to organisms that can benefit more from their infrastructure while letting their infrastructure control them less. (Why bother with malleability, then? Because level playing fields foster competition. Yes, I’m rebutting Marc Andreessen :slight_smile: )


I didn’t expect to see a reference to Marc Andreessen in this forum!

My definition of infrastructure is technology that you can build on without having to worry about its sustainability (because it is taken care of by others). That implies that nobody messes up your foundation, at least not on time scales relevant to you. Of course there’s never any promise, only expectations. Software infrastructure can then only be malleable as long as it remains interoperable with its own past, i.e. respects documented APIs and protocols.

By that definition, the concept of infrastructure doesn’t apply to life. But the evolutionary perspective is indeed useful, because it fits a malleable universe better than the technology/design perspective. So maybe it’s the term “infrastructure” that should go away.


My main interest in type theory has been related to Conway’s law and malleability: in essence, the parallel change of multiple parts of software that are connected in a multitude of networks. The only way to achieve this is to mimic life and its membrane systems: create specifications that allow changes to remain invisible to the exterior world.

Type theory here is not to guarantee safety, though that would be desirable as well. It is to guide developers in the context of never ending concurrent change.

How do you do that?

By creating specifications correctly, of course: ones that do not require the programmer to prove something. In other words, the specification should be written in such a way that it tells the programmer what is possible and what is not, without forcing him to prove to the type checker that his program is correct.

And when I am talking about a specification, I am not talking about a RESTful one, which has no state. The specification has state, and it changes dynamically over time.

The reason for all this is the increasing complexity of p2p social processes, which have to grow in order to handle the complexity of the environment.


That’s pretty similar to what I use order-sorted term algebras for in my Digital Scientific Notation. It’s much like a type system, but the types are malleable, rather than rigidly predefined by a language’s core data structures. And I agree that this can be a promising approach for scaling up malleable systems as well.

One thing I’ve been thinking of is funnelling everything through a limited number of formats. For example, a lot of things can be converted to and from email, so my own software only needs to work with email, and the external systems deal with the translation.


Recently (over the last year or so), I started to use e-mail for two non-traditional scenarios:

  • Managing bookmarks across devices (Linux, macOS, Android). Each bookmark is an e-mail message with the URL in the body and the associated title in the subject field.
  • Reading non-public Mastodon messages. A small cron job on a server picks up the messages and sends them to me by e-mail, meaning that they enter into my standard e-mail workflow.

E-mail is really well supported by tools and libraries; that’s a big plus.
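As a sketch of the bookmark scheme described above, composing such a message takes only the standard library (the addresses here are made up, and the actual sending is left commented out):

```python
import smtplib
from email.message import EmailMessage

def bookmark_message(url: str, title: str) -> EmailMessage:
    # One bookmark = one e-mail: title in the subject, URL in the body.
    msg = EmailMessage()
    msg["From"] = "me@example.org"            # hypothetical sender
    msg["To"] = "me+bookmarks@example.org"    # hypothetical +suffix address
    msg["Subject"] = title
    msg.set_content(url)
    return msg

msg = bookmark_message("https://malleable.systems/", "Malleable Systems Collective")
# smtplib.SMTP("localhost").send_message(msg)  # uncomment to actually send
print(msg["Subject"])
```

Because the bookmark is just an ordinary message, every existing mail client on every device becomes a bookmark manager for free.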


That’s clever. I didn’t think of using it for bookmarks.

Do you have a prefix on the subject or a special header to mark them? Or are they just stored in a folder or something?

I send them to a special address (with a + suffix, a little-known feature of mail addresses). The messages with that suffix are sorted into a special IMAP folder on the server.
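In practice the sorting would be done server-side (e.g. by a Sieve rule), but the logic of the +suffix trick is small enough to sketch in a few lines (folder names here are illustrative):

```python
def folder_for(address: str, default: str = "INBOX") -> str:
    # "me+bookmarks@example.org" -> "bookmarks"
    # "me@example.org"           -> default folder
    local_part = address.split("@", 1)[0]
    if "+" in local_part:
        return local_part.split("+", 1)[1]
    return default

print(folder_for("me+bookmarks@example.org"))  # bookmarks
print(folder_for("me@example.org"))            # INBOX
```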
