Convivial Design Heuristics for Software Systems

Paper by stephenrkell (2020)

Consider how many web applications contain their own embedded ‘rich text’ editing widget. If linking were truly at the heart of the web’s design, a user (not just a developer) could supply their own preferred editor easily, but such a feat is almost always impossible. A convivial system should not contain multitudes; it should permit linking to them.

The battle for convivial software in this sense appears similar to other modern struggles, such as the battle to avert climate disaster. Relying on local, individual rationality alone is a losing game: humans lack the collective consciousness that collective rationality would imply, and much human activity happens as the default result of ‘normal behaviour’. To shift this means to shift what is normal. Local incentives will play their part, but social doctrines, whether relatively transactional notions such as intergenerational contract, or quasi-spiritual notions of our evolved bond with the planet, also seem essential if there is to be hope of steering humans away from collective destruction.

Metadata

  • suggesters: jryans
  • curators: jryans

I’m really liking this, though I have Feelings about it.

Value reference over definition. When we ask what a programming language can express, we mean what can be defined within it, not what things outside itself it can reference. So begins that language’s play for domination. To achieve conviviality, this must be reversed. The more a system can draw from its context of use, the less it dominates its users.

Compare this with the Unix model. Other than “standard input”, about all that the Unix model allows a program to “draw from its context of use” is a file. But granting access to the filesystem requires letting a program delete all your data and is increasingly considered a major security risk.

I’d like to be able to share much more data between processes in a multiprocessing operating system than through either “files” or “network sockets”.

While “valuing reference” is very important, we also need to be able to put strict bounds on what any given program in a language can reference.
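One concrete direction for such bounds is capability passing: rather than granting filesystem access, hand a program an already-open file descriptor over a Unix-domain socket, so it can reference exactly the one file it was given and nothing else. A minimal sketch in Python (the path and message are purely illustrative, and the two socket ends would normally live in different processes):

```python
import os
import socket

# Two connected Unix-domain sockets; in real use, one end per process.
granter, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The granter opens one file and sends the open descriptor itself,
# not a path: the receiver gets this file and no ambient authority.
fd = os.open("/etc/hostname", os.O_RDONLY)      # illustrative path
socket.send_fds(granter, [b"read-cap"], [fd])   # Python 3.9+
os.close(fd)  # safe: the kernel duplicated it into the message

msg, fds, flags, addr = socket.recv_fds(receiver, 1024, 1)
with os.fdopen(fds[0], "rb") as granted_file:
    print(granted_file.read())  # the one thing we can reference
```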

Value linking over containment. This is the same idea restated. What we contain, we control. By contrast, linking a (partial) program to an external definition requires some kind of negotiation across a boundary. This might be by consensus, when the link is ‘direct’, meaning referer and referent are well-matched syntactically and semantically. Or it might be by mediation, where some adaptation layer intercedes on the interaction. This distinction ought to be as essential as that between synchronous and asynchronous communication, yet it remains a non-feature of most programming systems.
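To make the consensus/mediation distinction concrete, here is a hedged sketch (all names invented for illustration): a client links directly when the referent already matches its expected interface, and otherwise a small adaptation layer intercedes:

```python
import re

class Client:
    """Expects referents offering render() -> str."""
    def show(self, doc):
        print(doc.render())

class NativeDoc:
    """Consensus: referer and referent match, so the link is direct."""
    def render(self):
        return "native text"

class ForeignDoc:
    """Semantically similar, but syntactically mismatched."""
    def as_html(self):
        return "<p>foreign text</p>"

class ForeignDocAdapter:
    """Mediation: an adaptation layer intercedes on the interaction."""
    def __init__(self, foreign):
        self.foreign = foreign
    def render(self):
        return re.sub(r"<[^>]+>", "", self.foreign.as_html())

client = Client()
client.show(NativeDoc())                      # direct link
client.show(ForeignDocAdapter(ForeignDoc()))  # mediated link
```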

Certainly the post-iPhone “app” model is making everything more and more Contained in app silos rather than Linked. I do think, though, that we need ways of Containing data so that it can be cleanly collected (as on a removable disk, or via “weak links” that allow garbage collection).

Probably though, all data should be immutable wherever possible, so that copying it/caching it and referencing it are the same operation.
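Content addressing, as in Git, is one way to get exactly that property: immutable bytes are named by their hash, so a reference doubles as a verifiable cache key, and copying a value cannot change what the reference means. A minimal standard-library sketch:

```python
import hashlib

store = {}  # hash -> immutable bytes

def put(data: bytes) -> str:
    """Storing returns a reference; re-storing identical bytes is a no-op."""
    ref = hashlib.sha256(data).hexdigest()
    store.setdefault(ref, data)
    return ref

def get(ref: str) -> bytes:
    data = store[ref]
    # Any copy or cache of the data is self-verifying against its name.
    assert hashlib.sha256(data).hexdigest() == ref
    return data

ref = put(b"immutable payload")
print(ref[:12], get(ref))
```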

Beware fragile affordances. Mention of ‘linking’ calls to mind the web. The web appears to embrace linking, but in practice its linkable design is easily trampled on. Computationally sophisticated web pages are usually not functionally linkable in such a way that would allow, for example, one to usefully reference their contents ‘from the outside’ or cause their internals to be re-bound to alternative referents elsewhere.

There’s something important there. The more we make media “rich” (or “computationally sophisticated”), the harder it is to make them reliably linkable. Is this inherent to computation, or can we strike a balance?

Support referential structures, not [just] definitional structures. What else might it mean to be ‘expressive’ with respect to the outside? Infrastructure should make it easy to ‘construct views’, that is, to specify new ways of referring to already-extant definitions.
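A toy rendering of ‘constructing views’: a view object that defines nothing new itself, only new names referring to already-extant definitions. The renaming below (onto Python’s standard math module) is purely illustrative:

```python
import math

class View:
    """Refers to extant definitions under new names; defines nothing."""
    def __init__(self, target, renames):
        self._target = target
        self._renames = renames
    def __getattr__(self, name):
        # Resolve through the rename map, falling back to the original name.
        return getattr(self._target, self._renames.get(name, name))

# A French-named view onto math: no function is redefined or copied.
maths = View(math, {"racine": "sqrt", "plancher": "floor"})
print(maths.racine(2), maths.plancher(2.7), maths.pi)
```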

In networks, despite the goal of evolutionary application deployment that lies behind the Internet’s end-to-end design, IPv4’s possessiveness of the network namespace—‘the only addressable entities are IPv4 interfaces’—defied even contemporary thinking on how to do naming, and continues to stymie evolution of IP in deployment (specifically, to version 6 [6]).

This! I was just thinking last night about how, in the late 1980s, it seemed that if a global Net emerged, it would be based on nameable Programs (or Objects) running on multiple machines, and you’d route messages to those programs/objects. Then TCP/IP went mainstream, jumping us right back to a 1960s model of numbers and “ports”… and DNS names only went as far as “hosts”, which were quickly snapped up by big business. We lost at least 50 years’ worth of name-resolution knowledge in the rollout of the Internet.

Forget world domination. Implementers of new languages sometimes speak of a ‘libraries’ problem. What they mean is that their language cannot adequately interface with existing code, and to compound the problem, it has also not yet achieved world domination—meaning thousands of developers have not yet been compelled to spend thousands of hours on either reimplementing that existing code, or writing and maintaining shims or wrappers. This is not a problem that can or should be ‘solved’. Rather, the problem is in the assumed necessity of bootstrapping a new ecosystem—the hope of world domination.

Linux fell prey to this fear. Now it’s the Official Operating System of the Cloud, it’s won, the world is dominated, but I don’t feel freer.

Something similar has happened to “startups”: it’s not enough to succeed; a new business has to utterly dominate or be destroyed.

We need ways of surviving that don’t require winning in a deathmatch against the entire world.

Bring the outside in. In older languages such as Pascal, C, Fortran and so on, external entities can be declared, often with a suitable type annotation and/or calling convention, and thereafter used much like a ‘native’ object. However, in more ‘modern’ garbage-collected languages, this kind of referencing the outside has been deemed intractable. The art of language implementation has turned inwards—on an entanglement of compilers and managed heaps, a fully circumscribed universe.
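For contrast, Python’s ctypes still works in the older style the quote describes: declare the external entity, annotate its type and calling convention, then use it like a native object. A minimal sketch (assumes a Unix-like system where the C library can be located):

```python
import ctypes
import ctypes.util

# Declare an external entity, much as in C or Pascal: name it, give it
# argument and return types, then call it as if it were native.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"outside world"))  # -> 13
```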

Hmm. Yes, but also, be careful, the outside is full of (points to the flaming wreckage of the Cyberwar). You don’t actually need or want it all in, necessarily.

Do not create worlds. At or near the origin of the universe, creating new worlds is a necessary activity. But when existing matter is plentiful, it is either a futile gesture or a bid for domination. A key idea behind subject-oriented programming [8] was that creating a new taxonomy need not force creation of a new world.

Hmm. What’s subject-oriented programming, I wonder? Because Object-Oriented Programming certainly did have this “force the creation of a new world” problem.

[8] W. Harrison and H. Ossher. 1993. Subject-oriented programming: a critique of pure objects. ACM SIGPLAN Notices 28, 411–428.

All objects should appear in all spaces. To continue the idea of gatewayed spaces: if new worlds cannot be created, how can we create a new reality? The answer is to create a new space—but one that maps the old world within it. This notion of ‘space’ is not an established concept in programming systems design.

I like this “spaces” idea, it’s what began haunting me around 2005. It does seem to be important.

However: I don’t want all objects in my space, thank you. I want only the objects that I want and trust to have in my space, and that’s not at all the same thing.

But I want the ability to choose what I trust.

Say no to classical logic. Ostermann, Giarrusso, Kästner, and Rendel [17] wrote about the link between classical logic and conventional notions of modularity. Classical logics have the property that new facts cannot invalidate old inferences. When applied to modularity, such approaches bring scalability, because locally established facts (such as the correctness of a client with respect to a module specification) are never jeopardised by changes elsewhere (such as varying the implementation within the module). However, facts do change.
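The standard toy example of what giving up monotonicity buys: under a default rule, a locally established inference is later retracted when a new fact arrives, which classical logic forbids. Sketched minimally:

```python
birds = {"tweety"}
penguins = set()

def flies(x):
    """Default rule: birds fly, unless known to be exceptional."""
    return x in birds and x not in penguins

print(flies("tweety"))   # True: inferred by default

penguins.add("tweety")   # a new fact arrives...
print(flies("tweety"))   # False: the earlier inference is invalidated
```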

Hmmmmph. This smells very much like the AI themes from around 1980: lots of experimenting with new non-classical logical schemes, which led to “Frame logic”, which overlapped with “Inheritance” in OOP.

And for the most part, none of this worked.

Most of the different extensions to classical logic weren’t even self-stable, let alone playing well with each other. I am dubious that more tinkering with beyond-classical logic is the key to success.

Implement porously, not portably. The best portable specifications provide unifying ‘views’ onto a multitude of external definitions. But in contrast, a portable implementation is often problematic, created as a ‘needs must’ approach to mitigating external diversity. In order to avoid linking with such diverse outside entities, it tends to contain fresh re-implementation addressing only a fixed ‘lowest common denominator’ view of the outside.
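One possible reading of ‘porously’, sketched under that interpretation: delegate to whatever the platform outside actually provides, keeping the generic reimplementation as a fallback rather than as the one lowest-common-denominator truth:

```python
import os

def copy_stream(src_fd: int, dst_fd: int, length: int) -> int:
    """Porous: use the host kernel's sendfile where the platform has it."""
    if hasattr(os, "sendfile"):  # present on Linux, macOS, other Unixes
        # A production version would loop on partial sends.
        return os.sendfile(dst_fd, src_fd, 0, length)
    # Fallback: the 'lowest common denominator' copy loop.
    sent = 0
    while sent < length:
        chunk = os.read(src_fd, min(65536, length - sent))
        if not chunk:
            break
        sent += os.write(dst_fd, chunk)
    return sent
```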

Hmm. I can’t parse this. There are so many security hazards now that I think we need to throw a lot of things away and keep to a subset of what we know to be safe.

Abstraction definition as a costly operation. … the definition of new abstractions is often perceived as a ‘free’ operation, or at least an author’s prerogative, but it is actually a high-cost operation precisely because it is a play at dominating others.

Maybe. If you force people to use your abstraction and only your abstraction, yes. But if you don’t allow people to create their own abstractions? Then you’re the one dominating them.

Create the simplest possible tools for safely creating the most general possible abstractions.

Reasoning scales best when it’s small. Limiting the use of abstraction and preferring non-monotonic reasoning would seem to put some dampeners on our ability to reason about large systems. Yet I certainly would not argue in favour of unreliable systems. How do we reclaim this goal? The ‘obvious’ answer is to keep our systems small.

Yep, small is good. But don’t force systems to be too small either. Let them be the size they naturally want to be.

Information guiding, not hiding. If defining new interfaces is high-cost, then where does that leave perhaps the most time-honoured design heuristic, information hiding in the sense of Parnas [18]? Such a technique requires predicting what is likely to change, so that we know what to hide.
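Information hiding in miniature: the module’s ‘secret’ is the thing predicted to change (here, the storage representation), and clients see only an interface that survives that change. A small sketch:

```python
class PhoneBook:
    """The hidden 'secret': how entries are stored."""
    def __init__(self):
        self._entries = {}  # could become a sorted list or a database

    def add(self, name: str, number: str) -> None:
        self._entries[name] = number

    def lookup(self, name: str):
        return self._entries.get(name)

book = PhoneBook()
book.add("ada", "555-0100")
print(book.lookup("ada"))  # unaffected if the representation changes
```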

Yes, this is always the big problem with information hiding (and by extension, the entire OOP paradigm which relies heavily on it).

But it also seems like there is a need for some information hiding, as a fundamental system primitive. Otherwise there is no privacy, and no security, at all.

I don’t know how to square these two circles.


Woohoo! I found the full text of Harrison and Ossher, 1993, “Subject-Oriented Programming (A Critique of Pure Objects)”.

https://dl.acm.org/doi/pdf/10.1145/167962.165932

Abstract

Object-Oriented technology is often described in terms of an interwoven troika of themes: encapsulation, polymorphism, and inheritance. But these themes are firmly tied with the concept of identity. If object-oriented technology is to be successfully scaled from the development of independent applications to development of integrated suites of applications, it must relax its emphasis on the object. The technology must recognize more directly that a multiplicity of subjective views delocalizes the concept of object, and must emphasize more the binding concept of identity to tie them together.
This paper explores this shift to a style of object-oriented technology that emphasizes the subjective views: Subject-Oriented Programming.