Smooth Gradients of Composability, or Old Man Yells At Compilers

I have a built-in, knee-jerk sense of annoyance whenever I see a new compiled language. I can’t help it. It’s probably because I grew up on interactive BASIC. My first instinctive response is: “Great! A new language! We absolutely need new and better approaches to programming! But why on Earth did you sabotage your ideas right out of the box by making it compiled? Now we’ll have to start over from scratch with the next language.”

It is hard to explain this intuitive dislike for “compilation” to someone who didn’t grow up with interactive, interpreted languages. After all, compilation is currently considered the Best Practice of Best Practices in language design. It’s just what we do - it’s almost (perhaps not quite but almost) even what Computer Science itself is defined as being, the science of compiler design. If you’re not designing a compiler, are you even doing Programming Language Design? How could you have a language without compilation? What kind of strange person would you have to be, to be interested in programming languages yet NOT interested in compilation?

This essay is an attempt to sketch out a rationale for my intuitive sense of distress at compilation, perhaps just to try to simplify any future conversations I might have with bright-eyed language designers with yet another slightly modified take on C.

(Or in this case, and the language which prompted this essay: “Fennel”, which appears to be Clojure For Lua, and which sadly trips my sense of “How very close to being a good idea, and yet how very far. Please take the compiler out of this thing first, and then we can begin to talk.” Fennel: https://fennel-lang.org)

Okay. So what annoys me about compilation is, it breaks what I think is a major quality of good programming language design, but one that isn’t often identified. I will give this quality a name: “Smooth Gradient of Composability”.

So what is a Smooth Gradient of Composability?

What it means is that, in my aesthetic opinion, programming is about building Systems by means of Composition. A System is a thing made out of smaller parts that are glued together to make a bigger thing with multiple parts, but which is itself, in some important sense, still a single thing. That’s a System. One thing, multiple parts, made by putting parts together. So far, so good, right? We all agree on this.

Next, Composability is the ability to put parts together. Any tool or language intended for building Systems must have some amount of Composability in it. Otherwise it’s just going to be one big opaque lump - not a system-creating thing but a black box, an Application perhaps. It’s probably impossible for any piece of software to actually be incapable of Composability at some level: just being able to send messages to a thing means you can build it into a wider System. But not all tools or languages are necessarily equally good at this task.

A Smooth Gradient of Composability means specifically that you can compose parts together to make Systems at every scale, from small to big. As you put parts together, each set of parts then must have the same composability properties that an individual part has, and the same systemic properties that any system has. The tool or language must not impose arbitrary restrictions on how small or how large your parts or your system can be. If it does impose such restrictions, that’s where your Gradient of Composability is no longer Smooth: it has big jumps in it.

I want tools that don’t have big jumps in their Gradient of Composability. I want everything from a function call to a function or type declaration to an Object to a Program to a Process to a Virtual Machine to an Operating System to a Network… I want all of these kinds of Systems, and all things at levels in between them, to be equally able to be plugged together or taken apart. A smooth transition from small to large, at all scales.

Compiled languages, at least in the form we currently see them, and which bright-eyed and happy young language designers keep churning out, do not exhibit a Smooth Gradient of Composability. Instead they have a very lumpy gradient. This is how:

Compiled languages are based on Compilation Units. A Compilation Unit is either a file, or a set of files. (Sometimes just naming and selecting the set of files to be compiled is an extremely non-trivial problem and possibly an undecidable one: that’s why Package Managers exist, why all of them are awful, and why they all fight). However, a Compilation Unit is not really a thing that exists inside a compiled program, and it is certainly not the smallest thing. The smallest thing is usually an Expression or a Statement; the next smallest might be a Declaration or a Statement Block or a Function; there might be something like an Object. None of these things, however, can generally exist on their own in a compiled language. The compiler gets to ingest one huge chunk of Program, at once, and it emits one huge opaque chunk of Compiled Artifact (object code, machine code, bytecode, intermediate representation, or whatever).

Further, both source compilation units and binary compiled artifacts are pretty much opaque to the running program. They have structure inside them, yes, but this structure is greyed out and hidden.

This is very much Not a Smooth Gradient of Composability. It is a large chunky gradient. Worse, it’s two entirely different sets of gradients, two completely separate worlds. On one side of the compiler you have a language built out of Types, Objects, Functions, Expressions, Statements etc. Then it goes through the distressingly loud and very clever machine full of parsers and optimisers and rotating blades and mulchers, and out the other side come extruded sealed blocks: Libraries, Executables, Files.

So we are maintaining entirely separate worlds of our programming stuff, and the fine-grained world of the source language gets chunked into opaque grey mush once the compiler has had at it.

Then the pain of chunkiness doesn’t stop at just two separate worlds: a compiled program will then usually go further through the operating system’s linker/loader/exec machinery and come out as Processes, Memory Pages, Network Sockets, etc.

Finally, in the Cloud world, we are still moving in an even chunkier direction: our equivalent of Compilation is Deployment, and that tends to work at the level of Containers and VMs. We’ve made an entire computer - or network of computers - our smallest unit of Composability.

This massive chunkiness all up in my composability gradients hurts my aesthetic sense immensely. I want something like a Cloud (without the privacy issues, but let’s assume I have my own Cloud that I own), but where functions and types and even expressions can exist as first-class entities, just one big soup of them.

In the worlds of Forth, Smalltalk and Lisp, we get closer to my ideal of smooth gradients. Not really there yet, but closer. For one, we don’t have such a separation between Language Runtime and Operating System: the language often controls the bare machine. Compilation in all three languages, where it occurs, happens at the level of individual functions. There is a strong sense that a system is a living, running entity, not a dead stopped one, and that changes happen in small increments.

I believe there is plenty of room in design space for more programming languages which put an emphasis on composability at small levels, and where the mechanisms of composability continue to scale smoothly up to the level of networks. I can see no reason why we need many of the arbitrary 1970s-era limitations that we have: the idea of process and machine, for example, shouldn’t really even be a thing in 2023.

To get there, though, we need this quality and this vision to be articulated in clear terms. We need an approach to compilation, where it’s necessary (and I’m not convinced it’s always necessary), that doesn’t throw away the fine structure inside a program when it turns it into object code. And we need an approach to maintaining and interacting with software systems that puts a priority on the liveness and always-on nature of the system (while, yes, also preserving the “can rebuild from scratch” quality that we have in files of source code).

The “gradient of composability” is, I think, one possible way of articulating this quality and this vision.

2 Likes

Oh, and a postscript, but perhaps a very important one:

In order to achieve a Smooth Gradient of Composability - and this again is why the current flood of new compiled languages frustrates me, because they’re all doomed by ignoring this - we have to have something that transcends the concept of ‘programming language’. We need something like a universal runtime.

The focus on compilation ignores the runtime: that’s the whole purpose of compilation, to take a language specified solely in terms of syntactic patterns and make it run on whatever runtime we want. But the shape of the runtime itself is exceptionally important if we want our Composability to be Smooth and not Chunky.

If we are okay with Chunky - if we want our unit of composability to be “I Deployed a new Container/VM/Application/set of Libraries” - then sure, we can ignore the runtime. The current global computational fabric, built as it is out of the 1970s assumption that the Internet is a bunch of PDP-11 computers running Unix talking to other PDP-11 computers running Unix, can handle that. Every time we want to make a change to a single function or a single type in a program anywhere, we’ll just spin up an entire new simulated PDP-11. That’s what our VMS-derived (Windows) and Unix-derived (Linux, MacOS, Android) operating systems currently do.

But if we want Smooth composability, where the things we’re spinning up are a bit smaller than an entire simulated PDP-11, well to get that, we need a runtime that’s available everywhere. That runtime needs to deal in things that are maybe the size of a Smalltalk object - but probably smaller. Probably something more like a Lisp cons cell.

This is not something that compiled language designers have on their radar at all. If it was, they wouldn’t be talking in terms of files and packages, they’d be talking in terms of runtimes, object serialization/deserialization systems, and protocols for inter-runtime communication. I don’t see this being a current topic of discussion.

I also think that Java, and the rapid speed at which the Java language itself is revving, has demonstrated to us that “Java object/class” is NOT a sufficiently small and stable abstraction to build such a new global runtime. (Edited to add: If Java was sufficient, then the core of it wouldn’t need to change, would it? There would just be a bunch of new classes being written as libraries. When the core abstraction of a language is “class” and yet, the language has to evolve by adding things which aren’t classes to its core… then something has obviously gone very wrong with that unit of abstraction, it is not doing the job of abstracting that it was built to do). And I’m fairly dubious about WASM filling this role, either, though it might be possible to build such a runtime on top of WASM.

Javascript… is doing something a little like this, but its components are now too large, too complex, and not quite the right shape. (Internally, Javascript is a soup of Objects, but they’re created from scripts – chunks of text – which are a different kind of entity. There’s a lot of Lisp in Javascript, but not quite enough. And web browser Javascript runtimes have a lot of complex and brittle and very centralised security, but it’s there because the Javascript core didn’t think about security enough.)

To be clear, the kind of runtime I’m talking about is one where we could take a type or a function or an expression written in one language and transmit it and have it interact with a type or a function or an expression written in a completely different language.

At that point, the idea of what a language “is” starts to blur. In my opinion, a language ought to be something like a set of definitions, and not much else. There should be a universal “grammar” - a glue for putting the equivalent of “words” together - that the runtime provides. That means that the runtime needs to be absolutely as simple as possible, providing something like secure access to individual memory cells, perhaps a concept of function application (extremely bounded in terms of resource usage, to avoid denial of service attacks), and not much else. It starts to sound very much like a Lisp, but again, not even really a Lisp that we currently have.

That’s a vision that is so far outside of the current box that it’s hard to articulate. I think perhaps the “data science notebook” people (as in Jupyter) are coming closest to having this need, but they’re still thinking at a level of chunkiness that is mostly unhelpful to me.

1 Like

In one way, I fully agree with your rant. The first question I ask about every new language and its tooling is “how does it interface to software written in other languages”, and the answer is universally disappointing. The two common mechanisms are a C-level FFI and the OS level of files and processes, which is always outside of the language.

But then, I don’t believe in the universal runtime either. That was the idea of Lisp machines, and early Smalltalk systems. It doesn’t work out, because the world is too messy (you will never get everyone to agree with your choices), and too diverse (a washing machine controller does require runtime support very different from a Web server).

What I think is technically achievable, though socially difficult, is a small set of low-level primitives and protocols for interfacing software components. We actually have some of those, such as the universal agreement on 8-bit bytes and on Unicode with UTF-8 for representing text. And some of the higher-level protocols, such as over-the-wire bytestreams or JSON, are quite acceptable though not perfect. The big missing piece is data with execution semantics, aka code. The compatibility requirements for successful interop at scale are so severe that it’s hard to reach convergence of initially different conventions. WASM is no more sufficient than, say, x86 assembly. It takes conventions on top of that and they imply choices about data representation and resource management that language designers disagree about, with very good reasons. Could we end up with a handful of such conventions for different use cases? In principle, yes, but I don’t see it happening any time soon. There’s no institution with sufficient resources to make this happen.

Yep, “a set of low-level primitives for interfacing software components” would fit my requirement of “a runtime”. It wouldn’t need to be a single universal implementation, but it would need to be an agreed-on set of primitives and semantics. And yes, the success of bytes and Unicode/UTF-8 is one example of such a protocol. TCP/IP for packet transmission is another. POSIX for process-handling sort of stuff, and various filesystems, are another example, though at that level the abstraction of “process” starts becoming very lossy because it’s too deeply intertwined with other specifics of an operating system. So I guess after POSIX we go straight to Kubernetes. But if Kubernetes is any kind of answer to anything, then the questions which provoked it must be very confused indeed.

CORBA and COM I think are sort of in the ballpark of what I hoped would become a “universal runtime”, but they obviously both failed (in very different ways). I’m not sure if that failure was because of the failure of “Software Component” to be a sufficient abstraction, or because of implementation-level things. I guess it seems that CORBA failed because its definition wasn’t close enough to actual machines, while COM failed (despite being what still powers Windows) by being too close to the physical x86 CPU to provide any kind of security.

I wish we had something standardised as a transmissible unit of computation between the level of “sequence of bytes” and “entire virtual machine tied to a specific CPU, operating system, and prepopulated filesystem”. This is exactly the scale and role that all the 1990s hype about Objects claimed would be filled by object systems. But… object systems remained imprisoned inside processes, and when they weren’t (as in COM/OLE/ActiveX), they became massive malware vectors due to their too-fragile architecture.

The reason the Lisp cons cell still fascinates me is that it seems almost the smallest, simplest possible entity that still provides some extremely powerful abstractions: for one, it provides opaque, secure (non-spoofable) memory pointers, which even in 2023 still seem to be unattainable future magic technology. It does have the disadvantages of throwing away half your RAM (a non-starter on 8-bit machines or even 16-bit ones), of fragmenting RAM and making caches unhappy, and of not allowing computed access to tables of RAM. But that magic secure pointer still seems like a treasure almost worth putting up with those downsides. That, and the definition of a cons cell is still simple enough to hold in one programmer’s mind, while even the simplest object structure often isn’t.
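
To make that “secure pointer” property concrete, here is a toy sketch in Javascript (purely illustrative; closures stand in for cons cells): the reference you are handed is the only way to reach the cell’s contents, and there is no address to forge or do arithmetic on.

```javascript
// Toy sketch: a cons cell as a closure. The frozen record returned
// by cons() acts as an opaque, non-spoofable pointer - there is no
// visible address to forge or to do pointer arithmetic on.
function cons(car, cdr) {
  return Object.freeze({
    car: () => car,
    cdr: () => cdr,
  });
}

const list = cons(1, cons(2, null));
list.car();       // 1
list.cdr().car(); // 2
// The only way to reach a cell is via a reference you were given.
```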

Lua’s “tables”, however, might be small enough to meet that criterion (since the source code is presumably small enough to read; I haven’t tried to, but probably should). They’re bigger than cons cells, but they can be automatically allocated and freed, and they offer faster searching and less memory waste.

So I guess what I was hoping Fennel might be, and was annoyed that it wasn’t, was something like a REPL over an alternative syntax for Lua, rather than a one-way compiler to it. I imagine what I would want to see in such a thing would be:

  • a syntax for tables
  • clearly defined serialization and deserialization rules for that syntax, including how to handle loops and metatables, so that you can get data out of a running Lua instance with as much fidelity as it exists in RAM, and transfer it into another running Lua instance (a sketch of the loop-handling part follows this list)
  • a syntax and some core semantics for methods, preferably using tables rather than opaque text strings to define the method body (ie, what Lua doesn’t do, but Lisp does - true homoiconicity down at the symbol level).
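
On the serialization point, here is a rough sketch of the loop-handling part - written in Javascript purely for illustration, with plain objects standing in for Lua tables; metatables and the reverse (deserialize) direction are omitted:

```javascript
// Illustrative sketch: flatten an object graph into nodes with
// explicit ids, so that shared references and loops survive the
// round trip between running instances.
function serialize(root) {
  const ids = new Map(); // object -> id
  const nodes = [];      // flat node table
  function visit(value) {
    if (Object(value) !== value) return value; // primitives pass through
    if (ids.has(value)) return { $ref: ids.get(value) };
    const id = ids.size;
    ids.set(value, id);                   // register BEFORE recursing,
    const node = { $id: id, fields: {} }; // so loops terminate
    nodes.push(node);
    for (const [k, v] of Object.entries(value)) {
      node.fields[k] = visit(v);
    }
    return { $ref: id };
  }
  return JSON.stringify({ root: visit(root), nodes });
}

const t = { name: "a" };
t.self = t;   // a loop, as in a cyclic Lua table
serialize(t); // '{"root":{"$ref":0},"nodes":[{"$id":0,"fields":{"name":"a","self":{"$ref":0}}}]}'
```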

Then my hope would be that I could eventually get to a black box which is just a server that runs Lisp or Lua or a similar language and that never ever switches off. I could migrate code into or out of it, and it just runs there. You don’t switch a router on and off as Ethernet packets come in and out; you don’t reboot a hard drive as you copy files on and off it; why should you ever have to reboot a server or even “run an installer” to get software on or off? I just want a computing device which is a dumb slab that “program packets” go into and out of, in some similar kind of way; with all the smarts being in each individual object/function instead.

Perhaps the tricky part is that software can loop or spawn copies of itself and therefore can consume infinite amounts of resources in the way that a packet in a router can’t. But again, that seems to be something that ought to be defined as a problem and then solved right at that “language runtime” level. Just as we can have secure RAM pointers, and expect secure stack frames, we should have secure resource usage limits at each function call.
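
As a toy illustration of what a per-call resource limit might look like (a cooperative sketch only; the names and the budget are arbitrary):

```javascript
// Toy sketch: a "fuel" budget drawn down on every call, so a
// runaway computation exhausts its allowance instead of the
// machine. Cooperative only: fn must recurse through `run`.
function bounded(fn, fuel = 1000) {
  return function run(...args) {
    if (--fuel < 0) throw new Error("resource limit exceeded");
    return fn(run, ...args);
  };
}

// A computation that would loop forever now stops at the budget:
const spin = bounded((self) => self());
try { spin(); } catch (e) { console.log(e.message); } // "resource limit exceeded"
```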

2 Likes

Some of those ideas are implemented in Unison, which can in fact send code to another machine and have it run, no questions asked. It’s also almost a live system, using incremental compilation of small units, much like Lisp and Smalltalk do. But it’s yet another language, not a generic runtime in any sense. I don’t know how Unison handles limits on resource use.

2 Likes

Yep, Unison does look interesting, I haven’t really kept up on where it currently is. I very much like the idea of hashing code and dealing with imports and dependencies via hashes.

I would like to find a way of taking that approach but applying it to a much simpler runtime model: I guess something like a Lisp, or at least with allocated cells that have opaque pointers (they might be larger than conses; I wonder about powers of two, perhaps), and with some super-simple type system.

For types, I keep coming back to an idea of “a specified function has been run against this memory cell, and terminated with true as output, and an unforgeable central record in the runtime has been kept of that function run occurring” and I wonder how far that idea might get us.

1 Like

I am interested in any idea that could bridge the two universes of “anything that my type checker cannot prove to be OK is rejected” and “OK or last-minute crash”. In particular, I am interested in ways to

  • migrate code from dynamic to static type checking
  • combine code based on different type systems
  • introduce domain-specific type systems

What you propose could be called “empirical typing”. Definitely an interesting idea.

1 Like

That sounds like a good name for it!

I guess I’m assuming that the core essence of most type systems boils down to two things:

  1. The ability to unforgeably mark items in RAM or trusted permanent storage as being “of a certain type”, where the “type” marking combines both a “shape” and an “intention”. (Ie, an integer 10 might be tagged as either “10 miles” or “10 kilometers”, and although those two types both have exactly the same data shape, they are not equal. So a type must mean something more than just a shape; there is a sketch of this after the list.)
  2. The ability to do algebraic operations on types, where those operations are guaranteed to not loop infinitely even in the presence of recursive structures (ie avoiding all the various logical paradoxes that originally gave rise to the idea of “type”).
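
To make point 1 concrete, here is a toy Javascript sketch (all names mine, purely illustrative) of an unforgeable nominal mark: “miles” and “kilometers” share exactly the same data shape, yet remain distinct types.

```javascript
// Illustrative: nominal tagging via a hidden WeakSet. The mark is
// unforgeable because nothing outside the closure can add members.
function makeNominalType(name) {
  const members = new WeakSet(); // the hidden, unforgeable mark
  return Object.freeze({
    of(value) {
      const boxed = Object.freeze({ type: name, value });
      members.add(boxed);
      return boxed;
    },
    is(x) {
      return members.has(x);
    },
  });
}

const Miles = makeNominalType("miles");
const Kilometers = makeNominalType("kilometers");

const d = Miles.of(10);
Miles.is(d);      // true
Kilometers.is(d); // false, despite the identical shape
```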

It seems that if we can choose a set of functions that we can guarantee will each terminate because they have already run (ie what a compiler is, but smaller parts than a whole compiler), those functions could help us implement both 1 and 2.

Downsides:

  1. It doesn’t much help us mutate values while preserving type. It’s more for the case where we have immutable values.

  2. The user could try to run a type verifying function that looped indefinitely or consumed a large amount of resources or possibly did other bad things, potentially leading to badness at data parse and load time. I don’t have a solution to that other than to suggest that each language or program could perhaps pick its own set of trusted primitive verifying functions (or perhaps just one), and prevent the use of any functions not tagged by that one verifier. That might still lead to a problem of type incompatibility between languages as each language fights over which holy trusted root verifier to use, but at least that problem seems somewhat more tractable than just “every language has its own type system and they don’t interconvert at all”.

  3. I would really rather have a type function which is able to enumerate all (or most) possible values of a type, rather than just verify a value - as Prolog predicates or Kanren relations can do - but I’m still not quite sure how to get there from the functional-programming world in the simplest way. (I mean, yes, Kanren could do it, but that involves carrying around an entire stack environment as a return value, which isn’t really minimalism and simplicity).

2 Likes

I am in favor of 2, because verifying a type shape could require arbitrary code.

For example, let us assume we have the type of a circle, ie, floats (a, b) that need to respect the equation of a circle: a^2 + b^2 = R^2.

Any validation would require us to execute the equation.
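
For instance, a minimal sketch in Javascript (the radius and the float tolerance are arbitrary illustrative choices):

```javascript
// Hypothetical validator for the circle type above: membership can
// only be decided by running the equation, not by checking a shape.
const R = 5;
const isOnCircle = ({ a, b }) =>
  Math.abs(a * a + b * b - R * R) < 1e-9; // tolerance for floats

isOnCircle({ a: 3, b: 4 }); // true: 3^2 + 4^2 = 25 = 5^2
isOnCircle({ a: 1, b: 1 }); // false
```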

Data could be treated as an actor, meaning that the data type should be an executable that performs the validation of the datatype.

This way, all languages are interoperable, both typed and untyped.

Possibly, I’m not sure about that.

By “shape” I meant a very basic old-school C-style definition of “type” as “a structure in memory/storage”. In this sense the “shape” is just a series of data fields: very finite, very concrete, not Turing complete, not requiring any arbitrary code to determine. Ie, both “miles” and “kilometre” would have a shape of simply “1 integer field”. (Well, I suppose the shape might be potentially countably infinite if we’re including lists).

As far as I understand it (which isn’t much because type theory remains very opaque to me), “structural typing” is the subdiscipline of type theory that deals with equivalences of this kind of type, as opposed to “nominal typing” where both the shape and the “name” of the type are important. (Not really a name as such, but some kind of unique ID code representing the existence of the type as an entity separate from its component data fields).

It seems to me that the structural vs nominal debate in type theory is roughly equivalent to the object vs relation/set debate, which goes back to, I think, Plato vs Aristotle in philosophy, the Ship of Theseus, and similar essence vs existence debates: is a “thing” merely the collection of its members, or does it have something else, an “identity” which is separate from its members? The “relational database” world thinks that two tables are equal if they have the same columns. The “object” world thinks that two objects with the same fields are still not equal if they are different objects (which it usually accomplishes by either having a hidden field with an Object ID, or having the ID be a memory pointer or something similar).

What I’m trying to suggest here is that structural typing, nice and simple though it might be, really isn’t enough to do all the things we need in even very simple typed programs. What we need is at least nominal typing. Which then implies that, if we want to represent types as runtime entities, they have to be much more like Objects than they are like Sets or relational Tables. A type needs a unique “identity” which is not one of its data members. Which is a bit of a blow since I have misgivings about the whole object-oriented project, and I really like the relational / set-theory argument that “all you need is the components of a structure”. But following how types are practically used in programming today, it seems we really do need an “object identity” as one of the fundamental primitives of any programming system that can represent types as runtime data (which I want).

Anyway, that identity vs structure debate seems to me to be a bit orthogonal to how types are defined. I could see, for instance - and many if not most type systems do this - an algebraic system that defines a type, which is NOT Turing complete but just a bunch of rules. And there is an appeal to this. But I guess I want to not have two separate, parallel, languages inside each programming language: one being the language itself, and the second being its type system. I’d rather that either the language was defined from the type system, or the type system from the language. So having a function that represents the type, is one way of trying to define the type system from the language.

I think most object-oriented languages essentially take the object constructor plus its set of methods as a single function-like thing (back in the day I think it was just one function with the first argument being a “selector”) which maintains the “type identity” of the object. Which is similar to my idea here.

A problem though is that while an object’s constructor+methods, or a “type function” like I’m thinking about (essentially just a constructor for immutable objects), would work for the use-case of ensuring that objects are validated and guaranteed to be of a certain type before we do further computation on them … it still becomes a sort of opaque sealed box for anyone looking at a type while it’s running. And even with access to source code of the type verifying function, it might be undecidable to unravel what decisions are made and why. Yes, this function tells us “is thing X an instance of type T (or class C) or not”… but, it doesn’t tell us how and why it is. We can’t get at why it’s a type T or a class C: what rules or judgements led the type-verifier or the class-compiler to that conclusion. And I think the “why” is pretty important. Especially if we have type or class definitions that change over time (as sadly they do).

More traditional type systems, with a type language that’s separate and parallel from the programming language, can get around this problem by having the “rules” that make up the type system be exposed through some kind of API so that even running code could do some kind of algebraic operations on a type passed to it (eg to understand at least things like “is object X of type T1 a subtype of type T2”). But, I don’t like having two parallel languages like that, it feels like that never ends well. They always get out of sync, you always want to ask more questions of the type system than the designers thought you would.

What I’m perhaps hoping is that if we did have “type functions” as a sort of “simplest thing that could possibly work”… then perhaps we could build, out of that mechanism, something equivalent to “rules” that could be composed and examined at runtime. I think that would work. But I’m not sure that it’s the most elegant thing.

1 Like

Actually the underlying idea of what I’m thinking about here is maybe different from each runtime data object being tagged with a single class/type/constructor function. I’m thinking about objects having multiple type tags. Here’s how I might do it in Javascript:

  1. Have a module Oracle for a “type oracle”. A module, so that it hides its internals. It has a hidden Map, or preferably WeakMap, which maps an object to a function, and then that function to a return code
  2. A “seal” function in Oracle which takes an object and a function, first checks that the object and all its fields are frozen (ie immutable), and if so then runs the function on the object, and adds the object, function and result to the map [If we had proper objects which can’t be arbitrarily mutated by things other than their class, we could relax the immutability requirement.]
  3. A “isSealed” function in Oracle which just tests if an object and function are in the map with non-false result code. “seal” would consult “isSealed” first to prevent duplicating work.
  4. A “hasSeals” function in Oracle which returns an array of all functions which have been run on a given object up to this time and stored in the map.
  5. A “permittedSeal” function in Oracle which judges if a given function can be used as a seal or not. That’s the root of trust of the whole system and would require careful choice in any given environment. Probably should be user-selectable, to a point, but then there should be some way of preventing arbitrary changes beyond that point.

So an object could have many “types” (seals, ie, successful executions of type-judging functions), the list is non-exhaustive, and these are all stored independently from the object itself.

It still might be quite clumsy to construct algebraic rules out of a big pile of type judgements, and there might be a better way to do it, but it feels like a sort of beginning point.
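
Here is a minimal sketch of that Oracle in actual Javascript - illustrative only, assuming deep-frozen plain objects, and with the root of trust from point 5 stubbed out as a simple allow-list (the permitSeal helper is my invention for that stub):

```javascript
// Minimal sketch of the type oracle described above.
const Oracle = (() => {
  const seals = new WeakMap(); // object -> Map(judgeFn -> result);
                               // weak, so sealed objects can still
                               // be garbage-collected
  const trusted = new Set();   // root of trust: permitted judges

  function deepFrozen(obj, seen = new WeakSet()) {
    if (Object(obj) !== obj) return true; // primitives are immutable
    if (seen.has(obj)) return true;       // don't loop on cycles
    seen.add(obj);
    return (
      Object.isFrozen(obj) &&
      Object.values(obj).every((v) => deepFrozen(v, seen))
    );
  }

  return Object.freeze({
    permitSeal(fn) { trusted.add(fn); },           // allow-list stub
    permittedSeal(fn) { return trusted.has(fn); },
    seal(obj, fn) {
      if (!this.permittedSeal(fn)) throw new Error("untrusted judge");
      if (!deepFrozen(obj)) throw new Error("object must be immutable");
      if (this.isSealed(obj, fn)) return seals.get(obj).get(fn); // no duplicate work
      if (!seals.has(obj)) seals.set(obj, new Map());
      const result = fn(obj);
      seals.get(obj).set(fn, result);
      return result;
    },
    isSealed(obj, fn) {
      const m = seals.get(obj);
      return Boolean(m && m.get(fn)); // non-false result required
    },
    hasSeals(obj) {
      const m = seals.get(obj);
      return m ? [...m.keys()] : [];
    },
  });
})();

// Example use:
const isMiles = (x) => typeof x.value === "number" && x.unit === "miles";
Oracle.permitSeal(isMiles);
const d = Object.freeze({ value: 10, unit: "miles" });
Oracle.seal(d, isMiles); // runs the judge once, records the seal
Oracle.hasSeals(d);      // [isMiles] - the non-exhaustive list of seals
```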

If the object being tested is a Javascript function, then the type-judging function would probably actually need to take source code and a compilation environment and then compile the function itself and seal the resulting binary function as compiled by it, because there’s otherwise not much we can do with just a compiled function. That’s where we might want a language of our own that’s easier to do typelike algebraic manipulations with.

Does Lua have freezing/immutability, weak maps and a module mechanism to prevent anyone tampering with a global object? (Just a function/closure namespace should be okay for that). If so then Lua should be good for doing this as well. We could get by without the weak map, but then we’d need to explicitly discard sealed objects that have gone out of scope, and it’s nicer if the language’s garbage collector can do this automatically. Without immutability though, the whole thing pretty much falls apart.

1 Like

I see (via Hacker News today) that back in 2016, Laurence Tratt called this feature “fine-grained language composition”, which is a better term than “smooth gradients”.

What I was trying to suggest with the term “gradient” is a handwavy aura of AI optimisation jargon: ie, that unless the gradient of the programming fitness function you’re working on is smooth, there’s a danger of our programming environments becoming trapped in locally optimal but globally pessimal configurations of the solution space. And that we can’t jump out of these local optima because we’d have to change too many things at once, some of which (like compilers) the average user does not have access to.

I believe this is pretty much what has happened. We’re using poorly-fitting software development tools (ie, languages) because it’s hard to change these tools; much harder than it should be. This is surprising to me, given that “making better tools” is the entire purpose of the software industry, and “reinvent everything all the time” is the current method that the software industry has settled on to achieve its purpose. And yet, that industry seems to be still somehow failing either to reinvent its own tools, or to make them better when it does reinvent them, or both.

The idea of a gradient of optimisation that’s not itself in an optimal state for optimising, seems like a possible explanation for this feeling of stuck-ness across the industry.

But “fine-grained” is a more precise term.

Laurence Tratt’s 2016 article: Laurence Tratt: Fine-grained Language Composition

His blog archive including posts from 2023: Laurence Tratt: All blog posts

The Hacker News thread (with nothing in it but for reference): Fine-grained language composition | Hacker News

2 Likes

The language composition work from Laurie’s group is quite relevant indeed, thanks for reminding me of that! :smile:

Along with fine-grained composition at the language level, you may also be interested in Amal Ahmed and Daniel Patterson’s work on “linking types” for multi-language software, as a way of working towards meaningful semantics in the combined program:

You may also be interested in a related multi-language verification approach that appeared earlier this year:

2 Likes

You had me at “Forth, Smalltalk and Lisp”.
I’ve recently heard one too many introductory explanations saying “compilers and interpreters are both ways to translate code you write to the machine”, sometimes mumbling (e.g. “python is considered interpreted but it also compiles to .pyc”), and technically, yes, the line is blurry; I mean, the V8 JS interpreter now contains as many as three tiers of JIT compilers(!)

I’ve been struggling for words: how do I even begin to explain this, so that people who have only heard the CS tower-of-compilers narrative would realize there is a large cultural gap in goals?
The best I’ve come up with so far is:

Compilers translate your code down to a language closer to the machine;
interpreters simulate a new, different machine, raising it closer to the level of your code.

and then I’d talk about how compilers love “erasure”. Type erasure, name erasure, … source erasure, even language erasure.
If your goal is to create a program in the next language (e.g. create an executable binary), then all those things are good to discard. It’s a feature that some refactors like renaming a function provably have no effect and emit byte-for-byte the same program.
It’s a feature that C++ templates can emit the exact same program as I could write in “bare C++” or in C with function pointers.

But to me, as a lover of both interpreters/runtimes and Open Source, those are all bugs, because I consider what happens when I click “inspect”, or when the program dumps a stack trace, to be part of my app’s behavior. Renaming a function has an effect: it hopefully makes my program more approachable to the end user whom I invite to open the hood.

5 Likes

That’s a very good way of saying what this itch is about! Yes, a program’s behaviour must include how we alter it (and debug it while it’s running), not just what it does by itself as a sealed box.

It strikes me that from this perspective, a program is a piece of language encoding knowledge, not just a mechanism performing a task.

As language, a program is there to communicate knowledge to the humans who both use and alter it. The knowledge is encoded not just in its output or its top-level behaviour, but in its structure as well. Ideally most of its parts have a closer relationship to the structure of the knowledge that it’s encoding than to the needs of the machine performing that knowledge.

(Although it will really be a balance of both. But ideally I think our computing machinery should move in the direction of being simple rather than complex; easily and safely adaptable to multiple tasks. And we should explicitly prioritise adaptability and safety over brute performance. It frustrates me deeply when we keep optimising computing machinery to do one tiny niche task well and most other tasks poorly. I feel like that’s a recipe for exponential complexity, fast but generally untrustworthy automation, and resulting social collapse.)

One way these two perspectives (language vs mechanism) diverge is that, if a program is just a mechanism, and its only interface with users is the task it does, then it makes sense to seal it up and not let it be modified by non-experts in case they damage something. But if a program is knowledge for humans about the task it does, then it makes sense to design it in such a way that most of it can be taken apart and reused elsewhere, because knowledge thrives on communication and understanding and not just rote performance.

At least, that’s what I want from my software: I want most of it to be as context-free and understandable as possible. And a lot of my frustrations with modern software stacks comes from everything being too tightly bound to contexts of use that aren’t my actual needs. Although my world is becoming more and more run by automation, it seems to be becoming harder and harder to get any of that automation to respect my needs rather than those set by some distant programmers in a different social class in a different country.

In the 2000s, I thought that just publishing source code would mean software would become adaptable to the needs of its users. That’s happened a little bit, but not enough. And I think a large part of the reason is that the source code, even though we have it, is still written in languages that aren’t themselves designed with the needs of the user learning and altering it in mind. That is, they’re still designed as mechanisms and not really as languages. Or not as much of languages as they could be. They’re languages that focus a lot on verbs rather than nouns; and when they do have nouns, those nouns are very tightly tied to very specific machines that very quickly go out of date… much faster, at least, than the knowledge they’re describing goes out of date.

To come back to your statement: The idea of the behaviour of a program being how it acts at runtime, as seen from the debug interface (and that internal behaviour being I think a reflection of its linguistic structure as knowledge-representation), I think that’s important and interesting. Our user-experience design doesn’t think about debugging or alteration as being user-level concerns at all, and it should.

My annoyance with “compilation” also has a lot to do with the still persistent, pre-OOP idea of a program as being something that “runs” and then you “stop” it and edit it and compile it and then “run” it again. Software shouldn’t ever stop running, in my opinion. But there are still so few languages or systems designed so that “the software never stops running” is ever actually true. Even the Web still has the idea of the “page load”, after which the Javascript runs as a unit until the next page load. We would need a much more fine-grained concept of running, and that would mean the whole Web security framework of “domains” is not very helpful. (And in fact it really, really isn’t helpful, when you’re trying to write local-filesystem-hosted Web apps.)

2 Likes

The quest for human-readable executable knowledge has been the backbone of my work in scientific computing over the last 30 years. Back then, I believed that replacing Fortran by Python would be sufficient, all the more since Python was attached to Open Source culture from the beginning. Didn’t work. On the contrary, my Fortran code from 30 years ago is more useful as executable knowledge than my later Python code which doesn’t work any more.

My current belief is that the tool culture around code is actually an obstacle. Everyone cites Abelson’s quote that “Programs must be written for people to read, and only incidentally for machines to execute.” But nobody has so far succeeded in removing the contradictions between those two goals. One of them, much discussed, is the tension between readability and performance. But there is another tension, not discussed so far, between the different time evolutions of knowledge and tools. Knowledge accretes and consolidates, requiring continuous integration and occasional reformulation, but never “starting from scratch” or “breaking backwards compatibility” - both of which do matter for tools in order to keep their evolution manageable.

More fundamentally, the problem I see is that formal systems, including computer programs, cannot hold large knowledge structures. The “database” of human knowledge is inevitably informal, such that piecing together different parts always requires contextual information.

Which is why my current focus is on Digital Scientific Notations, designed to express small formal systems that are specific to a domain or even a single scientific study. Combining them for a specific purpose is a task for human scientists on a case-by-case basis (though LLMs will likely become good assistants one day).

1 Like

@khinsen It seems I will have to read your work when I have the time.

For me the problem is the social context of programming. The social forces that produce code are only interested in performance. They are only interested in code that is readable by experts, their employees.

It is for this reason that I do not consider malleable software as a technical problem, but rather a social one.

If the social forces wanted readable, malleable software the technical solution would have already been found.

So we return back to my idea that malleable readable software has a recurring cost, and we as a society need to pay it.

I agree, and I see malleability as an issue orthogonal to tool vs. knowledge representation. I happen to care about both of them.

At the risk of diverging further from this thread: For me the social problem boils down to: who are my people? I tend to start with the assumption of pluralism. There’s always going to be lots of groups of people, often with incommensurable values, working at cross purposes. And it’s good on balance to have this diversity compared to some totalizing framework.

So yes, there are social forces that are ruled by profit, or that only care about readability for insiders. My worldview considers these to be self-limiting forces. So “my people” are those concerned with a longer-term time horizon. That’s where malleability will come into its own, I think. This is the bet I’m making.

2 Likes

Totalizing frameworks are unfortunately what many people assume to be the basis for discussion in tech, no matter if they are arguing for or against any such framework. People argue for or against static type checking with the same fervor as for or against capitalism, always ignoring context.