So here’s a half-baked idea which has haunted me since around 2006. I don’t have much hope that I will ever get it fully baked, but I want to put it out there in case there’s something which can be salvaged.
I was thinking at that time about the overlap between the Haskell idea of “curried functions” (ie functions of one argument) and the object-oriented idea of “sending a message”, and it seemed to me that given the appropriate language, you could pretty much get both at once. Ie:
foo bar baz
can mean both/either “send message foo to object bar, then send message baz to the result” or “apply function foo to the value ‘bar’, then apply the resulting function to the value ‘baz’”.
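(To make the curried reading concrete, here’s a minimal Haskell sketch; foo, bar and baz are just placeholder names, nothing real:)

    -- "foo bar baz" parses as "(foo bar) baz": foo takes one argument and
    -- returns another one-argument function, which consumes the next word.
    foo :: String -> (String -> String)
    foo bar = \baz -> bar ++ "/" ++ baz

    -- foo "bar" "baz"  ==>  "bar/baz"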
So far so good. There isn’t, yet, a language that quite combines the curried-function idea of Haskell with the message-sending of Smalltalk, and does it without a compiler and a type system, but it seems like it shouldn’t be too hard to whip one up.
But the next idea was: okay, so then also the operation of “send message to object” or “apply function to value” also looks like the idea of “traverse a graph or node structure”. Ie:
“from a root, go ‘foo’, then go ‘bar’, then go ‘baz’”
which is much the same as a filesystem or object path, as in file “foo/bar/baz”, or “foo.bar.baz”.
Still not too strange. Just like Unix and also like JSON. So imagine that we have this one big massive unified “data-space”, which is a huge/infinite set of nodes (imaginary/virtual) which we traverse by either “following labelled arrows on the graph” or “moving into named subdirectories” or “traversing JSON-style object keys” or “sending messages to objects”, or “applying functions to strings/atoms/keywords”. All just the same operation: message-send/apply-eval/move-viewpoint.
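One way I can sketch that single operation is as a toy in Haskell (the Node type and the walk function here are my own invention for illustration, not any existing system): a node is just a function from a label to another node, so “cd”, “message-send” and “apply” are literally the same step.

    -- A node answers a label with another node (or nothing).
    newtype Node = Node (String -> Maybe Node)

    -- A finite "directory" node built from a list of named children.
    dir :: [(String, Node)] -> Node
    dir kids = Node (\k -> lookup k kids)

    leaf :: Node
    leaf = Node (const Nothing)

    -- Following a path is just repeated application / message-sending.
    walk :: Node -> [String] -> Maybe Node
    walk n []            = Just n
    walk (Node f) (k:ks) = f k >>= \n' -> walk n' ks

    -- A toy corner of the space: walk root ["foo","bar","baz"] succeeds,
    -- while walk root ["foo","nope"] is Nothing.
    root :: Node
    root = dir [("foo", dir [("bar", dir [("baz", leaf)])])]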
The next idea is that, obviously, some of these “nodes” are computed, it’s not just a finite data tree structure. That requires coming up with a language/syntax/semantics… something in the way of Haskell or Lisp or Smalltalk… a bit trickier, but not that hard, right? Just a small matter of definitions…
(figuring out which small definitions would be needed took me nearly 20 years and I got lost multiple times, and remain lost. It turns out there are an almost infinite number of ways to construct a Turing-complete language, so you can’t do an exhaustive search. But a lot of those ways don’t work.)
The next idea (and these ideas were all entwined, coming at me at once) was a bit harder. This data structure would be immutable and pure-functional, of course, so we’d need a way to model input/output and dynamic data. “Functional reactive programming” and React hadn’t quite hit yet, but “dataflow” was in the air, so I figured: yes, something like that. The nodes of this tree/graph would be linked by pure functions, but some of those nodes could change over time. Ie, they’d be what the reactive paradigm calls “signals” or suchlike. You’d “attach” to one by referencing it, which would add you to a subscribe list, so you’d still be a pure function of that value, but each time the value changed, you’d be reevaluated. So we’d need a way to cache and incrementally compute all of these so that you weren’t re-evaluating the whole tree every clock tick. Oh, and potentially store a bit of state at each node, so some primitive like “access last value” or Elm’s “fold over time”. Oh, and make sure that everything was synchronised. And find a way to somehow “freeze a value” if you actually wanted just its value at a certain point in time (triggered how?). And make sure to garbage-collect references whenever you moved out of scope. And add them in again if you moved back into scope, without losing any of the signal’s state. Simple, right? A week’s work at most.
(I got completely stuck here, bouncing hard off the internals of Elm and React even when those were built. Just couldn’t get my head into what they were doing and how.)
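The “fold over time” part, at least, is easy to sketch. Here’s a toy Haskell model (my own drastic simplification, nothing like what Elm or React actually do internally) where a signal is just the list of values it has emitted so far:

    -- A signal modelled as its successive values, oldest first.
    type Signal a = [a]

    -- State at a node: fold the signal's history into an accumulator,
    -- emitting the accumulated value at each step.
    foldp :: (a -> s -> s) -> s -> Signal a -> Signal s
    foldp step s0 = tail . scanl (flip step) s0

    -- A click counter: foldp (\_ n -> n + 1) 0 [(), (), ()]  ==>  [1,2,3]

Everything the paragraph above actually worries about (caching, incremental recomputation, synchronisation, subscriptions coming and going with scope) is exactly what this toy leaves out.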
But that wasn’t the end of it. The next part of the idea was that, well, of course the data in this big tree/graph wouldn’t just be numbers; it would be database-type stuff, or logical assertions like in Prolog, Datalog or RDF. Ie, it would be something like a “conceptual map” that could represent arbitrary relationships between things.
(I was thinking of Inform 7 here, and how it does adventure game modelling: an I7 file starts out as what looks like a bunch of logical assertions in faux-English. “The house is north of the park.” This implies that “the park is south of the house”. I figured, just encode this as something like (house north-of park). But then if you looked for, or “moved your view to”, the space point labelled (park south-of), you’d find a (computed) value there called “house”. Obviously, you’d like to use this for not just adventure games but for personal note-taking, databases, defining type or class relationships in programming languages, etc…)
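A toy version of that derived-fact trick, again in Haskell (my own encoding, with purely illustrative relation names):

    -- Facts are (subject, relation, object) triples.
    type Fact = (String, String, String)

    inverses :: [(String, String)]
    inverses = [("north-of", "south-of"), ("south-of", "north-of")]

    -- Every asserted fact also yields its inverse as a computed fact.
    derived :: [Fact] -> [Fact]
    derived fs = fs ++ [ (o, r', s) | (s, r, o) <- fs
                                    , Just r' <- [lookup r inverses] ]

    -- "Moving the view to (park south-of)" becomes a query over derived facts:
    query :: String -> String -> [Fact] -> [String]
    query subj rel fs = [ o | (s, r, o) <- derived fs, s == subj, r == rel ]

    -- query "park" "south-of" [("house", "north-of", "park")]  ==>  ["house"]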
So each node could conceivably be an infinite multidimensional table of recursive arbitrary sequences of lists and atoms…
So just implement Prolog over Lisp, of course, and clean up its semantics, a weekend’s work… well, since nodes/predicates/functions could be so big, you’d obviously need to just do something really simple, like make them lazy lists or something… another day’s work…
(got stuck for nearly 20 years there, again)
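The lazy-list idea is at least easy to picture in Haskell, where lists are lazy anyway: a predicate with unboundedly many answers is just an infinite list of solutions, and a query takes only as many as it needs. (Another toy, nothing like a real Prolog:)

    -- An "infinite predicate": all multiples of n, as a lazy answer stream.
    multiplesOf :: Integer -> [Integer]
    multiplesOf n = [ k | k <- [0 ..], k `mod` n == 0 ]

    -- take 5 (multiplesOf 3)  ==>  [0,3,6,9,12]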
And next, “concatenative languages” like Forth (and Joy and Factor) were having a moment in 2006, and Forth words operating in a pipeline looked very much like “send a string of messages to an object” and also “a function operating on a string of keywords”, so I thought, yeah, it’ll probably be simpler to just whip up a concatenative language for this; then I won’t have to worry about how variable semantics work…
(20 years lost there)
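Still, the concatenative reading fits the same shape. A minimal Haskell sketch (my own toy, ignoring everything that makes Forth, Joy or Factor interesting): a word is a function from stack to stack, and a program is just its words applied in sequence, which reads exactly like a string of message-sends.

    type Stack = [Int]
    type Word' = Stack -> Stack   -- Word' because Prelude already has a Word type

    push :: Int -> Word'
    push n s = n : s

    add :: Word'
    add (a : b : s) = (a + b) : s
    add s           = s

    -- Running a program threads the stack through each word in turn.
    run :: [Word'] -> Stack
    run = foldl (flip ($)) []

    -- run [push 1, push 2, add]  ==>  [3]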
Anyway, at the end of this, you’d have something like a completely virtual/imaginary/hyperlinked “cyberspace” of computed data that you could “cruise through” by creating “views” (instantiations) of it, and it would be functional, but also object-oriented, but also a Prolog-like logical database where you could “infer nodes” by making a query. And you could model networks or windowed desktops or arbitrary communicating systems with it. It wouldn’t be quite the same as OOP because messages would flow only in one direction between nodes, but there would be a little bit of state at each node. And it would all be super-simple and self-evident once you saw it.
Well, that was the thought in 2006. It was a nice thought. It is very blurry around the edges and isn’t anywhere near coming into reality. Except that I feel like “actors” are basically one glimpse of “nodes”, and the dominance of React in the Web frontend space (yet its complete absence of influence anywhere else in computing) is another glimpse of how nodes might operate. Dialog took my initial inspiration, Inform 7, and did exactly what I wanted to see in terms of “doing that but in Prolog” - but still very closely bound to the Z-machine and not usable as a general-purpose programming language. Kanren gives some glimpse as to how to encode Prolog-like searches into pure functions, although a frustratingly awkward and incomplete glimpse. Forth (Dusk, Retro, Uxntal) in 2024 is giving concatenative semantics another moment like Factor did in 2006, but Forth itself with its open RAM model isn’t quite suitable for this node-based virtual reactive logical cruisable data-base-space idea. SUSN is the most I’ve been able to salvage so far just in terms of “a recursive space of personal text-based data” and it’s not quite even the right shape let alone having any computing capability.
I wonder if there is anything to it.