Well, part of this is my personal axe to grind, having occasionally played with Prolog and Prolog derivatives and really liking part of what I saw, but only part of it. And logic programming isn’t necessarily the best or only way forward. But the part of LP that I like is that it seems to unify three otherwise separate things - “functions”, “types”, and “data structures” - into a single concept, the “predicate/relation”. Given that we seem to be facing a crisis of accidental complexity in programming, a major foundation-level simplification like that seems like a fairly important breakthrough to me.
The current trending logic programming “language” - miniKanren or microKanren - isn’t even a language, but rather an algorithm that is usually implemented in a functional/procedural/OOP host language. It’s a very nice thing to exist, and I think the Kanren family of algorithms has made some important steps forward, but I’d still like a Kanren derivative to be its own fully self-hosting language.
Anyway, what I mean by “parseable, iterable network” is this:
In Prolog you define a predicate as a series of assertions - either data literals or function-like rules - and you can intermingle the two freely, e.g.:
region(R, City) :- region(R, State), city(City, State).
So now you can treat “region” as a mixture of a data structure/table, a function testing membership, and also a computed query / simulated table that calculates and returns (or constructs) data on the fly, as it’s queried.
region(westcoast, nyc) ← if that’s an assertion, then it’s just a dumb data literal. But if it’s a query, then it’s a query that’s currently not true (the “region” table/predicate contains neither those values nor a rule that would let you infer them), so the query “fails” - which is either like exiting your whole function or like returning false, depending on how you handle it.
region(westcoast, Place) ← if that’s a query, then the variable Place becomes bound to an atom found in the region table/predicate, and the system stores state somewhere on the stack/heap so it can backtrack if a later expression using that binding fails. So now you have something like an iterator or a generator function / coroutine. Every function can return multiple (and potentially infinite) values; that’s automatically baked in.
region(R,Place) ← now you have an iterator/generator/coroutine with two free variables.
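To make the three call shapes above concrete, here’s a toy sketch in Python (not real Prolog, and with no real unification - the region/city facts and the None-means-unbound convention are invented for illustration):

```python
# Toy sketch of one Prolog-style predicate acting as a table, a
# membership test, and a generator, depending on which arguments are
# bound. Facts are invented; None stands in for an unbound variable.

REGION_FACTS = [("westcoast", "california"), ("westcoast", "oregon"),
                ("eastcoast", "newyork")]
CITY_FACTS = [("la", "california"), ("portland", "oregon"),
              ("nyc", "newyork")]

def region(r=None, place=None):
    """Yield every (R, Place) pair consistent with the bound arguments."""
    # Plain facts: region(R, State).
    for (fr, fs) in REGION_FACTS:
        if (r is None or r == fr) and (place is None or place == fs):
            yield (fr, fs)
    # Rule: region(R, City) :- region(R, State), city(City, State).
    for (fr, fs) in REGION_FACTS:
        if r is not None and r != fr:
            continue
        for (c, s) in CITY_FACTS:
            if s == fs and (place is None or place == c):
                yield (fr, c)

# region("westcoast", "nyc")  -> yields nothing: the query "fails"
# region("westcoast", None)   -> generates westcoast states and cities
# region(None, None)          -> enumerates every (R, Place) pair
```

The same single definition answers all three query shapes; Prolog gets the argument-direction flexibility from unification rather than from explicit None checks.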
So where in functional programming or even OO programming you’d probably implement “region” as three or four very different literal data structures / abstract data types / objects / functions, here we just have one thing, a predicate, that can sort of shapeshift into being a definition, an evaluator, a query, or just a dumb set of data, depending on the circumstances.
It’s a very powerful abstraction in my opinion, and would be really great for situations like personal databases, configuration files, build systems, a lot of things that we use a lot of incompatible ad-hoc config/code languages for at the moment. But there’s a cost. The logic programming mindset is tricky to connect up to more ordinary code because of all the backtracking, and the variables having to be carried around everywhere. You never just have a function that returns a simple value; you always have a bit of a “returning a gorilla holding the banana and the whole jungle” problem, in that you’re returning “a set of variable mappings that somewhere hold your value, plus a call stack to generate a new set of mappings in case those don’t work for you”.
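That “gorilla plus jungle” return value is easy to see in a microKanren-style core, where a goal is a function from a substitution (a variable mapping) to a list of substitutions. A stripped-down sketch (real miniKanren also adds occurs-checks, reification, and lazy interleaved streams):

```python
# Stripped-down microKanren-style core. A "goal" maps one substitution
# (dict of variable -> term) to a list of candidate substitutions; the
# caller always gets variable mappings back, never a bare value.

def var(name):
    return ("var", name)

def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == "var"

def walk(t, s):
    """Follow bindings in substitution s until a value (or a free var)."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Return an extended substitution making a == b, or None on failure."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    return None

def eq(a, b):
    """Goal: succeed when a unifies with b."""
    def goal(s):
        s2 = unify(a, b, s)
        return [s2] if s2 is not None else []
    return goal

def disj(g1, g2):
    """Goal: either branch may succeed (a chronological 'or')."""
    return lambda s: list(g1(s)) + list(g2(s))

def conj(g1, g2):
    """Goal: thread each solution of g1 through g2 (an 'and')."""
    return lambda s: [s2 for s1 in g1(s) for s2 in g2(s1)]

q = var("q")
solutions = disj(eq(q, "california"), eq(q, "oregon"))({})
# solutions is a list of substitutions, not a plain value:
# [{('var', 'q'): 'california'}, {('var', 'q'): 'oregon'}]
```

Even the “call stack for more answers” is visible here in miniature: the list of alternative substitutions plays the role of Prolog’s backtracking state.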
If this could be simplified and streamlined a bit, I feel like we might have something really nice that could meet our config, data and programming needs for the next 100 years. And could potentially work to define systems both in the small and in the large. But it’s not quite there yet.
For example (to bring this back to the purpose of this thread), defining how a “predicate call” would/should work between two distributed systems: that’s a tricky part. We could probably think of it like an OOP message send, but since every predicate call creates backtracking state that can later be resumed, it’s likely that every message send would have to create a state object on the machine it’s sent to, and return not a value but a reference to that state object, so that the call could be resumed at any point. That’s a lot of object creation and persistence. And how does garbage collection work? Does a server have to hold on to state objects forever, or can it silently let them go at some point? Does the whole thing become an instant DDoS machine?
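As a thought experiment, the “state object per call” version might look like this hypothetical server sketch (all names invented, nothing here is a real protocol), where every remote predicate call parks a live generator and hands back a handle - which makes the garbage-collection question concrete:

```python
import uuid

# Hypothetical sketch of the "state object per call" protocol: each
# remote predicate call parks a paused generator on the server and
# returns a handle the client can use to resume (i.e. backtrack).
# Its only purpose is to make the state-retention cost visible.

class PredicateServer:
    def __init__(self):
        self._open_calls = {}  # handle -> paused generator

    def call(self, solutions):
        """Start a predicate call; return (handle, first answer)."""
        handle = str(uuid.uuid4())
        self._open_calls[handle] = iter(solutions)
        return handle, self.resume(handle)

    def resume(self, handle):
        """Backtrack: ask for the next answer, or None when exhausted."""
        gen = self._open_calls.get(handle)
        if gen is None:
            return None  # state already collected (or never existed)
        try:
            return next(gen)
        except StopIteration:
            del self._open_calls[handle]  # exhausted: safe to collect
            return None
```

The open question from above is exactly `_open_calls`: every call a client abandons mid-iteration stays resident until the server decides - by timeout, quota, or some other policy - to silently drop it.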
tldr: In logic programming, the equivalent of a “function call” doesn’t return simple values but rather a metadata structure containing statements about those values (i.e., variable mappings, and a backtracking stack). And that metadata structure is the thing that can be “parsed and iterated” in a way that a simple function return value can’t be. That metadata-ness is what gives 1970s-era Prolog its startling ability to do very complex things with very simple definitions. But perhaps it’s just shoved the complexity under the carpet and the total amount of complexity remains the same. Still, shoving complexity around is what we do in programming, so I think it’s worthwhile to look at different ways of doing it.
Edit: Ok, on reflection, a distributed predicate call wouldn’t need to create a state object on the server that could be resumed later. It could instead return something like source or object code for a query that would return the remaining values. But that then hits another question of privacy/security: we aren’t really in the habit of understanding functions that routinely return, across the wire, parts of their own code and/or call stack. We like our functions to be black boxes that conceal whatever is within, and exposing the runtime internals to the world makes us nervous. This could probably still be done securely, but it would need serious thought, because it violates a lot of our standard intuitions.
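A minimal sketch of that stateless alternative (everything here is invented for illustration): the server returns each answer together with a residual query - plain data the client can send back to resume enumeration. Here the residual is crudely encoded as the original query plus an offset, rather than real query/continuation code:

```python
# Stateless alternative: no server-side state object. Each response
# carries the answer plus a "residual query" (plain data) that resumes
# the enumeration when sent back. The (query, skip) encoding is a crude
# stand-in for returning actual source/object code for the remainder.

FACTS = [("westcoast", "california"), ("westcoast", "oregon"),
         ("eastcoast", "newyork")]

def call(query, skip=0):
    """Answer a (region, place) query, None as wildcard; return (answer, residual)."""
    r, place = query
    matches = [f for f in FACTS
               if (r is None or f[0] == r) and (place is None or f[1] == place)]
    if skip >= len(matches):
        return None, None                    # no more answers: the call fails
    return matches[skip], (query, skip + 1)  # answer + residual query

answer, residual = call(("westcoast", None))
# answer == ("westcoast", "california"); sending residual back...
answer2, residual2 = call(*residual)
# ...resumes the enumeration at ("westcoast", "oregon")
```

Even this toy version hints at the privacy issue above: the residual query exposes the shape of the server’s enumeration to the client, where a black-box function would reveal nothing.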