Point-Free Logic Programming

This is one part of an idea that’s been haunting me for around 18 years now: can we take the logic programming model of Prolog or Kanren and remove the concept of the “logic variable” from it, ending up with a fully point-free logic programming language?

The advantage of logic programming for malleable software is that it is a very simple way of getting a personal database that you can not just search but run arbitrary inferences over. Typing data into a logic database can be ridiculously easy, though it is less easy than it ought to be, since Prolog dates to around 1970 and its syntax and semantics got stuck in that era. In particular, syntactic limitations make Prolog basically unusable in its native form as a note-taking database. You could write applications in Prolog, but that’s not the point: I want a system with as little friction as possible between the programming language and the user.

Logic programming is particularly adaptable to adventure games: the language “Dialog” by Linus Åkesson uses it to spectacular effect. The programs created by Dialog can be compiled to very small systems (the 1970s-era Infocom 8-bit Z-machine). Dialog does have some constraints, however, which, while they work very well together, make it unsuitable as a general-purpose knowledge representation or desktop scripting language. And so the search for better alternatives goes on.

Around 18 years ago, RDF and the Semantic Web were briefly in fashion (Mozilla and/or Firefox was pushing RDF hard), and I was looking at RDF and wondering whether it could be used for game programming, for a system like Dialog which I never got around to building. The answer, as with Prolog, was “technically yes, but not in its natural form, no”.

Concatenative Programming was also having a moment with Joy and Factor. It felt natural to ask if there could be a “Joy for Prolog” that might allow making logic programming simpler. If we could sidestep all the complex semantics around logic variables by just talking about entire relations at once, could we open a whole new door to user-sculptable programming?

(See, if you have relations, I think you can express both functions and types in them. That would mean you can get a whole new level of higher-order programming.)

I tinkered with binary relations a little bit but couldn’t make much progress. Binary relations are essentially a generalisation of functions: instead of each argument having exactly one value, it can have zero, one, or multiple values. And you can go both ways, from the “value” to the “argument”, which is not technically impossible in the mathematical theory of functions, but many programming theories are based on the “computation” of functions, which is usually considered as going only one way, from argument to value.
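As a concrete sketch of that idea: a binary relation can be represented as a plain set of pairs and queried in either direction. The names and encoding here are my own, not from any particular system:

```python
# A binary relation as a set of (x, y) pairs relating a child to a
# parent. Unlike a function, querying in either direction may yield
# zero, one, or many results.
parent = {("dave", "carol"), ("dave", "bob"), ("carol", "erin")}

def image(rel, x):
    """All y such that (x, y) is in the relation (forward direction)."""
    return {b for (a, b) in rel if a == x}

def preimage(rel, y):
    """All x such that (x, y) is in the relation (backward direction)."""
    return {a for (a, b) in rel if b == y}

print(image(parent, "dave"))      # dave's parents: carol and bob
print(preimage(parent, "carol"))  # carol's children: dave
print(image(parent, "erin"))      # erin's parents: none recorded
```

The same structure answers both “who are X’s parents?” and “whose parent is X?”, with no privileged direction of computation.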

In logic programming, and with binary relations, you can go both ways, and the relationship is many-to-many; both of those things mean the “computation” model gets complicated. In Prolog and Kanren, for instance, what is being done is something called “unification”, which is not at all the same thing as the “evaluation” or “reduction” done in the functional programming world. The two worlds might be deeply related, very probably are, but they are expressed in such different notation, terminology and habits of mind that it is very hard to port ideas cleanly between them. E.g., functional programming is based on “term rewriting”, while logic programming for some reason is not. (It should be; it could be; it currently isn’t.)
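For a taste of what unification actually does, here is a toy first-order unifier, in my own encoding (variables are strings starting with ?, compound terms are tuples); note how it produces a substitution rather than reducing a term to a value:

```python
def walk(term, subst):
    """Follow variable bindings to the term's current value."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return a substitution making a and b equal, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unifying parent(?x, bob) with parent(alice, ?y) binds both variables:
print(unify(("parent", "?x", "bob"), ("parent", "alice", "?y"), {}))
# {'?x': 'alice', '?y': 'bob'}
```

Information flows into whichever side has the variables, which is exactly the two-way behaviour that ordinary one-way evaluation doesn’t give you.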

Binary relations felt (and still feel) like something very important and powerful. But without variables, it felt like it would be very easy to get exploding exponential complexity in searches.

Today I noticed that Kragen Javier Sitaker apparently had a similar epiphany about binary relations in 2018. I wonder whether it could be developed into a working system.


Binate is a particularly terse way of writing down relations by algebraically composing them from more primitive, binary relations. It can only express binary relations, but it can express binary relations among tuples, so that isn’t actually a limitation.

In particular, you write relations in terms of more primitive relations using:

- conjunction or intersection, written ,
- union, written ;
- converse or inverse, written ~
- composition, written as concatenation
- transitive closure, written *
- either negation ! or set subtraction -
- a form of N-ary Cartesian product: {x: a, y: b, ...} produces a relation from the intersection of the domains of a and b etc. to a set of tuples which are the domain of the new relations x and y etc., whose codomains are the codomains of a and b etc. respectively, constructed in the only reasonable way.

Literals are treated as relations from the entire universe to that literal. This turns out to be sufficient to express anything you can express in Codd’s N-ary relational algebra or relational calculus.
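Over finite relations, a few of those combinators have a direct executable reading as operations on sets of pairs. This is my own encoding, not Binate itself; conjunction and union are just set intersection and union, and I am assuming * is (reflexive-)transitive closure:

```python
# Finite binary relations as Python sets of pairs: a rough executable
# reading of a few of the combinators (my own encoding, not Binate).
def compose(r, s):
    """Composition (written as plain concatenation in Binate)."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def converse(r):
    """Converse (Binate's ~): swap the direction of every pair."""
    return {(y, x) for (x, y) in r}

def transitive_closure(r):
    """r, plus r composed with itself any number of times."""
    result = set(r)
    while True:
        step = compose(result, r)
        if step <= result:
            return result
        result |= step

# parent relates a child to a parent. On a finite relation, and with *
# read as closure, "ancestor = parent parent*" works out to the
# transitive closure of parent.
parent = {("carol", "bob"), ("dave", "carol")}
ancestor = transitive_closure(parent)
print(("dave", "bob") in ancestor)  # True: grandparent via carol
```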

When using Prolog semantics, the major advantage of Binate over Prolog is that programs are dramatically terser because they are point-free. For example:

ancestor = parent parent*.
father = parent, "male" ~sex.
sibling = parent ~parent.
cousin = parent sibling ~parent.
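To get a feel for how those definitions behave, here is a tiny test database evaluated with plain Python sets. This is my own sketch, assuming parent relates a child to a parent; it is not an actual Binate implementation, and the intersections are approximated with comprehensions:

```python
def compose(r, s):
    """Relational composition: concatenation in Binate notation."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def converse(r):
    """Binate's ~: swap the direction of every pair."""
    return {(y, x) for (x, y) in r}

# parent relates a child to a parent; sex relates a person to a sex.
parent = {("carol", "bob"), ("carol", "alice"),
          ("erin", "bob"), ("erin", "alice"),
          ("dave", "carol"), ("fred", "erin")}
sex = {("bob", "male"), ("alice", "female")}

# father = parent, "male" ~sex: keep the parent pairs whose second
# element has sex male.
males = {p for (p, s) in sex if s == "male"}
father = {(c, p) for (c, p) in parent if p in males}

# sibling = parent ~parent: two people who share a parent. Note this
# version also relates everyone with a parent to themselves; a real
# definition would subtract the identity relation.
sibling = compose(parent, converse(parent))

# cousin = parent sibling ~parent: up to a parent, across to that
# parent's sibling, down to the sibling's child.
cousin = compose(compose(parent, sibling), converse(parent))

print(("carol", "bob") in father)   # True
print(("carol", "erin") in sibling) # True
print(("dave", "fred") in cousin)   # True
```

Four one-line definitions and a handful of facts already answer genuinely derived queries, which is the terseness the point-free style is buying.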

I want to believe, but I need to see this actually running to get a sense how it would work.

More on Binate here:


Binate: A very compact database query language based on binary relations