Thanks, this is really lovely - I’m touched you went to the trouble to map this out! It feels like we need some dedicated scheme for tracking these - perhaps in the form of a “substrate” or so? : P
Here’s where I think the distinction is important. Indeed - at the point the OP finds the requirement, e.g. for the underline, they have no idea whether they want to change words or add words. This can only be decided once they find the words responsible for the underline and see what form they take - which for a start requires there to be some intelligible journey they can take from what they see on the screen towards those words. At that point they will likely form an opinion about which model they prefer - and in particular, if they find the words take the form of anything like a 1600-line TypeScript file, they will likely want model 2. However, unless the artefact they interact with was already built on some kind of substrate with coordinates (in addition to their having some ability to rebuild/redeploy it, thanks to its being open source etc. etc.), model 2 is not going to be available.
I see this as almost a political kind of distinction, amounting to Freedom of Association for code - the user should have a free choice of options for how they want their communities to be connected.
Does that help situate the distinction any?
Yes, I agree with that - but also, at this scale we are at the level where one could expect AI to satisfactorily produce the entire design for us, meaning that any distinction between approaches is pretty minor. But every big design starts as a small design, and I hope that eventually substrates can be developed which ease the task of designs growing up, becoming mature, and having more and more variants shared amongst communities with slightly divergent needs. So even if there is no real advantage at the small scale, I think it’s worth exploring different means of expression. Unfortunately this is a microcosm of the “poison tree” which AI spreads in so many different communities - it makes it very much harder for immature ideas or workers to find space to grow up.
Well this sounds a bit intimidating to me - hopefully there is some middle ground in which I could produce something which is neither a paradigmatic smash, nor a humiliating failure, but something “a bit funny” which stimulates further ideas and work!
To put a line in the sand: “ancient Infusion” did actually succeed in delivering most of the goals of the OAP paper; it’s just that Jonathan Edwards (quite rightly, I think) considered that we should remove mention of the system itself from the paper, since it was already massively crammed. An older version of the paper had system details and even a dynamic video of a CATT. You can see some test cases for the CSS selector-like system, “distributeOptions”, here, and some docs here. The problem was that the system was so race-laden and unperformant that I wouldn’t dare render more than a couple of dozen components in it - and certainly not its own self-editing interface.
What’s changed since 2022 is that I’ve got my head around reactive programming techniques and incremental computation rather better, and have more confidence that I can orchestrate the workflow without it being full of transactions and weird stovepipes.
One thing I realise is that, since you’ve developed LuaML2, we don’t need to agree about whether the Web is a worthy platform or not. I see that what I consider the premier JS reactive programming library, alien-signals, has been ported to Lua, and also that there is good support for proxies in Lua, so many of the underpinnings are there - should I produce anything usable, “it shouldn’t be too hard” to port it into LuaML2. Have you tinkered with reactive programming much?
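For anyone who hasn’t tinkered: the core trick in these libraries is tiny. Here’s a toy sketch of the signal/effect pattern - my own illustration, not alien-signals’ actual API (its real implementation uses a much cleverer push-pull propagation scheme with topological scheduling):

```typescript
// Toy reactive signals: reads register dependencies, writes re-run dependents.
type Effect = () => void;
let activeEffect: Effect | null = null;

function signal<T>(value: T) {
  const subscribers = new Set<Effect>();
  return {
    get(): T {
      if (activeEffect) subscribers.add(activeEffect); // track who read us
      return value;
    },
    set(next: T) {
      value = next;
      for (const fn of [...subscribers]) fn(); // push the change downstream
    },
  };
}

function effect(fn: Effect) {
  activeEffect = fn;
  fn(); // the first run records which signals fn reads
  activeEffect = null;
}

// Usage: the effect re-runs automatically whenever count changes.
const count = signal(0);
const log: number[] = [];
effect(() => log.push(count.get()));
count.set(1);
count.set(2);
// log is now [0, 1, 2]
```

The real libraries spend most of their effort on what this sketch skips: not re-running effects whose inputs are unchanged, and running them in dependency order.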
{type='line', data={0,0, 0,600}},
{type='line', data={0,0, 800,0}},
Does this kind of lingo have a name, LSON or so? : P
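For comparison, here’s how those table literals transliterate into JSON-ish TypeScript - Lua’s single table constructor syntax is doing double duty for both the record part (`type='line'`) and the array part (`{0,0, 0,600}`), where JS needs two distinct literal forms:

```typescript
// TS/JSON equivalent of the Lua table literals above.
const shapes = [
  { type: "line", data: [0, 0, 0, 600] }, // left edge
  { type: "line", data: [0, 0, 800, 0] }, // top edge
];
```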