Hi Duncan-Cragg!
I see I have confused you on several points, so I’ll try to explain.
I have a very similar feeling to you about “data-first”, but my sense is that you’re sketching out here what I’d call a “system in-the-large”, like the WWW. A thing made of protocols that operates over large numbers of computers and isn’t too concerned about efficiency. It also isn’t too concerned about computation and how exactly computation works; that’s something to be punted to other layers.
This is probably a valid way of approaching very large systems. But I’m also interested in very small systems, like personal machines running personal desktop-like OSes. Remember that in the 1980s, with the Amiga, say, we had very tiny machines by today’s standards that could run entire multitasking graphical OSes. I feel like we’ve missed that sense of smallness, in today’s web.
One of the things that drives me is the nagging feeling that we have far too much unnecessary complexity, and too many layers, in our software architectures, and that this has happened because we picked the wrong abstractions to build on, so we had to keep adding more and more. I feel that we should look for very tiny new abstractions that we don’t have to throw away as we move from very small, to very large, systems.
So my thinking is: consider three scales of use-cases. One, a system at the level of electronic circuits. Two, a system at the level of a single graphical process (application) or desktop (OS). Three, a system at the scale of the Web. If we have a valid model of computation, it should apply roughly equally to all three scales. If it only works at one scale and not others, it’s probably not quite a correct model of computation, and we should look at why not. For instance, the idea of “message-passing”, as in object-oriented programming, seems to be one of those ideas or models of computation that remains valid at all three of those scales. So if we’re looking for a new model of computation, it needs to be at least as useful as message-passing is.
That vision/requirement of not just being able to run “in the large”, as a protocol like the Web, but also being able to run “in the small”, down at the level of one process on one machine - or smaller - is perhaps the difference between our perspectives.
Now to the specifics.
Can’t say I fully get what you’re saying here, but in the Object Net all links or pointers to objects are unique string IDs (“UIDs”). What forging do you have in mind? Sketch out an attack!
So I’m talking here about whether “data-first” is a valid model of computation down at the second scale level: that of a single process in RAM, or an operating system on a machine.
“all links or pointers to objects are unique string IDs” is something that would work for a large-scale, Web-like protocol, but it’s not something that’s going to work down at the level of a single process. At best, using strings everywhere would mean we’re talking about something like HTTP or Tcl/Tk. That’s going to be far too slow to run a desktop on. Yes, the trend right now is to literally reinvent all desktop apps as literal web browsers talking to literal web servers and slinging megabytes of text data around on every mouse click - but I don’t like that trend.
So I’m assuming that at the desktop or process level, there won’t be “string IDs”, but rather there’d be object IDs of some kind.
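To make that concrete, here’s a minimal sketch (my assumption about how it might be done, not anything in the Object Net design) of keeping string UIDs as the identity on the wire while interning them to small numeric handles inside one process, so that local dereferencing is an array index rather than a string lookup:

```typescript
// Two representations of the same identity: the UID travels between machines;
// the handle never leaves this process.
type UID = string;      // e.g. a long unique string, as on the wire
type Handle = number;   // compact, process-local index

class LocalObjectTable {
  private byUid = new Map<UID, Handle>();
  private objects: object[] = [];
  private uids: UID[] = [];

  // Give a wire-level UID a cheap local handle (idempotent).
  intern(uid: UID, obj: object): Handle {
    const existing = this.byUid.get(uid);
    if (existing !== undefined) return existing;
    const handle = this.objects.length;
    this.objects.push(obj);
    this.uids.push(uid);
    this.byUid.set(uid, handle);
    return handle;
  }

  deref(h: Handle): object { return this.objects[h]; }  // hot path: array index
  uidOf(h: Handle): UID { return this.uids[h]; }        // cold path: back onto the wire
}
```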
What forging do you have in mind? Sketch out an attack!
The forging and the attack would come in if you hadn’t thought about what kind of VM you wanted to run down at the single-process or machine level, and figured you’d maybe use a Forth - Dusk OS, perhaps, because it’s super trendy right now in the indie scene we’re adjacent to. A Forth comes with machine integers as its pointers, and if you didn’t think too hard about the security implications, you might decide to just use raw Forth machine integers as the local version of your object identifiers, thinking they’d be fast, and that since it was your own machine, you wouldn’t need to worry about security.
And then you would download an object from a stranger - because that’s the whole purpose of sharing objects - and get rooted within seconds. The animation code of that object might compile down to raw machine Forth that implements no memory safety, and the object identifiers would be raw integers on the local machine, because that’s the simple, natural (but wrong) thing to do in a Forth.
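Here’s the forging risk sketched as code (a hypothetical attacker against a hypothetical local API, written in TypeScript for brevity rather than Forth): if local object IDs are raw small integers, untrusted code can simply enumerate them and reach objects it was never given.

```typescript
// Hypothetical local dereference API, assumed purely for this sketch: in the
// risky design, any machine integer can be treated as an object identifier.
const localHeap = new Map<number, object>();
function derefByRawId(id: number): object | undefined {
  return localHeap.get(id);
}

// Untrusted code doesn't need to be *given* anything - small integers are
// trivially guessable, so it can just sweep the whole ID space.
function enumerateEverything(): object[] {
  const stolen: object[] = [];
  for (let id = 0; id < 1_000_000; id++) {
    const obj = derefByRawId(id);
    if (obj !== undefined) stolen.push(obj);   // no permission check in the way
  }
  return stolen;
}
// With 128-bit random UIDs, or handles validated against the set of objects
// actually handed to you, this sweep finds nothing.
```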
So I’m asking: have you thought about how your “Object Network” would run on a single machine, in a single process? Or is it a thing like the Web that you think of as being “too big to use on a single process” and you’d use a completely different technology down at process-scale?
Yeah, implementation detail, optimisation. I’m not doing premature security or optimisation!
That’s valid, but I’m here to remind you that maybe you do need to at least think about security and optimisation even right at the beginning stage. Maybe don’t commit entirely to something like “all objects are text files and all object IDs are text strings” unless that’s how you’d be comfortable handling, say, individual mouse clicks or graphical buffer writes.
As an on-the-wire protocol, where efficiency isn’t a concern, sure, text strings are probably fine. But you’d like to run something like this on a single machine as well as on the wire, I think?
Order is determined by what I term the “application protocols” between objects - the type- or domain-determined concept of what peers are doing and the rules of interaction. This includes application or domain specific timeouts, so that the domain/type/application protocols will work over the wire. I don’t have CRDTs or stuff like that, or lockstep clocking, it’s all loose, best efforts.
I can understand fault tolerance being a good quality to have - after all, a lot of electronic engineering is about fault tolerance and turning noisy analog signals into digital - and I guess that’s a big part of what the object-oriented paradigm is trying to do. I guess my question is: can you see this “Object Network” paradigm scaling down to the level of individual function calls or message sends inside a single application? If so, how exactly does this “loose, best effort” approach work for a case like, say, subtraction of two integers, where it’s quite important for the correctness of the result that the two pieces of data arrive in the correct order, but where you’d maybe not want to tag every 2 bytes of data with 64 or 128 bytes of timestamps, object identifiers and other metadata?
In big distributed systems with relatively few, relatively large messages/transactions - say, large corporate databases - it’s OK to pay a large metadata overhead cost on each transaction, so that we can sync up those updates later. But we maybe don’t want to pay that cost inside a single process, on every mouse click event or every function call.
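A back-of-envelope sketch of that cost, with purely illustrative numbers (my assumptions, not measurements):

```typescript
// What it might cost if even a two-integer subtraction has to travel as a
// self-describing, timestamped object message.
interface Envelope {
  senderUid: string;          // ~36 bytes as a UUID string
  targetUid: string;          // ~36 bytes
  timestampMs: number;        // 8 bytes
  payload: [number, number];  // the two operands: the only bytes we care about
}

const payloadBytes = 16;            // two 64-bit integers
const metadataBytes = 36 + 36 + 8;  // identifiers plus timestamp
console.log(`metadata per useful byte ~ ${(metadataBytes / payloadBytes).toFixed(1)}x`);
// ~5x overhead: tolerable per database transaction, ruinous per function
// call or per mouse-move event.
```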
The potential danger of having a computing paradigm that only works well for very large systems (because we’ve “priced it out of the market” in terms of the per-transaction runtime resource cost on the desktop) is that programmers will just ignore it on the desktop, and that might lead right back to where we are now, with lots of competing “applications”. Or to the other place we are now, where everyone ships desktop apps as literal webapps running on a web server and a web browser, and it costs gigabytes to bring up “hello world” and takes seconds to process a keystroke.
If you have the object’s UID, you can request it, then won’t get it without the read permission. I’m struggling with your mental model of this I think!
My mental model, as I said, was “capabilities”: knowing the object identifier is itself what grants the read permission. E.g. Capability-based security - Wikipedia
From your response, I’m guessing that you’re not thinking in terms of capabilities? Then that complicates matters quite a bit. That’s why I said that I thought capabilities are the simplest way of granting permissions; they map naturally to the idea of “reference” in programming languages.
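For reference, here’s the capability model as I understand it, in a minimal sketch (a generic illustration, not your design): holding the unguessable identifier is the permission, so there’s no separate permission check on the read path.

```typescript
import { randomBytes } from "crypto";

type Capability = string; // unguessable: 128 bits of randomness

class CapabilityStore {
  private objects = new Map<Capability, object>();

  create(obj: object): Capability {
    const cap = randomBytes(16).toString("hex");
    this.objects.set(cap, obj);
    return cap;             // handing this string out *is* granting access
  }

  read(cap: Capability): object | undefined {
    return this.objects.get(cap);   // no ACL lookup, no identity check
  }
}
```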
But also what I’m saying is that, at some point, even in order to implement the concept of a “read permission” which isn’t a capability (ie like a Unix access list), I think you’ll find that you’ll need some kind of component that hides some information necessary to grant that permission.
So committing hard to “objects don’t have hidden state”, and committing to this stance even inside a machine’s own RAM, may, I think, give you problems with how to implement the secret-keeping needed for read permissions. The Object Network would then be a way of approaching computing that has limitations, and can’t be used for systems programming, or perhaps not even for application programming. Which seems a pity if we want to replace all applications and operating systems with this model.
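Here’s what I mean in sketch form, assuming an ACL-style (non-capability) read permission - the names are hypothetical: something has to privately hold the permission table, and that something is an object with hidden state, even on your own machine.

```typescript
class ReadGatekeeper {
  // This map must not itself be world-readable, or anyone could grant
  // themselves access - hence it is hidden state.
  private allowed = new Map<string /* objectUid */, Set<string /* readerUid */>>();

  grant(objectUid: string, readerUid: string): void {
    if (!this.allowed.has(objectUid)) this.allowed.set(objectUid, new Set());
    this.allowed.get(objectUid)!.add(readerUid);
  }

  mayRead(objectUid: string, readerUid: string): boolean {
    return this.allowed.get(objectUid)?.has(readerUid) ?? false;
  }
}
```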
I’m also thinking that a functional-reactive model, if it can be made simple enough (and “just cache the previous computed value of all functions and make it available as a magic variable” is possibly simple enough), could also describe the lower levels of a network.
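A minimal sketch of that “cache the previous value and feed it back in” idea (my own notion, all names made up):

```typescript
type Update<S, I> = (previous: S, input: I) => S;

class Cell<S, I> {
  constructor(private state: S, private update: Update<S, I>) {}

  // `this.state` is the "magic variable": the cached previous result,
  // fed back in as an ordinary argument on every new input.
  push(input: I): S {
    this.state = this.update(this.state, input);
    return this.state;
  }

  current(): S { return this.state; }
}

// Usage: a click counter whose new state is a pure function of (old state, event).
const clicks = new Cell(0, (prev: number, _event: string) => prev + 1);
clicks.push("click");   // -> 1
```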
Sorry, I don’t get this! I re-read it 5 times…
Think of my first scale level: the electronics in a computer.
We currently build circuits using programming languages ( Hardware description language - Wikipedia ). These languages use a variety of programming paradigms.
If we do indeed have a good new universal theory of computation (as lambda calculus and message-passing are, say), then “could this theory work as an HDL” seems like it would be a good test case for it. An electronic circuit is very like a computer network, so it seems like message-passing is roughly in the right ballpark, but functional programming also feels like it kind of describes signals flowing along wires between components. (One programming paradigm that doesn’t describe either an electronic circuit sending signals, or a computer network sending packets, is the 1970s C-style imperative model of “do this then that”, but it’s still used.)
So could the Object Network, or something like it, be used as an HDL? If it can’t, what extensions might it need so that it can?
The reason I ask is that it does feel like the core of the Object Network idea is there in the idea of “circuit” or “data channel”. A component observes the current state of its input wires, and its own current state, then generates a new state that it puts out on its output wires.
on(trigger, initialstate, f)
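In sketch form, and assuming roughly Moore/Mealy-machine semantics (my reading, not a worked-out proposal): each step, a component sees its input wires and its own state, and produces new state and outputs.

```typescript
// Each component is a little machine: new state and outputs are a function
// of (inputs, previous state), just like a clocked circuit element.
interface Component<S, I, O> {
  state: S;
  step(inputs: I): O;   // reads inputs and `state`, updates `state`, emits outputs
}

// A 1-bit toggle flip-flop as the smallest possible example.
class ToggleFlipFlop implements Component<boolean, { clockEdge: boolean }, { q: boolean }> {
  state = false;
  step(inputs: { clockEdge: boolean }): { q: boolean } {
    if (inputs.clockEdge) this.state = !this.state;
    return { q: this.state };
  }
}

const ff = new ToggleFlipFlop();
ff.step({ clockEdge: true });   // q is now true
```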
Have to say you’ve fully bamboozled me with this post.
Yeah, this part is a little inside-baseball, and specifically about my own unformed ideas on the Functional-Reactive paradigm (as used currently inside web browsers in libraries like React, and which I would like to see used much more widely, outside web browsers). Specifically, it’s about how one might represent that “object references its own past state and uses that to update to a new state” thing in something like functional programming. The two specific primitives that I’m suggesting here (“on” vs “start”) are 1) not well named at all - just spur of the moment, 2) not described very well, and 3) not well thought through. I do need to talk to Bosmon more about this guts-of-FRP stuff.
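For what it’s worth, here’s a very rough sketch of what I mean by on(trigger, initialstate, f) - names and semantics are provisional, exactly as caveated above: f maps (previous state, trigger value) to the next state, and the cached previous state is what lets the object reference its own past.

```typescript
type Listener<T> = (value: T) => void;

// `f` maps (previous state, trigger event) to the next state; the cached
// previous state is what makes "reference your own past" possible.
function on<T, S>(
  trigger: { subscribe: (fn: Listener<T>) => void },
  initialState: S,
  f: (prev: S, event: T) => S
): { current: () => S } {
  let state = initialState;
  trigger.subscribe((event) => { state = f(state, event); });
  return { current: () => state };
}

// Hand-rolled trigger, just for the example.
function makeTrigger<T>() {
  const listeners: Listener<T>[] = [];
  return {
    subscribe: (fn: Listener<T>) => { listeners.push(fn); },
    fire: (value: T) => { listeners.forEach((fn) => fn(value)); },
  };
}

const clicks = makeTrigger<{ x: number; y: number }>();
const clickCount = on(clicks, 0, (prev, _event) => prev + 1);
clicks.fire({ x: 10, y: 20 });
clickCount.current();   // 1
```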