An interesting quote about the origin of MVC and event handling

From an answer by Mark Miller on this Quora thread about Smalltalk:

This isn’t a programming language feature, but the model-view-controller architecture was invented on Smalltalk. Developers rave about how MVC is the best architecture for displaying different views of the same data, and capturing input, but I remember back in '07 Dave Thomas talked about how MVC was actually a kludge. He wished that somebody would invent a new display-interaction architecture that was OO, because MVC is not. He said it was created because of a deficiency in the Alto’s design. I don’t remember him talking about details too much. He said that rather than using message passing to capture things like keyboard input and mouse actions, the researchers at Xerox had to create this new concept called an “event,” which I take to be like a callback that the machine could call directly from an interrupt routine, bypassing Smalltalk’s method dispatch mechanism. Thomas said all developers have done since that time is copy this structure. He said our hardware has advanced beyond what the Alto could do, so we don’t need MVC anymore, but, he complained, nobody has come up with anything better.

Suddenly makes my long-time sense of unease about most GUI systems’ handling of “event loops” feel a little more grounded. If the concept of “event” going back to the Alto was a low-level hardware hack for interrupt purposes, that would maybe explain why “events” were never really surfaced as first-class objects (because obviously they couldn’t be, if they had to be interrupt-safe).

And the lack of events being first-class objects has, I think, led to one of the biggest problems in GUIs: you can’t really easily capture and process an event stream - while in the older Unix-style command-line interfaces, capturing and processing a text stream is so easy that it’s all the shell ever does.


There’s some really fun stuff on Quora, buried behind one of the world’s worst web interfaces (like Pinterest, it paywalls you if you link directly to an article, but seems to display them if you sneak up on them sideways, so sorry about the URL which is unrelated to the subject).

Alan Kay “6 years ago”, which I take to be 2017:

(https://www.quora.com/Is-there-a-programming-language-thats-effectively-a-successor-to-Smalltalk/answer/Alan-Kay-11)

Well, let’s see. The first usable Smalltalk was designed and implemented by the end of 1972. That makes it 45 years old, and the main ideas about OOP I contributed go back to the end of 1966 (that makes this particular line of thought — dynamic programmable OOP — 51 years old). If Moore’s law represents a doubling every 18 months to 2 years (depending on what you look at), then the changes in combined scales since then are probably at least a factor of 100,000,000.

The big ideas have fared better than the implementations we did back then, but quite a few of the more important future looking ones — for example, having objects be able to negotiate their interoperability with other objects — were barely thought about, and not worked on.

In my mind, a real successor to Smalltalk is really needed, and it needs to be qualitatively a successor, not just a better version of the old ideas.

Scaling does need to be done much better to deal with today and tomorrow. Another dimension that is very important is expressibility. Smalltalk at Xerox PARC was able to do its “OS”, its graphics system and UI, its (live) development system, and a number of “personal computing abilities” involving media, high quality fonts, etc. in about 10,000 lines of always operating code (and given that this is a moving target, we could just as easily say 20,000 lines of code to make the same point).

So, we should think about something important and large in functionality that a real successor of today or the near future could bring to life in 10,000 to 20,000 lines of code (vs e.g. code that is 1000 or more times larger).

Certainly — harking back to Sketchpad — we should call for a deep set of abilities to go automatically from the “whats” to the “hows” via a constellation of integrated problem solvers under the hood. We really want to do a lot of the programming of the future in terms of “runnable requirements”, etc.

One big thing that we talked about and some interesting experiments made, and more recently a general facility made (the “Worlds” system on the Viewpoints Writings page), is the “simulation of time” for many purposes: “possible worlds reasoning”, parallel transactional functional relationships from one world “level” to the next, generalized UNDO, etc. This should definitely be done on the next serious effort for a new programming language.

Similarly, we should call for a much better approach to how software development is done. Smalltalk pioneered a lot of IDE ideas (and we got a lot of ideas from several of the previous Lisps). But to be done seriously, software has to take engineering seriously, and should now look to see what the integrated CAD → SIM → FAB tools are like in the real engineering disciplines (civil, electrical, bio, aero, etc.)

All in all, I may be missing something out in the hinterlands, but what I see when I look out at the programming landscape is rather tiny little incremental improvements (with occasionally some really bad regressions), but I don’t see serious efforts to invent “programming for the 21st century”.

We can blame bad funders for some of this. But I wonder what would happen if good funding showed up. How many computer people are thinking about “what is actually needed” rather than “what would be a little better”?


Neat, Alan Kay’s author page - with entire answers - seems to be unpaywalled at the moment!

From Nov 12:

The bottom line today is that most computations are not as secure and trustworthy as they need to be (in part because some of the best solutions wound up being perceived as being “too expensive” and then abandoned).

Note that Trust becomes front and center as Moore’s Law advances, especially with advanced networking.

Note that Trust becomes IMO the dominant issue when all that has gone before is added to “NCANIPs” (non-cognitive artificial non-intelligent processes) that are allowed to run wild in forms that, as Harari has pointed out, “hack the language communications systems of our species”.

Note that “Trust” is one of the deepest issues — much larger than “just computing” — when not just communication, but actual education is one of the main needs and goals. We want to be able to know the degree of trust we can allow for what our own minds come up with, what we hear from others, what we read, our teachers, etc. A big deal with science is that it was partially invented by humans — after hundreds of thousands of years — by learning better methods than trusting one’s senses, or mere beliefs of one’s cultures.

A simple principle is that for most things that are automated and scaled, the trust requirements have to be vastly expanded and made vastly more strict.

I like how this guy thinks.


Trust is indeed the most important aspect of information technology (in a wide sense that includes science and education) we will have to revise in the near future. Most of the sources of trust we used beyond the interpersonal level are slowly falling apart: trust in the state (and more generally public institutions), trust in science, trust in the media, even trust in sensory evidence (with deepfakes).

I have written a bit about trust in “automated reasoning” (computation, automated proofs, but also AI) here:

Establishing trust in automated reasoning


Another Alan Kay quote that seems close to my feelings about the modern (eg Haskell influenced) approach to type discipline:

For example: why types? An early impetus was to help compilers generate code. E.g. computations with numbers don’t semantically need to know int or float, and in many cases an algorithm will need both. Another possible use for typed variables is as documentation — this is using the variable name as a stand-in for an entity that will now mostly help human programmers. A worthwhile case to consider is what to do if someone sends us a module over the Internet that we want to make use of. What is its “type”? What are the “types” of its API. Etc. Right away we should see that we need dynamically adjusting systems/languages rather than the old-style static approaches to semantics. I.e. We need “semantic types” and the field doesn’t have them yet. A stopgap is to have a dynamic language — it can be fixed so it can’t crash — that can dynamically adjust to both changes and unknowns.

The above paragraph amounts to the start of really big criticisms of most parts of computing (much of which insists on living in the past, and especially using old ideas that don’t scale).

We can see that some form of “typing” can be really useful, but that none of the currently used approaches is very good (I think most forms don’t really pay their way).

I wonder what he means by “semantic types”?

In other discussions this year on Quora he’s said things like:

I’ve noted elsewhere that “object-oriented” to me is not a programming paradigm, but a definitional scheme — it’s a way of making things that carry their definitions with them.

I like the sound of that idea! But I also wonder what his idea of a “definitional scheme” is, and what “objects carrying their definitions with them” would actually look like. One of my biggest complaints about OOP as practised today (even in Smalltalk) is that too much of how objects communicate is by sending side-effectful commands that do things - and the caller doesn’t know what things, or how far those side-effects will propagate, potentially across the entire Internet. The logical result of that architecture has now happened: the Cloud, where our objects execute on centralized, opaque software and hardware stacks that are explicitly controlled by giant military-industrial defense contractors. What this centralization and militarization of the formerly open and decentralized Net fully implies for the human future, we haven’t yet seen - but it gives me a dark sinking feeling every time I think about it.

So the opaque-objects-passing-arbitrary-messages paradigm seems like about the exact opposite of a “definitional scheme” to me. If we wanted objects to be definitions, wouldn’t we need to move well beyond the idea of just sending messages? Wouldn’t we need objects to basically be fully portable between systems (because we don’t necessarily trust centralised actors to host them), and wouldn’t that mean that we’d have to always send them as “source code”? And wouldn’t we need that “source code” itself to be something that’s tractable to scan and analyze at runtime? And even then, source code is all very well for a class definition, but what about an object instance, which kind of by definition contains hidden state?

On the other hand, there seem to be very legitimate places where we need data hiding (we can’t get any kind of security without it). Yet it seems like any component that hides data - and especially that hides its implementation - decreases its trust level as seen by other components.

I’ve always assumed it’s some sort of really expressive form of design by contract where you can easily state:

  • preconditions a function requires
  • postconditions a function provides
  • side-effects a function can have (so that any other side-effects fail)

Kay doesn’t seem to care about static inference and so on. Run-time errors in a language with seamless recovery are enough.
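Since the thread is all prose so far, here’s a minimal Python sketch of what I mean - every name in it (contract, require, ensure, effects) is invented, and diffing dict snapshots is obviously a toy stand-in for real side-effect tracking:

```python
from functools import wraps

def contract(require=None, ensure=None, effects=()):
    """Toy design-by-contract: check a precondition before the call, a
    postcondition after it, and fail on any undeclared side-effect."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(state, *args):
            if require and not require(state, *args):
                raise AssertionError(f"{fn.__name__}: precondition failed")
            before = dict(state)
            result = fn(state, *args)
            # Any mutated slot outside the declared effect set is an error.
            touched = {k for k in state if state[k] != before.get(k)}
            if not touched <= set(effects):
                raise AssertionError(
                    f"{fn.__name__}: undeclared side-effects {touched - set(effects)}")
            if ensure and not ensure(state, result):
                raise AssertionError(f"{fn.__name__}: postcondition failed")
            return result
        return wrapper
    return decorate

@contract(require=lambda s, n: n > 0,             # precondition it requires
          ensure=lambda s, r: s["balance"] >= 0,  # postcondition it provides
          effects=("balance",))                   # the only slot it may touch
def withdraw(state, amount):
    state["balance"] -= amount
    return amount

account = {"balance": 100}
withdraw(account, 30)      # fine
# withdraw(account, 200)   # would raise: postcondition failed
```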


Contracts specifying preconditions and postconditions and side-effect tracking sound pretty good to me! That would go a long way towards solving the “this object sends messages to the giant planetary object soup, but I have no idea where or how bad they could be” problem.

I mean obviously on the modern Internet we don’t allow arbitrary message sending to happen, hence firewalls, deep packet inspectors, email scanners etc. But it feels like if we had an object protocol that understood this, we maybe wouldn’t need all that clunky security kit (which is itself a big source of the privacy/centralization danger that worries me).

It seems there should be a principle like “there must be, at some point, a reversible one-to-one mapping between Objects (with hidden state) and Messages (with clearly formatted and exposed state). No object may exist without both declaring and obeying the full semantics of this mapping; any object which fails to do so is both an error and a security risk.” The Internet works on this principle; that’s what all the RFCs are, clearly specifying the messages. But I’ve never felt like I’ve encountered a local-machine OOP language or system which has such a principle and spells it out.


I think what I’d like is, somewhere at the very root of an OOP object system, there to be two non-overrideable methods:

  • fromContext.toTransportFormat(Object, toContext) → Message
  • toContext.fromTransportFormat(Message, fromContext) → Object

Where the “contexts” are the root of the object tree/soup that you’re exporting the object to or importing it from (so that the transport format knows which sub-objects it needs to link to or recreate), and include all the authentication credentials to prove that you have rights to read/create objects there.

Doesn’t matter what exactly the “transport format” is as long as a) it’s standardised in the language definition as non-optional to implement, b) it’s machine-parseable and writeable at runtime by other objects - preferably in as close to whatever its “native syntax” or data format is, and c) the result of running fromTransportFormat on toTransportFormat is an object tree with the same semantics across systems.
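To make that concrete, here’s a hypothetical Python sketch of those two root methods, with JSON standing in for the transport format and a string standing in for the credentials (none of this is a real API):

```python
import json

class Context:
    """Hypothetical root-of-the-object-tree context. The credentials field
    stands in for real authentication; JSON stands in for the format."""
    def __init__(self, name, credentials):
        self.name = name
        self.credentials = credentials
        self.objects = {}                     # objects imported so far

    def to_transport_format(self, obj, to_context):
        # Export: Object -> Message, relative to the receiving context.
        assert to_context.credentials, "no rights to create objects there"
        return json.dumps({"class": type(obj).__name__,
                           "state": vars(obj),
                           "from": self.name})

    def from_transport_format(self, message, from_context, classes):
        # Import: Message -> Object. Must invert to_transport_format.
        data = json.loads(message)
        obj = classes[data["class"]](**data["state"])
        self.objects[id(obj)] = obj
        return obj

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

home = Context("home", credentials="alice")
away = Context("away", credentials="bob")
msg = home.to_transport_format(Point(3, 4), away)
p = away.from_transport_format(msg, home, {"Point": Point})
assert (p.x, p.y) == (3, 4)    # same semantics on the other side
```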

While I’m at it, I think I want something like “Context” to also be an object which is a (transparent) container and proxy for other objects. So it can represent things like changesets, revisions, backups, volumes, etc, as a way out of the “one giant image” problem. It could also maybe contain side effects: tell the context to execute your messages such that they don’t mutate objects but instead generate a new context containing the changeset.
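And a toy of the “messages generate a new context containing the changeset” idea - a hypothetical ChangesetContext where “mutation” builds a child context, and rollback is just dropping it:

```python
class ChangesetContext:
    """Hypothetical: a 'mutation' yields a child context holding the
    changeset; the base objects are never touched."""
    def __init__(self, objects, parent=None):
        self.objects = objects          # name -> dict of slots
        self.parent = parent

    def read(self, name, slot):
        obj = self.objects.get(name, {})
        if slot in obj:
            return obj[slot]
        if self.parent:                 # fall through to the base world
            return self.parent.read(name, slot)
        raise KeyError((name, slot))

    def send(self, name, slot, value):
        # The "side effect" becomes data: a new context layered on top.
        return ChangesetContext({name: {slot: value}}, parent=self)

base = ChangesetContext({"account": {"balance": 100}})
txn = base.send("account", "balance", 70)
assert txn.read("account", "balance") == 70     # inside the transaction
assert base.read("account", "balance") == 100   # rollback = drop txn
```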

We do exactly this sort of thing in the “Containers” world, and so a general approach to “Containers for Objects” seems like it would be super useful. And it might also turn out to be “Firewalls for Objects” as well.

Of course, saying I want this doesn’t mean I want to build it, but at least knowing what I want is maybe 0.00001% of the battle.

Kay talking more about what he means by a “definitional scheme”:

“OOP” (as I think of it at least) is not really a programming paradigm, but is much more a workable “universal definition” scheme, and that is especially suited for defining large systems.

It is basically an abstraction of an unlimited number of complete computers networked together (and where — by definition — the interior of the computers can also be a system of computers (both real and virtual)). If you only worry about semantics, this provides a very powerful universal building block at all scales.

Metaphorically, this can be thought of as a “universality” like that of NAND or NOR, vastly scaled up: you can build any kind of functionality you want, but there is no hint of how to design the organization of universals.

In practical terms, since you can imitate (simulate) any idea, you could choose to use a real OOP framework to simulate old familiar ideas — like data, procedures, etc. — or you could choose to use the framework to deal with vast scalings and new situations brought by Moore’s Law.

Historically — and unfortunately — “OOP” starting in the 80s has generally chosen to simulate old familiar kinds of things (via its subset use as Abstract Data Types). For many reasons, this kills “graceful scaling” (and has done so).

So — for general/commercial use — “OOP” needed to be packaged not as a programming language — too many degrees of freedom for most programmers — but as a framework loaded with powerful design schema to help programmers learn ideas far beyond mere programming. That didn’t happen.

He then segues to Declarative Programming - something that never really crossed my mind when thinking about OOP. And in fact I thought OOP with its hidden mutable state was almost the exact opposite of DP, but Kay thinks they go together. I’m guessing he’s thinking along the lines of reflection being a deep part of OOP, such that the system can introspect on objects to figure out their characteristics? Which would go along with why he doesn’t like traditional type systems, which tend to get broken by reflection.

One way to think about “declarative programming” is via an analogy to a system of “simultaneous equations”. If there is a solver that can solve them, then it is extremely handy to just add a new equation for each new situation, and let the solver find a viable solution for all together.

Note that a system of simultaneous equations quickly gets difficult to gist — and some systems — even of linear equations — don’t have solutions. And many systems of equations don’t accurately describe the desired system.

Eventually, this will give rise to a higher way to think about this e.g. matrix algebra. (But still, how can this be grounded in more meaning than transitory goals in the minds of programmers and management?)

Declarative programming is all this and much more (an insurance system I’m aware of has over 100,000 requirements — and the working system is a partially unknown approximation to those requirements).

To me, all this (and more) implies that a next real paradigm (in the deep sense of the term) would be “knowledge based system building” of various kinds.
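To ground the “one equation per new situation, then let the solver run” picture for myself, here’s the smallest toy version I can think of, with numpy’s linear solver standing in for the much richer constellation of solvers Kay presumably means:

```python
import numpy as np

# Each "situation" contributes one equation; the solver does the rest.
#   x + y = 10   (say, a total budget)
#   x - y = 4    (x must exceed y by 4)
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([10.0, 4.0])
x, y = np.linalg.solve(A, b)    # solver finds x = 7.0, y = 3.0
# Add an inconsistent third equation and there is no solution at all -
# exactly the failure mode the quote warns about.
```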

This is really interesting. I remember in the 1980s OOP, GUIs, and AI were all being tossed around as if they were all parts of the same thing - but then the Macintosh dropped, and GUIs + OOP went in a very different direction from both Smalltalk and Lisp-style symbolic AI (and neural-network AI went in a third direction again). Kay though seems to still have that 1980s Byte Magazine mindset (which I respect, because it was very big and strange) that OOP was supposed to enable very high-level reflective software development: CASE tools and AI-powered self-configuring knowledge systems. I think we got glimpses of that in the 1990s with Rational Rose and UML, which… wasn’t great. But it was at least reaching for something higher in a way that npm and Kubernetes… are not.

I do find myself confused by Kay’s belief that OOP is a “universality, like NAND and NOR”. Didn’t Common Lisp and Scheme demonstrate that objects, at least as “things that parse a message”, are pretty much just functions? Isn’t it then functions that are that “universal building block”? What exactly is the simplest possible “object” that’s more than just a function? Kay no longer thinks that inheritance is required for a software entity to be an object, so what is? What’s the essential minimal set of traits that defines a Kay-style object? Is it identity? (Scheme functions have that, so it can’t be.) Is it a meta-object protocol? (He points to The Art of the Metaobject Protocol, so maybe it is that?)

This isn’t snark; I’ve been chasing the definition of “what is, and what is not, an object” for nearly 20 years now. And I’m still confused.

I think the minimum would be to have the system pretty much understand the goals of any new system added to it, so it can do a lot of the feasibility checking (and much deeper) that humans sporadically and randomly do today.

I get the sense that Kay does have the same kind of ache for the lost future that I do (no surprise - because as a kid in the 1980s I read Byte columns written by him and people working with him). But now I worry that the vision of that self-adaptive next-generation programming paradigm being built out of computer-like things is still… well it puts a lot of trust on those computers to respond truthfully and behave non-maliciously, doesn’t it? And I feel like the Internet has proven that the opposite will happen. Running code will straight-up lie to you, and do far worse if it can, in ways that raw data can’t. Surely the only way we can build large trustworthy systems - the only way we’ve ever been able to build them - is out of things that aren’t currently running code? And to the extent that the things we build from are currently running code (not, eg, data files or stopped source code that we can examine and compile ourselves), then to exactly that extent, they’re not trustworthy?

I respect Kay and the Smalltalk vision of constantly running, “never stopped” code. I want that! But at the same time, it terrifies me. The only way I can possibly imagine this vision working (perhaps it’s what Kay is assuming?) is if there were some kind of low-level VM debugging interface. But surely such an interface could never be implemented out of trusting objects to respond to messages - because those objects will lie to you if they can.

There’s gotta be some root of trust somewhere in an object system which isn’t the objects themselves. Is that root, whatever it is, the thing that makes them “objects” as opposed to “not objects” (ie, just arbitrary typeless functions)?

Hmm. Could we maybe say that, as a “closure” is a “function plus environment”, that the minimum essential “object” is a “closure plus trustworthy runtime-readable metadata, up to and including source code”?

My understanding is that “semantic types” are types defined by categories external to a formal system, rather than by its internal structure. For example, a type “person” that means “data describing an individual human being” rather than “a data structure with fields name, age, etc.”
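A tiny (and purely illustrative) Python example of the distinction - two types that are structurally identical, so nothing inside the formal system can tell them apart:

```python
from dataclasses import dataclass

@dataclass
class Person:       # semantically: data describing a human being
    name: str
    age: int

@dataclass
class WineLabel:    # same fields, entirely different meaning
    name: str
    age: int

# Structurally identical: no inspection of the fields distinguishes them.
# The "semantic type" lives outside the formal system.
```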

As for Alan Kay’s views on OOP, which I don’t claim to fully understand either, I think it’s important to distinguish them from Smalltalk. Kay worked on Smalltalk 50 years ago, so it’s normal that his views on OOP have changed since then. I remember him saying/writing somewhere that he considered Smalltalk a mere stepping stone, a system good enough to design a better system. But I am not sure what he imagined that better system to be.


Interesting. That comes down to replacing executable code by specifications defining the behavior of that code, formal specifications being a quasi-generalization of mathematical equations to the realm of computations (see The nature of computational models for the details). Solvers for systems of specifications do exist, but are not very well known. An example is the Bird-Meertens formalism, aka “Squiggol”.


This is what we need. But in Smalltalk, an object can be shared among multiple objects. Thus it is impossible to separate a group of actors into a single abstract entity with a unified behavior - an abstract actor.

My definition, which I think is the same as Alan Kay’s:

An object/actor is an arbitrary separation of computing units that interact with each other with messages.

So an actor is a set of other actors, or a single primitive actor.

An actor can respond to a message or it can change state on its own. The second option is fundamental to have, because only then can we model sets of actors as a single actor.

Only then can we encapsulate infinite computation. In other words, an actor is active, it computes even if it doesn’t receive any msgs.

An actor is an abstraction.


Nothing that I know has managed to describe sets of actors as a single actor. We only have primitive actors.


That’s an interesting and helpful definition of the difference between an “object” (perhaps even a Smalltalk-style object) and an “actor” (a Hewitt-style actor, and perhaps the Kay-style objects which haven’t been implemented yet even in Smalltalk).

But if an actor is “computing”, it has to be doing so in response to something, though, right? Even if it’s just a long-running computation started by its creation event? But perhaps not doing so just in response to an immediate incoming message?

So would this quality of activeness perhaps be the same as being able to send asynchronous messages rather than the synchronous messages we normally have in Smalltalk and C++ style objects? (Ie: send a message, then continue computing a function/expression/procedure). If so, how exactly do we define the semantics of asynchronous message sends?

Would an asynchronous message send be different from an ordinary synchronous message send that happens to have a “side effect”?

Can we have them in anything approaching a functional language (ie a language based on term-rewriting), or are we forced to use an imperative style (as Hewitt’s actor languages tend to have)?

Would asynchronous message sends be anything different from what we do now in C++ and Javascript, where we can send a message that takes an object reference, starts a new thread, does some computation, and at a later time uses that object reference to send a synchronous message to us with the response? If we already have objects that can invoke multiple threads, why is this not enough to be considered Kay objects or Hewitt actors? Is there some specific thread-safety or multiprocess communication locking/blocking/etc property implicit in the Kay/Hewitt vision of exactly how messages are sent, that hasn’t quite been spelled out?

(For instance: Kay really doesn’t seem to like Communicating Sequential Processes as a model of multiprocessing at all. I’m not sure yet why, but that feels like it’s probably a key to understanding his specific vision of message-sending.)

In a single thread, could we approximate either “activeness” or asynchronous message sends through the use of coroutines or generators?
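My best guess at what that single-threaded approximation might look like (a sketch only, all names invented, Python generators standing in for proper coroutines): each actor runs a little, yields outgoing messages, and a trivial scheduler shuttles them around.

```python
def counter(target):
    """An 'active' actor: computes forever, emitting messages as it goes."""
    n = 0
    while True:
        n += 1
        yield (target, n)        # send without waiting for a reply

def printer():
    """A passive actor: blocks until a message arrives."""
    while True:
        msg = yield
        print("got", msg)

def run(steps=5):
    p = printer(); next(p)       # prime the consumer to its first yield
    c = counter("printer")
    for _ in range(steps):
        target, value = next(c)  # let the active actor compute one step
        p.send(value)            # deliver; 'printer' is the only mailbox here

run()   # prints: got 1 ... got 5
```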

Are coroutines/generators in fact what Scheme’s “hairy control structure” (continuations) was all about? I feel like I’ve read somewhere that it was - that Scheme’s big idea was continuations, and that continuations came about because of Hewitt’s proto-Actor ideas in PLANNER inspiring the idea of trying to model communicating processes in a single-threaded Lisp-like language.

If Scheme’s continuations were an attempt to approximate the 1970s ARPA object / actor idea of objects which were really like little servers or processes, continuing to do computations while responding to messages… did continuations in fact succeed at this? There’ve been Web frameworks written using the continuation idea, and they seemed to work well enough?

But would Alan Kay consider Scheme a true implementation of his object ideas? I suspect not, but if not, why not? What would Scheme need to add (or remove) to get there?

Or am I still completely missing the point, and is “activeness” in the sense you understand something completely different from asynchronous message sending?

Nothing that I know has managed to describe sets of actors as a single actor. We only have primitive actors.

That’s something that’s worried me for quite a while too (if I understand you correctly).

I want a language where one object can represent (simulate, proxy) a group of objects - and not just a group of objects (because any object can contain references to a group of other objects), but objects physically stored inside that container object, and visible to other objects as references ‘inside’ otherwise really existing objects. So that an object can represent something like a transaction, commit, changeset, container, package, copy-on-write filesystem, etc - a set of objects that override the “inheritance chain” or however it is other objects locate them. Compositions of such objects would need to be able to be cleanly decomposed too (ie, rolling back a transaction or changeset), and also there should be a way for objects containing other objects in this manner to be stored very close together in storage (to try to avoid fragmentation and cache thrashing).

I’m not sure how to achieve this. I feel like it’s maybe doable just by a hashtable/dictionary with a computed “inheritance” mechanism (maybe Smalltalk’s doesNotUnderstand:, or Lua’s metatables could do it?). But maybe it needs some special virtual machine support (particularly for the “objects should be stored physically inside the container object” bit).
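Here’s roughly what I imagine the library-level version looks like, with Python’s __getattr__ playing the role that doesNotUnderstand: or metatables would play (hypothetical names, and of course none of the VM-level storage guarantees):

```python
class Container:
    """Hypothetical container/changeset object. __getattr__ plays the
    role of doesNotUnderstand: / metatables - pure library level, no VM."""
    def __init__(self, base, overrides):
        self._base = base              # the "really existing" object
        self._overrides = overrides    # the slots this container shadows

    def __getattr__(self, name):       # only called for missing attributes
        if name in self._overrides:
            return self._overrides[name]
        return getattr(self._base, name)   # computed "inheritance"

class Doc:
    title = "draft"
    body = "hello"

revision = Container(Doc(), {"title": "final"})
assert revision.title == "final"   # overridden inside the container
assert revision.body == "hello"    # located through the fallback chain
```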

But in Smalltalk, an object can be shared among multiple objects. Thus it is impossible to separate a group of actors into a single abstract entity with a unified behavior - an abstract actor.

Ah! Yes, that (in my idea) might be part of the bit that needs “special VM support”: these “virtual objects” created by the “container object” would need to appear to other objects as having their own individual VM-level object references. And one of the major points of the object model, as I understand it, is that the VM must guarantee that objects cannot fake up object references whenever they wish.

I suppose, in a suitably reflective object system (ie anything with something like “doesNotUnderstand:”), the container object could create some tiny “proxy” objects (to create the object references) which then connect back to the container object to get their “actual implementation”. It all seems rather messy though. And also, feels familiar: an instance of the Gang of Four Flyweight Pattern perhaps? But in the kind of system I imagine, “virtual objects and containers” would exist everywhere and be used so much that it would be much, much better for the VM to natively do this if it can.

However, I’m not at all sure if this thing I’m talking about (“container objects / virtual objects”) is the thing you’re referring to, though? If not, can you unpack your “sets of actors as a single actor” idea a bit more?


The problem lies in abstracting groups of actors.
It is not about asynchronous messages.
I would like to know if the original actor model actually considered this case.

The example below is impossible without having the abstract actor change its state/compute without receiving any messages.

[diagram: an “abstract actor” containing Agents B and C sending messages to each other in a loop, with Agent A outside exchanging messages with it]

@natecull At the moment, I haven’t figured out how to implement this; I do not know all the details yet.

I am starting from the abstract requirement and going downward. Still not there yet…

But having common abstract goals is important. I would be happy to find that someone else has found a solution, and thus I do not have to… :rofl:

(Also, since this is very general, there might be multiple solutions each with its own drawbacks)

Hmm. But from my perspective, an “abstract actor changing his state without receiving any messages” is the same thing as an actor that’s sending asynchronous messages to smaller parts of itself.

(I’m assuming that we’re both defining “asynchronous message sending” as “sending a message to an agent, but not waiting for a response, and instead continuing on to send another message, or many, to another agent”).

Maybe I’m missing the point, but it looks to me like inside your “abstract actor” block you’ve got Agent B and Agent C that are each sending messages to each other, making an infinite loop (so presumably the computation they form doesn’t ever “terminate” in a single message). Okay. Infinite loops are cool. Maybe the “abstract actor” formed by B and C represent a search over the prime numbers, and A represents the GUI for that search, getting a message whenever a new prime is found, or sending a message to change the search parameters.

To me, this seems to be the same situation as Agent C - in response to one of its messages from Agent B - sending a message to Agent A, and then not waiting for a response, continuing on to send a message to Agent B, to continue its infinite loop. And similarly, at some point, receiving a message from Agent A, and then in response to that, sending a message to Agent B but also not waiting for a response, instead carrying on with its normal conversation with Agent B.

(These “extra” asynchronous messages between Agent B and Agent C presumably do something simple enough like modifying a value slot, that they don’t explode into an exponential flood of messages. Ie, since each of these “asynchronous message sends” would be a little like a “process fork”, Agent B and Agent C aren’t fork-bombing each other. When one of these async message sends terminates, its value would be ignored and the lightweight thread/process it represents would end and be garbage-collected.)

Meanwhile, from the perspective of Agent A, it’s talking to Agent C as if C is just one actor, though really Agent C is representing the abstract actor “both Agents B and C”. But Agent A doesn’t know or care about what’s “inside” C, whether B “really exists” or is just a part of C. A is maybe just occasionally sending synchronous or asynchronous messages to C (perhaps waiting for a response from C, perhaps not), and also occasionally receiving synchronous or asynchronous messages from C (which itself might wait for a response from A, or might not). Since these messages can be asynchronous, Agent A talking to an actor that’s “doing computation without receiving messages” isn’t surprising in any way. Messages from Agent C (being the representative of both B and C) are just handled by Agent A when they arrive, and meanwhile both A and C carry on with whatever else they’re doing, in full parallel / concurrency, in however many “threads of execution” (or the VM’s equivalent of that - most likely something much more fine-grained and lightweight, down to the message-send level).
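To check my own understanding, here’s that whole scenario as a runnable sketch, with Python asyncio queues as mailboxes (the agent names and the divisible-by-three “search” are my inventions, and the exact values A sees depend on scheduling):

```python
import asyncio

async def agent_b(to_c, from_c):
    n = 1
    while True:                         # B and C loop forever internally
        await to_c.put(n)
        n = await from_c.get() + 1

async def agent_c(from_b, to_b, to_a, from_a):
    while True:
        n = await from_b.get()
        if n % 3 == 0:
            await to_a.put(n)           # async send to A; don't wait
        if not from_a.empty():
            n = from_a.get_nowait()     # A adjusted the "search parameters"
        await to_b.put(n)

async def agent_a(from_c, to_c):
    for _ in range(3):
        print("A saw", await from_c.get())
    await to_c.put(100)                 # steer the B/C loop, fire and forget
    print("A saw", await from_c.get())

async def main():
    b_to_c, c_to_b = asyncio.Queue(), asyncio.Queue()
    c_to_a, a_to_c = asyncio.Queue(), asyncio.Queue()
    internals = [asyncio.create_task(agent_b(b_to_c, c_to_b)),
                 asyncio.create_task(agent_c(b_to_c, c_to_b, c_to_a, a_to_c))]
    await agent_a(c_to_a, a_to_c)       # A only ever sees C's mailboxes
    for t in internals:
        t.cancel()                      # tear down the infinite loop
    await asyncio.gather(*internals, return_exceptions=True)

asyncio.run(main())
```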

My concept of what an “actor” in this model would be does assume that even something as simple as “an integer” or “a property slot” is an actor, in the same way that in Smalltalk, variables and integers are objects. So what we might normally think of as a single actor would actually always be a collection of very tiny actors. This idea might not be in Hewitt’s conception of Actors; it’s just what seems natural to me, after having my head in a Smalltalk / Lisp / Forth kind of space, where everything in a language is built out of the same very tiny “things”.

Does this understanding match yours, or are we still on a different page? Am I still overlooking something very important? I can’t say I’ve ever actually built an actor language of the kind I’m sketching here, so maybe there’s something vital to how this would work in practice that’s actually very hard.

(I would assume the hard thing is “how to mitigate/structure the consequences of concurrency”, since uncontrolled async messages at any time could mean any actor’s state can mutate randomly while you’re in the middle of a computation, which is normally A Very Bad Thing. So presumably we wouldn’t want Agents B and C to be servicing messages from Agent A just anywhere in the middle of their fine-grained computation? We’d want them to be doing chunks of real stuff in nicely blocked transactions, then have Agent C handle requests from Agent A in transactions somewhere in the middle, during some kind of clearly marked out “safe to receive new work” state? Whenever I’ve thought about implementing an Actor or Functional Reactive language - or even trying to understand what Elm or FrTime or React are doing in their core loop - this is always the part that gives me the cold shivers and where my brain stops working. How time is coordinated and scheduled such that state doesn’t get completely smashed.)

Sorry if I’m rambling and this is completely off-point from your problem.

But there’s a cool throwaway comment in Alan Kay’s Quora corpus where he mentions that the Xerox Alto had an interesting feature in its microcode: it didn’t have interrupts! Instead, it did some kind of a really fast polling loop of its low-level device ports and… did something clever in response that was not a classical interrupt handler but was more fully in the spirit of Smalltalk - I don’t quite understand what. But it might be relevant to this problem.

Kay also says that he really likes Erlang, that it’s very close to his idea of Objects, so that’s another possible inspiration for solutions.

Actually in the “Actors” (probably not the same as Hewitt’s Actors) model I’m sketching here, I imagine a “thread of execution” would be an Actor itself, ie, the whole point of an Actor existing would be that it is a thread. So “sending an asynchronous message” would actually be “creating a new Actor”. And I’d imagine that it would be as simple as evaluating an expression containing the “new” message, and in fact probably would just be that.

I’m not sure that I’ve entirely thought this through, in fact I’m sure I haven’t; I’ve thought about it a lot and have never got to the point of the ends joining up. But it’s pretty much what I thought Object-Oriented Programming was about when I first met Java in the mid-1990s, and then was shocked and stunned to find that Java objects weren’t parallel-executing units of concurrency but all ran in a single thread, and that “new” stopped the world and waited for a “dumb” object to be created, which then waited for you to do something to it.

What I think I really want is Functional Reactive Programming in a tiny Lisp-like form that’s small enough for me to understand. (React and its friends are emphatically Not That.) I feel that FRP might turn out to be just Actors in a trenchcoat - but, I’m not sure that it is.

I used to also feel that there ought to be a very close correspondence between “concatenative languages” - especially in a prefix rather than postfix form, which is a bit unusual to find - and coroutines/generators, and independent parallel processes sending messages in realtime. The reason I felt this is that in a prefix concatenative language, you can “return a result and yet also keep an infinite computation running” just in a single expression. However, my intuition ran out halfway through the process of sketching out the consequences of this feeling and trying to build a toy prefix language to see if I could make it work. I didn’t quite understand what shape I wanted expressions to be and what it would mean for them to “execute in parallel” in a single term-rewriting system.

(Although I do still feel that that ought to be possible: just picture a computer, say, as a single expression into which a sequence of input events are fed… it’s just what happens in the middle that is vague to me.)

The primitive actors can indeed be implemented with asynchronous messages, or with Erlang or any other actor framework.

But we don’t want to program with primitive actors. The details for this elude me at the moment.