Stewart “the best news since psychedelics” Brand (85 and still going!) hit Hacker News yesterday by posting the latest chapter of his book-in-progress, “Maintenance: Of Everything”, titled “The Soul of Maintaining a New Machine”. It’s about the Xerox technician subculture of the 1980s and 1990s; the PARC-run “Eureka Project”, a bottom-up attempt at knowledge sharing that collided with Xerox’s rigid top-down management style; and the concept of “communities of practice” that grew out of Eureka.
Seems highly relevant to malleable computing.
One irony in particular strikes me: the top-down, management-imposed “decision tree” style Fault Isolation Procedures, which the technicians hated and which are contrasted with the freewheeling, PARC-led, knowledge-driven Eureka approach… seem to me to be a little too close to Alan Kay’s idea of objects and his wish to get rid of “data”. Objects which expose no internals and can only do actions programmed into them by a distant programmer (much like the FIP trees, which only allowed pre-programmed actions and recommended replacing the whole unit rather than fixing it), vs knowledge of details which can be shared beyond what the designers expected… Hmm… Yet Kay came from PARC, not management! So one would naively expect him to be on the side of shareable knowledge rather than sealed mystery modules. What’s with that? One more conceptual conflict to add to the multilayered Xerox mythology, I guess.
Compare the FIP concept of “do this, then that” with the typical OOP API experience, where all interaction with objects is through verbs. Also compare it with the typical programmer’s experience of using an object-oriented component library designed by someone else - especially with systems for which no source code is available, so you can’t fall back on the last-resort trick of pulling the source from GitHub to see what the mystery boxes are actually doing.
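To make that “verbs only” experience concrete, here’s a tiny hedged sketch - the class, fields, and methods are all invented for illustration, not taken from any real library. The object accepts commands, but its internals are walled off, so when it misbehaves all you can do is invoke another verb or swap the whole thing out:

```typescript
// Hypothetical sketch only: Copier, copy(), reset() and the private fields
// are invented names, not a real API.
class Copier {
  #jamCount = 0;     // internal state the caller can never see
  #fuserTemp = 180;

  copy(pages: number): void {
    // does its work (or mysteriously fails) based on state you can't inspect
    if (this.#fuserTemp < 160) this.#jamCount += pages;
  }

  reset(): void {
    this.#jamCount = 0; // the only "repair" on offer
  }
}

const c = new Copier();
c.copy(10);     // a verb
c.reset();      // another verb
// c.#jamCount  // not allowed: like a FIP, you never get at the internals
```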
That last-resort trick is one I used just last week in my day job - I too belong to the technician tribe, and we always struggle with the broken things that developers give us - on an official HP driver install tool that was mysteriously failing. Because the source was available on GitHub, I was able to find the root cause by pulling the black box apart and testing the component PowerShell command lines individually. If the source had not been available - if Alan Kay’s strong encapsulation wall around that object had been in place and I had been judged as “not having the need to know” that information - I’d still be in the dark as to what was breaking and why. And our customers would not have a working system.
(I may of course be misjudging Kay, again. Smalltalk included source browsers. So do “encapsulation” and its accompanying doctrine of “information hiding” imply that the user should or shouldn’t have the source code for objects? If the user can get the source code, then why can’t they also access the internal object state? I mean, from a 10,000-foot view, it does feel like these two major software-engineering philosophical currents of our age - Information Hiding and Open Source - are in fact very much at war. But everything is always more subtle and complex than it seems.)
I just finished reading Shop Class as Soulcraft, and a large chunk of chapter 7 - a section titled “Personal Knowledge versus Intellectual Technology” - is about exactly this. The crux of it seems to be that there is no substitute for personal knowledge: better to guide people with principles than to try to solve their problems for them rigidly.
I feel like Information Hiding, as a philosophical principle, really should only apply to executing objects themselves, not to the user/programmer/operator with system-level access to those objects. It seems important to Alan Kay’s vision that the operator should be able to take exactly the same piece of “code” and run it in a software “CAD” or “simulator” tool on a large supercomputer - and then once it’s working, run that same code “live” on a low-spec machine - and the code itself shouldn’t behave differently based on its executing environment. Which means the code must not be allowed to read its environment, but the operator must be allowed to read the code.
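A minimal sketch of what I mean, with invented names throughout (ReportJob, Clock, Printer are assumptions, not anyone’s real API): the object never probes its host; everything it touches is handed to it explicitly, so the same code runs unchanged in the “simulator” and live, and the operator is free to read all of it.

```typescript
// Hypothetical sketch: the job can't read its environment,
// because its environment is passed in by the operator.
interface Clock { now(): number; }
interface Printer { print(text: string): void; }

class ReportJob {
  constructor(private clock: Clock, private printer: Printer) {}
  run(): void {
    this.printer.print(`report generated at ${this.clock.now()}`);
  }
}

// The same code, two environments:
// in the big "simulator", with a fake clock and a labelled printer...
new ReportJob({ now: () => 0 }, { print: (t) => console.log("[sim]", t) }).run();
// ...and "live" on the low-spec machine, with the real ones.
new ReportJob({ now: () => Date.now() }, { print: console.log }).run();
```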
At least, that’s the part of what Alan Kay thinks that makes sense to me. Then I read him elsewhere saying that he doesn’t like “data” and thinks that all objects should be little “ambassadors” that explore the system they’re embedded in to determine its capabilities and present themselves differently depending on their environment, and I’m like, wait, that’s the exact opposite of Information Hiding, isn’t it? That’s straight-up giving malware the keys to do reconnaissance and polymorphic camouflage. So I remain confused.
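For contrast, here’s what I understand the “ambassador” style to look like - again a hedged sketch, with Host, Ambassador, and the capability strings all invented: the object probes its host and presents a different face depending on what it finds, which is exactly the environment-reading the previous sketch forbids.

```typescript
// Hypothetical sketch: an object that does reconnaissance on its host
// and adapts what it offers accordingly.
interface Host { has(capability: string): boolean; }

class Ambassador {
  constructor(private host: Host) {}

  // Probes the environment and shapes its own interface to match.
  services(): string[] {
    const offered = ["summarize"];
    if (this.host.has("gpu")) offered.push("render");
    if (this.host.has("network")) offered.push("sync");
    return offered;
  }
}

// The same object presents a different face on different machines.
console.log(new Ambassador({ has: (c) => c === "network" }).services()); // ["summarize", "sync"]
console.log(new Ambassador({ has: () => true }).services());             // all three
```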
Edit: That earlier paragraph (about running “exactly the same” piece of code in both SIM and FAB environments) is perhaps still not quite what Kay thinks. From https://www.quora.com/What-was-the-last-breakthrough-in-computer-programming : “In software engineering, this would mean being able to automatically move from vetted designs on supercomputers to optimized systems that would work on commodity machines.” I guess that means he would allow that “optimized systems” of objects could be dramatically different between their “simulation” phase and their “fabrication/deployment” phase - as long as that process of optimizing was done “automatically”. How exactly automatic optimization is supposed to work when the objects in question are highly reflective, environment-sensing, Turing-complete code, I’m not at all sure. I also really don’t like Kay’s working assumption that software development should just naturally require supercomputers 10 years ahead of their time. That’s the exact centralized Developers vs Users situation we have today that we want to get away from! How do you get from “must have a supercomputer to program or you’re not doing it right” to “children should be able to program their own computers and share their work”? Again, an apparent massive contradiction in the core of Kay’s thinking, but again, he presumably sees his thinking as coherent, so I must be completely misunderstanding him. And so my quest to even begin to understand him restarts. What’s the key to his thought that I’m missing?
(Possibly one answer is that I have a deep fear of, and despair about, the abusive predator that Silicon Valley has become, and want programming to get out from under all central systems and run to the edges as fast as possible. Meaning I don’t want to think of programming as needing any capital investment at all, if possible; otherwise, yikes, survival for the little people is over before it’s begun. Kay, meanwhile, perhaps still thinks of SV / big academia / big industry / big systems / big capital as having a natural leadership role in the culture, of large machinery as giving a natural and positive advantage to everything, and just thinks that SV and big capital need to smarten up about how they exercise that leadership.)
But I do like Kay’s argument in that Quora post, that major step-changes in “levels” of programming languages pretty much ended in 1984. That was my impression also, as a kid in the 1980s reading Byte magazine, and seeing all of the cool languages and ideas I was reading about just evaporate into depressing (and terrifyingly insecure) C++/COM sludge by the end of that decade.
This was a digression from Xerox and their photocopier-repair culture, of course. Stewart Brand probably doesn’t see eye to eye with Alan Kay on everything, although Kay obviously admires Brand (he says that PARC during the Alto years kept a complete set of the Whole Earth Catalog, which they saw as an inspiration).
The “cosmic service” idea: that networked personal computing should be about not just doing jobs more efficiently, nor replacing humans more cheaply (the current drive for AI), but helping us (as humans) to think more clearly. So programming should be about designing both languages and a literature, with the primary goal of clarifying and amplifying human thought. That idea (in early Creative Computing and Byte magazines) is what attracted me as an 80s kid, and it’s why C++ repels me so enormously. As a machine, C++ might go “brr” very fast, but as a language it’s unclear and contradictory in its own thought-structures, and it makes human thinking more opaque. I have much the same feeling of despair and dread about large language models: this approach to AI doesn’t amplify human thinking, it suppresses it.
Alan Kay’s and Stewart Brand’s specific proposals can be argued with, and should be. We should also argue strongly with John McCarthy, John Backus, Ted Nelson, Richard Stallman, Doug Engelbart, Marvin Minsky, Vannevar Bush, Tim Berners-Lee, and JCR Licklider: not all of their ideas were compatible with each other’s. But the general ARPA vision of “human augmentation” remains exciting.