Thank you for your very thorough answers!
That makes a lot of sense, actually. So Skia is just acting as the “framebuffer” and is pretty much an OS-level I/O interface, and nothing from GT is being written in Rust. But wouldn’t it be nice for a Smalltalk VM to use Skia, so that things other than GT could use it? Is there some kind of rift between the Pharo and GT teams, or would putting Skia into Pharo break too much of Pharo, making it better to try it out in a completely new project?
> With today’s standard Smalltalk workflows, that never happens. Source code and notes are kept in Git repositories. Images are usually short-lived. I use mine for a few days at most.
I’ve heard this text-file-based workflow mentioned before. It bugs me quite a lot because it seems extremely “un-Smalltalky”: it feels like we’ve moved back to the “batch run, discard the flaky RAM environment after the run finishes” philosophy. On the other hand, it also feels like a survival adaptation to the brutally pragmatic problem of how to do change management and backup in a world where Smalltalk is neither the OS nor the network. It doesn’t really feel like the solution Alan Kay hoped for, but at least it works and is robust.
I find myself doing much the same thing in Node.js as my current “personal malleable computing platform”. I use Node inside Termux on my Android phone and Node in a command window on a Windows or Linux desktop. I keep my main scripts as .js module files and restart Node whenever I change them, but I drive Node mostly interactively from the REPL. It’s not exactly what I want, but it’s the only environment I’ve found so far that can run identical setups on desktop and phone. That means I always have it with me, I can write live database queries as JavaScript functions, and I can edit my data in a text editor. If necessary I can edit the code too, but I often find myself interactively prototyping functions just in the REPL. And then I can copy data and code in and out of the phone as plain files.
Node is surprisingly good as a REPL. Command-line history that persists between sessions, plus tab completion, means that even on a phone it’s possible to perform data loading and parsing steps without too many keystrokes. Copy-paste for patching small functions in flight isn’t as good as a proper browser/inspector, but it works better than one would expect.
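For reference, the persistent history mentioned here is on by default (written to `~/.node_repl_history`) and can be tuned through environment variables documented in Node’s REPL documentation; a small configuration sketch:

```shell
# Point the REPL history at a custom file (an empty value disables
# persistence entirely).
export NODE_REPL_HISTORY="$HOME/.my_node_history"

# Keep up to 2000 lines of history (the default is 1000).
export NODE_REPL_HISTORY_SIZE=2000

# Start the REPL; history now survives across sessions.
node
```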
It would be nice to have more structure than text files, but there seems to be a fundamental tension between “strictly classful/typeful objectness” and “cross-platform transportability”. The tighter we lock down types/objects, the more tightly they become bound to code. That means we must either export Turing-complete, system-dependent, hard-to-analyze code along with them (a cybersecurity risk, these days), or dissolve their encapsulation and identity at the network or disk boundary and export them as some kind of plain, examinable data record, then parse and “compile” them fresh as they come in from the network or disk.
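The “dissolve at the boundary” option can be sketched in a few lines (class and field names here are hypothetical, chosen only for illustration): on the way out, an object is stripped down to a plain, examinable record; on the way in, the record is validated and a fresh object is “compiled” on the receiving side, with no live code crossing the boundary.

```javascript
class Note {
  constructor(id, text) { this.id = id; this.text = text; }
  summary() { return `#${this.id}: ${this.text}`; }

  // Export: drop behavior and identity, keep only inspectable data.
  toRecord() { return { kind: 'note', id: this.id, text: this.text }; }

  // Import: parse the plain record, check it, and build a fresh object
  // on this side of the network/disk boundary.
  static fromRecord(rec) {
    if (rec.kind !== 'note' || typeof rec.id !== 'number'
        || typeof rec.text !== 'string') {
      throw new Error('invalid note record');
    }
    return new Note(rec.id, rec.text);
  }
}

// What crosses the boundary is plain, examinable data...
const wire = JSON.stringify(new Note(7, 'hello').toRecord());

// ...and the receiver reconstructs a new object from it.
const revived = Note.fromRecord(JSON.parse(wire));
console.log(revived.summary());
```

The price is exactly the one described above: the revived object shares no identity with the original, and every boundary crossing needs a parse-and-validate step.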
It always fascinates me that Smalltalk was a product of the ARPA-IPTO culture and heavily inspired by the message-passing Internet. And yet! The Internet itself did not become an object-based network. I wonder if there’s an alternate history where ARPA went completely down the Smalltalk-y route, such that instead of a file-based Web we got a completely distributed object-based network? What might such a thing have looked like?
For most of the 1990s it felt like such a thing was absolutely inevitable, until it suddenly wasn’t. I can’t quite put my finger on when the “wasn’t” hit, but it was probably somewhere between the rise of Google and Linux (around 2000), with their “text is the best interface” mindset, and the dominance of the iPhone and app stores (circa 2010). I think it was the mid-2000s when I first noticed the “object orientation is bad, actually” backlash.