Agreed: security with friendly defaults is hard to get right in distributed identity systems. Scuttlebutt, NOSTR and Anytype are interesting experiments on that front, with the last having the friendliest default experience for configuring your passphrase.
Regarding the unfriendliness of Git and Jupyter’s JSON for storing documents and their history: that’s why I use Lepiter, Markdeep and Fossil to get a human-friendly format and a storage/publishing repository. It requires technical knowledge, but it has been tested and welcomed in non-technical and grassroots communities with good results so far, provided that there is some community facilitation to introduce such a tech stack into a community’s activities.
I like NOSTR’s general approach, though I’m not sure I trust the particular community that happens to be developing it (since they seem to be baking cryptocurrencies and “pay to play” into the relays, which is a big disincentive for me to get involved). Also I don’t know why they went with secp256k1 instead of Curve25519, which is built into standard Web stacks like Node.js. But I like the general idea of a) picking an elliptic curve cryptosystem, so the keys are tiny compared with PGP/GPG; b) each user generates their own keypair with no third party needed; c) your public key is your identity and it’s a copy-pastable line of hex; d) rely entirely on trusted introductions from other known individuals (web of trust) rather than centralised pay-for-play PKI, and that’s it, it’s done. If you want to post a document or data file and assert proof that it’s yours, all you need is about two lines of text to prefix it (your public key and the signature of the SHA2 of the file). That’s not too unfriendly - it could be made much better than, e.g., GPG-encrypted email was.
PGP/GPG suffers from too much abstraction (among other issues). It presents itself as a system that automagically handles encryption and digital signatures for users who don’t need to know how it works. But it’s much too complicated for most potential users. A more concrete implementation, as you describe, would perhaps work much better in practice. Users would have to know what keys look like and how they are used, but they would also see those keys every day and thus learn about them “on the job”.
I’d like to see Smalltalks with AOT native compilers in the image instead of JITs. JITs, I think, came about because the image and VM were predefined and not designed with AOT native compilers in mind.
When would you run an AOT compiler in a Smalltalk system? What would you gain compared to JIT? I do see the point of native vs. bytecode, but in the context of Smalltalk, AOT and JIT seem to be equivalent to me.
The AOT compiler would be in the image.
The image would contain two compilers (or one compiler with multiple targets).
- One quick and simple compiler to bytecode.
- One optimising compiler to native code.
This would allow a much simpler VM with less low-level code, and would enable work on the compilers at the Smalltalk level instead of at the much harder, lower VM level.
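The two-compiler split can be sketched as a toy: one quick compiler to stack bytecode with a small interpreter, and one “optimising” compiler that emits native code (here a JS closure via `new Function`, standing in for machine code). The expression language and names are invented for illustration; this is not how any real Smalltalk image is structured.

```javascript
// AST forms: ['+', a, b], ['*', a, b], ['x'] (the argument), or a number.

// Tier 1: quick, simple compile to stack bytecode.
function compileToBytecode(ast, code = []) {
  if (typeof ast === 'number') code.push(['push', ast]);
  else if (ast[0] === 'x') code.push(['arg']);
  else {
    compileToBytecode(ast[1], code);
    compileToBytecode(ast[2], code);
    code.push([ast[0]]);
  }
  return code;
}

// The VM stays tiny: it only needs to run this flat bytecode.
function interpret(code, x) {
  const stack = [];
  for (const [op, v] of code) {
    if (op === 'push') stack.push(v);
    else if (op === 'arg') stack.push(x);
    else {
      const b = stack.pop(), a = stack.pop();
      stack.push(op === '+' ? a + b : a * b);
    }
  }
  return stack.pop();
}

// Tier 2: optimising compile straight to native code. Optimisations
// (here, constant folding) live in the high-level compiler, not the VM.
function compileToNative(ast) {
  const emit = (n) => {
    if (typeof n === 'number') return String(n);
    if (n[0] === 'x') return 'x';
    const a = emit(n[1]), b = emit(n[2]);
    if (/^[\d.]+$/.test(a) && /^[\d.]+$/.test(b)) // fold constants
      return String(n[0] === '+' ? +a + +b : +a * +b);
    return `(${a} ${n[0]} ${b})`;
  };
  return new Function('x', `return ${emit(ast)};`);
}

const ast = ['+', ['*', 2, 3], ['x']];              // 2*3 + x
console.log(interpret(compileToBytecode(ast), 10)); // 16
console.log(compileToNative(ast)(10));              // 16
```

Both tiers agree on results, but all the interesting compiler work happens in ordinary high-level code, which is the point being argued above.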
When would you run the optimizing compiler, and on what?
Today’s Smalltalk implementations compile each method to bytecode after any edit. They run the bytecode through a JIT compiler when an expression is evaluated, specializing for (and caching) specific method arguments.
You can’t do any of these optimizations at the single-method level because everything is dynamic in Smalltalk. So AOT would make sense only for evaluating an expression. But even then you wouldn’t get very far before discovering specific method arguments to optimize for.
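The specialise-and-cache step described above can be sketched as a monomorphic inline cache at a send site, keyed on the receiver’s class. The names (`makeSendSite`, `misses`) are invented for illustration, and real Smalltalk JITs patch machine code rather than closing over variables, but the behaviour is the same: the full method lookup runs only on a cache miss.

```javascript
// A send site that remembers the last receiver class and its method.
function makeSendSite(selector) {
  let cachedClass = null, cachedMethod = null;
  const site = function (receiver, ...args) {
    const klass = Object.getPrototypeOf(receiver);
    if (klass !== cachedClass) {          // miss: do the full lookup once
      site.misses++;
      cachedClass = klass;
      cachedMethod = receiver[selector];  // walks the prototype chain
    }
    return cachedMethod.apply(receiver, args); // hit: direct call
  };
  site.misses = 0;
  return site;
}

class IntBox { constructor(v) { this.v = v; } plus(o) { return this.v + o; } }
class StrBox { constructor(v) { this.v = v; } plus(o) { return this.v + String(o); } }

const sitePlus = makeSendSite('plus');
console.log(sitePlus(new IntBox(2), 3));   // 5   (miss: lookup, then cache)
console.log(sitePlus(new IntBox(4), 1));   // 5   (hit: no lookup)
console.log(sitePlus(new StrBox('a'), 1)); // 'a1' (new class: re-specialise)
console.log(sitePlus.misses);              // 2
```

Which receiver classes actually arrive at a site is exactly the information an AOT compiler lacks before the program runs, and a JIT observes for free.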