eliot | 2025-05-13 17:17:40 UTC | #1 First-time poster here, thank you for this forum of interesting ideas. I've long been fascinated by the history and future of personal computers and end-user programming. I'm enjoying reading the posts, exchanges, explorations. Without having any particular point to make.. Sometimes I think about the birth of the world wide web, its vision of empowering people, augmenting the human intellect; about how it's turning out in reality; and the creative liberating potential of the medium. I feel like there's a reason why the web was invented on the NeXT computer. Not to praise this particular model or brand, but to acknowledge that the conceptual design of this machine, its architecture and user interface - inherited and evolved from the explorations at Xerox PARC research lab - this personal computing environment must have played a role in the creative thinking process that led to the innovation. --- ![nextstep-mathematica|672x499](upload://hBKp3RfSFvESMmqtkLo51bJn5z4.jpeg) > NeXTSTEP had been such a productive development environment that in 1989, just a year after the NeXT Computer was revealed, [Sir Tim Berners-Lee at CERN used it to create the WorldWideWeb](https://webfoundation.org/about/vision/history-of-the-web/). - [The Deep History of Your Apps: Steve Jobs, NeXTSTEP, and Early Object-Oriented Programming - Computer History Museum](https://computerhistory.org/blog/the-deep-history-of-your-apps-steve-jobs-nextstep-and-early-object-oriented-programming/) > During the 1997 MacWorld demo, Jobs revealed that in 1979 he had actually missed a glimpse of two other PARC technologies that were critical to the future. > > One was pervasive networking between personal computers, which [Xerox had with Ethernet](https://www.computerhistory.org/revolution/networking/19/381), which it invented, in every one of its Alto workstations. > > The other was a new paradigm for programming, dubbed “object-oriented programming,” by Alan Kay. Kay, working with Dan Ingalls and Adele Goldberg, designed a new programming language and development environment that embodied this paradigm, running on an Alto. Kay called the system “Smalltalk”.. > Smalltalk’s development environment was graphical, with windows and menus. In fact, Smalltalk was the exact GUI that Steve Jobs saw in 1979.. During Jobs’ visit to PARC, he had been so enthralled by the surface details of the GUI that he completely missed the radical way it had been created with objects. The result was that programming graphical applications on the Macintosh would become much more difficult than doing so with Smalltalk. ![nextstep|690x473](upload://uPj3WDgp1gKmbpmvDwpCBYgqA0x.jpeg) > With the NeXT computer, Jobs planned to fix this exact shortcoming of the Macintosh. The PARC technologies missing from the Mac would become central features on the NeXT. > > NeXT computers, like other workstations, were designed to live in a permanently networked environment. Jobs called this “inter-personal computing,” though it was simply a renaming of what Xerox’s Thacker and Lampson called “personal distributed computing.” Likewise, dynamic object-oriented programming on the Smalltalk model provided the basis for all software development on NeXTSTEP. 
--- - [World Wide Web Foundation: History of the Web](https://web.archive.org/web/20250409235333/https://webfoundation.org/about/vision/history-of-the-web/) > In March 1989, Tim laid out his vision for what would become the web in a document called “[Information Management: A Proposal](http://info.cern.ch/Proposal.html)”. Believe it or not, Tim’s initial proposal was not immediately accepted. In fact, his boss at the time, Mike Sendall, noted the words “Vague but exciting” on the cover. > The web was never an official CERN project, but Mike managed to give Tim time to work on it in September 1990. He began work using a [NeXT computer](http://en.wikipedia.org/wiki/NeXT_Computer), one of Steve Jobs’ early products. > > By October of 1990, Tim had written the three fundamental technologies that remain the foundation of today’s web.. - HTML: HyperText Markup Language. The markup (formatting) language for the web. - URI: Uniform Resource Identifier. A kind of “address” that is unique and used to identify each resource on the web. It is also commonly called a URL. - HTTP: Hypertext Transfer Protocol. Allows for the retrieval of linked resources from across the web. > Tim also wrote the first web page editor/browser (“WorldWideWeb.app”) and the first web server (“httpd”). ![image|673x500](upload://7v1EKe1coZFrg6Jp0Aq5suZg2sn.jpeg) --- It's significant that the first web browser was also a web authoring tool. > Tim Berners-Lee wrote what would become known as WorldWideWeb on a NeXT Computer during the second half of 1990, while working for CERN. WorldWideWeb is the first web browser and also the first [WYSIWYG](https://en.wikipedia.org/wiki/WYSIWYG) [HTML editor](https://en.wikipedia.org/wiki/HTML_editor). > > The browser was announced on the newsgroups and became available to the general public in August 1991. By this time, several others..were involved in the project. > > ..The team created so-called "passive browsers" which **do not have the ability to edit** because it was hard to port this feature from the NeXT system to other operating systems. Passive browsers.. It's a small step from that to the "passive web", as a kind of [faster horse](https://quoteinvestigator.com/2011/07/28/ford-faster-horse/) of television. ------------------------- eliot | 2025-05-16 12:03:37 UTC | #2 ![Smalltalk-80-GUI|425x500](upload://nRlfltXmwo5uLECyR7n7Z6dRvFz.jpeg) So what happened to NeXT and the original vision of `WorldWideWeb.app`, a unified object-oriented and networked environment that was a **web browser, authoring tool, and server** all in one.. Did it scatter like a broken mirror, each piece reflecting what could have been? ![Interface Builder for NeXTSTEP](upload://q4Ev0W14o1ftFcHnVsRpnmNL33b.jpeg) Interface Builder, HyperCard, and Xcode.. > Interface Builder is descended from the NeXTSTEP development software of the same name. > > It was written in Lisp and deeply integrated with the [Macintosh Toolbox](https://en.wikipedia.org/wiki/Macintosh_Toolbox). Interface Builder was presented at MacWorld Expo in San Francisco in January 1987. > > Denison Bollay took Jean-Marie Hullot to NeXT after MacWorld Expo to demonstrate it to Steve Jobs. Jobs recognized its value, and started incorporating it into NeXTSTEP, and by 1988 it was part of NeXTSTEP 0.8. It was the first commercial application that allowed interface objects, such as buttons, menus, and windows, to be placed in an interface using a mouse.
> > One notable early use of Interface Builder was the development of the first web browser, WorldWideWeb by Tim Berners-Lee at CERN, made using a NeXT workstation. ![MacPaint|200x200](upload://5VPN7kdkKWlLB3I2tsRsMMDm7Hi.gif) > HyperCard was created by Bill Atkinson (as described in [this interview](https://twit.tv/shows/triangulation/episodes/247) ~10:50). Work for it began in March 1985 under the name of WildCard. In 1986, Dan Winkler began work on HyperTalk and the name was changed to HyperCard for trademark reasons. > > It was released on 11 August 1987 for the first day of the MacWorld Conference & Expo in Boston. ![hypercardb|200x200](upload://gxkvLHSB5rSMuTgxGEA142VFPaD.gif) I see, so Interface Builder and HyperCard were released in the same year. Must have been something in the air, or water. In the linked interview, Bill says he was working on HyperCard when Steve Jobs left Apple, and Jobs tried to convince him to continue working on HyperCard at NeXT. ![HyperCard-bird|525x371](upload://tIdY7IqiKFmRWc42Xa7PWzlzkG6.jpeg) > HyperCard is a software application and development kit for Apple Macintosh and Apple IIGS computers. It is among the first successful [hypermedia](https://en.wikipedia.org/wiki/Hypermedia) systems predating the [World Wide Web](https://en.wikipedia.org/wiki/World_Wide_Web). > > HyperCard combines a flat-file database with a graphical, flexible, **user-modifiable interface**. HyperCard includes a built-in programming language called HyperTalk for manipulating data and the user interface. "What I wanted to make essentially was a **software construction kit** that allowed non-programmers to put together pre-fab modules, drag-and-drop a field here and a button there. ..With automatically retained information - you put something in a field, and unplug the computer, it's still there.." > The database features of the HyperCard system are based on the storage of the state of all of the objects on the cards in the physical file representing the stack. The database does not exist as a separate system within the HyperCard stack; no database engine or similar construct exists. Instead, the state of any object in the system is considered to be **live and editable at any time**. > > ..The system operates in a largely stateless fashion, with no need to save during operation. --- ![ncsa mosaic|443x500](upload://4Qoi09Bmm4cTKyKDBoUChZKpI32.jpeg) Browsers went one way: Viola, Samba, Mosaic.. > [Mosaic was inspired by ViolaWWW](https://www.w3.org/DesignIssues/TimBook-old/History.html), which [was inspired by HyperCard](https://www.google.com/books/edition/How_the_Web_was_Born/pIH-JijUNS0C?hl=en&gbpv=1&pg=PA213&printsec=frontcover), which was inspired by Smalltalk ([and LSD](https://boingboing.net/2018/06/18/apples-hypercard-was-inspire.html)), which was inspired by Engelbart's [NLS](https://www.youtube.com/watch?v=yJDv-zdhzMY), and Engelbart was inspired by the 1945 article ["As We May Think"](https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/) by Vannevar Bush. ![image|232x237](upload://aK2LmLqrwVLQvT63iy9ax2Uv0ji.png) Editors went another way: GUI frameworks, widget toolkits, IDEs, visual app builders.. --- ![lambda-2d|545x500](upload://35ITjkOx6iIO2RJKcG6UNkx5tcE.png) From the concept of an "object" in Smalltalk.. > Smalltalk emerged from a larger program of Advanced Research Projects Agency (ARPA) funded research that in many ways defined the modern world of computing.
In addition to Smalltalk, working prototypes of things such as hypertext, GUIs, multimedia, the mouse, telepresence, and the Internet were developed by ARPA researchers in the 1960s. > > ..the Smalltalk language and environment were influential in the history of the graphical user interface (GUI) and the *what you see is what you get* (WYSIWYG) user interface, font editors, and desktop metaphors for UI design. > > The powerful built-in debugging and object inspection tools that came with Smalltalk environments set the standard for all the integrated development environments, starting with [Lisp Machine](https://en.wikipedia.org/wiki/Lisp_Machine) environments, that came after. ![libfive|690x354](upload://4FnFmMgy7HIbDfM0S0XQErk4Wnt.jpeg) > Then I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as “mocklisp,” which looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things. > > I concluded I couldn't use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, **making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs**. > > - [My Lisp Experiences and the Development of GNU Emacs](https://www.gnu.org/gnu/rms-lisp.html) ![hypercard-screenshot-c14|640x480](upload://fplVbGegJBSMHNsYJTMzZ6Uf3Ju.jpeg) > HyperCard’s never to be released successor, [SK8](https://en.wikipedia.org/wiki/SK8), which was originally Lisp-based and eventually migrated to HyperTalk, which evolved into SK8Script. There’s an entire lesser known history of Apple and Lisp behind this, e.g., MacFrames, another predecessor to SK8, used Coral Lisp, which eventually was acquired by Apple and became Macintosh Common Lisp. - [Why Hypercard Had to Die](https://www.loper-os.org/?p=568) > ..SK8 (pronounced "skate") was a multimedia authoring environment developed in Apple's Advanced Technology Group from 1988 until 1997. It was described as "HyperCard on steroids", combining a version of HyperCard's HyperTalk programming language with a modern object-oriented application platform. > > The project's goal was to allow creative designers to create complex, stand-alone applications. The main components of SK8 included the object system, the programming language, the graphics and components libraries, and the Project Builder, an integrated development environment. ![react app products|690x388](upload://wwoZVgVaqdZ72fOAo3Dfv1chrYb.jpeg) > Bill Atkinson, the inventor of HyperCard [was a student](https://mprove.de/visionreality/text/2.1.10_hypercard.html) in the Smalltalk classroom series. Alan Kay, of the Language Research Group at Xerox PARC & inventor of Smalltalk, would advise him on Hypercard. > > HyperCard also included the Hypertalk programming language, which would serve as one of the inspirations behind JavaScript. Is React a poor man's Lisp, or an evolutionary offspring of HTML to fill a niche, dreaming of becoming a direct manipulation programming tool? What about web servers, confined to the realm of specialists - doesn't everyone deserve the ease of serving their own data and programs from their own personal computer or device? 
And email, why can't we send messages directly from my screen to yours, without having to go through someone else's server? And how about WebAssembly, a universal low-level language that runs on (almost) any platform, which (almost) any language can compile to. That sounds awfully like a mythical Lisp to express computation and to build an inter-personal computing environment with. Shouldn't it be simple, basic, and accessible enough for children and non-technical people to learn to write - or at least visually build with? Or Web Components. They sound like the building blocks of the future, with which we can create our own `www.app`, a "web browser, authoring tool, and server all in one". But our timeline didn't turn out that way, at least not yet. ------------------------- akkartik | 2025-05-16 04:18:23 UTC | #3 In the main I resonate with your sentiments. But I want to push back on the "doesn’t everyone deserve" framing. The cosmos has absolutely nothing to do with fairness or what people deserve. Might as well ask if everyone deserves to be able to travel the galaxy, or to never want for anything, or to be able to live in peace. Computers would seem miraculous a short 100 years ago. It's incredible what we can do with them. Many things are hard, and we should work hard to make them easy. Absolutely nothing has ever happened because we "deserve" it. For example, we can't run webservers or send emails to someone else's computer because there are security risks associated with them. On the one hand computers can be miraculous, but on the other hand computers are also dark forests full of predators. These are real, hard problems. There's no reason to expect us to have solutions in our lifetimes. In darker moments I would also argue humanity does not "deserve" such things, because we so often misuse the miracles we _are_ granted. ------------------------- akkartik | 2025-05-16 04:20:41 UTC | #4 Your original posting hit hard because [I recently published a hypertext browser](https://akkartik.name/post/luaML2) -- without the ability to edit hypertext from within the browser. ------------------------- eliot | 2025-05-16 14:48:18 UTC | #5 Ah yes, I saw the hypertext browser recently, made with Lua and [LÖVE](https://love2d.org/). It's lovely, I'm inspired by how simple and small it is, like re-thinking the web browser from first principles. The project was on my mind at times as I wrote down my rambling thoughts on where the web came from and where it's going. ![image|690x280](upload://zUb5Lwmpn20eMFpa18NdS2EMXuK.png) It feels Lisp-y, a tree, a list of lists. I like that all the nodes have the same structure, the type/tag/function and its data/arguments/attributes/children. If it were to go the way of Emacs - a questionable proposition - it could become a self-hosted hypertext browser and editor (written in hypertext?) to make "the data structures of the internals of the editor exposable and manipulable by the user’s Lisp programs", or in this case their hypertext document/application. Not sure what good that would do, I just like the concept of a self-hosted language, like a Lisp interpreter written in Lisp, or a C compiler that can compile itself. --- > ..push back on the “doesn’t everyone deserve” framing. The cosmos has absolutely nothing to do with fairness or what people deserve. True, maybe "deserve" was not the best word to use here, with its connotation of privilege and entitlement.
The direction I wanted to point at was about civil rights, the right to life, liberty, and general-purpose computing, as a kind of public good like water or electricity. Do people have a right to water and electricity? Well, maybe it's not about human rights, or what anybody deserves, it's about access and social infrastructure. But what if a corporation like Nestlé came in and bought up the water rights in an entire region, leaving the public unable to access it except by going through this middleman (middle-person), paying a toll for what should be (mm? questionable wording there) - common property, a public good. - [The coming war on general-purpose computing](https://boingboing.net/2012/01/10/lockdown.html) (2012) That's getting political though, and I can imagine arguments *against* the concept of having any "public good" at all. Who will own this public good, the state? Well, they're the ultimate middleman. What I meant was more like: the world would be a better place if people had open access to personal computing and networked communication without too many unnecessary gatekeepers. Ideally no third-party intermediaries at all, so we can compute and communicate with each other directly. That's a naïve notion, I admit, that requires too much trust. As you said about email and web servers, it would be exploited by spammers, scammers, malicious actors ruining the public good. > we can’t run webservers or send emails to someone else’s computer because there are security risks associated with them. ..[C]omputers are also dark forests full of predators. These are real, hard problems. True.. But I still want to push the limits of vulnerability (ha!) and the risk of immediacy. I'm too optimistic about technological solutions to social questions, but it feels like with cryptography (authentication, authorization) and the right kind of system design, we could achieve much more open and direct interpersonal computing. What that would look like, I don't know. Like the rest of this post, I don't have a concrete point or idea; I'm trying to make sense of the history of how we got here, why web technology is the way it is. I'm also trying to be humble, to understand that things are the way they are because generations of smart people worked on it, innovating and improving, and mostly with good intentions to contribute to the common good. ------------------------- akkartik | 2025-05-16 16:27:30 UTC | #6 You're right, the precise words don't matter -- the distinction between entitlement and right and unsolved problem -- if we agree on the broad sentiment. The thing I wonder about is what one small step might look like towards the infrastructure I want to see everyone have over time. In that context, it's interesting to ask what _one_ very privileged person should have today that would let anyone send them a message without any intervening servers, without them taking on undue risk, that would motivate them to actually install it and use it. ------------------------- eliot | 2025-05-16 18:12:38 UTC | #7 I'm digging deeper into [Freewheeling Apps](https://akkartik.name/freewheeling-apps), and I see you're making tons of experiments and variations on the concept, including many "self-editing" applications, like the chess app where one can edit the underlying logic within the app itself. It's a joy to see such a "meta" app that lets the user modify it as it's running. And the smallness of each experiment is inspiring, like bite-sized thoughts exploring the problem space, discovering insights.
> what one small step might look like towards the infrastructure I want to see everyone have over time Mmm, a small but very big question. What comes to mind in the direction of such an infrastructure.. Distributed decentralized mesh networks, where everyone is a node receiving and sending messages, including dynamic executable messages. (Not necessarily textual code as interface, it could be visually built programs in a shared environment.) The hardware and software, a thin stack as close to the metal as possible, so the entire system can be understood - and (re)built from scratch - by a single person, as in [permacomputing](https://permacomputing.net/). I imagine a world-wide network of Lisp machines with a HyperCard-like operating system. The next NeXT, the hyper-HTML. (Begone, [curse of Xanadu](https://web.archive.org/web/20250111195510/https://www.wired.com/1995/06/xanadu/)!) But to return to the one small step.. > let anyone send them a message without any intervening servers, without them taking on undue risk, that would motivate them to actually install it and use it. Can a browser communicate with another browser directly, without intervening servers? [WebRTC](https://w3c.github.io/webrtc-pc/) can do peer-to-peer communication, though it requires a signaling server for the initial negotiation, plus a STUN server for NAT traversal (and a TURN relay when a direct connection can't be established). In any case the browser seems too big; it could never be understood entirely, much less built from scratch. It should be at most an optional dependency. Can a native application with or without GUI communicate with another instance of itself across the network? Of course, TCP/IP, HTTP, yea you know me. So there would be a discovery mechanism where these distributed apps can find each other. How about a tiny self-hosted hypermedia browser/editor/server app whose instances can connect to each other with a handshake, that lets you exchange messages, programs, objects and environments. Is there a "smol web"? Oh there is, and they even use a [subset of HTML](https://smolweb.org/specs/index.html). How cute! Similarly, there was that [Gemini protocol](https://geminiprotocol.net/). > Gemini is an application-layer internet communication protocol for accessing remote documents, similar to HTTP and Gopher. I think that's getting closer, something like this. Well, I'm going to be thinking for a while about what this "one small step" can look like, toward the kind of infrastructure I want to see. ------------------------- khinsen | 2025-05-16 18:34:01 UTC | #8 Since we are dreaming of possible futures here, I'll add mine: I'd like to see simple but extensible protocols for a future Web. Gemini, for example, is too limited for what I'd like to be able to do. But it would be sufficient for 90% nevertheless. Why not have Gemini/base (or some HTML subset instead), and then optional add-ons? Clients, servers, or peers would negotiate which add-ons they can use together. ------------------------- akkartik | 2025-05-16 18:57:12 UTC | #9 @eliot Beyond discovery there's a problem of firewalls. The world outside our computers is increasingly hostile, and firewall rules increasingly necessary. But that means it takes specialized knowledge to poke a hole through your computer's firewall rules, and expose a service to outside your computer. I'd say the reason everybody cannot run a server today is the hostility of inter-computer space and the difficulty of working with the life-support systems that help our computers survive in inter-computer space.
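To make that concrete: the code for exposing a service is the easy part. A minimal TCP listener is a couple dozen lines of C (a rough sketch below; the port number and greeting are arbitrary). Everything hard lives outside this code: firewall rules, NAT, and the judgment about what's safe to expose.

```c
// Minimal TCP listener: the "easy part" of running a server.
// Build: cc -o hello-server hello-server.c  (POSIX; port is arbitrary)
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY); // listen on all interfaces
    addr.sin_port = htons(9090);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 8);
    for (;;) { // serve forever, one connection at a time
        int conn = accept(fd, NULL, NULL); // blocks until a peer connects
        const char *msg = "hello from my machine\n";
        write(conn, msg, strlen(msg));
        close(conn);
    }
}
```

Anyone on your LAN can now reach it with `nc your-ip 9090`. But reaching it from outside still means router port-forwarding, firewall exceptions, maybe dynamic DNS; none of which this code can do for you.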
------------------------- avon | 2025-05-17 15:27:12 UTC | #10 Hi all! On the topic of sending messages from one screen to another without jumping through central servers, I wanted to briefly mention two projects I have experimented with. The first is the [Reticulum Networking Stack](https://reticulum.network), an alternative to the traditional TCP/IP + TLS stack that allows a collection of computers to self-organize into a mesh network. A Reticulum network provides p2p encrypted links using a ~200-byte handshake. The low data cost to establish connections makes it usable over all sorts of weird hardware transports including LoRa and other types of packet radio. Reticulum is authenticated and encrypted at the packet level, and has a massive address space, so firewalls and NAT traversals aren't really needed[*]. From the developer side, having all my devices directly addressable over a "private mesh" has been really wonderful to build small networked applications on top of. [*]: For private networks, you can create a whitelist of allowed peers, which works great. But for adversarial public networks, I'm not exactly sure what the story is for spam prevention. Unclear to me if any architectural decisions of RNS make it easier or harder to DDOS when compared to TCP/IP. >How about a tiny self-hosted hypermedia browser/editor/server app whose instances can connect to each other with a handshake, that lets you exchange messages, programs, objects and environments. The Reticulum Community has their own text-based browser that uses its own home-grown markup language called "micron": https://github.com/markqvist/nomadnet Interestingly enough, peer & content discovery on Reticulum is done primarily by attaching small amounts of data to the announcement packets that the underlying mesh routing algorithm uses to organize the network, so if you connect to a community network and listen for a day or two, you'll come back with a list of sites that others have announced without needing a central index or search engine. Right now you can only really exchange micron and text messages between NomadNet sites, but there's no reason someone couldn't hook up a Smalltalk environment over Reticulum and use it to send objects and/or VM images over the wire. Overall I think RNS offers an interesting networking base to build an alternative "smol web" or similar. The other project, which more directly tries to address the security concerns of p2p, is [Spritely Goblins](https://spritely.institute/goblins/) which is trying to build a capability-based collaborative object system that will also work [with the web](https://spritely.institute/hoot/). Allowing direct connections from the public modern web *is* scary, but I really do think capabilities, paired with modern authentication & encryption primitives, give us a strong tool to allow us to do this safely. For further discussion on *how* capabilities can provide this, there is a great talk by Mark S Miller covering exactly this: [Architectures of Robust Openness -- Mark S. Miller keynote at ActivityPub 2019](https://www.youtube.com/watch?v=NAfjEnu6R2g) ------------------------- akkartik | 2025-05-17 16:09:02 UTC | #11 Thanks @avon for bringing up Reticulum. Do you know if it requires adjusting any existing firewalls a computer may already have? ------------------------- akkartik | 2025-05-17 16:28:57 UTC | #12 Also, do you happen to have any pointers to documentation about NomadNet's markup language?
I'd love to read more about it, but can't find anything in the repo beyond the mention in the Readme. ------------------------- avon | 2025-05-17 21:42:06 UTC | #13 >Thanks @avon for bringing up Reticulum. Do you know if it requires adjusting any existing firewalls a computer may already have? The unsatisfying answer to this is that it depends on the underlying point-to-point connections you use between nodes. You could use a packet radio to connect to a nearby node, and in theory that needs only a serial connection. You can use traditional TCP or UDP, and for that you (usually) need to do the painful port forwarding/firewall setup. An interesting option that Reticulum also supports is [i2p](https://geti2p.net/en/) which will be less performant than TCP or UDP, but in theory allows users to connect together over the internet without needing to set anything up manually. Tor and i2p are actually great at doing automatic NAT traversal and giving users a relatively safe way of exposing resources on a device to the global internet. For example, right now I can use [Onionshare](https://onionshare.org/) to host a little html file on my laptop, and it will give me a long string of letters and numbers that anyone in the world will be able to visit via the Tor browser. No firewall configuration needed. If trading off privacy for better performance is deemed acceptable, you could also potentially use a project like [Iroh](https://github.com/n0-computer/iroh) or [Yggdrasil](https://yggdrasil-network.github.io/) to do the p2p connections. In this same vein of "clearnet p2p" I think it's worth mentioning the now-defunct [Beaker Browser](https://github.com/beakerbrowser/beaker) and its spiritual successor, [Agregore](https://github.com/AgregoreWeb/agregore-browser). Both very cool projects trying to use p2p tech to make browsers content-creation machines. >Also, do you happen to have any pointers to documentation about NomadNet’s markup language? I’d love to read more about it, but can’t find anything in the repo beyond the mention in the Readme. Ah, sorry I couldn't get back to you sooner. This actually stumped me for ages: the only documentation around for Micron is inside NomadNet's "Guide" section. If you don't want to download and install NomadNet, here's the source code for the guide page where you can kinda decipher the general syntax: https://github.com/markqvist/NomadNet/blob/master/nomadnet/ui/textui/Guide.py And here's the micron parser code if you're interested: https://github.com/markqvist/NomadNet/blob/master/nomadnet/ui/textui/MicronParser.py ------------------------- khinsen | 2025-05-18 07:48:34 UTC | #14 Thanks @avon for all those pointers to the Reticulum universe, of which I had never heard before! Looks interesting in many ways, but also a bit scary because all of that infrastructure software is written in Python, and thus likely to break frequently. ------------------------- eliot | 2025-05-18 15:43:07 UTC | #15 [Meshtastic](https://meshtastic.org/docs/introduction/), a LoRa mesh radio project. > An open source, off-grid, decentralized, mesh network built to run on affordable, low-power devices I'd read about it briefly before, but today I learned how this technology was useful during the [2025 Iberian Peninsula blackout](https://en.wikipedia.org/wiki/2025_Iberian_Peninsula_blackout) which affected Spain and parts of Portugal and France. Someone from Portugal posted on a forum about preppers and digital preparedness.
> No electricity (still none now), and for a long time we had zero cell reception — even now it’s patchy and unreliable. > > The Meshtastic community absolutely came through for us: people shared real-time updates, advice, and positive vibes. It made a HUGE difference for our safety and peace of mind. Honestly, we felt connected even when everything else was down. Another commented: > When electricity, internet, and phone lines failed, Meshtastic still ran strong since it is a completely standalone over-the-air radio mesh network. Everyone who takes part of the mesh helps strengthen and extend it, so if one node goes down, the mesh 'rebuilds' itself to find alternate paths. > > There are public channels, great for getting info like severe weather alerts, major traffic issues, daily weather forecasts, private channels for select family/friends, and direct private messages can all be passed along the mesh to the destination while retaining 256-bit encryption. No license of any kind is needed for them. > > It's still a sort of "hobbyist" thing, but it's very quickly becoming popular, and during the European blackout, Meshtastic proved itself to be a great tool to pass along real-time updates, advice, and help. Apparently Ukraine has one of the biggest mesh networks. A link to a Grafana interface: [Meshtastic Ukraine](https://mesh.in.ua/grafana/d/R4RChebVk/mesh?orgId=1&refresh=30s). --- That sounds like an alternative infrastructure similar to the web, but more decentralized and distributed. Well, with very low data bandwidth for small messages, not for web-like usage. --- @avon Thank you for suggesting [Reticulum Network](https://reticulum.network/) and [Spritely Goblins](https://spritely.institute/goblins/). Aah yes, down the rabbit hole I go. ------------------------- eliot | 2025-05-18 17:50:33 UTC | #16 [Mark S. Miller](https://en.wikipedia.org/wiki/Mark_S._Miller), I learned about him from: - [Architectures of Robust Openness -- Mark S. Miller keynote at ActivityPub 2019](https://www.youtube.com/watch?v=NAfjEnu6R2g) Brilliant phrase, robust openness. > He is known for his work as one of the participants in the 1979 hypertext project known as [Project Xanadu](https://en.wikipedia.org/wiki/Project_Xanadu); for inventing [Miller columns](https://en.wikipedia.org/wiki/Miller_columns); and as the open-source coordinator of the [E programming language](https://en.wikipedia.org/wiki/E_(programming_language)). Prescient in many ways. He's also known for his work in [object capabilities](https://en.wikipedia.org/wiki/Capability-based_security), and he's been involved in the development of WebAssembly. > Miller columns (also known as cascading lists) are a [browsing](https://en.wikipedia.org/wiki/File_manager#Navigational_file_manager)/[visualization](https://en.wikipedia.org/wiki/Information_visualization) technique that can be applied to [tree structures](https://en.wikipedia.org/wiki/Tree_(data_structure)). The columns allow multiple levels of the hierarchy to be open at once, and provide a visual representation of the current location. > > It is closely related to techniques used earlier in the [Smalltalk](https://en.wikipedia.org/wiki/Smalltalk) browser, but was independently invented by [Mark S. Miller](https://en.wikipedia.org/wiki/Mark_S._Miller) in 1980 at [Yale University](https://en.wikipedia.org/wiki/Yale_University).
The technique was then used at [Project Xanadu](https://en.wikipedia.org/wiki/Project_Xanadu), [Datapoint](https://en.wikipedia.org/wiki/Datapoint), and [NeXT](https://en.wikipedia.org/wiki/NeXT). Browsing tree structures.. I wonder what navigating a tree structure of local and remote content would look like, maybe similar to mounting a file system over SSH. > Miller's research has focused on language design for secure open systems. At Xerox PARC, he worked on Concurrent Logic Programming systems and Agoric Open Systems. At Sun Labs, he led the development of WebMart, a framework for buying and selling computing resources (network bandwidth, access to a printer, images, CD jukebox etc.) across the network. It seems like a precursor to ideas in the blockchain and cryptocurrency scene. > ..Miller has been pursuing a stated goal of enabling cooperation between untrusting partners. He sees this as a fundamental feature required to power economic interactions, and the main piece that has been missing in the toolkit available to software developers. > > His most prominent contributions have been in the area of programming language design, most notably, the E Language, which demonstrated language-based secure distributed computing. "Object capabilities" sounds like a fine-grained permission system where object instances start out in a secure sandbox and are gradually given access to the outside world (memory, file system, network, I/O of any kind).. WebAssembly has an architecture like this, on a per-module basis. Or rather, WASI (WebAssembly System Interface) does: - https://github.com/WebAssembly/WASI/blob/main/docs/Capabilities.md ------------------------- natecull | 2025-05-20 01:27:42 UTC | #17 [quote="eliot, post:7, topic:332"] Distributed decentralized mesh networks, where everyone is a node receiving and sending messages, including dynamic executable messages. (Not necessarily textual code as interface, it could be visually built programs in a shared environment.) The hardware and software, a thin stack as close to the metal as possible, so the entire system can be understood - and (re)built from scratch - by a single person, as in [permacomputing](https://permacomputing.net/). [/quote] That's definitely what I dream of too. And worry about how to make those "dynamic executable messages" secure on tiny home machines with untrained system administrators. In the corporate world, the rise of ransomware in the last five years has led to absolute paranoia and a centralized surveillance culture about anything executable. Compilers are blacklisted. EXEs require code certificates to run. Scripting languages are being blocked. All email attachments are monitored. All SSL encrypted "private" web requests are being intercepted and decrypted and deep-packet-inspected by the corporate firewall. All process activations, all command lines, and sometimes all keystrokes are being centrally collected and stored as evidence of intrusion. (Often, all this intimate data is being stored in foreign countries by foreign-owned corporations!) All passwords are also being stored centrally, often also in foreign countries. All personal privacy is gone, just vaporized. It's an absolute repressive crackdown, and will have major anti-democratic and authoritarian implications on a planetary scale, but it's being driven by the utter failure of the "lolz no responsibility for malware sux to be u i guess" security culture of desktop operating systems over the last 30 years.
And the "ship before its done and just keep on fixing it in post" mentality. I wish we could have some kind of tiny, understandable, and verifiably-not-insane object model that home users could run where you know that in the worst case, if a particular software object is subverted or runs wild, there are hard limits on how much of your entire life it can destroy. We used to have such hard limits, by accident, back in the days of removeable floppy disks. You ran a virus? It can only touch the media you've inserted, and a hard boot + a read-only tab gives you a trusted known good OS. We need something like that again, but in a form that can scale. Edit: There are two main ways malware can destroy your life and which a hypothetical tiny home machine needs to protect against: 1. By deleting / modifying local data or code. Protected against with "Virtual removeable media", ie, sandboxing read and write permissions, read-only / copy-on-write files, and by automatically doing backups/versioning (for data that is safe to persist) 2. By transmitting secret data. Protected against with "Virtual removeable cables", ie sandboxing network read/write permissions. Fundamentally, we need to start with filesystem and network permissions being something you can grant on a per-folder or per-channel level, without an app's knowledge or involvement. Ideally the network would be something like a distributed filesystem, so the two would be one concept. ------------------------- eliot | 2025-05-20 14:29:22 UTC | #18 OLPC XO-1, Children's Machine, $100 laptop. Circa 2007-2012, RIP. ![XO-Beta1-mikemcgregor-2|541x500](upload://alflLbX87HqQFUWa7HTVjpTUOQ7.jpeg) > ..low cost laptop computer intended to be distributed to children in developing countries around the world, to provide them with access to knowledge, and opportunities to explore, experiment and express themselves ([constructionist learning](https://en.wikipedia.org/wiki/Constructionism_(learning_theory))). > > The rugged, low-power computers use flash memory instead of a hard disk drive (HDD), and come with a pre-installed operating system derived from Fedora Linux, with the Sugar graphical user interface (GUI). > > Mobile ad hoc networking via 802.11s Wi-Fi mesh networking, to allow many machines to share Internet access as long as at least one of them could connect to an access point This is (would have been) a realization of Alan Kay's Dynabook concept with (at the time) modern technology choices and evolved conception. ![960px-XO-1-4th_gen-features|393x500](upload://kXNCAVxRkrnozS5O2awQLDddg9h.jpeg) > Whenever the laptop is powered on it can participate in a mobile ad hoc network with each node operating in a peer-to-peer fashion with other laptops it can hear, forwarding packets across the [local] cloud. If a computer in the cloud has access to the Internet—either directly or indirectly—then all computers in the cloud are able to share that access. I wonder why this project didn't blossom - I guess not enough government funding or public support. And why no other large project didn't emerge as a successor to carry on the concept. What were the flaws in this project? Maybe just economics. These days schools can provide consumer laptops, likely with Windows and built-in surveillance, or expect every student to carry a mobile phone with enough computational freedom to do their work, such as accessing ChatGPT. 
I hope there are people advocating for healthier technology for education, with free and open-source software, GNU/Linux et al, that respects privacy and security, with locally running AI/LLM, etc. --- Cyberdecks. There's a hobbyist interest in building your own retro-futuristic personal computing device. ![cyberdeck pocket|375x500](upload://2PuzLHCdx8vwI89LHlaww11X46g.webp) These are based on components like ESP32, Arduino, Raspberry Pi, or Intel NUC. With wireless data communication (WiFi and Bluetooth), sometimes long-range radio (LoRa messenger), built-in GPS, connector to [FlipperZero](https://flipperzero.one/), SMS/cell phone, solar energy, etc. ![1kpebah 02 CyberDeck Build – Pi 4B + DSI Display + ESP32 + CC1101 + 10,000mAh PowerBank–web|375x499](upload://3VqIT5f3lZjYezGXqnOh0h8X5a7.jpeg) I haven't heard about inter-connecting these cyberdecks, but surely it's possible by integrating with a local area network or other protocols such as [device-to-device communication with ESP-NOW](https://docs.arduino.cc/tutorials/nano-esp32/esp-now/). --- There's a picture I saw from the recent blackout in Spain, where a crowd is gathered around a battery-powered radio, listening for the news. ![blackout in spain battery-powered radio|690x459](upload://6SUseB6SCpeTbrr1zmIkSI3P1Ae.jpeg) No cell phone reception, no Internet. People had to resort to a more primitive and reliable technology like the radio. --- What I'd love to see is more grass-roots tech, hardware and software with hand-made quality, local first, small-scale design. And it's happening, thanks to the proliferation of affordable high-tech components. (Which does depend on global infrastructure, robust but still vulnerable. Nobody can produce a CPU in a home lab.) If I were to build my own cyberdeck, I'd think about incorporating radio and Meshtastic somehow. Ideally with a radio receiver and *transmitter*. With additional devices like that, an ad-hoc web-like network protocol could be built on them. ------------------------- khinsen | 2025-05-20 16:30:44 UTC | #19 We know perfectly well how to do what you describe. It doesn't happen for socio-economic reasons. ------------------------- akkartik | 2025-05-20 17:28:31 UTC | #20 Socio-economic reasons are just as real as physical reasons. In other words, we don't know how to do it. Some of us just have the illusion of knowing how to do it. ------------------------- khinsen | 2025-05-20 19:05:54 UTC | #21 Yes and no. We know how to do it for a small insider group. Such as our little community here. We don't know how to do it at a larger scale. ------------------------- akkartik | 2025-05-20 19:49:53 UTC | #22 I don't even think we know how to do it for small groups of weak ties like here. Strong ties like a family, maybe, as [Robin Sloan describes](https://www.robinsloan.com/notes/home-cooked-app). But in my experience there's a "chasm" of incentives between building for oneself and building for "a large market/vertical". ------------------------- Apostolis | 2025-05-21 01:52:09 UTC | #23 As we discussed before, we have the concept of views to see a system at different levels. What we are missing are the different methods of engagement by the community, and the relation of the experts with the community. Different views / levels will also require different types of social engagement to alter the piece of software. In other words, we need to introduce varying methods of governance according to the level and type of system we work on.
There is no single type of governance that works for everything. But there are common quantitative and qualitative properties of governance structures that we need to measure and foster. https://ryaki-org.blogspot.com/2015/10/measuring-democracy-and-its-cost.html?m=1 ------------------------- khinsen | 2025-05-21 09:21:34 UTC | #24 The incentives need to come from the group, and at least one group member needs to have the required competence. One example I vaguely know (meaning I spoke to the developer) is a sports association that uses two [webxdc](https://webxdc.org/) apps to coordinate activities. The platform takes care of sharing private data securely, and the apps on top of it are simple (at the complexity level of a shared to-do list). ------------------------- khinsen | 2025-05-21 09:28:20 UTC | #25 I believe that we need to start small and grow the governance structures along with the complexity of the software. In the sports association example I just mentioned, governance is not an issue: it's provided by the established structure of an association, which even has a solid legal basis. Software for a town-sized community probably requires something more elaborate. ------------------------- Apostolis | 2025-05-22 09:57:46 UTC | #26 (new idea) What I am saying here is that the concept of view is also followed by the concept of projected governance. There is no reason to have a view if it does not help you to change things. A view can create boundaries around what the affected group is. At the same time the type of view is affected by the effectiveness of governance, and thus might be created around existing social structures that are easily governed. Some thoughts... Edit: Maybe I need to give an example here.. but I don't have much time at the moment. But looking at the "one laptop per child" project, one could construct multiple views, at the level of the economy, at the level of the community, or from the point of view of the engineers, all requiring and constructing different levels of organization. ------------------------- Apostolis | 2025-05-22 10:03:23 UTC | #27 And just to have copyright on this motto :grinning_face:, if Glamorous Toolkit proposes to have moldable development in which new views are cheap, I propose the concept of moldable governance structures where new types of governance are easy to create on the spot. ------------------------- eliot | 2025-05-22 15:49:38 UTC | #28 > start small and grow the governance structures along with the complexity of the software It's interesting how the concept of a "malleable system" is (seems to be) inherently political, as it involves a power structure among the creators and users of the system; and how openness is directly correlated with security/vulnerability. The smallest unit of governance is a single person, who creates and uses a system, for example a cyberdeck personal computing device, or some home-made program. That person has full control and power over the system, limited by the creators of the components of the system (hardware and software). It's fully (re)programmable, completely malleable. The next level up is two people with the same/similar device or application, interconnected somehow in a network. Already there are potential security and trust issues, like identity - how to verify each person; and capability - what each person is allowed to do. Thinking of higher scales of a social network, it reminds me of Dunbar's number.
> Dunbar's number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships—relationships in which an individual knows who each person is and how each person relates to every other person. > > This number was first proposed in the 1990s by Robin Dunbar, a British anthropologist who found a correlation between primate brain size and average social group size. It makes sense that our cognitive capacity limits the size of the social group that we can comfortably accommodate. I wonder if there's a software equivalent, some quality that limits its scalability. ([Cyclomatic complexity](https://en.wikipedia.org/wiki/Cyclomatic_complexity)?) Software on a massive global scale, like operating systems and social networks, seems to favor a top-down corporate governance structure, with little to no malleability. (FOSS is a huge exception that disproves that notion.) Or maybe it's that malleability and personal freedom must be controlled and limited in larger social groups in order to maintain stability. ------------------------- khinsen | 2025-05-22 16:43:24 UTC | #29 That reminds me of a paper on the evolution of governance that I have seen frequently cited a couple of years ago: [Tribes, Institutions, Markets, Networks: A Framework About Societal Evolution](https://www.rand.org/pubs/papers/P7967.html). Somewhat speculative, but definitely worth reading. "Institutions" stands for hierarchical top-down governance. "Networks" is what many FOSS projects aim for (though some of the bigger ones are *de facto* top-down). ------------------------- akkartik | 2025-05-23 14:38:41 UTC | #30 It just registered that webxdc requires Delta Chat. The name is probably intended to evoke "Web x DC" and I'm just slow. ------------------------- eliot | 2025-05-23 17:10:02 UTC | #31 ..I'm starting to get a more concrete idea, taking exploratory steps in the direction I'd like to go. ![image|560x301](upload://uHbDyuJNGA65AaPRpkbQovOAyV0.png) The concept is an "emulator" of an imaginary retro computer with memory, storage, I/O devices, audio, graphics, networking. A fantasy console in the spirit of [PICO-8](https://www.lexaloffle.com/pico-8.php) and [System 7](https://en.wikipedia.org/wiki/System_7). Written in [C99](https://en.wikipedia.org/wiki/C99), a timeless language - or rather, frozen in time - likely to be portable for decades. It can compile to target Wasm (web and anywhere Wasm runtimes can run); desktop (Linux, macOS, Windows); mobile (Android, iOS); systems on a chip (Raspberry Pi, ESP32). So far it's a mere sketch, but the software stack includes: [Sokol](https://floooh.github.io/sokol-html5/) (examples of [tiny 8-bit emulators](https://floooh.github.io/tiny8bit/) built on it); [Dear ImGui](https://github.com/ocornut/imgui) for user interface; and [uLisp](http://www.ulisp.com/), which I've been porting to Wasm as a side project. It's a small subset/dialect of Common Lisp that runs on microcontrollers with limited memory. (I'm also playing with [`xcc`](https://github.com/tyfkda/xcc), a C99 compiler that can compile itself to Wasm. So I have a compiler that runs in the virtual machine, which can produce binaries targeting it.) I'm hoping to make this Lisp the key to the malleability of the system, if I can create a browser/editor/server environment in Lisp itself. Theoretically, live objects could be serialized, transferred over the network, and re-animated on the other side.
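To make that last idea concrete, a toy sketch in C of the round trip: an object serialized as an s-expression string, sent across a socket pair standing in for the network, and handed to the evaluator on the other side. (`lisp_eval_string` is a hypothetical stand-in for a real entry point like uLisp's reader and evaluator; here it only prints what it would evaluate.)

```c
// Toy round trip: serialize -> transmit -> "re-animate".
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical stand-in for a real Lisp entry point (e.g. uLisp's
// reader + evaluator), which would re-create the object on this side.
static void lisp_eval_string(const char *src) {
    printf("re-animating: %s\n", src);
}

int main(void) {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv); // stand-in for a network link

    // Sender: a "live object" serialized as an s-expression.
    const char *object = "(defun greet () (print \"hello from afar\"))";
    write(sv[0], object, strlen(object));

    // Receiver: read the text and hand it to the evaluator.
    char buf[256];
    ssize_t n = read(sv[1], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        lisp_eval_string(buf);
    }
    return 0;
}
```

Plain s-expressions only cover the tree-shaped part of an object's state; closures, cycles, and object identity make the real thing much harder.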
Well, mostly dreaming for now, but at least there's an amoeba-like mutant program emerging from the soup. ------------------------- akkartik | 2025-05-23 17:09:50 UTC | #32 I love it! > Written in C99, a timeless language - or rather, frozen in time - likely to be portable for decades. It can compile to target Wasm (web and anywhere Wasm runtimes can run); desktop (Linux, macOS, Windows); mobile (Android, iOS); systems on a chip (Raspberry Pi, ESP32). Bear in mind that the timelessness of your system will be the minimum of the timelessness of all its components. ------------------------- khinsen | 2025-05-23 19:00:11 UTC | #33 My guess for this minimum is the WASM tooling, according to the principle-whose-name-I-cant-remember that says that the expected time for some technology to stay around is the time it has been around already. ------------------------- neauoire | 2025-05-23 20:09:40 UTC | #34 I absolutely [love this idea](https://wiki.xxiivv.com/site/devlog)! Good luck with the project : ) ------------------------- eliot | 2025-05-27 21:24:33 UTC | #35 Thank you @neauoire - I love your work on [uxn](https://100r.co/site/uxn.html). It's been a while since I first read about it, and I see it has blossomed into a whole [ecosystem](https://github.com/hundredrabbits/awesome-uxn). Runs on most platforms, even ESP32. And apps can communicate with each other, like a miniature Unix.
```sh
uxn orca.com | piano.rom
```
![image|690x379](upload://2hiKIgpc8PZQbEpsjPqFt4Xjeen.jpeg) It's brilliant. I really resonate with its design decisions, the thinking that led up to its creation ([Weathering Software Winter](https://100r.co/site/weathering_software_winter.html)), stories of living on a solar-powered sailboat.. This year I'm moving out of the city (of a hundred spires) to a rural off-grid habitat, which makes me appreciate and re-think civilized life down to its "primitives" (water, food, electricity) including computers and communication networks. ..Off on a few tangents below. --- > In computing, language primitives are the simplest elements available in a programming language. A primitive is the **smallest unit of processing** available to a programmer of a given machine, or can be an atomic element of an expression in a language. ![Icons_of_Visual_Programming_Language_--DRAKON--|324x500](upload://kNFOwt143hAqywAyhMMsYCfwG5F.png) This diagram is from: > [DRAKON](https://en.wikipedia.org/wiki/DRAKON) (Дружелюбный Русский Алгоритмический язык, Который Обеспечивает Наглядность, meaning "Friendly Russian Algorithmic Language Which Provides Clarity") is a free and open source algorithmic visual programming and modeling language. > > It was developed as part of the defunct Soviet Union Buran space program in 1986 following the need to increase software development productivity. The visual language provides a uniform way to represent processes in flowcharts. It's satisfying ("provides clarity") to get down to the basics, working from the ground up. Experiencing the realm of ideas as a tangible reality, manipulating concepts like in [The Glass Bead Game](https://en.wikipedia.org/wiki/The_Glass_Bead_Game).
https://upload.wikimedia.org/wikipedia/commons/a/a2/DRAKON_algorithm_animation.gif --- I've continued exploring the rich discussions on this forum, and found: - https://forum.malleable.systems/t/mu-designing-a-safe-computing-stack-from-the-ground-up/51/7 The [README](https://github.com/akkartik/mu?tab=readme-ov-file#mu-a-human-scale-computer) in the project is a fun read; it contains so many good ideas, some of which I've also been rediscovering in my own experiments. I think it's in the same conceptual/artistic direction as uxn, like a wizard conjuring up a microcosmic computer stack from nothing, one word at a time. ![20210624-shell|650x500](upload://s1YFylMalYOzLPkL695XJ2fuGwl.png) Then I found: - [Bicycles for the Mind Have to Be See-Through](https://akkartik.name/akkartik-convivial-20200607.pdf) (PDF) So good! > In order for code to be living structure, even the tools used to make the code need to be living structure. -- Christopher Alexander --- ![loops-and-logic|640x440](upload://Ajv7TGq4w84oqvl1pt2kxP5DESO.gif) This is a demo of a language I've been working on called Loops & Logic. Its syntax is a superset of HTML for ease of learning, as its original purpose is to serve as a template language. But as I continue developing "smart editor" features with language integration, like: - lint - verify syntax with helpful error messages - format - prettify code formatting, optionally automatic - hints - autocomplete of tags and attributes, suggestions, info on hover about parts of an expression - interactivity - number slider, color picker, select list of available values ..it seems to be evolving into a Lisp-like environment for interactively creating hypertext documents. The user interface is a common pattern, with a code editor and a visual canvas for live feedback. One of the next questions I'm working on is how to provide a more seamless HTML-like syntax for styling (without CSS) and behavior (without JavaScript). I suppose with magic attributes like `color` and `action`. ![LearnableProgramming-Vocab13|640x110](upload://mslenKxVx0rDLOHNDTALlur7c7U.gif) Bret Victor's [Learnable Programming](https://worrydream.com/LearnableProgramming/) is often in the back of my mind. (Also [Seymour: Live Programming for the Classroom](https://harc.github.io/seymour-live2017/).) --- ![ulisp-phone|333x500](upload://jq4naIFOlzev63cBiUn4MTLkqyI.jpeg) Recently saw this [remote Lisp server](http://www.ulisp.com/show?1Q0L) via SMS (short message service). It's lovely how it shows the essence of computation and communication. An expression is received, evaluated, then a value is returned. A function is defined, then later invoked in a subsequent message. The last command is remotely reading the voltage level of an analog GPIO pin. And it's scary how open it is: someone would only need the phone number or IP address of the server, and a bit of Lisp, to access the system. Depending on what's connected, they could blink LED lights, display a message, read sensor data, open curtains and doors, water the plants, feed the cat.. ![image|690x307](upload://lj4S6oYyZluveZaXfLW4408LB0B.png) The above is a "prototype" (text in a window) in my C99 stack. I wonder if/how the Lisp program can actually work as an **HTTP server in a browser**, ideally peer to peer. Last week I finished porting uLisp (8K LOC) from Arduino code to plain C, and wrote a minimal REPL in the terminal. ![image|241x192](upload://70cF8dJ231JBTB4uixlsn98a4l3.png) It's a fun exercise for me to (re)learn C and make my own tiny Lisp machine (virtual for now).
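The REPL skeleton itself is satisfyingly small; a sketch of its shape below. (`ulisp_eval_line` is a hypothetical stand-in for the real entry point into the ported interpreter, stubbed here to echo its input so the skeleton runs on its own.)

```c
// Minimal terminal REPL skeleton: read a line, evaluate, print, loop.
#include <stdio.h>
#include <string.h>

// Hypothetical stand-in: the real version would call the Lisp reader
// and evaluator, returning the printed representation of the result.
static const char *ulisp_eval_line(const char *line) {
    return line;
}

int main(void) {
    char line[512];
    for (;;) {
        printf("> ");
        fflush(stdout);
        if (!fgets(line, sizeof line, stdin)) break; // EOF (Ctrl-D) exits
        line[strcspn(line, "\n")] = '\0';            // strip the newline
        printf("%s\n", ulisp_eval_line(line));
    }
    return 0;
}
```

The same loop could sit behind a socket or an SMS gateway instead of stdin, which is what makes the remote Lisp server above feel so natural.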
I feel I'm going in the right direction because all the code I'm writing is super portable, to every platform that a C compiler can target, including Wasm.

---

> My guess for this minimum is the **WASM tooling**, according to the principle-whose-name-I-cant-remember that says that the expected time for some technology to stay around is the time it has been around already.

Earlier this year Wasm version 2 was announced, and I'm already nostalgic for the simpler v1 with only a handful of opcodes. There are a number of small Wasm compilers (from C, Lisp, [WAT](https://www.webassemblyman.com/wat_webassembly_text_format.html), etc.) that I've seen people write, but Wasm v2 will require significantly more effort to implement. It's no longer simple to understand.

> The **Lindy effect** is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age.

So C99 should be good for another 25 years (a quarter century). Seeing how we have all kinds of emulators of retro computers running in the browser, in big part thanks to WebAssembly, I think we'll see C compilers for centuries, eventually an interplanetary or extraterrestrial language.

-------------------------

akkartik | 2025-05-27 16:55:07 UTC | #36

> the principle-whose-name-I-cant-remember

Ah, I see you answered your own question further down :smile:

-------------------------

akkartik | 2025-05-27 17:05:36 UTC | #37

I too have found myself fumbling several times for the name of the Lindy Effect. DRAKON has been on my mind a fair bit lately during my experiments in spatial Literate Programming.

-------------------------

eliot | 2025-05-27 17:19:51 UTC | #38

About the Lindy effect: I was actually quoting @khinsen from an earlier comment, and asked a local LLM for the answer; but it hallucinated a plausible but imaginary history, so I had to resort to legacy search (DuckDuckGo), trying various phrasings until I found it.

> > what's the principle that says that the expected time for some technology to stay around is the time it has been around already
>
> That's called "Hodges' Law" or "The Half-Life Rule," although it was actually popularized by Roy Amara, who paraphrased Hodges as saying: "Technology adoption follows a logistic curve. The half-life of technology (the time it takes for a technology to become obsolete) is roughly equal to the time since its invention."

That's a real person, but he's known for a different adage.

> "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run," coined by Roy Amara, past president of The Institute for the Future.

-------------------------

jay_bird | 2025-05-27 17:30:18 UTC | #39

I've been tempted to post a few times but have told myself to hold off until I've "done more stuff" (software-wise), or even just read and browsed more of the threads in general, but allow me to break my forum silence for this thread here and say - holy moly, that's some good stuff. Bookmarking, will be coming back here again and again. So much to explore! Thank you @eliot and all other contributors here

-------------------------

schmudde | 2025-05-29 10:51:00 UTC | #40

[quote="eliot, post:28, topic:332"]
It’s interesting how the concept of a “malleable system” is (seems to be) inherently political, as it involves a power structure among the creators and users of the system; and how openness is directly correlated with security/vulnerability.
[/quote]

Indeed.
Technology isn't inert, nor is its impact solely dependent on who uses it. As Langdon Winner showed in 1980, [artifacts have politics](https://www.jstor.org/stable/20024652).

Regarding some of the OOP/UX citations in this thread: I think it's rather telling that Apple's Swift has moved to a reactive model from the NeXT/Objective-C object model for UX elements. One of the most powerful ways to simplify toolbuilding isn't to hide state in objects but to eliminate state - either through ephemeral functions or immutable values.

-------------------------

eliot | 2025-05-29 13:16:50 UTC | #41

The article link URL was missing a `2` at the end:

- Winner, Langdon. “Do Artifacts Have Politics?” *Daedalus* 109, no. 1 (1980): 121–36. http://www.jstor.org/stable/20024652.

Democracy as a form of end-user programming. Maybe the analogy breaks down somewhere, but it brings up correspondences like: society as a global-scale computer running ideological software (nations, governments); the web as our collective brain and embodiment of the [noosphere](https://en.wikipedia.org/wiki/Noosphere); an individual as a function, or living object with mutable state; language as communication protocol (HTTP: Human Thinking Transfer Protocol).

- [What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry](https://en.wikipedia.org/wiki/What_the_Dormouse_Said)
- [From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism](https://fredturner.stanford.edu/books/counterculture-cyberculture-stewart-brand-whole-earth-network-and-rise-digital-utopianism)

---

From "Do Artifacts Have Politics?"

> At issue is the claim that the machines, structures, and systems of modern material culture can be accurately judged not only for their contributions to efficiency and productivity and their positive and negative environmental side effects, but also for the ways in which they can embody specific forms of power and authority.

> ..In what follows I will outline and illustrate two ways in which artifacts can contain political properties.
>
> First are instances in which the invention, design, or arrangement of a specific technical device or system becomes **a way of settling an issue** in the affairs of a particular community.
>
> Second are cases of what can be called “inherently political technologies,” man-made systems that appear to require or to be strongly compatible with particular kinds of **political relationships**.

Summarized by [Cogito-1 8B](https://www.deepcogito.com/research/cogito-v1-preview):

1. Technical arrangements designed to settle issues in specific communities
   - Example: Robert Moses' low bridges on Long Island, NY, intentionally built to exclude buses (typically used by poor people and minorities) from parkways, thus limiting their access to Jones Beach.
   - Similar examples include Parisian thoroughfares engineered to prevent street fighting during the 1848 revolution.
2. "Inherently political technologies" that require or are compatible with particular kinds of political relationships
   - Example: The mechanical tomato harvester, which not only changes harvesting practices but also produces tomatoes that are harder and less tasty, benefiting large-scale agribusiness while disadvantaging small farmers and agricultural workers.
   - Another example is the history of industrial mechanization, where technologies were designed to break labor unions rather than simply for efficiency.
---

An interesting example was the contrast between nuclear power plants and solar energy.

> Many advocates of solar energy have argued that technologies of that variety are more compatible with a democratic, egalitarian society than energy systems based on coal, oil, and nuclear power; at the same time they do not maintain that anything about solar energy requires democracy.
>
> Their case is, briefly, that solar energy is decentralizing in both a technical and political sense: technically speaking, it is vastly more reasonable to build solar systems in a disaggregated, widely distributed manner than in large-scale centralized plants; politically speaking, solar energy accommodates the attempts of individuals and local communities to manage their affairs effectively because they are dealing with systems that are more accessible, comprehensible, and controllable than huge centralized sources.

> ..Thus environmentalist Denis Hayes concludes, "The increased deployment of nuclear power facilities must lead society toward authoritarianism. Indeed, safe reliance upon nuclear power as the principal source of energy may be possible only in a totalitarian state."
>
> ..A similar view is offered by a contemporary writer who holds that “if you accept nuclear power plants, you also accept a techno-scientific-industrial-military elite. Without these people in charge, you could not have nuclear power.”

Well.. I can't say whether that's right or wrong, which technology is better or worse for society. Of course I'd prefer local-first, modular, decentralized energy production that's under my control, just like I prefer free and open-source software. But apparently that's not scalable or robust enough for communities and societies, just as food production requires global infrastructure under the centralized authority and control of nation states. It seems inevitable, like capitalism.

---

There's a punk-ish subculture of [circuit bending](https://en.wikipedia.org/wiki/Circuit_bending).

https://www.youtube.com/watch?v=ZAsDVb18MJ4

> **Circuit bending** is the creative customization of the circuits within electronic devices such as children's [toys](https://en.wikipedia.org/wiki/Toy) and [digital synthesizers](https://en.wikipedia.org/wiki/Digital_synthesizer) to create new musical or visual instruments and sound generators. Circuit bending is manipulating a circuit to get an output that was *not intended by the manufacturer*.

https://www.youtube.com/watch?v=KHDL9iGxDPM - Reed Ghazala, the Father of Circuit Bending: Sound Builders

I like this kind of "subverting the narrative" and questioning the built-in assumption of authority in technology. Well, I enjoy a healthy creative expression of this, because it can certainly go too far and disturb the social order.

Similarly, [pirate radio](https://en.wikipedia.org/wiki/Pirate_radio) has always fascinated me. It asks, "Who owns the air waves?"

> The airwaves are considered a public resource owned by the people, but they are licensed to private broadcasters by the government, specifically the Federal Communications Commission (FCC). This means that while the airwaves belong to the public, companies can use them under certain regulations to serve the public interest.

The early days of BBS (bulletin board systems) and WWW (world-wide web) had that kind of rebellious, subversive social atmosphere. Why it attracted the counterculture types is certainly related to the "inherently political technologies" discussed by Langdon Winner.
![Radio_station_WJAZ,_Chicago,_wave_pirates_publicity_photograph_(1926)|690x411](upload://sX4qi7pKWCa1XYEN8KvfK50wPUX.jpeg)

> In 1926 WJAZ in Chicago, Illinois, challenged the U.S. government's authority to specify operating frequencies and was charged with being a "wave pirate".

- [Remembering The Legendary Radio Caroline](https://msmokemusic.com/blogs/mind-smoke-blog/posts/6508113/remembering-pirate-radio)

---

> ..Apple’s Swift has moved to a reactive model from the NeXT/Objective-C object model for UX elements.
>
> One of the most powerful ways to simplify toolbuilding isn’t to hide state in objects but to eliminate state - either through ephemeral functions or immutable values.

I've been reading this thread on the forum:

- https://forum.malleable.systems/t/rethinking-the-object-oriented-paradigm-what-can-we-throw-away/199/21

In general I find the C-style of primitive programming refreshing, working with plain data structures and functions; in contrast to the C++/Java-style of OOP, the dominant paradigm with proven scalability (?), which now seems to be getting dethroned.

But there are different takes on what "object oriented" means, and I imagine [Swift's Functional Reactive model](https://redwerk.com/blog/reactive-programming-in-swift/) is not necessarily mutually exclusive with some variant of the object model. Though the linked article contains only a single instance of the word "object", saying:

> [The Reactive model] rises up the level of abstraction, so you are able to manipulate the streams instead of objects or functions.

Oh, there's even a Reactive Manifesto.

- https://www.reactivemanifesto.org/

> We believe that a coherent approach to systems architecture is needed, and we believe that all necessary aspects are already recognised individually: we want systems that are Responsive, Resilient, Elastic and Message Driven. We call these Reactive Systems.
>
> Systems built as Reactive Systems are more flexible, loosely-coupled and [scalable](https://www.reactivemanifesto.org/glossary#Scalability). This makes them easier to develop and amenable to change. They are significantly more tolerant of failure and when [failure](https://www.reactivemanifesto.org/glossary#Failure) does occur they meet it with elegance rather than disaster. Reactive Systems are highly responsive, giving [users](https://www.reactivemanifesto.org/glossary#User) effective interactive feedback.

A library I've been learning recently, [Dear ImGui](https://github.com/ocornut/imgui), is based on the concept of IMGUI (Immediate Mode Graphical User Interface), from the subculture of game developers and graphics software. There's a wiki page ([About the IMGUI Paradigm](https://github.com/ocornut/imgui/wiki/About-the-IMGUI-paradigm)) explaining the difference between this and the classic paradigm of "retained mode" GUI. The crux of it is about how to manage state.

> ...At its core immediate mode is about how this state is updated. In classical gui the state of the application was synchronized to the gui state by modification. On the other hand immediate mode is closer to functional programming. Instead of mutating state, **all previous state is immutable and new state can only be generated by taking the previous state and applying new changes**. Both in dear imgui and nuklear there is very little previous state and most is built up every "frame".
>
> ..Counter intuitively this is often less complicated than the traditional "retained mode" style of GUI libraries because there is no duplication of state.
> That means no setup or teardown of widget object trees, no syncing of state between GUI objects and application code, no hooking up or removing event handlers, no "data binding", etc.
>
> The structure and function of your UI is naturally expressed in the code of the functions that draw it, instead of in ephemeral and opaque object trees that only exist in RAM after they're constructed at runtime. You retain **control over the event loop** and you define **the order in which everything happens** rather than receiving callbacks in some uncertain order from someone else's event dispatching code.

Well, we went from the [state](https://en.wikipedia.org/wiki/State) as a political entity to the [state](https://en.wikipedia.org/wiki/State_(computer_science)) as a program's *data about itself* which determines its next actions and state changes.

> A *state* is a description of the status of a system that is waiting to execute a *transition*. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received.

> In some programs, information about previous data characters or packets received is stored in variables and used to affect the processing of the current character or packet. This is called a **stateful protocol** and the **data carried over from the previous processing cycle** is called the state.
>
> In others, the program has no information about the previous data stream and **starts fresh with each data input**; this is called a **stateless protocol**.

-------------------------

khinsen | 2025-05-29 08:39:49 UTC | #42

On the relation between technology and democracy, the recent experiments in Taiwan look very interesting. For an overview, I recommend [this podcast interview](https://www.thegreatsimplification.com/episode/169-audrey-tang) with Audrey Tang, former (and first) Digital Minister of Taiwan. There's [a whole book](https://www.plurality.net/) about this story, which I haven't found time to read yet.

That story is about technology for democracy. The converse, democracy about technology, matters as well. Ideally, in a democratic society, everyone should have a say about how high-impact technology is designed and deployed. At a small scale, that leads to Ivan Illich's convivial tools, which are roughly the equivalent of direct democracy. There's also the conservative and mostly defensive approach of the Amish, who decide collectively which technology from the outside world they accept in their lives.

I am not aware of any attempts to get something like representative democracy about industrial-scale technologies. They have always been designed, and often also deployed, by a small elite, with varying levels of feedback from the people affected.

-------------------------

Apostolis | 2025-05-29 09:31:49 UTC | #43

I hope I am on point, I don't follow the conversation closely.

Pia Mancini is one activist who tried to create democratic tools. In Argentina, there was a political party that vowed to enact the legislation that was constructed through a software platform where people could participate.

https://en.m.wikipedia.org/wiki/Pia_Mancini
https://en.m.wikipedia.org/wiki/Net_Party

She is also one of (the) founders of Open Collective, a way for open source software to have legal status and accept donations, avoiding all the bureaucracy that laws could require.

Opencollective.com

Both are inspiring projects. Both are hacks of the current social structures to allow new ones to form.
In the first case, we have injected liquid democracy into a representative democracy. In the second, communities have the freedom to adopt any structure they want.

-------------------------

schmudde | 2025-05-29 13:49:59 UTC | #44

[quote="khinsen, post:42, topic:332"]
I am not aware of any attempts to get something like representative democracy about industrial-scale technologies.
[/quote]

This could be a stretch - but perhaps the current hot topic of agentic AI qualifies? We are doing some work on permissioned agents at my company. The basic gist is that the industrial-scale technologies run by a small elite have no choice but to accept it; the alternative is dealing with armies of bots posing as humans doing the chores that people want to get done. This sort of adversarial interoperability is worse for the megacorps than a protocol.

[quote="eliot, post:41, topic:332"]
society as a global-scale computer running ideological software (nations, governments); the web as our collective brain and embodiment of the [noosphere](https://en.wikipedia.org/wiki/Noosphere); an individual as a function, or living object with mutable state; language as communication protocol (HTTP: Human Thinking Transfer Protocol).
[/quote]

I'd pair *Counterculture to Cyberculture* with Yasha Levine's *Surveillance Valley: The Rise of the Military-Digital Complex*. J.C.R. Licklider is commonly positioned as a benign figure behind the utopian idea of the benefits of sharing information on a computer network. But Levine documents that his celebrated published papers were quietly propped up with military meetings that promised the Pentagon a great surveillance network that would identify Communists even before they became a problem.

What are the politics of building a network with no primitives for privacy? I think Langdon Winner answers that conclusively. Christopher Alexander ([speaking here at OOPSLA 1996 to connect all our threads](https://www.youtube.com/watch?v=z_QzdKci6OY)) would come to a similar conclusion as Winner, but through different means.

[quote="eliot, post:41, topic:332"]
I imagine [Swift’s Functional Reactive model](https://redwerk.com/blog/reactive-programming-in-swift/) is not necessarily mutually exclusive with some variant of the object model. Though the linked article contains only a single instance of the word “object”, saying:
[/quote]

Completely agree. Specifically on the bit about message-passing, Kay's original intent. And as he said (paraphrased), he certainly didn't imagine C++ when he thought of OOP.

[quote="eliot, post:41, topic:332"]
The crux of it is about how to manage state.
[/quote]

Indeed. Two other alternatives to consider - the Common Lisp Object System gives the most control over how to define the concept of the *object*. Or [Carp](https://www.youtube.com/watch?v=Q1BVfGIhwZI) - which gives you some of the performance compromises you mention (control over specific parts of the execution primitives) wrapped in a Lispy approach.

Maybe I'll put it another way: Common Lisp and C++ are both huge. Lisp and C are small. And I think you're gesturing towards how large States (and other large enterprises) managing complex state have invested in languages like Common Lisp and C++ (and Java, Ada, etc...). C and Carp seem comparatively egalitarian.

-------------------------

eliot | 2025-05-29 17:21:40 UTC | #45

@khinsen Thank you for recommending the interview with Audrey Tang, I'm watching it with great interest.
- [How Pro-Social Technology Is Saving Democracy from ‘Big Tech’ with Audrey Tang | TGS 169](https://www.youtube.com/watch?v=aXgne-9F7uU&t=713s)

> A crucial difference is that we're not protesters who only demand something, or against something. We're *demonstrators* that show an alternative. And so we developed a lot of tools..

Shivers.

I also liked what she said about our vocabulary shaping the way we think. Examples she gave of how we can intentionally create and use words:

- internet of things -> internet of beings
- virtual reality -> shared reality
- machine learning -> collaborative learning
- user experience -> human experience

---

I started reading Tools for Conviviality.

> **Tools for Conviviality** is a 1973 book by Ivan Illich about the proper use of technology. It was published only two years after his previous book *Deschooling Society*.
>
> In this new work Illich generalized the themes that he had previously applied to the field of education: the institutionalization of specialized knowledge, the dominant role of technocratic elites in industrial society, and the need to develop new instruments for the reconquest of practical knowledge by the average citizen.

Reconquest, what a word. In the history of Spain there was La Reconquista, or the Fall of al-Andalus, when European Christian kingdoms "reconquered" the Iberian peninsula after seven centuries of Muslim rule. Culturally it's not far from the *conquistadores* who conquered parts of the Americas, Africa, and Asia.

In the context of industrial society, it sounds like a call to action for the public to reclaim and "reconquer" their agency, production of knowledge, and tools for living.

> He wrote, "Elite professional groups have come to exert a 'radical monopoly' on such basic human activities as health, agriculture, home-building, and learning, leading to a 'war on subsistence' that robs peasant societies of their vital skills and know-how."
>
> Illich proposed that we should "invert the present deep structure of tools" in order to "give people tools that guarantee their right to work with independent efficiency."
>
> The book’s vision of tools that would be developed and maintained by a community of users had a significant influence on the first developers of the personal computer.

I see, this is part of the philosophical heritage behind the creation of personal computers, the world-wide web, and the [free software](https://www.gnu.org/philosophy/free-sw.html) movement. It's perhaps an irony of history that much of this was funded as military research, related to what @schmudde said about the "military-digital complex".

> In [Surveillance Valley](http://surveillancevalley.com), Yasha Levine traces the history of the internet back to its beginnings as a Vietnam-era tool for spying on guerrilla fighters and antiwar protesters–a military computer networking project that ultimately envisioned the creation of a global system of surveillance and prediction. Levine shows how the same military objectives that drove the development of early internet technology are still at the heart of Silicon Valley today.

That reminds me of: using 6G to sense objects in the real world (**network as a sensor**).

https://www.youtube.com/watch?v=SCCvLPqtur8

In contrast, [Dynamicland](https://dynamicland.org/2022/Progress_report/) is a more convivial vision of networked computing integrated with the environment and objects in the real world.
https://www.youtube.com/watch?v=5Q9r-AEzRMA

-------------------------

eliot | 2025-05-30 17:02:03 UTC | #46

Dillo+ browser. It's small, true to the original spirit of hypertext, and opinionated in a good way - implementing a curated subset of web standards.

- https://github.com/crossbowerbt/dillo-plus

> [It] supports the lightweight protocols `gopher` and `gemini` in addition to `http` and `https`.
>
> It aims to become your default browser for accessing the smol web.

![image|570x500](upload://gsNfh4aSOC0ZmqCPYhLQYWCkdbf.png)

> Lightweight browsers are beneficial for loading websites quickly. They can work on older hardware where a common browser would take up too many resources. ..Because Javascript is not used, extra bloat does not need to be downloaded.

![image|570x500](upload://tOW4dMptqBjZ9nZEqyef3U9tYyD.png)

> Your online experience can be more secure with Dillo+. Not only does it not use Javascript, rules can also be defined per website domain. These rules can block connecting to certain domains, block ads and trackers, and require sites to use encryption.

It respects user agency by design, muy convivial.

Additional supported URI schemes include:

- `zip://` - File browser for zip archives and EPUB
- `man://` - Man page viewer

And it offloads media playing to the host system.

> Dillo+ does not play media, such as audio and video, in the browser. Instead you can run media using your preferred desktop media player. The benefit is that playback will usually be more streamlined.

---

With this kind of modular and minimalist approach, I get the feeling a "web browser" could be even smaller, like the tiny [hypertext browser](https://akkartik.name/post/luaML2) in Freewheeling Apps.

---

As a footnote, I didn't realize how many [commonly used URI schemes](https://www.w3.org/wiki/UriSchemes) there are, some proprietary (Google, Microsoft Teams, Sony, Apple, Slack, Zoom). And Gemini is not (yet) one of the official [IANA](https://en.wikipedia.org/wiki/Internet_Assigned_Numbers_Authority)-registered schemes. But I guess it doesn't matter, because applications are free to implement the protocol, and people can use it without permission from a central authority.

..I remember [Beaker Browser](https://en.wikipedia.org/wiki/Beaker_(web_browser)) ([post mortem](https://github.com/beakerbrowser/beaker/blob/master/archive-notice.md)) had `beaker://` and [`dat://`](https://dat-ecosystem.org/) protocols. What about [`onion://`](https://en.wikipedia.org/wiki/Onion_routing) or `torrent://`? (I'm learning as I go; the latter is actually called the [Magnet URI scheme](https://en.wikipedia.org/wiki/Magnet_URI_scheme).) Pretty sure none of these are officially registered either. And [`ipfs://`](https://en.wikipedia.org/wiki/InterPlanetary_File_System) - it may be interplanetary, but it was too slow for practical use last time I tried it.

So all this is about:

https://en.wikipedia.org/wiki/Decentralized_web#Decentralized_protocols

-------------------------

eliot | 2025-05-31 16:36:14 UTC | #47

> Common Lisp and C++ are both huge. Lisp and C are small. And I think you’re gesturing towards how large States (and other large enterprises) managing complex state have invested in languages like Common Lisp and C++ (and Java, Ada, etc…). C and Carp seem comparatively egalitarian.

Rust seems like an example of a huge language, with a steep learning curve, a large surface area of syntax, and complex concepts; Go would be an example of a small language, with a constrained design, intentionally limited syntax, and a handful of concepts to learn.
Well, they serve different purposes and each design reflects the needs of its language. It's strange how the former is a community effort, a seemingly egalitarian project, and the latter is (or started as) a Google project. As if that law whose name I forget (Conway?) - about how the structure of software mirrors the organization that produces it - were at work..

> **Conway's law** describes the link between communication structure of organizations and the systems they design. It is named after the computer scientist and programmer [Melvin Conway](https://en.wikipedia.org/wiki/Melvin_Conway), who introduced the idea in 1967. His original wording was:
>
> [O]rganizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
>
> — Melvin E. Conway, How Do Committees Invent?

Ah, it's talking about the "communication structure" of organizations. Maybe Go was incubated in a [skunkworks](https://en.wikipedia.org/wiki/Skunk_Works)-like small group with a focused vision; and Rust might have been "designed by committee", after it left the hands of the original creator.

> In 2009, Mozilla decided to officially sponsor Rust. The language would be open source, and accountable only to the people making it, but Mozilla was willing to bootstrap it by paying engineers. A Rust group took over a conference room at the company; Dave Herman, cofounder of Mozilla Research, dubbed it “the nerd cave” and posted a sign outside the door. Over the next 10 years, Mozilla employed over a dozen engineers to work on Rust full time, Hoare estimates.

---

Between **the yuge and the smol**, the complex and the simple. The role of entropy in software and the tendency toward disorganization. As a program or language grows larger, how can it maintain simplicity and clarity? That's for academics, practitioners, and creators of systems to solve with best practices.. By imposing ordered structure, modularity, encapsulation; by keeping the building blocks simple, their relationships explicit and dependencies few..

It'd be better if that **constraint toward simplicity followed from the design of the system**, instead of having to be imposed in a top-down manner. I suppose Go's standard formatter and Rust's borrow checker are examples of how using a language and its tooling as they're designed naturally leads the user toward success, scalable and simple software. (Simple to understand, not necessarily simple in structure or scale.)

---

I heard about a programming language where **errors are not possible to make**. Maybe it was one of those visual block programming languages like Scratch or [Blockly](https://developers.google.com/blockly/). The latter is a Google project, a visual programming editor with drag-and-drop blocks, used in educational applications like [Scratch](https://scratch.mit.edu/), [MIT App Inventor](https://appinventor.mit.edu/), [code.org](https://code.org/), [Microsoft MakeCode](https://www.microsoft.com/en-us/makecode). Though I'm a bit skeptical of their corporate interest in educating a new generation of programmers and shaping how they think.

On the surface I like the idea of a language where errors don't exist by design. The concept of an error, a mistake, often displayed in a scary red message, is discouraging for a learner - like they did something wrong. Maybe it's only about the vocabulary and presentation: "Congratulations, you've broken the program! That's totally alright, let's figure out how to make it work."
On the other hand, errors are useful and necessary to catch unintended behavior, for fixing bugs. So a new programmer should get used to seeing a lot of them. The other day I was running a huge Makefile for someone's big C project, and it threw hundreds of errors - and I just laughed. Eventually I figured it out (conflicting versions of a dependency on the local system). When the build completed successfully, it was satisfying. If there were never any errors, there wouldn't be as much of a learning experience, or the satisfaction of solving a problem.

---

Blockly internally uses an engine called [JS-Interpreter](https://neil.fraser.name/software/JS-Interpreter/docs.html), a sandboxed JavaScript interpreter written in JavaScript. Some neat properties emerge from a language interpreter/compiler written in itself. In this particular implementation, I'm intrigued by these features:

> **Serialization**
> A unique feature of the JS-Interpreter is its ability to pause execution, serialize the current state, then resume the execution at that point at a later time. Loops, variables, closures, and all other state is preserved.
>
> Uses of this feature include continuously executing programs that survive a server reboot, or loading a stack image that has been computed up to a certain point, or forking execution, or rolling back to a stored state.

That sounds useful in any language. I think it's related to serializing the entire environment - I've also been thinking about [hot module replacement](https://bjornlu.com/blog/hot-module-replacement-is-easy), the ability to update an application while it's running.

> **Hot reloading** gameplay code means that you swap out the code that controls the behavior of your game while the game is running. Why? To improve and tweak your gameplay code without having to restart the game.
>
> Gameplay programming is one of the most creative types of programming, especially when you’re figuring out and tweaking the design while implementing the gameplay, as may often happen for solo developers. Having hot reload helps you in this creative process by minimizing the interruptions to your creative flow.

- https://zylinski.se/posts/hot-reload-gameplay-code/
- https://github.com/karl-zylinski/odin-raylib-hot-reload-game-template

> **Threading**
> JS-Interpreter allows one to run multiple threads at the same time. Creating two or more completely independent threads that run separately from each other is trivial: just create two or more instances of the Interpreter, each with its own code, and **alternate calling each interpreter's step function**. They may communicate indirectly with each other through any external APIs that are provided.
>
> A more complex case is where two or more threads should share the same global scope.

I suppose any interpreter can theoretically support stepping through each instruction of a running program. I actually tried this with the Wasm/Emscripten build of uLisp, where the Lisp virtual machine would call an asynchronous callback to the host (JavaScript side) on every step. It slows down the running Lisp program significantly, but I think it's a useful optional feature for education or during development, to be able to visually step through (pause/resume) the program.

https://kripken.github.io/blog/wasm/2019/07/16/asyncify.html

-------------------------

khinsen | 2025-06-02 09:40:21 UTC | #48

[quote="eliot, post:47, topic:332"]
The role of entropy in software and the tendency toward disorganization.
As a program or language grows larger, how can it maintain simplicity and clarity?
[/quote]

Removing entropy requires work to move it elsewhere. If you made a mess in your kitchen, you have to clean it up, which requires work (and heats up the universe). If you let your software turn into a mess, you have to clean it up as well (and heat up the universe a bit more in the process). Maintaining simplicity and clarity requires revisiting your code periodically and making it simpler and clearer. Work that has no immediate economic reward in most situations. Technical debt is cheap, and defaulting on the debt rather than paying it back is an easy option for individuals, though usually not for the organizations that are in charge of maintaining the code in the long term.

Did anyone ever try to create an explicit incentive for clarity and simplicity? For example, award a prize to the clearest code in a company, in an internal competition.

[quote="eliot, post:47, topic:332"]
I heard about a programming language where **errors are not possible to make**.
[/quote]

That sounds impossible unless you restrict the meaning of "error" to some narrow and technical definition.

[quote="eliot, post:47, topic:332"]
A unique feature of the JS-Interpreter is its ability to pause execution, serialize the current state, then resume the execution at that point at a later time. Loops, variables, closures, and all other state is preserved.
[/quote]

Nice to have indeed. Many modern Smalltalks have it (my experience is with Pharo). You can serialize a stack trace and send it to someone else for debugging. The best reason for never using Smalltalk is that once you have done it, everything else suddenly seems very painful to use.

-------------------------

eliot | 2025-06-02 15:22:08 UTC | #49

[quote="khinsen, post:48, topic:332"]
[quote="eliot, post:47, topic:332"]
I heard about a programming language where **errors are not possible to make**.
[/quote]

That sounds impossible unless you restrict the meaning of “error” to some narrow and technical definition.
[/quote]

Maybe it's restricted to syntax errors and other build-time errors that a compiler or linter can catch (and automatically fix if possible). That doesn't include logic errors and other bugs of unintended behavior, like infinite loops, which are provably impossible to detect in the general case (the [halting problem](https://en.wikipedia.org/wiki/Halting_problem)).

An example that comes to mind: as part of my uLisp adventure, I'm preparing a code editor based on CodeMirror. I learned about an editor extension, or rather a feature concept for any Lisp editor, called Parinfer.

- https://shaunlebron.github.io/parinfer/

It's a set of rules that an editor can apply to Lisp code as it's being edited, to balance parentheses, quotes, and indentation. It makes it impossible to have unbalanced parentheses, because matching pairs are edited (inserted/deleted) at the same time.

![record-2025-06-02_16.11.14|300x104](upload://ozuHwHkJfUJSAbCF48JlRfzOrdp.gif)

It's a bit similar to the Blockly visual block editor, where some kinds of syntax errors are not possible to make due to the design constraints of the interface.

Another example is HTML (aside from the fact that it's not a programming language), whose specification requires the browser/renderer to accept malformed syntax and not blow up. The commonly used word is "forgiving", which implies that an error is a kind of sin or breaking the law, with implied guilt. I remember a quote from Tim Berners-Lee about it..
Mm, no, it was in this interview: HTML is messy by design | Marc Andreessen and Lex Fridman

https://youtu.be/5KpBYm70Cak?feature=shared&t=97

> Lex: That was fundamental to the development of the web, to be able to have HTML just right there, all the ghetto mess that is HTML, all the almost biological messiness of HTML and then having the browser try to interpret that.
>
> Marc: Yeah exactly, to show something reasonably well. There was this internet principle that we inherited, which was: emit conservatively, interpret liberally.

(Known as the [Robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): "Be conservative in what you send, be liberal in what you accept.")

> The design principle was, if you're creating a web editor that's going to emit HTML, do it as cleanly as you can, but you actually want the browser to interpret liberally. You want users to be able to make all kinds of mistakes and for it to still work.

Then he goes on to give an example of a child, using View Source to see how a website is built, doing copy & paste to build his own page.

> .. they're trying to make a web page for their turtle or whatever, and they leave out a slash and they leave out an angle bracket, they do this and they do that and it still works.

Here's where I think errors are actually more useful than silently ignoring them. It's better to learn how to do things correctly than to have the computer fix things without you even knowing about it. (Maybe a compromise is to highlight the mistakes while fixing them with a click, or to tolerate them while making a best effort to guess the intention.)

> Lex: HTML is allowed to be messy in the way biological systems could be messy. It's like the only thing computers were allowed to be messy on for the first time.
>
> Marc: It used to offend me..

I'm guessing there are other examples of fault tolerance in programming languages and computers in general, like circuitry designed for "dirty electricity" with voltage stabilization, fuses, etc.

A scarier notion: a society where it's impossible to break its laws, or where certain classes of crimes are prevented, not by force but by design.

---

[quote="khinsen, post:48, topic:332"]
Many modern Smalltalks have it (my experience is with Pharo). You can serialize a stack trace and send it to someone else for debugging.
[/quote]

To be honest I've never experienced Smalltalk (or a Lisp machine). I've read about Pharo before, but I think I'll set aside some time to actually run and explore it. (Also uxn, Freewheeling Apps, Glamorous Toolkit, Guile Hoot, etc.)

> Pharo is a pure object-oriented programming language *and* a powerful environment, focused on simplicity and immediate feedback (think IDE and OS rolled into one).

This last part is intriguing: an editor environment integrated with the operating system. It's kind of what I imagine I want to develop with my fantasy Lisp machine, written in a single language all the way to the bottom ([bedrock abstraction level](https://www.loper-os.org/?p=55)).

The Pharo [Wiki](https://github.com/pharo-open-documentation/pharo-wiki) is informative.

> If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual.

-------------------------

eliot | 2025-06-02 16:43:40 UTC | #50

..I'm glad some of the threads in this rambling series of posts are coming together. I was feeling a bit guilty about my free-association style of writing, spamming the forum with whatever comes to my mind or is found by chance.
What I was hoping to achieve was:

> Narrative weaving is the technique of intertwining multiple storylines and character arcs into a cohesive narrative, creating a rich tapestry of interconnected tales that enhance the overall storytelling experience.

But without a coherent plan or outline, it's always on the verge of falling apart. I kinda like the thrill and uncertainty of winging it, improvising my way through topics I find interesting; but it might be frustrating to read, since there's no clear line to connect all the dots. Some tangents are just left hanging without a resolution.

> Why this focus on story? Perhaps it’s easier to consider things from the opposite view: what happens when an account has absolutely no narrative thread?
>
> ..The effect of having information presented in this way is, at the risk of sounding harsh on my daughter, stultifying. Olson (2015) labels it as the ‘And, And, And’ (AAA) approach.

That describes my writing style.

> A brain dump is a technique where you write down all your thoughts, worries, and tasks onto paper or a digital tool to clear your mind and reduce stress. This practice helps organize your thoughts and can improve focus and productivity.

I have a Markdown file `think.md`, tens of thousands of lines long, where I just note down my fleeting thoughts and impressions. When it gets too long, I rename it with a date and move it to an archive (of many years). I often grep in that folder to recall specific information, or browse around to (re)discover poetic phrases, a few lines of song lyrics, half-formed stories and concepts waiting to be developed further.

> The first step in deploying narrative is knowing what the essence of a story is.
>
> To effectively tell your story, Olson advocates the ‘And, But, Therefore’ (ABT) approach.
>
> > There’s an idyllic coastal resort **and** it’s really popular with holidaymakers. **But** a monstrous shark is eating swimmers and the mayor is in denial. **Therefore**, someone with a boat needs to go out and save the day.
>
> In the ABT approach:
>
> * A provides context—it sets your story in place and time.
> * B is the contradiction: the problem / issue / conundrum.
> * T is the consequence of successfully identifying B—it’s the change in behaviour / practice / attitude that will result in the problem being surmounted.

Basically the [dialectic method](https://en.wikipedia.org/wiki/Dialectic#Classical_philosophy): thesis, antithesis, synthesis. In the context of this post thread, I think:

- A: Invention of the personal computer **and** the web, designed to empower individuals and communities.
- B: It spread through society **but** gradually became colonized by commercial and political interests, where certain technologies are designed to disempower people ("users").
- T: **Therefore** we can improve things by understanding the situation, developing tools for conviviality, designing more malleable systems.

That over-simplifies the complexity and nuance of the story, though. It's not so clear cut, like: what am I going to do about it personally? Creating my own microcosmos and software stack is fun, but does that help anybody? Well, I should "think globally, act locally." First help myself, and also enjoy the process of taking back control. But I also want to participate in a larger effort, community projects. Maybe one day I'll help out at a [Maker Space](https://en.wikipedia.org/wiki/Hackerspace), teaching children to hack (positively) on electronics and computers.
> A hackerspace (also referred to as a hacklab, hackspace, or makerspace) is a community-operated, often "not for profit" workspace where people with common interests, such as computers, machining, technology, science, digital art, or electronic art, can meet, socialize, and collaborate.

One way this line of thinking has affected me: I'm now noticing around me the inherently political technologies and anti-convivial designs in society. What Ivan Illich said about peasant society losing touch with knowledge and tools for living, "health, agriculture, home-building.." We're unnecessarily dependent on large external systems beyond our control, and much of that is by design.

Like automobiles, and how they require us to depend on the oil industry (and war, I guess), multinational corporations, factories, roads.. Roads for cars everywhere, cutting through forests and mountains, covering the living ground with asphalt, constantly pumping toxic fumes into the atmosphere.

Or large language models. I'm starting to get in the habit of using local LLMs for fun and maybe profit, but I can't help noticing the inherently political nature of how these models were produced, how they're being used for pattern recognition in mass surveillance, how they can take away personal agency and literally deprive us of the power of thinking for ourselves.

-------------------------

natecull | 2025-06-03 11:12:42 UTC | #51

[quote]
I'm glad some of the threads in this rambling series of posts are coming together. I was feeling a bit guilty about my free-association style of writing, spamming the forum with whatever comes to my mind or is found by chance.
[/quote]

I do the same thing too! I am enjoying your series of posts very much.

I was a kid who got to play with BASIC computers in the 1980s. A decade when we constantly feared large computing companies ruling our lives or even destroying them. (The Terminator was not a fantasy film: it was what we all expected to inevitably happen.)

The small BASIC machines in the 80s, and then the Web in the 1990s, and then Open Source in the early 2000s, gave me a burst of optimism and hope that that bleak techno-dystopian vision was wrong. That we now had small, distributed... malleable... tools that would ensure that automation remained shaped by humans, for humans.

But first Mobile, then Cloud, and then Generative AI have all taken that feeling away. Our small open machines are now locked down and they report all our activities back to faceless distant owners. The Web, which I thought would remain free forever, is run on giant central servers. Open Source still doesn't let us change the automation that controls our lives. And the billionaires running that automation are, apparently, actual *literal cultists*, who believe they're summoning some kind of machine god/demon, and who talk seriously and approvingly about human extinction.

We're back on that familiar 1980s doom track of my childhood. But despair is easy. Hope is resistance, and hope is often more powerful when it's small. Even if it's just a fantasy console or a reimplementation of Lisp, it all matters. It's practicing the act of reimagination that's important.

-------------------------

khinsen | 2025-06-04 11:45:34 UTC | #52

[quote="eliot, post:50, topic:332"]
We’re unnecessarily dependent on large external systems beyond our control, and much of that is by design.
[/quote]

Recommended: [re:publica 25: Robin Berjon - How We Can Finally Make The Digital World Democratic](https://www.youtube.com/watch?v=BbqZvp7D_nY)

-------------------------

eliot | 2025-06-05 00:00:27 UTC | #53

> Our digital sphere is authoritarian and as the internet is infrastructure for society this makes our world authoritarian too. In this talk, we’ll see how to use open protocols, pro-democracy technical architectures, and our understanding of digital infrastructure to build democratic social media.

Woo, that talk really hit the spot. He connected the dots and tied up the loose ends from my recent learnings and musings about the history of the web; how artifacts have politics; and the call to develop tools for conviviality. I feel I have a clearer understanding of our current situation and the hopeful way forward, for society and for myself. I'm going to watch it again and take notes.

---

[quote="natecull, post:51, topic:332"]
It’s practicing the act of reimagination that’s important.
[/quote]

That's an excellent way to put it, thank you - I'll take it to heart. It's like a journey of learning to become a better wizard.

https://en.wikipedia.org/wiki/The_Sorcerer's_Apprentice

https://www.youtube.com/watch?v=snB8u_G3jVI

Funny how this story is about technology run amok.

> Tired of fetching water by pail, the apprentice enchants a broom to do the work for him..

It's a warning about the power of magic, the origin of modern science.

---

From the description of the Glass Bead Game:

> Playing the game well requires years of hard study of music, mathematics, and cultural history. It proceeds by players making deep connections between seemingly unrelated topics. The game is essentially an abstract synthesis of all arts and sciences.

I think **computational media** is an interdisciplinary nexus of art and science I'd like to pursue more deeply as a means of creative expression.

> computational media is the art of using the computer as a medium of expression, whether it be through designing and developing software, games, films, animations, and other kinds of art.

Mm, I'm not satisfied with this definition. It's more than art; it's a way to think beyond the brain, of freeing the thinking process out into the environment, where the thinking is made visible and manipulable through tangible media, including but not limited to computers, microcontrollers, sensors, programming languages - and also any available material: paper and pencil, paint, rocks, optical projectors, sound systems, 2D plotters and 3D printers, fabrics woven with soft circuitry, organic robots, cell cultures in petri dishes..

That reminds me of the scientist [Andrew Adamatzky](https://en.wikipedia.org/wiki/Andrew_Adamatzky) and his field of study called unconventional computing.

> Andrew Adamatzky is a British computer scientist, who is a Director of the Unconventional Computing Laboratory and Professor in Unconventional Computing at the Department of Computer Science and Creative Technology, University of the West of England, Bristol, United Kingdom.
>
> He is known for his research in unconventional computing. In particular, he has worked on chemical computers using reaction–diffusion processes. He has used slime moulds to plan potential routes for roadway systems and as components of nanorobotic systems, and discovered that they seek out valerian tablets, a herbal sedative, in preference to nutrients. He has also shown that the billiard balls in billiard-ball computers may be replaced by soldier crabs.
> Adamatzky is also known for his continued research on fungal electrical spiking behavior, notably publishing the book *Fungal Machines*, which compiles many years of work.

- Andrew Adamatzky in conversation with Merlin Sheldrake

https://www.youtube.com/watch?v=c-LIVCGMD-E

> *Fungal Machines: Sensing and Computing with Fungi* explores fungi as sensors, electronic devices, and potential future computers, offering eco-friendly alternatives to traditional electronics.

![image|620x254](upload://j4KQNKRAlNQpydUaKj0UT7DO6PU.jpeg)

> Fungi are ancient, widely distributed organisms ranging from microscopic single cells to massive mycelium spanning hectares. They possess senses similar to humans, detecting light, chemicals, gases, gravity, and electric fields. The book covers fungal electrical activity, sensors, electronics, computing prototypes, and fungal language.

![image|620x219](upload://MdswtPSOEfw2lGRQ6IN73qrfAW.jpeg)

> ..there is distant information transfer between fungal fruit bodies. In an automaton model of a fungal computer, we show how to implement computation with fungi and demonstrate that a structure of logical functions computed is determined by mycelium geometry.

- [Towards fungal computer](https://royalsocietypublishing.org/doi/10.1098/rsfs.2018.0029)

-------------------------

khinsen | 2025-06-04 16:45:25 UTC | #54

The Sorcerer's Apprentice is very well known in Germany. Everyone reads it at school. But I doubt that Germans are therefore better prepared to understand today's issues in tech.

-------------------------

eliot | 2025-06-04 18:02:23 UTC | #55

Perhaps Goethe's Faust is a more suitable story of modernity's relationship with technology.

https://en.wikipedia.org/wiki/Goethe%27s_Faust

On a positive note, I've always liked the concept of *Gesamtkunstwerk*, "total work of art".

> Wagner used the exact term *Gesamtkunstwerk* on two occasions, in his 1849 essays "[Art and Revolution](https://en.wikipedia.org/wiki/Art_and_Revolution)" and "[The Artwork of the Future](https://en.wikipedia.org/wiki/The_Artwork_of_the_Future)", where he speaks of his ideal of unifying all works of art via the theatre. He also used in these essays many similar expressions such as "the consummate artwork of the future" and "the integrated drama".

- https://en.wikipedia.org/wiki/Gesamtkunstwerk

It feels like that's the direction we would be heading if humanity were more focused on collective creative expression through all arts and sciences; instead of our current state of affairs, where most of our collective energy is wasted on.. *gestures broadly at everything*.

---

> A utopia typically describes an imagined community or society that possesses highly desirable or near-perfect qualities for its members. It was coined by Sir Thomas More for his 1516 book Utopia, which describes a fictional island society in the New World.

https://en.wikipedia.org/wiki/Island_(Huxley_novel)

> On the Beautiful Green, a distant utopian planet, rural vegans master telepathy and mental abilities for interstellar travel. As part of an intergalactic coalition, Mila volunteers to bring a message of self-actualization and harmony with nature to the one planet rejected by all her peers as incorrigible--Earth.
https://en.wikipedia.org/wiki/La_Belle_Verte

> **Stranger in a Strange Land** tells the story of Valentine Michael Smith, a human who comes to Earth in early adulthood after being born on the planet Mars and raised by Martians, and explores his interaction with and eventual transformation of Terran culture.
>
> Smith becomes a celebrity and is feted by the Earth's elite. He investigates many religions, including the Fosterite Church of the New Revelation, a populist megachurch in which sexuality, gambling, alcohol consumption, and similar activities are allowed and even encouraged and considered "sinning" only when they are not under church auspices. Smith has a brief career as a magician in a carnival, in which he and Gillian befriend the show's tattooed lady.
>
> Smith starts a Martian-influenced "Church of All Worlds", combining elements of the Fosterite cult with Western esotericism.

https://en.wikipedia.org/wiki/Stranger_in_a_Strange_Land

> Steppenwolf (originally Der Steppenwolf) is the tenth novel by German-Swiss author Hermann Hesse. The novel was named after the German name for the steppe wolf. The story in large part reflects a profound crisis in Hesse's spiritual world during the 1920s.
>
> As the story begins, Harry is beset by reflections on his being ill-suited for the world of everyday, regular people, specifically for frivolous bourgeois society.

Anarchist evening entertainment at the magic theater. Entrance not for everyone, for madmen only. Price of admittance: your mind.

![steppenwolf - film|330x500](upload://iuR4l0NUhwUL99nblrCbpZTovep.jpeg)

> In his aimless wanderings about the city he encounters a person carrying an advertisement for a magic theatre who gives him a small book, Treatise on the Steppenwolf. This treatise, cited in full in the novel's text as Harry reads it, addresses Harry by name and strikes him as describing himself uncannily.
>
> It is a discourse on a man who believes himself to be of two natures: one high, man's spiritual nature, the other low and animalistic, a "wolf of the steppes". This man is entangled in an irresolvable struggle, never content with either nature because he cannot see beyond this self-made concept.

---

..OK, that boat veered way off course; I'm not sure what common thread weaves through these stories. Something about humanity's potential for good and evil, between the angel and the beast.

-------------------------

eliot | 2025-06-04 23:21:00 UTC | #56

**Imagineering**. A portmanteau of imagination and engineering.

> The term was introduced in the 1940s by Alcoa, and used by Union Carbide in an in-house magazine in 1957.

Alcoa is an American industrial corporation and the world's eighth-largest producer of aluminum. Union Carbide Corporation is an American chemical company, a wholly owned subsidiary of Dow Chemical Company. It is known for the Bhopal disaster in 1984, when over half a million people in the vicinity of a pesticide plant in India were exposed to the highly toxic gas methyl isocyanate, in what is considered the world's worst industrial disaster.

> Disney filed for a trademark for the term in 1989, claiming first use of the term in 1962. Imagineering is a registered trademark of Disney Enterprises, Inc.

---

How about **Creative Technology**?

> Creative technology is a broadly interdisciplinary and transdisciplinary field combining computing, design, art and the humanities.
> > The field of creative technology encompasses art, digital product design, digital media or advertising and media made with a software-based, electronic and/or data-driven engine.
>
> > Examples include multi-sensory experiences made using computer graphics, video production, digital music, digital cinematography, virtual reality, augmented reality, video editing, software engineering, 3D printing, the Internet of Things, CAD/CAM and wearable technology.

Right, that sounds like a similar direction to what I was thinking about with computational media.

> Creative technology has been defined as "the blending of knowledge across multiple disciplines to **create new experiences or products**" that meet **end user and organizational needs**. A more specific conceptualization describes it as the combination of information, holographic systems, sensors, audio technologies, image, and video technologies, among others with artistic practices and methods. The central characteristic is identified as an **ability to do things better**.

Better for whom?

> Creative technology is also seen as the intersection of new technology with creative initiatives such as **fashion, art, advertising, media and entertainment**.
>
> As such, it is a way to make connections between **countries seeking to update their culture**; a winter 2015 Forbes article tells of 30 creative technology startups from the UK making the rounds in Singapore, Kuala Lumpur and New York City in an effort to raise funds and make connections.

It feels like the potential of this phrase is being exploited for the purposes of control, profit, and **passive consumption** instead of actual creativity by the people as participants and co-creators. Fashion, advertising, entertainment.. These all embody a top-down power structure where the "creators" are impersonal corporations with the public as the "spectators".

The wording of "countries seeking to update their culture" is creepy too. Who is designing this new version of the collective mental operating system we call culture?

---

A word I like is **autopoiesis** - self-production, closely related to self-organization.

> The term autopoiesis (from Greek αὐτo- (auto) 'self' and ποίησις (poiesis) 'creation, production'), one of several current theories of life, refers to **a system capable of producing and maintaining itself by creating its own parts**.
>
> The term was introduced in the 1972 publication *Autopoiesis and Cognition: The Realization of the Living* by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells.
>
> The concept has since been applied to the fields of cognition, neurobiology, systems theory, architecture and sociology.

https://en.wikipedia.org/wiki/Self-organization

> **Self-organization**, also called [spontaneous order](https://en.wikipedia.org/wiki/Spontaneous_order) in the [social sciences](https://en.wikipedia.org/wiki/Social_science), is a process where some form of overall [order](https://en.wikipedia.org/wiki/Order_and_disorder) arises from local interactions between parts of an initially disordered [system](https://en.wikipedia.org/wiki/System).
>
> The resulting organization is wholly decentralized, [distributed](https://en.wiktionary.org/wiki/distribute) over all the components of the system. As such, the organization is typically [robust](https://en.wikipedia.org/wiki/Robustness) and able to survive or [self-repair](https://en.wikipedia.org/wiki/Self-healing_material) substantial [perturbation](https://en.wikipedia.org/wiki/Perturbation_theory).
> [Chaos theory](https://en.wikipedia.org/wiki/Chaos_theory) discusses self-organization in terms of islands of [predictability](https://en.wikipedia.org/wiki/Predictability) in a sea of chaotic unpredictability.
>
> Self-organization occurs in many [physical](https://en.wikipedia.org/wiki/Physics), [chemical](https://en.wikipedia.org/wiki/Chemistry), [biological](https://en.wikipedia.org/wiki/Biology), [robotic](https://en.wikipedia.org/wiki/Robotics), and [cognitive](https://en.wikipedia.org/wiki/Cognitive) systems. Examples of self-organization include [crystallization](https://en.wikipedia.org/wiki/Crystallization), thermal [convection](https://en.wikipedia.org/wiki/Convection) of fluids, [chemical oscillation](https://en.wikipedia.org/wiki/Chemical_oscillator), animal [swarming](https://en.wikipedia.org/wiki/Swarming), [neural circuits](https://en.wikipedia.org/wiki/Neural_circuit), and [black markets](https://en.wikipedia.org/wiki/Black_market).

-------------------------

eliot | 2025-06-05 02:17:23 UTC | #57

https://www.wired.com/story/youre-not-ready-for-phone-dead-zones/

-------------------------

natecull | 2025-06-05 03:01:55 UTC | #58

[quote="eliot, post:56, topic:332"]Creative technology[/quote]

How about repurposing "Creative Computing"?

https://archive.org/search?query=creative+computing

>Meshtastic

Oh now that's very interesting. I was always fascinated by mesh networks, but I thought they were dead. Glad to see they've made a comeback!

-------------------------

emery | 2025-06-05 07:41:04 UTC | #59

[quote="eliot, post:47, topic:332"]
Rust seems like an example of a huge language, with a steep learning curve, surface area of syntax, complexity of concepts; and Go would be an example of a small language, with a constrained design, intentionally limited syntax, a handful of concepts to learn. Well, they serve different purposes and the design reflects the needs of each language. That’s strange how the former is a community effort, seemingly egalitarian project, and the latter is (or started as) a Google project. If that law whose name I forget (Conway?) about how the structure of software is correlated with the organization that produces it..
[/quote]

Go did not start as a Google project; Google was paying Bell Labs people to continue a project started outside of Google. Otherwise, Google hasn't created any successful languages of its own, but maybe I'm wrong. Go was a product of the Limbo language from the Inferno operating system, itself a product of Plan9. Inferno and Plan9 were convivial operating systems.

http://doc.cat-v.org/inferno/4th_edition/limbo_language/limbo

-------------------------

eliot | 2025-06-06 05:53:16 UTC | #60

[quote="emery, post:59, topic:332"]
Go was a product of the Limbo language from the Inferno operating system, itself a product of Plan9. Inferno and Plan9 were convivial operating systems.
[/quote]

I see.. How fascinating. Bell Labs is a name I often see when studying innovative technology, a respected and influential research lab like Xerox PARC.

> As a former subsidiary of the American Telephone and Telegraph Company (AT&T), Bell Labs and its researchers have been credited with the development of radio astronomy, the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), information theory, the Unix operating system, and the programming languages B, C, C++, S, SNOBOL, AWK, AMPL, and others.

The transistor!
I'd read about Unix and C, and the illustrious crew at Bell Labs that developed them - Ken Thompson, Dennis Ritchie, Brian Kernighan - some of whom, I'm learning now, went on to Plan 9 and Inferno, and also invented Go. What a pedigree this language has. It explains the feeling I was getting from learning the language, how the unusually simple syntax is a result of distilled experience.

---

Plan 9 had a cute logo.

![image|238x304](upload://HYILDLsvoCr7gZR7f3qZJL2WEt.png)

It reminds me of the little creature from uxn.

![image|217x235](upload://8ObBsdxzEDa3uqEOF48GPX2OQkX.png)

That makes me want a mascot for my imaginary Lisp machine. It now has a tentative project name `am36a`, like "amoeba". I have an old drawing I did for a science fiction story, that might fit..

![image|248x132, 75%](upload://jHpJ6MdiHaHu6xDLJJDJCW7zETP.png)

Heheh, yeah that's the one. Drawn in pencil would be nice.

![image|167x335, 75%](upload://5CWEtpLPyWBnbDbfEhCWvgKBwj0.png)

The story was called 粘菌人類の一瞬王国 (The Momentary Kingdom of Slime-Mold Humanity).

![image|381x500, 100%](upload://dMqILMHhPoNjusNfLvvIfQYTLxq.png)

---

What I'm learning from uxn and the imaginative world it grew out of, is that there's a lot of unexplored expressive potential in software, beyond mere utilitarian purpose. There's room for more whimsy, weirdness, and surreal alien artifacts. Best if it's technically marvellous and practically useful, at least for the creator.

[TempleOS](https://en.wikipedia.org/wiki/TempleOS) and [SerenityOS](https://en.wikipedia.org/wiki/SerenityOS) are examples of that kind of *personal* software system, where the process of its creation was not only a technical effort but an artistic expression.

> By calling this creation an "Artistic Operating System", we assert that it should be unique and personal, even peculiar in its way of representing and interfacing with the rest of the media world.
>
> In this sense, it is freed from the implicit social requirement that new technological projects conform to standard principles of progress, universality and efficiency. There's no need to claim to be the "Next Big Thing" or to even suggest that anyone, other than the creators of this device, should use it.
>
> -- [Why Screenless ](http://screenl.es/why.html)

Looking into the root word of *technology*, I found:

> In Ancient Greek philosophy, techne (Greek: τέχνη, 'art, skill, craft') is a philosophical concept that refers to making or doing.
>
> While the definition of *techne* is similar to the modern use of "practical knowledge", *techne* can include various fields such as mathematics, geometry, medicine, shoemaking, rhetoric, philosophy, music, and astronomy.

Also software, hardware, computers, electronics.

> One of the definitions of *techne* offered by Aristotle, for example, is "a state involving true reason concerned with production".

---

..As I read more about [Inferno](https://doc.cat-v.org/inferno/) and [Plan 9](https://doc.cat-v.org/plan_9/), I remember I've seen this archive of historical documents before. It's fun to dig around, so many great ideas to learn about, some of which I now understand evolved into modern incarnations that I'm more familiar with.

Much of my learning process is about digging back through history, uncovering the layers that led up to the current situation. I especially enjoy finding forgotten brilliant ideas that didn't succeed - like Nikola Tesla's concept for a global wireless power network.
https://en.wikipedia.org/wiki/Wardenclyffe_Tower He envisioned transmitting electrical power as well as text, voice/audio, images.. The world wasn't ready for this idea yet, but he saw the future - and was trying to build towards it. ![image|690x392](upload://5YuWYKyaSLCjDCXEtnqnB5WbGoW.png) #### Help: A Minimalist Global User Interface by Rob Pike “[Help](https://doc.cat-v.org/plan_9/1st_edition/help/) is a combination of editor, window system, shell, and user interface that provides a novel environment for the construction of textual applications such as browsers, debuggers, mailers, and so on. It combines an extremely lean user interface with some automatic heuristics and defaults to achieve significant effects with minimal mouse and keyboard activity. The user interface is driven by a file-oriented programming interface that may be controlled from programs or even shell scripts. By taking care of user interface issues in a central utility, help simplifies the job of programming applications that make use of a bitmap display and mouse.” ------------------------- eliot | 2025-06-06 05:55:34 UTC | #61 World Brain (1938) by H. G. Wells. ![image|250x364](upload://qJ0MGxcomp8pHQ1MuTuDTNdstRZ.jpeg) > Wells describes his vision of the World Brain: a new, free, synthetic, authoritative, permanent "World Encyclopaedia" that could help world citizens make the best use of universal information resources and make the best contribution to world peace. > Plans for creating a global knowledge network long predate Wells. Andrew Michael Ramsay described, c. 1737, an objective of freemasonry as follows: > > ... to furnish the materials for a **Universal Dictionary** ... By this means the lights of all nations will be united in one single work, which will be a universal library of all that is beautiful, great, luminous, solid, and useful in **all the sciences and in all noble arts**. ![Repertorio Bibliográfico Universal|559x500](upload://xhkJdd9m2DZKZ9XHKjK2z0blWso.jpeg) > In 1895, Paul Otlet and Henri La Fontaine began the creation of a collection of index cards, meant to catalog facts, that came to be known as the *Repertoire Bibliographique Universel* (RBU). By the end of 1895 it had grown to 400,000 entries; later it would reach more than 15 million entries. > > In 1904, Otlet and La Fontaine began to publish their classification scheme, which they termed the Universal Decimal Classification, originally based on Melvil Dewey's Decimal classification system. It was a bibliographic and library classification representing the **systematic arrangement of all branches of human knowledge** organized as a coherent system in which knowledge fields are related and inter-linked. I see this is the historical context of the idea for URI (Uniform Resource Identifier), "a unique sequence of characters that identifies an abstract or physical resource, such as resources on a webpage, mail address, phone number, books, real-world objects such as people and places, concepts." > In 1926, extending the analogy between global telegraphy and the nervous system, Nikola Tesla speculated that: > > When wireless is perfectly applied the whole earth will be converted into a huge brain ... Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. 
> A man will be able to carry one in his vest pocket.

![image|500x500](upload://mZFkXHY3xHn3DbpNHGbRzQ6FP5x.jpeg)

> In a series of lectures in 1936-37, Wells begins with the observation that the world has become a single interconnected community due to the enormously increased speed of telecommunications. Moreover, energy is available on a new scale, enabling, among other things, the capability for mass destruction. Consequently, the establishment of a new world order is imperative.

![Sketch of an evolutionary tree of life from Charles Darwin's notebook|306x500](upload://hv15kEkLpEkJHq3G5Q1WKhOPi1N.jpeg)

> ..Without a World Encyclopaedia to hold men's minds together in something like a **common interpretation of reality**, there is no hope whatever of anything but an accidental and transitory alleviation of any of our world troubles.
>
> ..I am speaking of a **process of mental organisation throughout the world** which I believe to be as inevitable as anything can be in human affairs. The world has to pull its mind together, and this is the beginning of its effort. The world is a Phoenix. It perishes in flames and even as it dies it is born again. This synthesis of knowledge is the necessary beginning to the new world.

---

![summa technologiae|333x500](upload://sVkvSBPGWdIeQozbelcPiCKPRxY.jpeg)

Summa Technologiae (1964) by [Stanisław Lem](https://en.wikipedia.org/wiki/Stanis%C5%82aw_Lem).

1 - Dilemmas
2 - Two Evolutions
3 - Space Civilizations
4 - Intellectronics. The day will come when machine intelligence will rival or surpass the human one. Moreover, problems facing humankind may surpass the intellectual abilities of flesh and blood researchers. What shall we expect (or fear) in this conception of the future?
5 - [Prolegomena](https://en.wikipedia.org/wiki/Prolegomenon) to Omnipotence. Technological evolution gives us more and more abilities—in fact, sometime in the future we should be able to do everything at all.
6 - Phantomology. Speculations on what is now known as virtual reality.
7 - Creation of Worlds. May it be that instead of painstaking research we can "grow" new information from available information in an automatic way? Starting with this question Lem evolves the concept to the creation of whole new Universes, including the construction of a heaven/hell/afterlife.
8 - [Pasquinade](https://en.wikipedia.org/wiki/Pasquinade) on Evolution. Can humanity engineer their own evolution?
9 - Art and Technology

-------------------------

natecull | 2025-06-06 11:41:25 UTC | #62

>techne (Greek: τέχνη, ‘art, skill, craft’)

I feel like we maybe need a word that leans into the "art, skill, craft" aspects of τέχνη. We have a lot of "technology" today which actually de-technes its users. But some technology up-technes us.

> Help: A Minimalist Global User Interface by Rob Pike

Oh! That reminds me of the late-1980s through early 1990s moment, after windowing had been demonstrated on the Macintosh and Amiga and Atari, and well before the Web, when the PC world was still just starting to do pop-up windows with text. And "hypertext" was the new buzzword. Everyone was making their own proprietary hypertext systems.

I forget most of them, but a very early use was for documentation systems. Netscape had (or licenced) "Folio Views", which it used for its system documentation well into the 1990s. Microsoft Windows (at least by 1993) had "Hypertext Help". The GNU Unixy ecosystem got "info" at some point - when?
When HTML appeared it just instantly destroyed almost all of these proprietary hypertext systems, even though it was worse in many ways, for two reasons: one, it was open and free, and two, it was network-aware, with DNS creating a global filesystem, which was the killer app that melted the whole world (and created a million literary allusions to Neuromancer's "cyberspace"). But even without TCP/IP (which even Windows 95 didn't ship with), you could still use an HTML browser for a local documentation set.

tldr, my guess is that Plan 9's "Help" was thought of as a hypertext system at its core, with an initial use-case of browsing system manuals, which is why it would have been called that despite being something more like Emacs. Am I anywhere near right?

Plan 9 is a fascinating fork in the path and I wish I knew more about it. I was for a while intrigued by its follow-on Inferno - but I'm pretty disappointed to hear that Go is considered to have come from there. Wasn't the whole point of Inferno that it was a live, Smalltalky, *VM*? And not just yet another C-like dumb compiler that takes text blobs to native-machine-code binary blobs?

> to furnish the materials for a **Universal Dictionary**

Isaac Asimov presumably knew about the Masonic and utopian interest in this too, because that's how his 1940s "Foundation" begins - at least with the cover story of "a project to build a galactic Encyclopedia".

-------------------------

eliot | 2025-06-06 18:38:41 UTC | #63

> **Encyclopedia Galactica** is the name of a number of fictional or hypothetical encyclopedias containing all the knowledge accumulated by a galaxy-spanning civilization, most notably in Isaac Asimov's Foundation series.
>
> Theodore Wein considers the Encyclopedia Galactica as possibly inspired by a reference in H. G. Wells's The Shape of Things to Come (1933). The future world envisioned by Wells includes an "Encyclopaedic organization which centres upon Barcelona, with seventeen million active workers" and which is tasked with creating "the Fundamental Knowledge System which accumulates, sorts, keeps in order and renders available everything that is known".

Earlier, in the 19th century, Western esoteric philosophers like [Rudolf Steiner](https://en.wikipedia.org/wiki/Rudolf_Steiner) were speaking of:

> The [Akashic records](https://en.wikipedia.org/wiki/Akashic_records) are believed to be a compendium of all universal events, thoughts, words, emotions, and intent ever to have occurred in the past, present, or future, regarding not just humans, but all entities and life forms.

The concept of a book or library that contains a universe of knowledge is probably ancient.

> The apocryphal Book of Jubilees speaks of two heavenly tablets or books: a [Book of Life](https://en.wikipedia.org/wiki/Book_of_Life) for the righteous, and a Book of Death for those that walk in the paths of impurity and are written down on the heavenly tablets as adversaries of God.
>
> The origin of the heavenly Book of Life is sought in Babylonia, where legends speak of the Tablets of Destiny and of tablets containing the transgressions, sins, wrongdoings, curses and execrations of a person who should be "cast into the water"; that is, blotted out.

A strange piece of religious imagery I remember is of some monk or saint *eating* a book. ..Ah yes, it was a woodcut by Albrecht Dürer from his Apocalypse of St John series.

Saint John Devouring the Book

![saint_john_devouring_the_book](upload://rJBWnrHOlYFnWFcO46Gr3ce07bU.jpeg)

What does it mean to eat a book?
The saint is consuming angelic knowledge and making it a part of his being.

---

The Argentine writer Jorge Luis Borges has a short story called [The Aleph](https://en.wikipedia.org/wiki/The_Aleph_(short_story)), published in 1945. It's not exactly about a book, but a kind of multidimensional crystal.

> The Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping, or confusion.

> On the back part of the step, toward the right, I saw a small iridescent sphere of almost unbearable brilliance. At first I thought it was revolving; then I realised that this movement was an illusion created by the dizzying world it bounded. The Aleph's diameter was probably little more than an inch, but all space was there, actual and undiminished. Each thing (a mirror's face, let us say) was infinite things, since I distinctly saw it from every angle of the universe. I saw the teeming sea; I saw daybreak and nightfall; I saw the multitudes of America; I saw a silvery cobweb in the center of a black pyramid; I saw a splintered labyrinth (it was London); I saw, close up, unending eyes watching themselves in me as in a mirror; I saw all the mirrors on earth and none of them reflected me..

> Borges has stated that the inspiration for this story came from H. G. Wells' short stories "[The Crystal Egg](https://en.wikipedia.org/wiki/The_Crystal_Egg)" and "[The Door in the Wall](https://en.wikipedia.org/wiki/The_Door_in_the_Wall_(short_story))".

And we're back to Herbert George Wells. He's like a Bell Labs of writers. History may not have a coherent narrative structure, but it sure knows how to rhyme.

-------------------------

natecull | 2025-06-07 03:01:50 UTC | #64

[quote="eliot, post:63, topic:332"]The Aleph is a point in space that contains all other points.[/quote]

And that Borges story will be why the McGuffin (spoilers for a 37-year-old book) at the end of William Gibson's 1988 "Mona Lisa Overdrive" is... a giant hard drive called an "Aleph" which contains a copy of the entire Internet.

The concept seemed a lot cooler and more mystical in 1988, before search engines, web-scrapers, and the Internet Archive. Not that the Archive isn't cool! It's just, Bill, did you somehow never hear of floppy disks? You do know that data can be copied, right?

Maybe the concept was that the Aleph in MLO was some weird quantum-mechanical thing that was a *live* image of all Cyberspace data via spooky entanglement. If so, that wasn't ever really explained.

>Help (https://doc.cat-v.org/plan_9/1st_edition/help/help.pdf)

My guess about Rob Pike's Help being specifically hypertext was wrong. It's from 1991, so in the right era, but no: not "help" as in lookup of textual manuals, but "help" as in a generalised assistant.

>This is a revision of a paper by the same title published in the Proceedings of the Summer 1991 USENIX Conference, Nashville, 1991, pp. 267-279.

It's inspired by Niklaus Wirth's Oberon specifically.

>The inspiration for help comes from Wirth’s and Gutknecht’s Oberon system [Wirt89, Reis91]. Oberon is an attempt to extract the salient features of Xerox’s Cedar environment and implement them in a system of manageable size. It is based on a module language, also called Oberon, and integrates an operating system, editor, window system, and compiler into a uniform environment.
>Its user interface is disarmingly simple: by using the mouse to point at text on the display, one indicates what subroutine in the system to execute next. In a normal Unix shell, one types the name of a file to execute; instead in Oberon one selects with a particular button of the mouse a module and subroutine within that module, such as Edit.Open to open a file for editing. Almost the entire interface follows from this simple idea.

>The user interface of *help* is in turn an attempt to adapt the user interface of Oberon from its language-oriented structure on a single-process system to a file-oriented multi-process system, Plan 9 [Pike90]. That adaptation must not only remove from the user interface any specifics of the underlying language; it must provide a way to bind the text on the display to commands that can operate on it: Oberon passes a character pointer; help needs a more general method because the information must pass between processes.

I wish we had something like help on Windows or Linux today. We've got copy-paste of text and that's all. And on the other end, we've got insanely over-engineered IDEs which are all universally terrible to use.

There is a reference to hypertext, though!

>Help is similar to a hypertext system, but the connections between the components are not in the data — the contents of the windows — but rather in the way the system itself interprets the data. When information is added to a hypertext system, it must be linked (often manually) to the existing data to be useful. Instead, in help, the links form automatically and are context-dependent. In a session with help, things start slowly because the system has little text to work with. As each new window is created, however, it is filled with text that points to new and old text, and a kind of exponential connectivity results. After a few minutes the screen is filled with active data. Compare Figure 4 to Figure 11 to see snapshots of this process in action. Help is more dynamic than any hypertext system for software development that I have seen. (It is also smaller: 4300 lines of C.)

The next question I have though is: what was Xerox Cedar, that inspired Oberon? It was a language and environment developed from a thing called Mesa, the language of the Alto and Star. Mesa inspired Modula-2, so I guess that's the Wirth connection.

https://en.wikipedia.org/wiki/Mesa_(programming_language)

> * In 1976, during a sabbatical at Xerox PARC, [Niklaus Wirth](https://en.wikipedia.org/wiki/Niklaus_Wirth) became acquainted with Mesa, which had a major influence in the design of his [Modula-2](https://en.wikipedia.org/wiki/Modula-2) language.[[10]](https://en.wikipedia.org/wiki/Mesa_(programming_language)#cite_note-10)
> * [Java](https://en.wikipedia.org/wiki/Java_(programming_language)) explicitly refers to Mesa as a predecessor.[[11]](https://en.wikipedia.org/wiki/Mesa_(programming_language)#cite_note-11)

Also! Ah, this is why Smalltalk's assignment syntax is weird!

> Due to PARC's using the 1963 variant of [ASCII](https://en.wikipedia.org/wiki/ASCII) rather than the more common 1967 variant, the Alto's character set included a left-pointing arrow (←) rather than an underscore. The result of this is that Alto programmers (including those using Mesa, Smalltalk etc.) conventionally used [camelCase](https://en.wikipedia.org/wiki/CamelCase) for compound identifiers, a practice which was incorporated in PARC's standard programming style.
On the other hand, the availability of the left-pointing arrow allowed them to use it for the assignment operator, as it originally had been in ALGOL.

Fun fact: "PETSCII" on the Commodore PET and C64 also followed that 1963 ASCII standard, to the extent of having that weird left-arrow key in prime real estate on the keyboard. Used to baffle the heck out of me, because BASIC didn't use that symbol. Nothing ever used it! It wasn't backspace/delete, it wasn't left-cursor, it just printed an utterly useless arrow! And it took up a whole key! On the C64, right on the top left where Escape should be! Madness!

But presumably the reason was that Commodore thought "an assignment symbol" would be super important for programming languages (because ALGOL I guess). Instead, everyone *except Xerox PARC* just used = or := ... and PARC just removed themselves from the entire OS and programming-languages conversation.

Bret Victor has a copy of the 1984 "Cedar Midterm Report" describing the Cedar Programming Environment, which looks very Macintosh-y. (I mean obviously the arrow of causality from Xerox to Apple goes the other way, but Cedar is now out of the critical path; it's after GUIs have been commercialised and PARC is losing its influence.)

https://worrydream.com/refs/Teitelman_1984_-_The_Cedar_Programming_Environment,_A_Midterm_Report_and_Examination.pdf

>In 1978, the computing community at PARC consisted of three distinct cultures: Mesa, Interlisp, and Smalltalk. Both the Smalltalk and Mesa communities programmed primarily on the Alto, a small personal computer that had been developed at PARC [32]. The Interlisp programmers continued to operate on a time-shared, main-frame computer called MAXC, a home grown machine that emulated a PDP-10 and ran Tenex.

>But there were aspects of the Alto design that did not work out well. In particular, the limitations on the size of the address space and on the amount of real memory were serious. As a result, a great deal of time was spent squeezing software into the limited space available. However, we did not take this as an indictment of personal computing, but merely an indication that we had to think bigger.

This memory-size thing is my big worry with Uxn. It's super cute! But if we get married to Uxn, or anything else 8/16-bit shaped, we end up hitting the same wall Xerox hit in 1978. Do we really want to do that?

>After much painful deliberation, we decided to design a new machine, the Dorado, to overcome these obstacles. We intended that the Dorado would provide the hardware base for the next generation of computer system research at PARC. ... Most Dorados currently have 2 megabytes of main storage expandable to 8 megabytes. (The Alto had 128K bytes of memory, later expanded to 512K via additional memory banks.)

>The EPE working group recommended that "CSL should launch a project on the scale of the Dorado to develop a programming environment for the entire laboratory starting from either Lisp [Interlisp] or Mesa" [8]. We also concluded that the laboratory could support only one major programming environment.

Interesting that Smalltalk wasn't even in the running! At all!!! So much for the myth of Smalltalk being PARC's secret sauce. They were actively marketing it as the Next Big Thing - but weren't relying on it for internal infrastructure.

Between Mesa and Lisp, Lisp lost (and there's another big fork in the path!) for social rather than technical reasons.
>The rest of Xerox had a fairly large and growing commitment to Mesa, and none to Interlisp. Remaining largely compatible with the rest of the corporation had both advantages and disadvantages, but the advantages predominated. With respect to research communities outside of Xerox, either choice would reduce communication with important (but different) research communities in the outside world: Lisp favors the AI community, Mesa favors the programming language and systems programming community.

> Although the efforts required were about the same, a somewhat larger number of qualified people were available to work on a Mesa-based EPE. It was noted, however, that if Mesa were chosen, some effort would be needed to ensure that those members of the Lisp community concerned with programmer assistance, programs as data bases, and integrated sublanguages were able to provide enough input to ensure that Cedar would be of use and attractive to them.

>We wanted to keep abreast of developments in computer science in the world at large. Most of the knowledge representation, expert systems, automatic programming, and programmer's assistant type of work is done in various Lisp dialects. However, much of the formal specification and verification work was directed towards Pascal dialects, and would therefore be more easily applicable to Mesa. Also Ada being based on Pascal is much more similar to Mesa than to Lisp, so that work on Ada environments would be highly relevant.

Ironic in retrospect that Pascal was the ALGOL-derived language that was being considered the industry standard in 1979, with C not even seen as a speck on the horizon. I guess there was just a massive gap between PARC and Bell Labs culture.

>We wanted to move implementors into the project easily, and were even more concerned that it be easy for users to convert to the use of Cedar. Within CSL, there were roughly comparable numbers of hard core users in each camp, so that the issues of migration seemed roughly symmetric. However, within Xerox, Mesa was much more widely known and used. Outside of Xerox, generic Lisp was more widely known than Mesa, but generic Pascal was more widely known and used than Interlisp.

>As a result, the decision was made in early 1979 to launch a project to build Cedar, an experimental programming environment, starting with Mesa.

And later on, we see the result of this choice:

>The principal shortcomings of Cedar are in the area of providing support for various aspects of the Lisp style of programming (and to a lesser extent Smalltalk), and can be attributed to the selection of Mesa as a starting point for Cedar and the fact that the overwhelming majority of the Cedar implementors and users came from the Mesa community. These shortcomings include not reaching Cedar's original goals with respect to: fast turnaround for small program changes, support for wide range of (i.e., late) binding times, easy use of programs as data, and inheritance/defaulting (Smalltalk subclassing). In short, with respect to the fundamental principle stated in the EPE report [8] that "the present Lisp, Mesa, and Smalltalk programming styles all must be supported in a satisfactory way," it is fair to say that Cedar has not (yet) succeeded.

The rest is a description of the Cedar environment, which is basically a whole windowing system of its own, that's also an IDE and also a language.
It's got a bit of a Windows 95 feel by way of late-1990s XWindows, in that there's a reserved icon row on the bottom of the screen (a bit like the Taskbar), except that it's just a chunk of the desktop; but "maximised" windows don't obscure it.

imo the concept of a "Desktop", which would immediately and always be obscured by actual working windows, was a huge mistake. Is it too soon to say that? I'm saying it.

-------------------------

spenc | 2025-06-07 08:46:37 UTC | #65

What's wrong with a desktop? For a "moving files around" type operation I like that I have a scratchpad to dump a file and have it be in the exact same place x & y when I come back to look for it.

-------------------------

natecull | 2025-06-07 11:18:43 UTC | #66

In my experience two things are wrong with the "Desktop" concept:

1. It's often not treated as well as a real folder is. Even if the Desktop is mapped to a literal actual disk folder, it's often fragile. In Windows particularly, the Desktop has usually been the first folder to get corrupted.

2. If you maximise windows - and you almost always want to maximise windows because screen real estate is valuable - it's always hidden. Even if you have a weirdly small working window, much of your desktop is still obscured. To actually use a desktop as a scratchpad, you have to fully minimise all your working windows. If you're very lucky there'll be some kind of half-hidden gesture to do this for you, but you can't just "bring the Desktop to the front" like you can a normal folder window.

An always-visible icon bar like a Menu, Launcher, Dock or Taskbar, on the other hand, is extremely useful. I think if we were doing say the Macintosh or Windows 95 from scratch today, a smarter solution than a special-case, constantly hidden Desktop would be a button on the icon bar which opens a normal folder (the same one each time) as a Scratch area.

The Desktop was imagined as a sort of launcher / home screen, where you'd only work on one or two windows/apps at once and then you'd close them again, returning you to the Desktop. In practice, we just don't use GUIs like that. We have zillions of open windows and they're usually fullscreen (because they don't work if they're not; I try to at least have two tiled half-screen web browser windows so I can use my 16:9 landscape screen, but sometimes even that breaks layouts) and we never close them, we switch between them. So the Desktop never gets a chance to become visible.

-------------------------

spenc | 2025-06-07 16:19:27 UTC | #67

Have you seen the desktop of the picotron? https://www.lexaloffle.com/picotron.php

It's very cute: apps are always full screen and are a fixed size. You can pull down from the top to get a scratchpad desktop.

-------------------------

natecull | 2025-06-08 05:18:02 UTC | #68

Oh, that's interesting! Yes, the "tooltray" seems like exactly that kind of UI design.

I keep running up against that RAM/storage limit though. With a 256K data limit I still can't code even my personal "tiny, toy" Javascript apps. (A Chinese language translator, and a star map based on the AT-HYG dataset.) My AT-HYG JSON data file is 38 MB and my Unicode Chinese database is 45 MB. I can't make those datasets any smaller because *those are the datasets*! They're reduced subsets as it is! The raw Unicode CJK files are 169 MB. The full AT-HYG is 190 MB. The datasets are the whole reason why my Javascript apps exist, to search them.

I get wanting to start again with a clean slate. I get that. I get wanting to be "small and simple".
I have that desire myself. I get the hunger for the approachable ROM-based retro systems we had in the 1980s. I grew up with them. I want that feeling back too.

But arbitrarily restricting a toy machine's RAM/disk size, so that we deliberately can't put real-world datasets and workloads in.... that's not how we get actual simplicity.

With a well-designed object or module model, a VM ought to be able to remain safe and usable as a unified information space right up to the size of a modern desktop, and beyond. I mean multiple gigabytes of RAM and terabytes of disk space. That's a modern cheap home machine's capacity. I want a VM I can put all my stuff in.

-------------------------

akkartik | 2025-06-08 05:50:52 UTC | #69

Remind me, what was the deal-breaker for you with my LÖVE-based approach? I kinda went through a similar thought process before I settled on it, of not wanting to artificially limit RAM.

-------------------------

spenc | 2025-06-08 06:37:45 UTC | #70

Yeah, my impression is that it's not meant to be a usable desktop for anything other than making small to medium scoped video games. They do plan to release the runtime as source-available in the coming years so that more serious projects can edit the restrictions if needed, but you'd lose the ability to share it on their BBS, and the joy of having no external dependencies.

One of many projects I have that's not going anywhere is to port their devtools (all written in Lua with source editable :slight_smile:) to [Arcan](https://arcan-fe.com/2021/09/20/arcan-as-operating-system-design/), another Lua-based desktop engine with a focus on network transparency and being a "real deal" daily driver.

-------------------------

spenc | 2025-06-08 06:58:46 UTC | #71

[This video](https://www.youtube.com/watch?v=87jfTIWosBw) (which I like, but it's long and not that relevant to malleable computing, so not suggesting you watch it) covers the motivation for the Pico8, the even smaller predecessor to the picotron. The reason given for the super tight constraints was explicitly to take away decisions on scope and art style and distribution etc., and to give creators something to blame for why their game is so tiny. With the point being to make a platform for games that's inspiring and cozy and fun.

And for that, I think it works great. I've been messing around with Picotron this week and I already want to make a flappy bird clone, while figuring out what game I want to make with Android Studio feels overwhelming. There are so many options, and being able to make something big makes me feel kinda sad at myself if I don't.

I think to have a similar sort of experience for non-videogame software you'd pick different limitations, and which ones would be fun and inspiring and which would be frustrating and pointless depends totally on what sort of thing you're making. And of course for a real general computer I don't think I'd want any artificial limits like that.

-------------------------

eliot | 2025-06-08 22:52:12 UTC | #72

[quote="natecull, post:68, topic:332"]
a VM ought to be able to remain safe and usable as a unified information space right up to the size of a **modern desktop, and beyond**. I mean multiple gigabytes of RAM and terabytes of disk space. ..I want a VM I can put all my stuff in.
[/quote]

Recently I'm thinking about how to gradually replace my existing personal computer usage with a homebrew language and software environment.
Starting with a tiny Lisp VM that can fit on a microcontroller with "as little as 2 Kbytes of RAM and 32 Kbytes of program memory". That smallness and simplicity is so satisfying and refreshing compared to the conventional modern computing stack. It's cute, cozy, comfortable - and most of all, conceptually beautiful. At that primitive level of computing, it's how I imagine the mystic mathematicians of [the Pythagorean school](https://en.wikipedia.org/wiki/Pythagoreanism) envisioned the underlying design of the universe. It's how I feel about [Lambda calculus](https://en.wikipedia.org/wiki/Lambda_calculus).

I ported/forked the ESP32 version of the Lisp VM to C99, so it can be compiled to target WebAssembly, which has a 32-bit architecture and is limited to 4 GiB of memory. (Actually I see there's a [Wasm64 target](https://webassembly.org/docs/portability/), but I haven't seen anyone using it - and whether it's supported probably depends on the Wasm runtime or host.) So far I'm still learning how to be fluent in the language, and to do practical things with it. By default the VM is allocated 64K of memory, and I haven't hit the limit yet in the small programs I'm writing.

PICO-8 and Picotron are designed to be small, where the limited capacity is defined in the rules of its universe. But I think Uxn was created for practical daily computing, albeit an exotic environment. If there's a need for larger memory or disk storage, I bet it has room for someone to develop a solution, maybe based on historical examples of 8 or 16-bit computers, like using [paging](https://en.wikipedia.org/wiki/Memory_paging) or swapping.
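As an aside, here's a toy sketch in C of what that classic bank-switching trick looks like: a 16-bit address space where the top 16 KiB is a movable window onto a larger physical memory. All the sizes and names here are invented for illustration - nothing from Uxn itself.

```c
/* Toy sketch of 8/16-bit style bank switching: a 64 KiB CPU-visible
   address space whose top 16 KiB window can be mapped onto any bank
   of a larger physical memory. Purely illustrative. */
#include <stdint.h>
#include <stdio.h>

#define WINDOW_BASE 0xC000u            /* banked window: 0xC000..0xFFFF */
#define WINDOW_SIZE 0x4000u            /* 16 KiB */
#define NUM_BANKS   64                 /* 64 * 16 KiB = 1 MiB physical  */

static uint8_t fixed[WINDOW_BASE];             /* always-visible RAM */
static uint8_t banks[NUM_BANKS][WINDOW_SIZE];  /* switchable banks   */
static int current_bank = 0;

static uint8_t *resolve(uint16_t addr) {
    if (addr < WINDOW_BASE)
        return &fixed[addr];
    return &banks[current_bank][addr - WINDOW_BASE];
}

uint8_t mem_read(uint16_t addr)             { return *resolve(addr); }
void    mem_write(uint16_t addr, uint8_t v) { *resolve(addr) = v; }
void    select_bank(int b)                  { current_bank = b % NUM_BANKS; }

int main(void) {
    select_bank(3);
    mem_write(0xC000, 42);            /* lands in bank 3 */
    select_bank(0);
    mem_write(0xC000, 7);             /* same address, different bank */
    select_bank(3);
    printf("%d\n", mem_read(0xC000)); /* prints 42 */
    return 0;
}
```

The VM stays 16-bit and simple; only the bank-select register knows the machine is bigger than 64K. That's more or less how the C64's RAM expansions and countless cartridge mappers worked.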
With continuing miniaturization and commodification of computing and electronic components, I feel like we're in an era similar to the days of the [Homebrew Computer Club](https://en.wikipedia.org/wiki/Homebrew_Computer_Club) and the [Creative Computing](https://en.wikipedia.org/wiki/Creative_Computing_(magazine)) magazines you mentioned. And, as it turned out with personal computers, it has big potential not only for hobbyists but for "serious" purposes like R&D in academia and business. I love that it gives a new generation of people a chance to revisit history and re-experience some of the limitations of past computers, but this time in an even more micro form factor and with the benefit of hindsight - to do things better, and take that other fork in the road if we wanted.

I'm curious to see how practical it is to try and **move my computing needs to a small malleable system built from scratch**. Ideally I want to live in it, to use it as my daily driver - which suggests a homebrew VM/OS/IDE with code editor, terminal shell, file system, hypertext browser, media player, recorder, tools and libraries to make music, art, animation..

Since I already have a working system built up from components made by companies and other people, that more or less meets my needs - the "reconquest" of my software tools and environment is going to be partly a process of [deconstruction](https://en.wikipedia.org/wiki/Heideggerian_terminology#Destruktion), and rebuilding it part by part, a spaceship of [Theseus](https://en.wikipedia.org/wiki/Ship_of_Theseus).

---

Typically a retro fantasy console like PICO-8 has limited screen resolution in pixels. For my use case, I was wondering about a (virtual) **display device based on vector graphics** that can adapt to any available or desired resolution. For example, to print the output of a program on large-format paper or canvas; or connect the real-time output to a light projector and cast on a screen or wall of a building.

Similarly with audio, I don't want to be limited to a fixed "resolution" (sample rate, bit depth, channels) due to the design of the VM. What's the audio equivalent of vector graphics? (LLM tells me "pure tone" or "sinusoidal waveform", like analog synthesizers. I'll need to think on this further..)

What I was picturing is, just like SVG (and the web Canvas API) is made up of graphics instructions, a data format to describe **audio and musical operations**. Something like MIDI or OSC ([Open Sound Control](https://en.wikipedia.org/wiki/Open_Sound_Control)). I know MIDI values (notes, velocity) are integers in the range 0~127, so OSC seems better suited - I see it supports [float32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) values.

> Open Sound Control (OSC) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance or show control.

So it's a general-purpose messaging protocol, and unlike MIDI, it doesn't specify any message types of its own - it's up to the implementation what the messages mean.

> OSC is sometimes used as an alternative to the MIDI standard, when higher resolution and a richer parameter space is desired.
>
> OSC messages are transported across the internet and within local subnets using UDP/IP and Ethernet. OSC messages between gestural controllers are usually transmitted over serial endpoints of USB wrapped in the SLIP protocol.

Gestural controllers.. First thing that comes to mind is a theremin, but also wearable computers/sensors.

https://www.youtube.com/watch?v=ci-yB6EgVW4

(I like how she talks about "pitch, yaw, and roll" of hand movements, like an aircraft or ship.)

> OSC messages consist of an address pattern (such as `/oscillator/4/frequency`), a type tag string (such as `,fi` for a float32 argument followed by an int32 argument), and the arguments themselves (which may include a time tag).
>
> Address patterns form a hierarchical name space, reminiscent of a Unix filesystem path, or a URL, and refer to "methods" inside the server, which are invoked with the attached arguments.

It's like a one-way remote procedure call. I wonder, can the receiver return values? ..No, as far as I can see, OSC is unidirectional (one-to-one, or one-to-many broadcast), but bidirectional messaging *can* be achieved by each participant implementing both a client and a server.

> Applications of OSC include real-time sound and media processing environments, web interactivity tools, software synthesizers, programming languages and hardware devices.
>
> The protocol has achieved wide use in fields including musical expression, robotics, video performance interfaces, distributed music systems and inter-process communication.

Cool! I'll have to explore this deeper.
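To get a feel for the wire format, here's a minimal sketch in C of encoding that `/oscillator/4/frequency` example with a single float32 argument, following the OSC 1.0 rules (NUL-terminated strings padded to 4-byte boundaries, big-endian numbers). The function names are mine, not from any library.

```c
/* Minimal OSC message encoder - a sketch, not a full implementation. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Copy s into buf, NUL-terminate, zero-pad to a 4-byte boundary.
   Returns the number of bytes written. */
static size_t osc_pad_string(uint8_t *buf, const char *s) {
    size_t n = strlen(s) + 1;                /* include terminating NUL */
    size_t padded = (n + 3) & ~(size_t)3;
    memcpy(buf, s, n);
    memset(buf + n, 0, padded - n);
    return padded;
}

/* Encode "<address> ,f <value>" into buf; returns total length. */
size_t osc_encode_float(uint8_t *buf, const char *address, float value) {
    size_t off = 0;
    off += osc_pad_string(buf + off, address);
    off += osc_pad_string(buf + off, ",f");   /* type tag: one float32 */
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);       /* reinterpret float bits */
    buf[off++] = (bits >> 24) & 0xff;         /* big-endian byte order  */
    buf[off++] = (bits >> 16) & 0xff;
    buf[off++] = (bits >> 8) & 0xff;
    buf[off++] = bits & 0xff;
    return off;
}

int main(void) {
    uint8_t packet[64];
    size_t len = osc_encode_float(packet, "/oscillator/4/frequency", 440.0f);
    printf("encoded %zu bytes\n", len);       /* 32 bytes for this message */
    return 0;
}
```

In practice you'd hand the resulting buffer to `sendto()` on a UDP socket - that's essentially all the transport there is.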
---

Speaking of protocols, I've been trying to achieve the seemingly simple task/challenge from early in this thread: to exchange messages peer to peer (from one instance of a VM to another over the network), directly without going through anyone else's server. I want to use [WebRTC](https://en.wikipedia.org/wiki/WebRTC) (real-time communication) as one of the supported data channels.

> ..It allows audio and video communication and streaming to work inside web pages by allowing direct peer-to-peer communication, eliminating the need to install plugins or download native apps

But it's been difficult to use, especially from plain C. Partly it's my lack of knowledge and experience with C, but the design of the protocol is complicated, I suppose due to the nature of networking in the modern age. As part of the initial negotiation between peers, they need to consult a [STUN](https://en.wikipedia.org/wiki/STUN) server to discover their public addresses - and, when a direct connection can't be established, a [TURN](https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT) server (Traversal Using Relays around NAT) to relay the traffic.

> A complete solution requires a means by which a client can obtain a transport address from which it can receive media from any peer which can send packets to the public Internet. This can only be accomplished by relaying data through a server that resides on the public Internet. Traversal Using Relays around NAT (TURN) is a protocol that allows a client to obtain IP addresses and ports from such a relay.

As I'm learning, most example applications demonstrating the use of WebRTC (like [the official examples](https://webrtc.github.io/samples/) which are great otherwise) have a hardcoded list of "public" STUN servers, the top ones being Google's unofficial servers which apparently everyone decided were good enough for demo purposes - and I'm certain that countless production/commercial applications using WebRTC rely on them too. There's even a list of "publicly available" STUN servers, refreshed every hour with automated testing: [`valid_hosts.txt`](https://raw.githubusercontent.com/pradt2/always-online-stun/refs/heads/master/valid_hosts.txt). Except, it's about 90 domains of questionable origin - including Google, Cloudflare, SignalWire, NextCloud, and a picturesque `stun.ru-brides.com`.

To **self-host a TURN server**, the most popular and comprehensive solution is [coturn](https://github.com/coturn/coturn). I also found [violet](https://github.com/paullouisageneau/violet), a minimal implementation written in C11. The latter is great for studying how it works - and maybe integrating into my Lisp VM. Its author is a guy at Netflix who's written other networking libraries like:

* [libdatachannel](https://github.com/paullouisageneau/libdatachannel): WebRTC and WebSockets C/C++ standalone library
* [datachannel-wasm](https://github.com/paullouisageneau/datachannel-wasm): C++ WebRTC Data Channels and WebSockets for WebAssembly
* [libjuice](https://github.com/paullouisageneau/libjuice): Lightweight UDP Interactive Connectivity Establishment (ICE) library
* "I've also been contributing to [WebTorrent](https://webtorrent.io/), and [libtorrent](https://github.com/arvidn/libtorrent), in which I added WebTorrent support. This first native WebTorrent implementation opens exciting possibilities for Peer-to-Peer file exchanges between Web browsers and native clients!"

I'm learning a lot in the process.
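For reference, here's roughly what opening a data channel looks like with libdatachannel's C API, pieced together from my reading of its `rtc/rtc.h` header - treat the exact names, signatures, and struct fields as assumptions to verify against the real header, and note that the signaling step (delivering the SDP offer/answer between peers) is waved away entirely.

```c
/* Sketch: a WebRTC data channel via libdatachannel's C API.
   Assumption: names per rtc/rtc.h as I read it - verify before use.
   Signaling is omitted: the SDP printed in on_description must reach
   the peer out of band, and the peer's answer comes back via
   rtcSetRemoteDescription(). */
#include <rtc/rtc.h>
#include <stdio.h>
#include <string.h>

static void on_description(int pc, const char *sdp, const char *type, void *ptr) {
    /* Deliver this blob to the other peer by whatever channel you have. */
    printf("send this %s to the peer:\n%s\n", type, sdp);
}

static void on_open(int dc, void *ptr) {
    rtcSendMessage(dc, "hello from my VM", -1); /* -1 = NUL-terminated */
}

static void on_message(int dc, const char *msg, int size, void *ptr) {
    printf("peer says: %s\n", msg);
}

int main(void) {
    const char *ice[] = { "stun:stun.example.org:3478" }; /* ideally self-hosted */
    rtcConfiguration config;
    memset(&config, 0, sizeof config);
    config.iceServers = ice;
    config.iceServersCount = 1;

    int pc = rtcCreatePeerConnection(&config);
    rtcSetLocalDescriptionCallback(pc, on_description);

    int dc = rtcCreateDataChannel(pc, "chat");  /* triggers offer generation */
    rtcSetOpenCallback(dc, on_open);
    rtcSetMessageCallback(dc, on_message);

    getchar(); /* crude wait; a real program drives an event loop */
    rtcDeletePeerConnection(pc);
    return 0;
}
```

The library hides the ICE/STUN/TURN machinery, but the two sides still need some way to exchange those SDP blobs - which is exactly the signaling plumbing discussed above.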
For now I've encountered an obstacle: I typically use NGINX as reverse proxy and load balancer for server applications; however, it turns out, a TURN server requires the use of a block of ports, and due to its nature it's best run on its own server. Which I'm willing to invest in, because I've always wanted to set up and self-host a P2P video chat app, like Jitsi but ideally hand-rolled.

- https://github.com/jitsi/jitsi-meet

---

..Well, it will be an ongoing process to reconquer my computing stack, I'm only on the first few steps. I see it will require a kind of stubbornness, long-term determination, and willingness to sacrifice some convenience in exchange for the greater good and independence. Like being a vegetarian, or refusing to drive automobiles, or use phones, social media, AI, even the internet or computers altogether like the Amish. It has parallels to off-grid living; it's accepting constraints by choice and design.

It renews my admiration for people like Linus Torvalds and Richard Stallman, who are demonstrating how to build and use your own tools for personal computing. It's not for everyone, but there's something important at the bottom of it: the act of questioning the given social/cultural/personal situation, to understand what layers and components it's built on, and to learn how to be more independent in thinking and living, even at the expense of convenience.

-------------------------

eliot | 2025-06-09 01:22:42 UTC | #73

A Cult AI Computer’s Boom and Bust: History of the Lisp Machine

https://www.youtube.com/watch?v=sV7C6Ezl35A

-------------------------

natecull | 2025-06-10 09:01:04 UTC | #74

[quote="akkartik, post:69, topic:332"]
Remind me, what was the deal-breaker for you with my LÖVE-based approach?
[/quote]

I'm honestly not sure, but downloading it again today I guess LÖVE is very very heavily oriented towards graphics and games, which isn't quite where my head's at. It's probably a very good game engine! My toy stuff is just text though. I'd honestly just run them in web browsers if I could open files, but browsers now go out of their way to not allow people to access their own filesystem from web pages running on their own machine.

Lua for Windows would seem to be the more "normal scripty text" kind of Lua environment, but it a) seems to come with a whole wall of added libraries that I'd rather not have, and b) has been abandoned since 2018, so I assume all those libraries are on fire and full of security vulns?

Otherwise it sort of seems like to run Lua in a simple text console I need to be running Linux, or figure out how to write a C wrapper and compile it with Visual Studio. I guess I'll be running Linux again soon enough since I have to get off Windows 10 by October (my hardware can't run 11 and I see no reason to e-waste it). But if I keep running Node, I get cross-platform-ness and a text console for free. I just can't easily do graphics if I need it. It seems hard to bridge both worlds.

Edit: Been playing with Love. It reminds me a lot of P5.js except in its own window. One thing, though, that I really want and seemingly can't have in a canvas-based system, is copy-paste. I really really like being able to generate text and be able to copy it out of a window.

-------------------------

natecull | 2025-06-10 09:12:15 UTC | #75

[quote="eliot, post:72, topic:332"]That smallness and simplicity is so satisfying and refreshing compared to the conventional modern computing stack. It’s cute, cozy, comfortable - and most of all, conceptually beautiful. At that primitive level of computing, it’s how I imagine the mystic mathematicians of [the Pythagorean school](https://en.wikipedia.org/wiki/Pythagoreanism) envisioned the underlying design of the universe. It’s how I feel about [Lambda calculus](https://en.wikipedia.org/wiki/Lambda_calculus).[/quote]

Oh, it absolutely is. At least to think about.
I spend far too much time dreaming about fantasy tiny computer environments just because it feels so good to be able to understand the basic workings of a machine, and trust that nothing is happening that you didn't make happen.

I still have a fondness for the late-DOS-era "windowing using IBM text box characters" style of system. Visual BASIC for DOS - that was a thing! I loved it so much. And fixed font text. You knew exactly how to put stuff precisely where you wanted it. No mess, no fuss. The Windows graphics toolkit was a nightmare and even HTML table layouts felt like quicksand.

-------------------------

akkartik | 2025-06-10 15:48:18 UTC | #76

I don't want to take over this thread, but maybe give [my old talk transcript](https://akkartik.name/freewheeling) a quick look. In brief, LÖVE is not perfect but it's available now and (along with a few other things, but just focusing on it here) has many good attributes while we come up with something better. It does do stuff I don't understand or use yet, but it's only 5MB so the "bloat" seems bounded.

I didn't realize this when I started using it, but its rep on the street seems to be that it has better text primitives than most game engines. I use it mostly for text-based applications and [have increasingly been liking](https://akkartik.name/debugUIs.html) having an escape hatch for graphics. [My editor](https://akkartik.name/lines.html) supports (just text) copy-paste, and is extremely reliable at this point with a 1200-line core.

And lack of releases is a strong point in my book. Lua has [very few bugs](https://lua.org/bugs.html), and LÖVE has had [3 releases since 2018](https://github.com/love2d/love/releases).

It does require a computer from say the last 10-15 years. Not very power-efficient. Also, zero support for screen readers. I think about that a lot but have made zero headway there.

Anyways, if it seems interesting we should probably take further conversation about it over to [the thread on Freewheeling Apps](https://forum.malleable.systems/t/freewheeling-apps/52).

-------------------------

natecull | 2025-06-10 22:49:36 UTC | #77

I feel like the thing that's the trickiest today with all our software fragmentation is answering: how does data get in, and get out, of an individual app, or a tool, or a component? How do things connect? What's the "bus"?

Is it "standard input and output"? Is it "the filesystem"? Is it "Berkeley sockets"? Is it an "event queue"? Is it "the stack"? Is it hooking up listeners and callbacks to an in-process object system? Or an inter-process object system? Can callbacks be hooked and unhooked at runtime, or does the tool/app/component need an external installation routine? Is the communication medium a database, and if so, is there a whole complicated setup dance that needs to be orchestrated to get database permissions, user accounts, etc? Is it a "framework" that the component needs to be registered with, like an OS system service dispatcher or a large application or a web server?

Or on the other end of the complexity scale, is it raw memory access? Is it raw CPU interrupts (including OS call interrupts)? Are there firewall permissions, DNS or perhaps entire Ethernet subnets needed? Is there a hypervisor, container, VM or runtime API, or a software-defined-network, simulating or virtualizing any and all of the above? How does debugging access work with any of these comms channels, and how can it be made safe and secure so components can't elevate and steal debugging permissions without the user's awareness?
I know it's easy to say "there are too many communication standards, let's add one more to be a universal one", but... something just doesn't feel right here. It all feels like epicycles, and we're overdue a Newton.

Of these, stack and standard I/O (pipes) feel conceptually simplest. But even stack or pipe systems seem to also need a storage system (RAM or the Unix filesystem), and a lot of complexity gets shunted off to there.

-------------------------

khinsen | 2025-06-11 09:21:59 UTC | #78

Fragmentation is a huge problem, at all levels of our software stacks. I suspect it's an expression of software people being idealists rather than pragmatists. They see what's imperfect in someone else's work and aim to do better, mostly because they feel they can. What's missing is a shared value of interoperability that is strong enough to make people actually invest effort to make it happen.

-------------------------

khinsen | 2025-06-11 09:24:59 UTC | #79

My one-line summary of @akkartik's freewheeling apps is "LÖVE as a dependency is a lesser evil compared to a browser." Which I agree with. Still, there's a deal-breaker for me in that LÖVE doesn't support network access, which I need for most projects I could imagine using LÖVE for. Looking forward to a future version that might have what I need.

-------------------------

natecull | 2025-06-11 09:49:21 UTC | #80

I've downloaded Lua Carousel and I'm playing with it now. It's a lot of fun! Reminds me very much of the old BASIC machines. Maybe I can use this to bootstrap my Lua skills from Javascript.

Also looking at the source code of Lines to see how you do copy/paste in it.

-------------------------

akkartik | 2025-06-11 09:59:17 UTC | #81

https://www.love2d.org/wiki/love.system has the scoop.

-------------------------

eliot | 2025-06-13 18:07:09 UTC | #82

**Local-First Conf 2025**

> Learn from engineers, designers, academics, startups, and indie developers who are putting local-first into practice and reaping the benefits of a cloud-optional architecture.

The conference happened last week in Berlin. I just learned about it, I think because some folks at Ink & Switch also presented there - and I've been enjoying the series of talks about local-first software. Here's a playlist:

- https://www.youtube.com/playlist?list=PL4isNRKAwz2MabH6AMhUz1yS3j1DqGdtT

How does local-first architecture relate to malleable systems? It's about shifting priority and primacy toward local data within the control of the user. As Illich put it, to “invert the present deep structure of tools” in order to “give people tools that guarantee their right to work with independent efficiency.”

I get the feeling local-first is only a piece of the puzzle, though an important and fundamental part. There is commercial closed-source software touting local-first, often combined with interoperable formats like Markdown or JSON - which is a valuable feature - but the system itself is not malleable; only the user data can be liberated.

It seems there are degrees of malleability, from the data layer (local-first, decentralized), to style/interface/behavior settings, to scripting and APIs, down to the application layer, if it's open to being modified by the user. (And further down to the operating system and hardware..)

Open source addresses that lower layer, but requires familiarity with the programming language it's written in, as well as good enough documentation, code organization, comments..
---

**Rethinking Programming “Environment”** -- Technical and Social Environment Design toward Convivial Computing

https://www.youtube.com/watch?v=qR2MAfRFBIM

> Computers have become ubiquitous in our life and work, and the way they are programmed needs fundamental improvements. The prior effort often aims at improving programming experience for people with specific technical backgrounds (e.g., programmers, end-users, data scientists), respectively. In contrast, throughout this paper, we discuss **how to make programming activities more inclusive and collaborative**, involving people with diverse technical backgrounds. We rethink the programming environment from both technical and social perspectives.
>
> First, we briefly introduce our previous technical effort to share the programming environment between the developers and users of the programs, **eliminating the distinction between programming and runtime environments** and fostering communication between them. Second, we introduce our social effort to support people with visual impairment to implement customized smart glasses that read words with a camera and speakers. We design their programming environment to consist of a software/hardware toolkit and engineers with domain expertise called evangelists.
>
> Learning from these experiences, we discuss several perspectives on convivial computing. To conclude, we argue that both technical innovations on **user interfaces for programming** and understandings on the **socio-technical aspect of domain-specific applications** are critical for the future of programming environments, and accordingly, convivial computing.

---

How A Blind Developer Uses Visual Studio

https://www.youtube.com/watch?v=94swlF55tVc

---

**The Lost Ways of Programming: Commodore 64 BASIC**

![image|687x445](upload://g4SBODV5098SWxskpCqUbT4ctuV.png)

https://tomasp.net/commodore64/

By the same author, a Czech professor of programming languages and systems..

**Write your own tiny programming system(s)!**

> Hands-on introduction to fundamental programming language techniques, algorithms and systems.

https://d3s.mff.cuni.cz/teaching/nprg077/

A series of hour-long lectures (video):

- TinyML: Tiny functional programming language interpreter
- TinyBASIC: Tiny imperative interactive programming system
- TinyHM: Tiny Hindley-Milner type inference algorithm
- TinyProlog: Tiny declarative logic programming language
- TinySelf: Tiny prototype-based object-oriented programming system
- TinyExcel: Tiny incremental spreadsheet system

What a world it would be if we could build actually practical software with the same ease as building these toy systems for educational purposes. Maybe what's missing are robust building blocks for malleable systems that are easy enough to use by non-technical people.

The C language is a supremely portable and malleable computational medium. The more I study the language and its ecosystem, the more I realize how foundational it is. If only it were a Lisp, the entire elaborate edifice of modern computing might have been more malleable all the way down to the bottom.

---

I've been vaguely thinking how **writing** is an amazingly malleable technology for **thinking made visible**, with programming languages being an outgrowth of it. Also, spoken language as a communication protocol, and how **thinking in words** is related to the development of rational logic. Whereas before words, there was **thinking in images**, a different mode of visualization and dreaming.
https://www.youtube.com/watch?v=IrS-QTLvxjA

https://tokyotypedirectorsclub.org/en/award/2025_grandprix/

..And how **design** is about a visual way of thinking that intersects with software and computers, like human-centered design or user-experience design. And how malleability is about the *design* of tools, not only about the technical aspects, which exist to serve that design. I think there's a tendency to get lost in purely technical aspects and implementation details, without stepping back to think through the design.

https://www.youtube.com/watch?v=JP2728BtJ34

---

From "The Lost Ways of Programming: Commodore 64 BASIC":

> 1. I believe that **how we interact with a programming environment** when programming is more important than the specific programming language that we are using.
> 2. This has never been widely studied and we have interesting things to learn from past systems, including Commodore 64 BASIC.
> 3. We should look at the history and recreate past programming experiences in order to learn from them, following a method that a historian of science, Hasok Chang, calls complementary science.
> 4. Reading about interactions is not enough. To get a sense of how the interaction worked, you need to experience it yourself, at least in a limited form. This is best done with an interactive article.

**Complementary science**

From [History and Philosophy of Science: The Myth of the Boiling Point](http://www.sites.hps.cam.ac.uk/boiling/Complementary.htm):

> This paper on the fickleness of the boiling point illustrates the potential of what I call "complementary science," which contributes to scientific knowledge through historical and philosophical investigations. Complementary science asks scientific questions that are excluded from current specialist science. It begins by **re-examining the obvious**, by asking why we accept the basic truths of science that have become educated common sense. Because many things are protected from questioning and criticism in specialist science, its demonstrated effectiveness is also unavoidably accompanied by a degree of dogmatism and a narrowness of focus that can actually result in a loss of knowledge.
>
> History and philosophy of science can ameliorate this situation, and seek to generate scientific knowledge in places where science itself fails to do so; I will call this the complementary function of history and philosophy of science, as opposed to its descriptive and prescriptive functions.

-------------------------

natecull | 2025-06-13 11:37:24 UTC | #83

I love the Commodore interface (because it was my first experience of computing), but playing with it again as an adult, I realise just how terrible BASIC was as a programming language. No names! Line numbers were okay, but a decent language would have had named functions.

A late 1970s hobbyist BASIC social ecosystem but running on Forth would have been an interesting fork in the path. Would it have made us smarter, faster, less eager to jump to compilers and user/programmer walls? Would it have kept the habits of open access to the machine around a bit longer? Or nah?

The Commodore thing where you could move the cursor around the screen and pressing Enter would execute the line the cursor was under - there was a magic in that. There was something even about just the cathode glow of the TV that drew you in. Also the graphics symbols - just shift a letter and you instantly have the ability to draw. We lost a lot when we lost that.
When I moved "up" from Commodore to IBM, I grieved those beautiful lines and cross-hatches for years. When I moved further "up" from DOS to Windows, I grieved the loss of even the chunky IBM box chars. We can get them back again now, I guess, if we have the right Unicode font installed. How many gigabytes of font packs does that need?

> I’ve been vaguely thinking how **writing** is an amazingly malleable technology for “thinking made visible”, with programming languages being an outgrowth of it.

Yes. The post-1945 computing culture, I believe, was glued together by Telex, even more than by punched cards. The keyboard, teleprinter and cuttable, spliceable punched paper tape was the universal human-computer interface... up to and including Microsoft BASIC on the Altair. And that's why we have text consoles and text editors even today.

Screens are nice, but in so many ways they're still an "output-only" technology without keyboards. Or without *some* way of clustering strokes into glyphs, and glyphs into larger glyphs. (Our current typewriter keyboards always annoy me: I wish we had regular rectangular arrays of keys; then we could supply our own keycaps/images and maybe build our own languages that way.)

The PET 2001 keyboard wasn't all that great, but still, just look at that bad boy. You felt you were touching something unthinkably futuristic and alien. It breathed graphics: it wanted you to draw on it, not just type.

https://www.masswerk.at/pet/

We got so close to having a more literate, shareable medium with early-1990s compound documents! I thought copyright was the problem. When StarOffice was open-sourced, I really thought we might get that happening again. But nobody in the 2000s-2010s Linux scene was interested in documents as interfaces. It was all a rush to apps, apps, apps - and then to sandboxing those apps and selling them as cloud services.

Mitch Kapor was interested in going beyond the app model - there was something very cool in Lotus Notes (even if it became a terrible IBM Enterprise behemoth), and he at least tried to get something happening on Linux with Chandler. But even he couldn't sell the dream, or explain why it was important.

-------------------------

eliot | 2025-06-13 21:02:04 UTC | #84

I wanted to export this forum thread, as I'd like to return to the many topics mentioned and study them further, as well as the replies, suggested paths, and ideas that have gone un-replied-to so far. I learned Discourse has a `/raw` path.

```
https://example.com/raw/topic-id
https://example.com/raw/topic-id?page=2
```

From this current URL:

```
https://forum.malleable.systems/t/next-oop-www-html/332/83
```

The last route part `83` is the comment ID within the thread, and `332` is the topic ID. So this link below renders the entire thread in Markdown.

- https://forum.malleable.systems/raw/332

---

The Role of the Human Brain in Programming

https://www.youtube.com/watch?v=1WC8dxMC4Xw

---

Creating a Robot for My Childhood Self

https://www.youtube.com/watch?v=8bKAo6rMBEo

-------------------------

akkartik | 2025-06-13 18:19:25 UTC | #85

[quote="natecull, post:83, topic:332"]
Also the graphics symbols - just shift a letter and you instantly have the ability to draw. We lost a lot when we lost that.
[/quote]

I don't follow this! Could you elaborate?

-------------------------

khinsen | 2025-06-13 18:53:11 UTC | #86

[quote="natecull, post:83, topic:332"]
A late 1970s hobbyist BASIC social ecosystem but running on Forth would have been an interesting fork in the path.
[/quote]

Sounds much like the [Jupiter Ace](https://en.wikipedia.org/wiki/Jupiter_Ace). Which was a commercial failure, so there never was much of an ecosystem around it.

-------------------------

natecull | 2025-06-14 04:51:30 UTC | #87

[quote="akkartik, post:85, topic:332"]
I don’t follow this! Could you elaborate?
[/quote]

I mean the graphical characters (usually above 128) from Commodore's proprietary PETSCII code, which were not representable in ASCII or IBM Extended ASCII, and until very recently were not a part of Unicode either. They are now, but they still might not be representable on the system you are using to view this. They happen to be viewable on my Windows 10 notebook right now. They might not be viewable on my Android (I'll check).

https://en.wikipedia.org/wiki/PETSCII

You could type a PETSCII graphics character (if the machine was in "Uppercase Mode") by simply shifting a letter on the keyboard. No messing around with a character selection interface, etc, etc. It made drawing on the screen really easy, and then you just had to insert `10?"` in front and you had a BASIC line that would print your image.

These things:

|Ax |NBSP |▌ |▄ |▔ |▁ |▏ |▒ |▕ |🮏 |◤ |🮇 |├ |▗ |└ |┐ |▂|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|Bx |┌ |┴ |┬ |┤ |▎ |▍ |🮈 |🮂 |🮃 |▃ |🭿 |▖ |▝ |┘ |▘ |▚|
|Cx |─ |♠ |🭲 |🭸 |🭷 |🭶 |🭺 |🭱 |🭴 |╮ |╰ |╯ |🭼 |╲ |╱ |🭽|
|Dx |🭾 |• |🭻 |♥ |🭰 |╭ |╳ |○ |♣ |🭵 |♦ |┼ |🮌 |│ |π |◥|
|Ex |NBSP |▌ |▄ |▔ |▁ |▏ |▒ |▕ |🮏 |◤ |🮇 |├ |▗ |└ |┐ |▂|
|Fx |┌ |┴ |┬ |┤ |▎ |▍ |🮈 |🮂 |🮃 |▃ |🭿 |▖ |▝ |┘ |▘ |π |

-------------------------

akkartik | 2025-06-14 05:06:18 UTC | #88

Ah, ASCII art! I see.

-------------------------

natecull | 2025-06-14 05:23:47 UTC | #89

Kind of, yeah! In that it was art constructed of sequences of symbols. But because PETSCII had actual thought put into its choice of graphical symbols, the art you could make with it was so much more beautiful than what could be achieved with 7-bit ASCII, which was primarily intended for compatibility with Telex and typewriters, not art.

The other 8-bit micros all had their own graphical characters. The TRS-80 didn't have the beautiful PET diagonals and lines, but it had 2x3-pixel grid characters which allowed for a "super lo-res graphics mode". The BBC Computer had Teletext/Videotext-compatible graphics, including colour codes, and that could make some pretty awesome and really memory-cheap splash screens. And of course the IBM PC had characters mostly intended for drawing boring corporate box charts, but when ANSI colour sequences finally hit, it was good enough to rock the BBS world. Just in time for the Web to make BBSes obsolete a few years later.

The top of the line, though, was the C64's completely reprogrammable character set, which Varvara does a pretty good job of emulating. Of course, you couldn't transmit those to any other system.

And looking at this page on Firefox on my Samsung Galaxy A12: about half of those PETSCII characters are visible, the other half are gray boxes. It seems random as to which are which. So we're still not anywhere near pervasive rollout of these ancient 1977 characters in Unicode across all devices.

-------------------------

akkartik | 2025-06-14 05:23:26 UTC | #90

I think I'm more of a graphical rather than ASCII supremacist at this point, after spending my youth on the other side of the fence. No matter how good a character set is, it's just weird to draw using a grid of characters, depend on them to be monospace, etc.
Then you're editing it in a line-oriented editor, and drawing downward requires the smarts to insert lots of spaces, moving things to the right gets complicated, etc., etc. Just use a vector-based representation like SVG, I say.

So my ideal is the computers from this era (I don't remember which; I didn't actually live through it) that would let you type in a statement of BASIC right after boot up and draw a line on screen -- _in the same space as the text_. That's what the Carousel experience aims to replicate and improve on.

-------------------------

natecull | 2025-06-14 05:55:49 UTC | #91

Yeah, I'm not saying that those of us on PETs didn't look enviously over at the C64, or the PC with CGA, which had full bitmap graphics. Of course we did.

It's just that once you were finally staring at a bitmap screen, and you found that the only primitive you had was LINE(x1,y1,x2,y2) and maybe FILL() if you were lucky... well, it felt very intimidating. You couldn't just easily doodle with premade shapes. It was like going from building with Lego to being handed a blank block of granite and a chisel.

There were lots of things that could have filled that gap: tile-based systems, libraries of vector shapes, etc. But mostly that all happened in the "application development" world, as for-pay libraries, or in large desktop publishing packages. I guess a lot of it got folded into the game development world, and then into 3D engines. Blender's still terrifying to me, though.

So an interesting experiment might be: an 80s-style bitmap machine, and then some kind of BASIC or Forth-like interactive programming system, with a constructive graphics interface of some kind. A tile or shape library, maybe. And see where that might have gotten us.

Probably SVG is the closest thing to that today. I dunno if you could build a whole desktop rendering system on it, on a small machine.

-------------------------

akkartik | 2025-06-14 08:52:27 UTC | #92

Depends on what you mean by "desktop rendering system". For me, a couple of answers seem fairly adequate for a system that rewards curiosity without selling its soul to capture the incurious:

1. When I launched Lua Carousel, one of the example screens was this set of abbreviations:

```lua
-- Some abbreviations to reduce typing.
g = love.graphics
pt, line = g.points, g.line
rect, poly = g.rectangle, g.polygon
circle, arc, ellipse = g.circle, g.arc, g.ellipse
color = g.setColor
min, max = math.min, math.max
floor, ceil = math.floor, math.ceil
abs, rand = math.abs, math.random
pi, cos, sin = math.pi, math.cos, math.sin
touches = love.touch.getTouches
touch = love.touch.getPosition
audio = love.audio.newSource
-- Hit 'run'. Now they're available to other
-- panes.
```

In particular, pt, line, rect, poly, circle, arc, ellipse. Each with an optional fill mode. That seems fine for doing most things with? I don't know if you consider this Lego blocks or granite-and-chisel. (I'll put a small doodle sketch at the end of this post.)

2. I've mentioned LuaML before, and that hypertext browser - now also bundled into Carousel for its help system - is a thousand lines or so to implement, augmenting the above graphical primitives with hierarchical blocks of text arranged in `rows` and `cols`, with the ability to configure colors and fonts. Again, I don't know if you consider this Lego blocks or granite-and-chisel.

Both these, to me, feel like I already have the "80s-style bitmap machine with some kind of BASIC or FORTH-like interactive programming system" that I want.
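And here's that doodle - a made-up example rather than anything bundled with Carousel, just the LÖVE primitives under the aliases above (it assumes LÖVE 11's 0-1 color range; outside Carousel's panes, these calls would live inside `love.draw`):

```lua
-- A quick doodle with the aliases defined above.
color(0.8, 0.3, 0.2)
rect('fill', 100, 150, 120, 80)           -- walls
color(0.4, 0.2, 0.1)
poly('fill', 90,150, 160,100, 230,150)    -- roof
color(1, 1, 0.6)
circle('fill', 280, 60, 20)               -- sun
color(1, 1, 1)
line(60,230, 260,230)                     -- ground
```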
But I'm looking forward to understanding more of what you mean by these phrases that I'm not thinking about..

-------------------------

natecull | 2025-06-15 08:29:26 UTC | #93

[quote="akkartik, post:92, topic:332"]In particular, pt, line, rect, poly, circle, arc, ellipse. Each with an optional fill mode. That seems fine for doing most things with? I don’t know if you consider this Lego blocks or granite-and-chisel.[/quote]

From personal experience as a kid: yes, I consider line, rect, poly, circle, etc to be *very much* granite-and-chisel graphics primitives compared to a character generator or font system.

Once you've got named recursive functions (but SAFE recursive functions! ie, functions that don't automatically get access to all of your program's environment space! which even languages like Lua still don't give us!), then we're getting a bit closer to something like what I mean. Add Logo-style relative-positioning turtle graphics, so everything can be repositioned on a screen. Then, on that, build font systems and libraries of premade scalable graphics blocks. We'd probably also need a bitmap/blit/sprite kind of functionality as well, starting with a way of representing bitmaps in text/code. Probably also some kind of "responsive" business to detect the size of variable-sized viewports and adjust (that's a whole pile of pain right there).

Finally, we'd also need not just graphics primitives that can be "executed" and are then immediately forgotten, with only the bitmap remaining (remember what I said about a bitmap screen being an "output-only device" compared to a text console? This is exactly what I mean by that) - we'd want some kind of graphics buffer that stores *the instructions themselves*. I mean as glyph shapes, or functions, but not just one-way compiled - retaining all their human-readable names. Something like Display PostScript is probably what it would be like. I'll gesture at a sketch of this below.

The idea being that in order to have a *literate, symbolic* usage of graphics - not just a one-way outputting of graphics symbols to bitmaps - those graphics need to be *structured* into glyphs and clusters/sequences of larger glyphs. *And that structure needs to be preserved*: that structure needs to be what's input and output, not just the bitmap. That structure is the "language". I don't mean in the sense of a "programming language" that's just immediate-mode instructions for a machine; I mean a language in the sense of a structured series of symbols with meaning attached, that can be parsed equally by humans or machines.

Am I making sense? I know this is a complicated idea to convey, because it's not really how we're used to thinking about "graphics". And specifically, because this idea is *something that's been missing* in our computing experience for decades. So since it's not implemented, all I can point to is the vague outline of the hole where this thing that should be, isn't. And some of the things that are a little like it, but aren't quite it, because we didn't get it.

To the extent that we "got something like it", we got either a very simple beginning (line/box/circle functions) or very large, highly developed, but mostly proprietary, business-based systems of apps/objects/APIs (everything from windowing widget sets through PostScript/PDF through Office through CAD systems and 3D modelling/gamedev systems) that don't really decompose or integrate with each other.
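To gesture at it in code - purely a sketch, every name here invented, describing no system that actually exists - a graphics buffer that stores instructions rather than pixels might start out like this in Lua:

```lua
-- A scene as structure: named glyphs built from primitives.
-- The structure - not a bitmap - is what gets stored, edited,
-- and exchanged. All names are invented for illustration.
local scene = {
  { glyph = "sun",    op = "circle", x = 280, y = 60, r = 20 },
  { glyph = "ground", op = "line",   pts = {60,230, 260,230} },
}

-- One consumer renders it (here against LÖVE, but any backend
-- speaking the same small vocabulary would do):
local function render(s)
  for _, item in ipairs(s) do
    if item.op == "circle" then
      love.graphics.circle("fill", item.x, item.y, item.r)
    elseif item.op == "line" then
      love.graphics.line(item.pts)
    end
  end
end

-- Another consumer reads the structure back out, names intact -
-- nothing is lost by drawing it.
local function describe(s)
  for _, item in ipairs(s) do print(item.glyph, item.op) end
end
```

The point is that `render` throws nothing away: after drawing, the scene is still a document you can query, reshape, and send somewhere else.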
One place I think where the path forked was that, as we moved from "characters" to "bitmaps", we lumped together "glyphs/shapes" and "behaviour" - and that happened roughly alongside Smalltalk, which actively encouraged mixing the two. Ideally, I think, we would have kept the graphics and the behaviour separate.

But see, for example, what happened with Digital Research's Graphics Kernel System, which started out as something like what I'm talking about, but quickly became merged into the API of a windowing system - and that was the end of any chance of it becoming a "language" between systems.

https://en.wikipedia.org/wiki/Graphical_Kernel_System

-------------------------

akkartik | 2025-06-15 08:45:37 UTC | #94

I'm not sure I'm getting it yet. But let me try this: a while ago I [shared](https://forum.nouveau.community/viewtopic.php?t=16) a 60-line prototype that I consider to be the core of what I've learned building a text editor atop LÖVE. The core is a way to track the glyphs I draw on screen in a way that lets me position the cursor on them, select them with the mouse, copy them to the clipboard, etc. It took me a while to converge on the right approach, and I'm sure it's not novel, but now that I have it, it's trivial in both dev time and run time.

Also, [my original editor](https://akkartik.name/lines.html) has always supported picking up the shapes in a drawing and moving them around. That part was easy to come up with, as I recall.

So I guess it doesn't feel like that much of a leap for me between a) what seems to you like the granite and chisel of my vector graphics, and b) what seems to me like your notion of symbolic usage. The key is that I never track the bitmaps that result from my vector drawing primitives, and I never relinquish the vector primitives that lead to a drawing.

-------------------------

akkartik | 2025-06-15 08:27:51 UTC | #95

> functions that don’t automatically get access to all of your program’s environment space! which even languages like Lua still don’t give us!

I don't quite follow this either.. I do have closures in Lua, if that's what you mean.

-------------------------

natecull | 2025-06-15 08:53:26 UTC | #96

[quote="akkartik, post:95, topic:332"]I don’t quite follow this either.. I do have closures in Lua, if that’s what you mean.[/quote]

Yes, scripting languages like Lua and Javascript have closures. But that's not the quality of "safety" that I'm referring to, and which I think is essential for any language which can be considered suitable for sending information between mutually untrusting systems. For example, for the use case of serialising graphics or hypertext documents.

The quality I mean is: it needs to be absolutely and provably *impossible*, within a phrase of such a language (let's say a function, though phrases still might not always be functions), to access any variable which is not either explicitly passed to that phrase as a parameter, or explicitly passed into the environment used to evaluate that phrase.

If we're going to exchange information between systems about structured graphics, and we're going to do it as executable Lua or Javascript code, then that code must run in an absolutely tight sandbox. It absolutely MUST NOT be able to access the builtin "system" or "OS" or even "current webpage" variable which is usually available on these scripting systems. It obviously must NEVER IN A BILLION YEARS be able to access raw RAM, the filesystem or the Internet!
It also must not be able to find out what's on the call stack, what other tasks are running, or any debugging states. It probably also shouldn't get access to a high-precision system clock, read the keyboard or mouse, or read the contents of the screen or the object structures used to draw the screen; and it must not be able to enter an endless loop or allocate RAM to exhaustion. It would be best if it just can't access *anything* other than a few highly specified graphics primitives and maybe some control/looping constructs.

Maybe the general-purpose scripting language of your choice, like Lua, can do all these things if the code is evaluated through a custom environment. Maybe Lua in particular has been battle-tested in multiplayer gaming. Hopefully. But scripting languages are often not our friends in the security battle; they often "helpfully" provide access to far too many objects. That's what created Office macro viruses - and why all corporations now warn their staff never to open PDF or DOC attachments from unknown senders. We don't want to become Microsoft Office. Even PDF, I think, removed some Turing-completeness from PostScript in order to become a somewhat safe graphics/document exchange format.

Some kind of hard limit on the depth of recursion or stack/RAM allocation is probably a requirement in this kind of case.

-------------------------

akkartik | 2025-06-15 08:55:18 UTC | #97

Ah, I see. Yes, Lua doesn't provide this level of safety by default. [But it _is_ possible to sandbox Lua code.](https://git.sr.ht/~technomancy/capabilities.fnl) (That repo uses Lisp syntax, but it gets translated fairly straightforwardly to Lua, so the capability of capabilities (heh) is available to Lua as well in principle. [A more basic example in straight Lua.](https://stackoverflow.com/questions/1224708/how-can-i-create-a-secure-lua-sandbox#1224752))

-------------------------

akkartik | 2025-06-15 08:58:44 UTC | #98

I spent a while thinking about sandboxing for my [Teliva](https://github.com/akkartik/teliva) project in 2021/2022. I gave up on it when I realized the hard part is not technical but social. A scripting language can absolutely be made rock-solid at running untrusted code. But nobody in today's society is interested in using something that rock-solid, in putting up with the level of inconvenience it entails. In some ways, Freewheeling Apps are my attempt at carving out a stepping stone to a more mature society of laypeople who internalize why security matters.

-------------------------

natecull | 2025-06-15 09:15:18 UTC | #99

Yeah, the

```
setfenv(user_script, env)
```

thing in Lua is part of the answer, I think. Ideally there'd just be an "eval(string, env)", and you couldn't blanket-"eval" something in the current environment, because 999 times out of 1000 in scripting, that's a terrible life decision which you will regret.

However, I think something even stricter might be good in a language. Something like: all names inside a function/method/block/namespace are local, and you literally cannot access any variables in the enclosing scope unless you do it through a single reserved "global" name. This wouldn't just be for safety/security, but also to make it easy to define domain-specific languages which can also be used as interactive control languages. Smalltalk maybe has this property?
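For concreteness, here's about as close as stock Lua gets to that "eval(string, env)" - a minimal sketch, assuming Lua 5.2+, where `load` accepts an explicit environment (in 5.1 you'd pair `loadstring` with `setfenv`). It only handles the "no ambient names" part; endless loops and RAM exhaustion need separate machinery (debug hooks, OS limits):

```lua
-- Evaluate untrusted code against an explicit whitelist
-- environment, and nothing else. Minimal sketch, not hardened.
local function eval_in_env(code, env)
  -- "t" = text-only chunks; refuses precompiled bytecode.
  local chunk, err = load(code, "user_script", "t", env)
  if not chunk then return false, err end
  return pcall(chunk)
end

-- The sandbox sees only what we explicitly put here.
-- No os, no io, no love, no _G.
local env = { pi = math.pi, sin = math.sin }

print(eval_in_env("return pi * 2", env))    --> true  6.2831853071796
print(eval_in_env("return os.time()", env)) --> false plus an error:
                                            --  'os' is nil in the sandbox
```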
------------------------- spenc | 2025-06-16 06:19:05 UTC | #100 I'm excited for [Oaken](https://spritely.institute/news/announcing-spritely-oaken.html), which seems to have some of your ideas for a safe sandboxed language :slight_smile: -------------------------