NeXT, OOP, WWW, HTML

I guess it’s a bit of both, autonomy and symbiotic dependence. But I have no idea how an ecosystem needs to be structured to support symbiosis of communities, and in particular favor symbiosis over competition and predation.

One rule that seems obvious (though it’s still speculative!) is “never be dependent on a single outside entity”. Any single entity can disappear, or become malicious. I depend on farmers to feed me, but I don’t depend on any single farmer. In software, we had this kind of relation when standards were common, but standards have become unfashionable. I can’t think of any software-related standard that is less than 20 years old.

2 Likes

I see it as distributing risk by not depending on an individual centralized entity/vendor, but on a free collective of shared interests. Everyone involved is motivated by mutual self-interest to work together for the group, like a flock of starlings against a hawk or falcon.

It’s also about reducing the uncertainty and indeterminacy inherent in a person, in group interactions, economics, psychology, or the nature of physics itself. There’s that idea of information being the opposite of entropy, as a measure of order and structure.

to support symbiosis of communities, and in particular favor symbiosis over competition and predation .. Any single entity can disappear, or become malicious.

It’s a slime-eat-slime world of business out there, the perpetual war of dominating resources, consolidating political power and capital, and ruling over the empire. I’d imagine there’s strategic interest in preventing such emergent behavior of interdependently supported free communities, technology and life stack, a self-sufficient and sustainable system. But perhaps this utopian is getting older and cynical of .. gestures broadly at everything. I need to walk out of the city and return to the foresty mountains.

Reading about the role of oscillations in learning and memory, I wondered about community as a living system and a pattern of behavior over time. How it learns, remembers, communicates among its members and transfers knowledge. What are the bottom-up ways in which the primitive elements share information with each other, and how does it propagate to higher layers of the organism? And from the top down, how does the whole direct the will of the organization?

1 Like

That is what I try to build, communities as living organisms.

With regards to the structure of communities, I don’t know the perfect structure for symbiosis. That is why I want malleable communities, communities whose structure changes easily.

A membrane, an abstraction, defines the constraints that a community must uphold and the conditions on the environment under which such a community can survive.

The first case enables the community to change its organization as it wills and the second identifies its dependencies.

In other words, we need abstractions, loops and community malleability.

From that, people will figure out the best organizational structure for them.

I / We need to study other scientific fields to actually find answers to some of the general questions on the structure of communities.

2 Likes

Inversely, interdependently supported free communities need to develop an immune system to defend themselves against domination. Ideally as early as possible, as in Athenian ostracism:

Ostracism was often used preemptively as a way of neutralizing someone thought to be a threat to the state or a potential tyrant.

An idea.

Case study of malleable system communities: Decker, LÖVE, PICO-8, Uxn

Those are the first examples that came to mind of active software communities where I’ve seen people learning, teaching each other, and in general having fun making silly and serious things: self-expression through programming and creative computing. They’re delightful in their own weird ways.

All have community participation on itch.io. But none of them are scalable or practical for daily computing use, one might object. With Uxn, though, its minimalist design and philosophical foundation focus on post-collapse practicality. Bootstrapped from a handful of compute primitives, it’s arguably more practical than relatively enormous commercial software systems, with their globally entangled external dependencies and reliance on corporate funding and control for their development and direction.

An aspect I love about each of these cozy, cute systems is that they’re free in important ways, both for the creator of the system and the creators on the system.

Well, I don’t know about an actual study, but there are interesting parallels among them conceptually. Perhaps some lessons can be gained from their success as ecosystems.

PICO-8 and the Search for Cosy Design Spaces - PRACTICE 2018: Joseph White

3 Likes

This sounds like a direction toward healing the existential isolation of the modern individual. And an immunity against fake communities, the artificially constructed illusion of community designed for the purpose of exploitation, like mega-churches, political parties, public spectacles and simulacra of world events.

Samizdat (“self publishing”) was a form of dissident activity across the Eastern Bloc in which individuals reproduced censored and underground makeshift publications, often by hand, and passed the documents from reader to reader. The practice of manual reproduction was widespread, because printed texts could be traced back to the source. This was a grassroots practice used to evade official Soviet censorship.

The Russian poet Nikolay Glazkov coined a version of the term as a pun in the 1940s when he typed copies of his poems and included the note Samsebyaizdat (Самсебяиздат, “Myself by Myself Publishers”) on the front page.

Tamizdat refers to literature published abroad (там, tam ‘there’), often from smuggled manuscripts.

That’s why this word comes to mind when thinking about indie publishing of magazines and books. It ties to the romanticism of pirate radio, deviant subcultural transmissions, rogue oscillations and controlled dissonance, a quiet evolution through the arts. It’s about reclaiming the public space of our minds and culture. Oh but professor, what is “culture”? I suppose it’s something like the collective living software of a community.

2 Likes

11 posts were split to a new topic: Dependencies, robustness, fragility

I have been thinking about this for quite some time: how can one survive in a hostile environment? How can one deal with economic crises or hostile attacks, like the ones we witness nowadays between China and the US (or dictatorships)?

Can an internal higher-level network between the communities lead to increased robustness? At the moment, I have only worked at the meta level: let there be easy tools to build complex social structures between communities, and they will figure out the rest.

I believe one could find resources about risk management in the complex systems community, but I do not know where.

1 Like

I’ve heard of continuity management, previously called crisis management, used in business and I think government institutions. There are international conferences with experts on the subject. Risk assessment is a big part of it: systematically listing all imaginable risks, and planning how to recover or adapt in those scenarios. They probably assume plans are always incomplete, so they prepare for surprises and unknown unknowns, to have a plan for what to do when the unexpected happens and all other plans fail. Oh, there are even ISO standards for it.

In the introduction to systems thinking, I noticed the phrase “leverage points” of a system. Those sound like risks, critical points that the system relies on for its stability, where a small change can have large effects. So managing risk is about reducing uncertainty in an inherently chaotic system and environment.

A curious wording I saw that I’ve been thinking about, how “surprises” are related to entropy and the amount of information in a signal.

Surprisal is a measure of the amount of information gained from an event, defined mathematically as the negative logarithm of the probability of that event occurring. It quantifies how surprising an event is, with rarer events yielding higher surprisal values.

In information theory, the information content, self-information, surprisal, or Shannon information is a basic quantity derived from the probability of a particular event occurring from a random variable. The Shannon information can be interpreted as quantifying the level of “surprise” of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimal source coding of the random variable.

The Shannon information is closely related to entropy, which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is “on average”.

The word “surprise” seems anthropomorphized, like it gives it a subjective dimension. Maybe that’s intentional, because it assumes a recipient of information, a subject and observer who experiences and interprets the signal into meaning. Or it’s just another way of describing probability, from “totally expected” to “wow didn’t see that coming at all”.
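To make the two quantities concrete, here is a minimal sketch in C99 (the probabilities are made up for illustration; compile with -lm):

    #include <math.h>
    #include <stdio.h>

    /* Surprisal of one event: I(x) = -log2(P(x)), in bits.
       Rare events (small p) yield high surprisal. */
    double surprisal(double p) {
        return -log2(p);
    }

    /* Shannon entropy: the expected surprisal over a distribution,
       H = -sum over i of p_i * log2(p_i). */
    double entropy(const double *p, int n) {
        double h = 0.0;
        for (int i = 0; i < n; i++)
            if (p[i] > 0.0)
                h += p[i] * surprisal(p[i]);
        return h;
    }

    int main(void) {
        printf("fair coin flip: %.2f bits\n", surprisal(0.5));      /* 1.00 */
        printf("one-in-a-million: %.2f bits\n", surprisal(1e-6));   /* ~19.93 */
        double coin[] = { 0.5, 0.5 };
        printf("fair coin entropy: %.2f bits\n", entropy(coin, 2)); /* 1.00 */
        return 0;
    }

So a “totally expected” event (p = 1) carries 0 bits, and the rarer the event, the more bits of surprise it carries.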

I think it was in relation to Stage X that I heard of the concept of software supply-chain integrity and continuity of infrastructure.

57 billion requests in Q4 of 2025 for package dependencies from vendors, in order of requests:

  • Maven, npm, Docker, YUM, PyPI, Helm, Nuget, Debian, Conan, Gradle, RubyGems, Go, OCI, Cargo, Sbt, Helm, Ivy, Composer, Terraform, Opkg, Conda, P2, Pub, Swift, Alpine, Cocoapods, Cran, VCS, Chef, Vagrant, Terraform, Bower, Ansible, Puppet, Hugging Face

In 2024, security researchers around the world disclosed nearly 33,000 new CVEs (Common Vulnerabilities and Exposures), a 27% increase from the previous year, surpassing the growth rate of packages (24.5% YoY).

Most common types of vulnerabilities

  • Improper Neutralization of Input During Web Page Generation
  • Out-of-bounds Write
  • Improper Neutralization of Special Elements used in an SQL Command
  • Out-of-bounds Read
  • Use After Free
  • Improper Limitation of a Pathname to a Restricted Directory
  • Improper Neutralization of Special Elements used in an OS Command
  • Cross-Site Request Forgery
  • Buffer Copy without Checking Size of Input
  • NULL Pointer Dereference
  • Unrestricted Upload of File with Dangerous Type
  • Improper Authentication
  • Concurrent Execution using Shared Resource with Improper Synchronization (‘Race Condition’)
  • Improper Locking
  • Improper Neutralization of Special Elements used in a Command
  • Incorrect Authorization
  • Missing Authorization
  • Integer Overflow or Wraparound
  • Improper Input Validation
  • Improper Restriction of Operations within the Bounds of a Memory Buffer
  • Uncontrolled Resource Consumption

How Organizations are Applying Security Efforts Today

  • Sourcing restrictions
  • Scanning
  • Establishing visibility and control across application pipelines

Organizations need to control, or at least have strong visibility into what is coming into their software supply chain via their developers and the dependencies referenced in their applications. Over 71% of organizations allowing developers to download directly from the internet is concerning, and a major violation of software supply chain security best practices. An artifact management solution to proxy public registries should be in place at every organization.

By the time you’re having to establish a proxy package manager to filter and verify the endless stream of software written by other people running on your machines.. that doesn’t sound secure at all - like the “hero of Haarlem” from Hans Brinker, the Dutch boy who saves his village by sticking a finger in a leaking dike.

The report shows a concerning state of affairs with the modern software and computing supply chain, systemically vulnerable and insecure, with only exceptional islands of verified safety.

What’s not mentioned is the bottom-up approach of thinking very small, starting again from scratch, to rethink and rebuild everything from first principles and primitives. As Richard Feynman said, “There’s Plenty of Room at the Bottom.” It seems to me that’s the surest way to verify an entire system part by part, down to the bottom. But maybe that’s not practical for most business situations where there’s already an existing pile of running code of unbounded complexity.


There’s a site, infinitemac.org, where you can run emulators of classic Macintosh and NeXTStep systems. Tim Berners-Lee wrote WorldWideWeb.App on this version of NeXTStep, running on a NeXTcube.

It’s based on a WebAssembly port of the NeXT emulator called Previous.

At a glance, the code is typical of emulators and virtual machines: time, signal, CPU, memory, I/O, audio, video, SDL.. Ah, I’m looking in the wrong place - the object-oriented magic of NeXT, if there is such a thing, would be on the software side: how the operating system and applications are organized and communicate with each other.

But I’m guessing those are binary blobs without source code. Then it’s worthless post-collapse: the system is not bootstrappable, it cannot “create its own parts” to build new instances of itself. Bootstrappability is a strict requirement to meet, merciless in filtering out over-engineered software - that includes the majority of commercial systems, and large language models too. If we can’t make it ourselves, it’s a risk of external dependency.

With NeXT, and general interest in retro machines, I think what’s valuable is not only the specific hardware/software implementations, but the conceptual design, how it makes you imagine a new horizon of what computers can be, a higher-order integration of art and science, aesthetics and technology. Function and form, practical and beautiful. Like how Macintosh System 7 is an eternal nostalgia, a historical fantasy and imaginary past, a childhood wonder at the magic of computers. And what is history but what we imagine it to be? At least for me, I’m motivated to study these software artifacts not for their own sake, but to consume and digest the best ideas for the purpose of making them mine, and creating new things out of that rich ground.

NeXT + OOP + WWW + HTML = ?

Next-generation networked objects and hypermedia programming language.

A play on words worthy of wooing Silicon Valley venture capitalists - or is there something real in this vortextual nexus of conceptual pearls? What would such a chimera look like?

I have a mutant language project that has been incubating for a while, for which malleable systems research is relevant. It started as a mathematical expression evaluator that grew into a small language of infix notation, a superset of JSON, familiar and easy to learn - but that’s not the important or interesting part. The curious thing was that the code parsed into an abstract syntax tree that ran on a Lisp machine/interpreter, with macros and tail-call elimination. The latter I wrote inspired by mal, the Make-A-Lisp project.

It’s a Lisp implemented in 89 languages:

  • Ada, Ada.2, BASIC (C64 and QBasic), BBC BASIC V, Bash 4, C, C#, C++, C.2, ChucK, Clojure, CoffeeScript, Common Lisp, Crystal, D, Dart, ES6 (ECMAScript 2015), Elixir, Elm, Emacs Lisp, Erlang, F#, Factor, Fantom, Fennel, Forth, GNU Guile 2.1+, GNU Make 3.81, GNU Smalltalk, GNU awk, Go, Groovy, Hare, Haskell, Haxe (Neko, Python, C++ and JavaScript), HolyC, Hy, Io, Janet, Java 1.7, Java using Truffle for GraalVM, JavaScript/Node, Julia, Kotlin, LaTeX3, LiveScript, Logo, Lua, MATLAB (GNU Octave and MATLAB), Mal, NASM, Nim 1.0.4, OCaml 4.01.0, Object Pascal, Objective C, PHP 5.3, PL/SQL (Oracle SQL Procedural Language), PL/pgSQL (PostgreSQL SQL Procedural Language), Perl 5, Perl 6, Picolisp, Pike, PostScript Level 2/3, PowerShell, Prolog, PureScript, Python2, Python3, Q, R, RPython, Racket (5.3), Rexx, Ruby #2, Ruby (1.9+), Rust, Rust (1.38+), Scala, Scheme (R7RS), Skew, Standard ML (Poly/ML, MLton, Moscow ML), Swift 2, Swift 3, Swift 4, Swift 5, Tcl 8.6, TypeScript, VHDL, Vala, Vimscript, Visual Basic Script, Visual Basic .NET, WebAssembly (wasm), Wren, XSLT, Yorick, Zig, jq, miniMAL.

The Make-A-Lisp Process

  • Step 0: The REPL
  • Step 1: Read and Print
  • Step 2: Eval
  • Step 3: Environments
  • Step 4: If Fn Do
  • Step 5: Tail call optimization
  • Step 6: Files, Mutation, and Evil
  • Step 7: Quoting
  • Step 8: Macros
  • Step 9: Try
  • Step A: Metadata, Self-hosting and Interop

This cross-platform kernel and abstract machine is what interests me. It can be a compile target for any prefix/postfix/infix-notation language that can be parsed and converted to the same Lisp-y machine instructions. The project has ten thousand stars, so people do appreciate its educational value. But I feel like there’s untapped practical potential in a simple language that can be implemented in so many languages. Like a primitive organism that requires very few elements to survive, I’d imagine such a language would have great post-collapse utility and adaptability.
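Step 5 is the part I reimplemented: instead of C-recursing on a call in tail position, eval rebinds its arguments and loops. A self-contained toy sketch of the trick in C (the micro-AST and the hard-coded countdown function are made up for illustration; they are not mal’s actual structures):

    #include <stdio.h>

    /* A micro-AST, just enough to show mal's step 5: on a tail call,
       eval reassigns ast/arg and loops instead of recursing in C. */
    typedef enum { NUM, ARG, SUB1, IF0, CALL } Kind;
    typedef struct Node { Kind kind; long num; struct Node *a, *b, *c; } Node;

    long eval(Node *ast, long arg) {
        for (;;) {                       /* the tail-call-elimination loop */
            switch (ast->kind) {
            case NUM:  return ast->num;
            case ARG:  return arg;
            case SUB1: return eval(ast->a, arg) - 1;   /* depth-1 recursion */
            case IF0:                    /* branches are in tail position */
                ast = (eval(ast->a, arg) == 0) ? ast->b : ast->c;
                break;
            case CALL:                   /* tail call: rebind arg, loop on body */
                arg = eval(ast->a, arg);
                ast = ast->b;            /* b = the callee's body */
                break;
            }
        }
    }

    int main(void) {
        /* countdown(n) = if n == 0 then 0 else countdown(n - 1) */
        Node n_arg = { ARG, 0, 0, 0, 0 };
        Node zero  = { NUM, 0, 0, 0, 0 };
        Node sub1  = { SUB1, 0, &n_arg, 0, 0 };
        Node body;                       /* forward reference for the self-call */
        Node call  = { CALL, 0, &sub1, &body, 0 };
        body = (Node){ IF0, 0, &n_arg, &zero, &call };
        printf("%ld\n", eval(&body, 1000000));   /* prints 0 */
        return 0;
    }

Without the loop, a million self-calls would overflow the C stack; with it, each tail call is just one iteration.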

I feel the same about elvm (EsoLangVM Compiler Infrastructure), a minimal C compiler implemented in, or converted to, dozens of backends, including a Turing machine in Conway’s Game of Life. As long as one of the backends survives the downfall of civilization, programs written in Universal Lisp or Forever C99 can be revived to the warmth of our descendants.

My utopian hypermedia language, whose latest name is Expreva (“expression evaluator”), has gone through rough sketches and prior generations of imperfect forms, including actually shipped products with thousands of users, from a library of shortcodes to an extensible HTML template language with loops and logic, as well as intermediary experiments with names like Bonsai (tree editor) and Eleme (reactive stateful elements). What I’m going for might be too ambitious, but I’ve thought about it for years from different angles.

It involves an integrated development environment, structural editing, an interactively explorable interface, and a live view of running code/templates. A language that unifies document structure, design/styling, and interactivity/scripting - as if reinventing HTML/CSS/JS/Wasm in one coherent design with the benefit of hindsight. And somehow all written from scratch in a cross-platform way, in C99 and/or a self-hosted Lisp-to-Wasm compiler, able to build binary executables for a wide range of targets, virtual and real machines.

Well, the proof is in the pudding - make it work, make it right, make it fast. “I’ll be your Xanadu, will you be my Shangri-La.” (© 2026 for my next album)

There’s the computational hypermedia I dream of, and the reality of the prima materia I’m working with. A related practical project is using Electron/Chromium or another webview library to open tiled transparent windows in my desktop environment, letting me use web technology to build bidirectional interactions with the backend via WebSockets. The tech stack is not ideal, but it’s the quickest route to achieve what I want - I’m exploring a text/terminal-based interface in parallel. So far I’m building my own file manager and media player, because I’m not satisfied with any of the open source solutions. I’m hoping some components will be useful for web applications with local storage or remote servers. I wish I could write, in the same language, a dynamic HTML-based app that runs in the web browser or can be interacted with in the terminal.

In the back of my mind I’m often on the lookout for common threads that tie together or reveal relationships among the myriad concepts I’m interested in. A code editor is such an inter-conceptual nexus, particularly in how it handles many programming languages in a unified way. I’ve worked with CodeMirror fairly deeply, and would like to develop more sophisticated editor features that integrate with the language, like interacting with values, suggesting keywords or snippets, documentation on hover, inline examples; a tree-view interface to visually build a program synced to the code view; always pretty-formatting code on every keystroke, so there’s no need to manually indent.

A web-based code editor would be useful for any language, as a friendly playground to explore it without having to install anything. Examples of what I’ve experimented with:

  • HTML template editor with live view on every change
  • C editor that compiles the code to Wasm and runs it on every keypress
  • Lisp editor with live view of parsed nodes and evaluated result

I can extrapolate to:

  • Lua editor for making LÖVE games in the browser
  • Lua/YueScript for Cardumem, Pharo (?) for HyperDoc
  • Uxn editor with canvas.. I checked, it already exists: learn-uxn

With CodeMirror, the editor is specific to the implementation, so I can’t reuse its data structures or logic in other contexts - for example, if I want a minimal terminal code editor for my Lisp variant written in C, or a REPL with syntax highlighting, as a single- or multi-line editor in the terminal. I’ve been reading the source code and studying:

  • jart/bestline - Bestline: Library for interactive pseudoteletypewriter command sessions using ANSI Standard X3.64 control sequences. Fork of linenoise, a minimal reimplementation of readline, by antirez, the creator of Redis.
  • kilo - A text editor in less than 1k lines of C, also by antirez. The tutorial Build Your Own Text Editor extends it further as a learning exercise.
  • micro - a modern and intuitive terminal-based text editor (written in Go)
  • fresh - a terminal editor built for discoverability (Rust)

This last one is pretty nice, the TUI (textual user interface) reminds me of the editor environment for Turbo Pascal.
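The core trick these terminal editors share, which the kilo tutorial walks through step by step, is switching the terminal from canonical (line-buffered) mode into raw mode, so the program sees every keypress as it happens. A minimal sketch using POSIX termios, error handling omitted:

    #include <stdio.h>
    #include <stdlib.h>
    #include <termios.h>
    #include <unistd.h>

    static struct termios orig;   /* saved terminal state, restored at exit */

    static void restore_mode(void) {
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);
    }

    /* Raw mode: no echo, no line buffering, no Ctrl-C/Ctrl-Z signals,
       so an editor receives each keypress immediately. */
    static void enable_raw_mode(void) {
        tcgetattr(STDIN_FILENO, &orig);
        atexit(restore_mode);
        struct termios raw = orig;
        raw.c_iflag &= ~(IXON | ICRNL);          /* no flow control, CR->NL */
        raw.c_lflag &= ~(ECHO | ICANON | ISIG);  /* no echo, buffering, signals */
        raw.c_cc[VMIN] = 1;                      /* read() returns per keypress */
        raw.c_cc[VTIME] = 0;
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);
    }

    int main(void) {
        enable_raw_mode();
        char c;
        while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q')
            printf("key code: %d\n", c);   /* echo each key until 'q' quits */
        return 0;                          /* atexit handler restores terminal */
    }

Everything on top of this - cursor movement, redrawing, syntax highlighting - is done by writing ANSI escape sequences to stdout, which is what bestline and kilo spend most of their code on.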

The terminal interface will likely survive and thrive in the post-apocalyptic computing environment, with constrained resources of CPU, memory, networking if any. Plain text in ASCII may rule the earth again.


Of loops and vortices..

In Poe’s tale “A Descent into the Maelström”, a man recounts how he survived a shipwreck and a whirlpool. It has been grouped with Poe’s tales of ratiocination and also labeled an early form of science fiction.

Where have I seen that word..

The calculus ratiocinator is a theoretical universal logical calculation framework, a concept described in the writings of Gottfried Leibniz, usually paired with his more frequently mentioned characteristica universalis, a universal conceptual language.

It’s so charming and poetic how these philosophers dared to dream so far.

Leibniz intended his characteristica universalis or “universal character” to be a form of pasigraphy, or ideographic language. This was to be based on a rationalised version of the ‘principles’ of Chinese characters, as Europeans understood these characters in the seventeenth century. From this perspective it is common to find the characteristica universalis associated with contemporary universal language projects like Esperanto, auxiliary languages like Interlingua, and formal logic projects like Frege’s Begriffsschrift.

Leibniz said that his goal was an alphabet of human thought, a universal symbolic language (characteristic) for science, mathematics, and metaphysics.

In May 1676, he once again identified the universal language with the characteristic and dreamed of a language that would also be a calculus—a sort of algebra of thought.

Leibniz gave Egyptian and Chinese hieroglyphics and chemical signs as examples of real characteristics writing. He even includes among the types of signs musical notes and astronomical signs (the signs of the zodiac and those of the planets, including the sun and the moon). It should be noted that Leibniz sometimes employs planetary signs in place of letters in his algebraic calculations.

Leibniz acknowledged the work of Ramon Llull, particularly the Ars generalis ultima (1305), as one of the inspirations for this idea. The basic elements of his characteristica would be pictographic characters unambiguously representing a limited number of elementary concepts. Leibniz called the inventory of these concepts “the alphabet of human thought.”

I see what you mean, Doktor Leibniz speaking through the centuries, we’re working on it.

In modern terminology, Leibniz’s alphabet was a proposal for an automated theorem prover or ontology classification reasoner written centuries before the technology to implement them.

3 Likes

I’d be curious to hear more about this language project of yours…! Perhaps you could share more about it (might be best to do so by starting a new thread focused on that topic)…?

Sure, I’ll try that - I think it will look like a public experiment, since I’ve struggled with bringing order to the project. Having a space to think and “build in public” may be the right motivation I need to put together working demonstrations of ideas.

2 Likes

Leverage Points

Adapted list, originally from Leverage Points: Places to Intervene in a System - Donella Meadows (PDF). Whole Earth, Winter 1997. “In increasing order of effectiveness.”

  • Parameters
    Constants, parameters, numbers - such as subsidies, taxes, standards. Changing numbers in a system, like physical constants or financial incentives, can alter system behavior.
  • Buffer size
    The sizes of buffers and other stabilizing stocks, relative to their flows. Adjusting the size of buffers, like inventories or capacities, within a system can stabilize or destabilize it.
  • Structure
    The structure of material stocks and flows - such as transport networks, population age structures. Changing the physical layout or structure of a system affects how it operates.
  • Delays
    The lengths of delays, relative to the rate of system change. Modifying the time delays in feedback loops can change the behavior of a system.
  • Negative feedback loops
    The strength of negative feedback loops, relative to the impacts they are trying to correct against.
  • Positive feedback loops
    The gain around driving positive feedback loops.
  • Information flows
    The structure of information flows - who does and does not have access to what kinds of information. Altering who does and does not have access to information can have a big impact on the system.
  • Rules
    The rules of the system - such as incentives, punishments, constraints.
  • Adaptation
    The power to add, change, evolve, or self-organize system structure. Systems can gain resilience or adaptability by changing themselves or evolving.
  • Goals
    The goals of the system. Changing the goal or purpose of a system can lead to very different outcomes.
  • Paradigm
    The mindset or paradigm out of which the system, its goals, structure, rules, delays, parameters arise. A paradigm shift can change every aspect of a system, as it alters fundamental assumptions.
  • Transcendence
    The power to transcend paradigms, the current ways of thinking.

I like the direction of thinking that the list expresses, but would prefer a more mathematical model - the definition of a “leverage point” in a system, how to map the structure and dependencies of a system, to quantify and visualize it.

For example, I want to take the LÖVE2D project’s source code and use static analysis to generate a map of all its dependencies, runtime and build toolchain, as a measure of its bootstrappability. ..Couldn’t find an automated way to do it, but here’s the list to get an idea.

Dependencies

  • C and C++ compiler toolchain
  • SDL3
  • OpenGL 3.3+ / OpenGL ES 3.0+ / Vulkan / Metal
  • OpenAL
  • Lua / LuaJIT / LLVM-lua
  • FreeType
  • harfbuzz
  • ModPlug
  • Vorbisfile
  • Theora

It’s a C++ codebase, and the new version 12 of LÖVE2D uses SDL3 - that makes sense.

For each of the dependencies in the above list, there can be a similar list of dependencies (if any), and recursively until everything is identified.
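Mechanically, that recursion is a depth-first walk with a visited set, since dependency graphs contain cycles (FreeType and harfbuzz can each depend on the other). A toy sketch in C that emits Graphviz DOT; the hard-coded table is made up for illustration and stands in for a real package-index query:

    #include <stdio.h>
    #include <string.h>

    /* Toy dependency table; a real tool would query a package index or
       parse build files. FreeType <-> harfbuzz is a real (optional)
       cycle, which is why the visited set matters. */
    struct entry { const char *pkg; const char *deps[8]; };
    static const struct entry table[] = {
        { "love2d",   { "SDL3", "LuaJIT", "FreeType", NULL } },
        { "FreeType", { "harfbuzz", NULL } },
        { "harfbuzz", { "FreeType", NULL } },
        { NULL,       { NULL } },
    };

    static const char *seen[256];
    static int nseen = 0;

    static int visited(const char *pkg) {
        for (int i = 0; i < nseen; i++)
            if (strcmp(seen[i], pkg) == 0) return 1;
        return 0;
    }

    /* Depth-first walk: emit one DOT edge per dependency, visit each node once. */
    static void walk(const char *pkg) {
        if (visited(pkg)) return;
        seen[nseen++] = pkg;
        for (const struct entry *e = table; e->pkg; e++) {
            if (strcmp(e->pkg, pkg) != 0) continue;
            for (int i = 0; e->deps[i]; i++) {
                printf("  \"%s\" -> \"%s\";\n", pkg, e->deps[i]);
                walk(e->deps[i]);
            }
        }
    }

    int main(void) {
        printf("digraph deps {\n");
        walk("love2d");
        printf("}\n");   /* pipe through: dot -Tsvg > deps.svg */
        return 0;
    }

Piping the output through dot gives a picture of the graph; the real work is filling the table automatically, which is the part I couldn’t find a tool for.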


C++, I’ve occasionally written in it, but the language is too big for comfort for my primate brain. If I want a stack I can understand, customize and extend, I suppose I’d prefer most of it written in C99 (or a cross-platform Lisp, or Lua, or maybe a subset of TypeScript) - simple enough to build executables with a minimal compiler to Wasm. And then I’d need a minimal Wasm runtime written in C; I have a fork of iwasm. But how would I build the C-to-Wasm compiler from source without a precompiled binary of the compiler itself..

Back to studying Stage 0.

  • bootstrappable/stage0.md
  • stagex - A container-native, full-source bootstrapped, and reproducible toolchain to build all the things - codeberg.org
  • live-bootstrap - An attempt to provide a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system.

The Bootstrapping Wiki has good articles and links; it talks about Stage 0 and similar efforts, exploring related topics like small C compilers, virtual machines, and minimal operating systems. Make-a-Lisp is mentioned under Ubiquitous Implementations.

Is that enough to rebuild a practical system for daily computing? There’s an ocean of great open source projects written in all kinds of languages, that I’d want to use and interface with. Would I be able to use SDL3?

It’s a big library but written in C and built with CMake, I can at least read through it and have a basic understanding of its internals.

What about sokol, a cross-platform C library with graphics and audio? It’s part of the uLisp stack I’m exploring. In its build and test script (workflows/main.yml#L40), I see:

sudo apt-get install libgl1-mesa-dev libegl1-mesa-dev mesa-common-dev xorg-dev libasound-dev libvulkan-dev

Dependencies

And the operating system. Each has its own dependencies, not only in terms of software but social aspects, groups, members, funding, goals.


In a general sense, one could look at the dependency graph of the self, or of a group of human beings in its biological/social/cultural/political/ecological contexts, to get an overview of the system and help identify its potential risks and leverage points. I suppose a negative example would be companies like Palantir at the forefront of such social network analysis and data visualization. Similarly with other innovations like large language models - is there a way such corporate technology and research can be repurposed for good, to benefit humanity? With open-source transparency and collaboration, community governance, ethical funding - socially responsible, mutually beneficial, symbiosis and synergy.

Cybernetics seems like a mix of hippie-ish philosophizing and government/military interest in the human use of human beings, studying principles of control that apply to animals and machines. It was an intellectual trend that passed, and there are newer fields, concepts and frameworks. But there’s a continuous lineage of thought and influences, how related concepts and sciences developed over decades and centuries, shaped by historical forces.

It’s the same river that produced “Augmenting Human Intellect: A Conceptual Framework” - Douglas Engelbart (1962). The research was funded by the Air Force Office of Scientific Research, then later ARPA (Advanced Research Projects Agency). At the lab he founded, Engelbart embedded a set of organizing principles which he termed the “bootstrapping strategy”, designed to accelerate the rate of innovation of his lab.

These diagrams are like “enterprise woo”: arbitrary constructions of plausible words, with no science to them. In a way it’s the same kind of magical thinking as in the Renaissance or medieval age, with hierarchies of angels and seven levels of heaven.

Somewhere I read that the computer/technology “augmenting the human” approach was in contrast to another stream of thought that approached it from the other side, the human augmenting the machine, or becoming integrated within a mechanically designed system.

There were two main philosophical approaches to human-machine interaction:

Augmentation Theory - Engelbart’s Approach

  • Focuses on enhancing human capabilities
  • Views technology as an external tool that extends and amplifies human potential
  • Emphasizes maintaining human agency and autonomy
  • Seeks to reduce manual work while increasing cognitive/creative capacity

Cybernetics/Theory of Integration - Wiener, McCulloch’s Approach

  • Focuses on integrating humans into mechanistic systems
  • Views technology as an extension of the human system itself
  • Emphasizes control mechanisms and feedback loops
  • Seeks to optimize the entire human-machine system as a unified whole

Maybe a false dichotomy, I see they largely overlap.

Where did I see the name McCulloch recently.. This thread:

Neural nets use synthetic neurons as the base computational unit. A McCulloch-Pitts neuron uses threshold saturation, excitatory and inhibitory fibers.

It’s an alien concept, I want to take more time getting to understand this.
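For reference while I chew on it, the unit itself is tiny. A sketch in C of the classic 1943 definition (the absolute-inhibition variant, where any active inhibitory fiber vetoes firing; all inputs and the output are binary):

    #include <stdio.h>

    /* McCulloch-Pitts neuron: fires (returns 1) iff no inhibitory fiber
       is active and the count of active excitatory fibers reaches the
       threshold. All-or-none, no weights, no learning. */
    int mp_neuron(const int *excite, int ne,
                  const int *inhibit, int ni, int threshold) {
        for (int i = 0; i < ni; i++)
            if (inhibit[i]) return 0;    /* any inhibition vetoes firing */
        int sum = 0;
        for (int i = 0; i < ne; i++)
            sum += excite[i];            /* count active excitatory fibers */
        return sum >= threshold;         /* threshold saturation */
    }

    int main(void) {
        int in[2] = { 1, 1 };
        printf("AND(1,1) = %d\n", mp_neuron(in, 2, NULL, 0, 2));  /* 1 */
        in[1] = 0;
        printf("AND(1,0) = %d\n", mp_neuron(in, 2, NULL, 0, 2));  /* 0 */
        printf("OR(1,0)  = %d\n", mp_neuron(in, 2, NULL, 0, 1));  /* 1 */
        return 0;
    }

With two excitatory inputs, a threshold of 2 computes AND and a threshold of 1 computes OR; a single inhibitory input with threshold 0 gives NOT. Networks of such gates are where the “immense computational power” quoted below comes from.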

I wonder if there could be a program that crawls through a software project in any language, mapping its dependency graph. I’m guessing researchers have explored the open-source corpus on GitHub and elsewhere, using static analysis over large amounts of code in various languages.

How about a project that takes the 100 most-used software libraries and inter-translates them into a set of the 10 most-used languages? Theoretically that’d benefit all those ecosystems - but practically, probably not.

In 1943, American neurophysiologist and cybernetician Warren McCulloch of the University of Illinois at Chicago and self-taught logician and cognitive psychologist Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity,” describing the McCulloch-Pitts neuron, the first mathematical model of a neural network.

Building on ideas in Alan Turing’s “On Computable Numbers”, McCulloch and Pitts’s paper provided a way to describe brain functions in abstract terms, and showed that simple elements connected in a neural network can have immense computational power. The paper received little attention until its ideas were applied by John von Neumann, Norbert Wiener, and others.

Before I forget, I wanted to highlight a phrase from earlier in the thread, about how the slime-mould’s learning and memory are an “alternative architecture” for processing information.

the oscillating networks of slime moulds are not precursors of nervous systems but, rather, an alternative architecture. Here, we argue that comparable information-processing operations can be realized on different architectures sharing similar oscillatory properties.

It makes me think that a “computer” or “programming language” in their current form is an implementation detail, one of many different architectures and mediums (media) that can perform the same computation and flow of logic, data and algorithms.

So why don’t we have a collective library of common algorithms and data structures expressed in a universal and ubiquitous language, that can be compiled to the whole range of programming languages, runtimes and platforms? “Write once, run anywhere.” I suppose package managers and central code repositories are an answer to that need, while accepting/respecting that people will use all kinds of languages they prefer.

From the abacus to quantum computers, to smart living spaces that think with and for you. A cyberpunk writer wrote decades ago, how a city is a giant computer we walk around in. Augmented with a network of satellites, drones, data centers, processors (a trillion semiconductor chips manufactured globally every year) - a human-machine system of astronomical scale, a cybernetic planet. If there is a kind of brain for the planet, how the whole can direct the will of the parts - it’s through us humans, we are the neocortex layer. And how are we doing as self-promoted captain of spaceship Earth, our stewardship of the living system travelling through the universe. The ones at the helm steering it aren’t worthy of trust, and there’s a smell of mutiny in the air, a phase transition.


Unconventional computing

According to the Center for Nonlinear Studies at Los Alamos National Laboratory announcing the conference “Unconventional Computation”:

It is an interdisciplinary research area with the main goal to enrich or go beyond the standard models, such as the Von Neumann computer architecture and the Turing machine, which have dominated computer science for more than half a century.

These methods model their computational operations based on non-standard paradigms, and are currently mostly in the research and development stage.

This quest, in both theoretical and practical dimensions, is motivated by the huge gap between information processing in nature and in artifacts and by the hope that certain challenges that computational sciences face today might be tackled efficiently by alternative paradigms.

1 Like

@eliot Check out the debtree package in Linux. It quickly creates dependency graphs like https://merveilles.town/@akkartik/107230485193333197

1 Like

Aha, I think I found the source:

Cool, it’s a Perl script that produces a DOT file for Graphviz. I might try building it from source.

..Or not: Requirements - Dependencies. I’ll apt-install it and see, it looks useful.


A small SVG renderer might suffice; it’s a friendly format to output, and can draw any graph.

  • NanoSVG - a simple stupid single-header-file SVG parser. The output of the parser is a list of cubic bezier shapes. It is accompanied by a really simple SVG rasterizer.
  • Stanford CS248 Assignment: A Simple SVG Rasterizer - Literally a do-it-yourself renderer
  • https://gitlab.gnome.org/GNOME/librsvg - A library to render SVG images to Cairo surfaces. GNOME uses this to render SVG icons. Outside of GNOME, other desktop environments use it for similar purposes. Wikimedia uses it for Wikipedia’s SVG diagrams.
  • sammycage/plutosvg - PlutoSVG is a compact and efficient SVG rendering library written in C. It is specifically designed for parsing and rendering SVG documents embedded in OpenType fonts, providing an optimal balance between speed and minimal memory usage. It is also suitable for rendering scalable icons. Used by Dear ImGui.
  • Related library: plutoprint/plutobook - PlutoBook is a robust HTML rendering library tailored for paged media. It takes HTML or XML as input, applies CSS stylesheets, and lays out elements across one or more pages, which can then be rendered as Bitmap images or PDF documents. It implements its own rendering engine and does not depend on rendering engines like Chromium, WebKit, or Gecko.
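On the “friendly format to output” point: emitting SVG needs no library at all, just printf. The hard part a renderer doesn’t solve is layout. A toy sketch in C with hand-placed coordinates (the node names and positions are made up):

    #include <stdio.h>

    /* Emit a tiny SVG drawing of one dependency edge between two nodes.
       Positions are hard-coded; computing a good layout is the hard
       part, which tools like Graphviz exist to solve. */
    int main(void) {
        printf("<svg xmlns='http://www.w3.org/2000/svg' width='220' height='120'>\n");
        printf("  <line x1='60' y1='40' x2='160' y2='80' stroke='black'/>\n");
        printf("  <circle cx='60' cy='40' r='24' fill='lightblue'/>\n");
        printf("  <circle cx='160' cy='80' r='24' fill='lightgreen'/>\n");
        printf("  <text x='60' y='44' text-anchor='middle' font-size='10'>love2d</text>\n");
        printf("  <text x='160' y='84' text-anchor='middle' font-size='10'>SDL3</text>\n");
        printf("</svg>\n");
        return 0;
    }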
1 Like

What we need is a modeling language.

To find the leverage points, you need to model your live system at a specific level of abstraction, so as to be able to detect points that could affect its behavior.

It is also necessary in a system where we want the controlled humans to be able to change their controller.
In other words, to create the self-referential loop that is necessary in a democratic autopoiesis.

The importance is in the details. There are many tendencies and paths that I personally do not agree with. It is a very difficult landscape to traverse if you do not have the prerequisite knowledge, like me.
Melanie Mitchell ran two online courses that anyone could participate in, which might be helpful.

Cybernetics does not have human agency as a goal, just the study of cyberphysical systems. We need to add human agency ourselves, through autopoiesis, self-reflection and democratic control.

That is why it can be used to control people. In fact, complexity scientists probably all have positive proposals about society, but they do not consider themselves part of a democratic autopoietic system. We haven’t closed the loop as we need to.

That is why we need a modeling language, because a modeling language can handle all forms of “computation” / interaction. Digital and other physical phenomena are treated the same in a unified way.

1 Like

In the realm of human languages, that was the idea behind Esperanto: a universal second language for everyone. It both succeeded and failed, depending on the perspective taken: its user community is big and diverse enough to demonstrate that it works in real life, but not big enough to have much impact on society at large.

What would be a reasonable goal for a programming version? My take: a communication medium about algorithms for humans that also happens to be executable.

4 Likes

The main job of graphviz is to come up with a good-enough layout automatically. That’s not an easy job, and graphviz arguably does a mediocre job (though I am not aware of anything better). I am not sure that SVG tools would be good enough for dependency graphs.

In my experience, blurring the distinction between developers and users is deeply related to how communities of practice (cf. Etienne Wenger) make learning in them more intentional and explicit. Our approach to this in the local hackerspace was to make live coding (of prose and code) an active part of our meetings and workshops, instead of leaving learning as a set of invisible rituals between (future) peers to get shared knowledge.

In that sense hackerspaces are recursive goods (cf. Kelty, Two Bits), as they are built by/for the communities that enjoy that good, and are reciprocally defined by it as the community around it (the hackerspace founders/members). This means that hacker communities create hackerspaces that create future hackers. And locally, we try to extend the hacker concept/identity to keep it open, generative and inclusive, in the sense discussed by Coleman and Wark. So some of us try to convert our hackerspace into a place where we can progressively blur the distinctions between teacher and learner across different topics, including but not limited to coding: music, library science, sewing and embroidery, among other topics/practices.

Hopefully, in our search to see computing as a medium instead of a professional class, we can get closer to a convivial computing where the constant self-production of the community, its practices and its knowledge, is more organic, and we don’t need “developers” (external or internal to the community), since computing and socio-technical malleability are a common practice inside the community, with explicit learning dynamics.

2 Likes

Similarly in my community, there is a group that tries to do the same, called “networks of learning”, inspired by Illich’s first book.

I don’t remember why I said the above; in my opinion we will need professionals as well, but both directions need researching, to find their limits.

What better way to find out whether we need professionals than by creating spaces where they are not needed?

(I will look into your references)

In our community, we still need people who know more programming than others, despite not being professional programmers or, at least, not being paid all the time for writing code (in that sense some are hobbyist coders who sometimes get paid). But the tools we build, including the explicit curricula, try to make our systems progressively “traversable” by learners as they learn more, exploiting the self-referential capabilities of our local grassroots communities on one hand and, on the other, the Pharo-powered artifacts and workflows. Cardumem will try to iterate on such practices and ideas, while being more practical as a wiki engine, compared to the Grafoscopio computational notebook, which requires more specialization.

Hopefully, with the two approaches you and I are talking about, we will see how much expertise we can deconstruct or require, how it is acquired in specific communities, and its relationship with the digital artifacts they use/(de/re)construct.

2 Likes