NeXT, OOP, WWW, HTML

First-time poster here, thank you for this forum of interesting ideas. I’ve long been fascinated by the history and future of personal computers and end-user programming. I’m enjoying reading the posts, exchanges, explorations.

Without having any particular point to make.. Sometimes I think about the birth of the world wide web, its vision of empowering people and augmenting the human intellect; about how it’s turning out in reality; and about the creative, liberating potential of the medium.

I feel like there’s a reason why the web was invented on the NeXT computer. Not to praise this particular model or brand, but to acknowledge that the conceptual design of this machine - its architecture and user interface, inherited and evolved from the explorations at the Xerox PARC research lab - must have played a role in the creative thinking process that led to the innovation.


NeXTSTEP had been such a productive development environment that in 1990, two years after the NeXT Computer was revealed, Sir Tim Berners-Lee at CERN used it to create the WorldWideWeb.

During the 1997 MacWorld demo, Jobs revealed that in 1979 he had actually missed two other PARC technologies that were critical to the future.

One was pervasive networking between personal computers: Xerox had invented Ethernet and built it into every one of its Alto workstations.

The other was a new paradigm for programming, dubbed “object-oriented programming” by Alan Kay. Kay, working with Dan Ingalls and Adele Goldberg, designed a new programming language and development environment that embodied this paradigm, running on an Alto. Kay called the system “Smalltalk”..

Smalltalk’s development environment was graphical, with windows and menus. In fact, Smalltalk was the exact GUI that Steve Jobs saw in 1979.. During Jobs’ visit to PARC, he had been so enthralled by the surface details of the GUI that he completely missed the radical way it had been created with objects. The result was that programming graphical applications on the Macintosh would become much more difficult than doing so with Smalltalk.

With the NeXT computer, Jobs planned to fix this exact shortcoming of the Macintosh. The PARC technologies missing from the Mac would become central features on the NeXT.

NeXT computers, like other workstations, were designed to live in a permanently networked environment. Jobs called this “inter-personal computing,” though it was simply a renaming of what Xerox’s Thacker and Lampson called “personal distributed computing.” Likewise, dynamic object-oriented programming on the Smalltalk model provided the basis for all software development on NeXTSTEP.


In March 1989, Tim laid out his vision for what would become the web in a document called “Information Management: A Proposal”. Believe it or not, Tim’s initial proposal was not immediately accepted. In fact, his boss at the time, Mike Sendall, noted the words “Vague but exciting” on the cover.

The web was never an official CERN project, but Mike managed to give Tim time to work on it in September 1990. He began work using a NeXT computer, one of Steve Jobs’ early products.

By October of 1990, Tim had written the three fundamental technologies that remain the foundation of today’s web..

  • HTML: HyperText Markup Language. The markup (formatting) language for the web.
  • URI: Uniform Resource Identifier. A kind of “address” that is unique and used to identify each resource on the web. It is also commonly called a URL.
  • HTTP: Hypertext Transfer Protocol. Allows for the retrieval of linked resources from across the web.
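
To make the three concrete, here’s a minimal sketch using only Python’s standard library (example.org is just a placeholder host; everything else is the protocol itself): the URI names a resource, HTTP fetches it over a plain-text socket conversation, and HTML is what comes back.

```python
import socket

# The URI http://example.org/ decomposes into a scheme (http),
# a host (example.org), and a path (/).
host, path = "example.org", "/"

with socket.create_connection((host, 80)) as s:
    # An HTTP request is plain text: a method, a path, a version,
    # then headers, then a blank line.
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    response = b""
    while chunk := s.recv(4096):
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode())                       # status line and headers
print(body[:200].decode("utf-8", "replace"))  # the start of the HTML markup
```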

Tim also wrote the first web page editor/browser (“WorldWideWeb.app”) and the first web server (“httpd”).


It’s significant that the first web browser was also a web authoring tool.

Tim Berners-Lee wrote what would become known as WorldWideWeb on a NeXT Computer during the second half of 1990, while working for CERN. WorldWideWeb is the first web browser and also the first WYSIWYG HTML editor.

The browser was announced on the newsgroups and became available to the general public in August 1991. By this time, several others.. were involved in the project.

..The team created so-called “passive browsers”, which do not have the ability to edit, because it was hard to port this feature from the NeXT system to other operating systems.

Passive browsers.. It’s a small step from that to the “passive web”, as a kind of faster horse of television.

8 Likes

So what happened to NeXT and the original vision of WorldWideWeb.app, a unified object-oriented and networked environment that was a web browser, authoring tool, and server all in one.. Did it scatter like a broken mirror, each piece reflecting what could have been?

Interface Builder, HyperCard, and Xcode..

Interface Builder is descended from the NeXTSTEP development software of the same name.

The first version was written in Lisp and deeply integrated with the Macintosh Toolbox. Interface Builder was presented at MacWorld Expo in San Francisco in January 1987.

Denison Bollay took Jean-Marie Hullot to NeXT after MacWorld Expo to demonstrate it to Steve Jobs. Jobs recognized its value and started incorporating it into NeXTSTEP; by 1988 it was part of NeXTSTEP 0.8. It was the first commercial application that allowed interface objects, such as buttons, menus, and windows, to be placed in an interface using a mouse.

One notable early use of Interface Builder was the development of the first web browser, WorldWideWeb by Tim Berners-Lee at CERN, made using a NeXT workstation.


HyperCard was created by Bill Atkinson (as described in this interview ~10:50). Work on it began in March 1985 under the name WildCard. In 1986, Dan Winkler began work on HyperTalk and the name was changed to HyperCard for trademark reasons.

It was released on 11 August 1987 for the first day of the MacWorld Conference & Expo in Boston.


I see, so Interface Builder and HyperCard were released in the same year. Must have been something in the air, or water. In the linked interview, Bill says he was working on HyperCard when Steve Jobs left Apple, and Jobs tried to convince him to continue working on HyperCard at NeXT.

HyperCard is a software application and development kit for Apple Macintosh and Apple IIGS computers. It is among the first successful hypermedia systems predating the World Wide Web.

HyperCard combines a flat-file database with a graphical, flexible, user-modifiable interface. HyperCard includes a built-in programming language called HyperTalk for manipulating data and the user interface.

“What I wanted to make essentially was a software construction kit that allowed non-programmers to put together pre-fab modules, drag-and-drop a field here and a button there. ..With automatically retained information - you put something in a field, and unplug the computer, it’s still there..”

The database features of the HyperCard system are based on the storage of the state of all of the objects on the cards in the physical file representing the stack. The database does not exist as a separate system within the HyperCard stack; no database engine or similar construct exists. Instead, the state of any object in the system is considered to be live and editable at any time.

..The system operates in a largely stateless fashion, with no need to save during operation.
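
As a loose analogy only (nothing to do with HyperCard’s actual file format), Python’s standard-library shelve module has the same feel: a persistent, dict-like store where the stored state simply is the storage, with no separate database engine and no explicit save step.

```python
import shelve

# Put something in a "field"; no save command, no database engine.
with shelve.open("stack") as card:
    card["field: notes"] = "put something in a field"

# "Unplug the computer" (exit), come back later - it's still there.
with shelve.open("stack") as card:
    print(card["field: notes"])
```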


Browsers went one way: Viola, Samba, Mosaic..

Mosaic was inspired by ViolaWWW, which was inspired by HyperCard, which was inspired by Smalltalk (and LSD), which was inspired by Engelbart’s NLS; Engelbart in turn was inspired by the 1945 article “As We May Think” by Vannevar Bush.

Editors went another way: GUI frameworks, widget toolkits, IDEs, visual app builders..


From the concept of an “object” in Smalltalk..

Smalltalk emerged from a larger program of Advanced Research Projects Agency (ARPA)-funded research that in many ways defined the modern world of computing. In addition to Smalltalk, working prototypes of things such as hypertext, GUIs, multimedia, the mouse, telepresence, and the Internet were developed by ARPA researchers in the 1960s.

..the Smalltalk language and environment were influential in the history of the graphical user interface (GUI) and the what you see is what you get (WYSIWYG) user interface, font editors, and desktop metaphors for UI design.

The powerful built-in debugging and object inspection tools that came with Smalltalk environments set the standard for all the integrated development environments that came after, starting with Lisp Machine environments.

Then I discovered that Gosling’s Emacs did not have a real Lisp. It had a programming language known as “mocklisp,” which looked syntactically like Lisp but didn’t have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things.

I concluded I couldn’t use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user’s Lisp programs.

HyperCard’s never-to-be-released successor, SK8, was originally Lisp-based and eventually migrated to HyperTalk, which evolved into SK8Script. There’s an entire lesser-known history of Apple and Lisp behind this, e.g., MacFrames, another predecessor of SK8, used Coral Lisp, which eventually was acquired by Apple and became Macintosh Common Lisp.

..SK8 (pronounced “skate”) was a multimedia authoring environment developed in Apple’s Advanced Technology Group from 1988 until 1997. It was described as “HyperCard on steroids”, combining a version of HyperCard’s HyperTalk programming language with a modern object-oriented application platform.

The project’s goal was to allow creative designers to create complex, stand-alone applications. The main components of SK8 included the object system, the programming language, the graphics and components libraries, and the Project Builder, an integrated development environment.

Bill Atkinson, the inventor of HyperCard, was a student in the Smalltalk classroom series. Alan Kay, of the Language Research Group at Xerox PARC and inventor of Smalltalk, would advise him on HyperCard.

HyperCard also included the HyperTalk programming language, which would serve as one of the inspirations behind JavaScript.

Is React a poor man’s Lisp, or an evolutionary offspring of HTML to fill a niche, dreaming of becoming a direct manipulation programming tool?

What about web servers, confined to the realm of specialists - doesn’t everyone deserve the ease of serving their own data and programs from their own personal computer or device? And emails, why can’t we send messages directly from my screen to yours, without having to go through someone else’s server?

And how about WebAssembly, a universal low-level language that runs on (almost) any platform, which (almost) any language can compile to. That sounds awfully like a mythical Lisp to express computation and to build an inter-personal computing environment with. Shouldn’t it be simple, basic, and accessible enough for children and non-technical people to learn to write - or at least visually build with?

Or Web Components. It sounds like the building blocks of the future, with which we can create our own www.app, a “web browser, authoring tool, and server all in one”.

But our timeline didn’t turn out that way, at least not yet.

4 Likes

In the main I resonate with your sentiments. But I want to push back on the “doesn’t everyone deserve” framing. The cosmos has absolutely nothing to do with fairness or what people deserve. Might as well ask if everyone deserves to be able to travel the galaxy, or to never want for anything, or to be able to live in peace.

Computers would seem miraculous a short 100 years ago. It’s incredible what we can do with them. Many things are hard, and we should work hard to make them easy. Absolutely nothing has ever happened because we “deserve” it.

For example, we can’t run webservers or send emails to someone else’s computer because there are security risks associated with them. On the one hand computers can be miraculous, but on the other hand computers are also dark forests full of predators. These are real, hard problems. There’s no reason to expect us to have solutions in our lifetimes.

In darker moments I would also argue humanity does not “deserve” such things, because we so often misuse the miracles we are granted.

3 Likes

Your original posting hit hard because I recently published a hypertext browser – without the ability to edit hypertext from within the browser.

2 Likes

Ah yes, I saw the hypertext browser recently, made with Lua and LÖVE. It’s lovely, I’m inspired by how simple and small it is, like re-thinking the web browser from first principles. The project was on my mind at times as I wrote down my rambling thoughts on where the web came from and where it’s going.

It feels Lisp-y, a tree, a list of lists. I like that all the nodes have the same structure, the type/tag/function and its data/arguments/attributes/children. If it were to go the way of Emacs - a questionable proposition - it could become a self-hosted hypertext browser and editor (written in hypertext?) to make “the data structures of the internals of the editor exposable and manipulable by the user’s Lisp programs”, or in this case their hypertext document/application. Not sure what good that would do, I just like the concept of a self-hosted language, like a Lisp interpreter written in Lisp, or a C compiler that can compile itself.
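
For fun, a hypothetical sketch of that uniform node shape in Python (the names and structure here are invented, not taken from the project): every node is [tag, attributes, children], a tree as a list of lists.

```python
# Every node has the same shape: [tag, attributes, children].
doc = ["page", {"title": "hello"}, [
    ["h1", {}, [
        ["text", {"value": "Hello"}, []],
    ]],
    ["p", {}, [
        ["text", {"value": "A tree, a list of lists."}, []],
    ]],
]]

def render(node, depth=0):
    # Walk the tree, printing each tag with its attributes.
    tag, attrs, children = node
    print("  " * depth + tag + (f" {attrs}" if attrs else ""))
    for child in children:
        render(child, depth + 1)

render(doc)
```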


..push back on the “doesn’t everyone deserve” framing. The cosmos has absolutely nothing to do with fairness or what people deserve.

True, maybe “deserve” was not the best word to use here, with its connotation of privilege and entitlement. The direction I wanted to point at was about civil rights, the right to life, liberty, and general-purpose computing, as a kind of public good like water or electricity.

Do people have a right to water and electricity? Well, maybe it’s not about human rights, or what anybody deserves, it’s about access and social infrastructure. But what if a corporation like Nestlé came in and bought up the water rights in an entire region, leaving the public unable to access it except by going through this middleman (middle-person), paying a toll for what should be (mm? questionable wording there) - common property, a public good.

That’s getting political though, and I can imagine arguments against the concept of having any “public good” at all. Who will own this public good, the state? Well they’re the ultimate middleman.

What I meant was more like: the world would be a better place if people had open access to personal computing and networked communication without too many unnecessary gatekeepers.

Ideally no third-party intermediaries at all, so we can compute and communicate among each other directly. That’s a naïve notion, I admit, that requires too much trust. As you said about email and web servers, it would be exploited by spammers, scammers, malicious actors ruining the public good.

we can’t run webservers or send emails to someone else’s computer because there are security risks associated with them. ..[C]omputers are also dark forests full of predators. These are real, hard problems.

True.. But I still want to push the limits of vulnerability (ha!) and the risk of immediacy. I’m too optimistic about technological solutions to social questions, but it feels like with cryptography (authentication, authorization) and the right kind of system design, we could achieve much more open and direct interpersonal computing.

What that would look like, I don’t know. Like the rest of this post, which has no concrete point or idea, I’m trying to make sense of the history of how we got here, why web technology is the way it is. I’m also trying to be humble, to understand that things are the way they are because generations of smart people worked on them, innovating and improving, mostly with good intentions to contribute to the common good.

3 Likes

You’re right, the precise words don’t matter – the distinction between entitlement and right and unsolved problem – if we agree on the broad sentiment.

The thing I wonder about is what one small step might look like towards the infrastructure I want to see everyone have over time. In that context, it’s interesting to ask what one very privileged person should have today that would let anyone send them a message without any intervening servers, without them taking on undue risk, that would motivate them to actually install it and use it.

I’m digging deeper into Freewheeling Apps, and I see you’re making tons of experiments and variations on the concept, including many “self-editing” applications, like the chess app where one can edit the underlying logic within the app itself. It’s a joy to see such a “meta” app that lets the user modify it as it’s running. And the smallness of each experiment is inspiring, like bite-sized thoughts exploring the problem space, discovering insights.

what one small step might look like towards the infrastructure I want to see everyone have over time

Mmm, a small but very big question. What comes to mind in the direction of such an infrastructure..

Distributed decentralized mesh networks, where everyone is a node receiving and sending messages, including dynamic executable messages. (Not necessarily textual code as interface, it could be visually built programs in a shared environment.) The hardware and software, a thin stack as close to the metal as possible, so the entire system can be understood - and (re)built from scratch - by a single person, as in permacomputing.

I imagine a world-wide network of Lisp machines with a HyperCard-like operating system. The next NeXT, the hyper-HTML. (Begone, curse of Xanadu!) But to return to the one small step..

let anyone send them a message without any intervening servers, without them taking on undue risk, that would motivate them to actually install it and use it.

Can a browser communicate with another browser directly, without intervening servers? WebRTC can do peer-to-peer communication, though it requires a signaling server for the initial negotiation (plus STUN, and sometimes a TURN relay, for NAT traversal). In any case the browser seems too big; it could never be understood entirely, much less built from scratch. It should be at most an optional dependency.

Can a native application, with or without a GUI, communicate with another instance of itself across the network? Of course, TCP/IP, HTTP, yea you know me. So there would need to be a discovery mechanism for these distributed apps to find each other.

How about a tiny self-hosted hypermedia browser/editor/server app whose instances can connect to each other with a handshake, letting you exchange messages, programs, objects and environments?
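
Here’s a minimal sketch of what such a handshake could look like over plain TCP, in Python; the greeting format and field names are invented for illustration, and real instances would of course need encryption and authentication on top.

```python
import socket, json

def serve(port=9090):
    # One instance listens...
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            hello = json.loads(conn.recv(4096))   # peer introduces itself
            conn.sendall(json.dumps(
                {"app": "hypermedia", "accepts": ["message", "document"]}
            ).encode())
            print("peer offers:", hello["accepts"])

def connect(host="localhost", port=9090):
    # ...another connects, and both declare what they can exchange
    # before any payload flows.
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps(
            {"app": "hypermedia", "accepts": ["message", "document", "object"]}
        ).encode())
        print("peer accepts:", json.loads(s.recv(4096))["accepts"])
```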

Is there a “smol web”? Oh there is, and they even use a subset of HTML. How cute! Similarly, there was that Gemini protocol.

Gemini is an application-layer internet communication protocol for accessing remote documents, similar to HTTP and Gopher.

I think that’s getting closer, something like this. Well, I’m going to be thinking for a while about what this “one small step” can look like, toward the kind of infrastructure I want to see.

4 Likes

Since we are dreaming of possible futures here, I’ll add mine: I’d like to see simple but extensible protocols for a future Web. Gemini, for example, is too limited for what I’d like to be able to do, but it would nevertheless be sufficient for 90% of cases. Why not have Gemini/base (or some HTML subset instead), and then optional add-ons? Clients, servers, or peers would negotiate which add-ons they can use together, as in the sketch below.
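
A toy sketch of that negotiation in Python (all the add-on names are invented): the base protocol is always shared, everything else is optional, and the usable feature set is simply the intersection of what both sides offer.

```python
BASE = {"gemini/base"}

def negotiate(ours: set, theirs: set) -> set:
    # Both sides can always fall back to the base protocol;
    # add-ons are usable only if both ends support them.
    return BASE | (ours & theirs)

client = {"inline-images", "forms", "styles"}
server = {"inline-images", "styles", "streaming"}
print(negotiate(client, server))
# -> {'gemini/base', 'inline-images', 'styles'}
```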

4 Likes

@eliot Beyond discovery there’s the problem of firewalls. The world outside our computers is increasingly hostile, and firewall rules increasingly necessary. But that means it takes specialized knowledge to poke a hole through your computer’s firewall rules and expose a service to the world outside your computer.

I’d say the reason everybody cannot run a server today is the hostility of inter-computer space and the difficulty of working with the life-support systems that help our computers survive in inter-computer space.

3 Likes

Hi all! On the topic of sending messages from one screen to another without jumping through central servers, I wanted to briefly mention two projects I have experimented with.

The first is the Reticulum Network Stack, an alternative to the traditional TCP/IP + TLS stack that allows a collection of computers to self-organize into a mesh network. A Reticulum network provides p2p encrypted links using a ~200-byte handshake. The low data cost to establish connections makes it usable over all sorts of weird hardware transports, including LoRa and other types of packet radio.

Reticulum is authenticated and encrypted at the packet level, and has a massive address space, so firewalls and NAT traversals aren’t really needed[*]. From the developer side, having all my devices directly addressable over a “private mesh” has been really wonderful to build small networked applications on top of.

[*]: For private networks, you can create a whitelist of allowed peers, which works great. But for adversarial public networks, I’m not exactly sure what the story is for spam-prevention. It’s unclear to me whether any architectural decisions of RNS make it easier or harder to DDoS compared to TCP/IP.
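
To give a feel for the developer side, here’s a sketch adapted from the example programs that ship with the RNS Python package; treat the API details as approximate rather than authoritative, and the app/aspect names as made up.

```python
import RNS

reticulum = RNS.Reticulum()   # join the mesh via the configured interfaces
identity = RNS.Identity()     # our cryptographic identity

# A destination others can address; SINGLE means traffic is
# encrypted to this one identity.
destination = RNS.Destination(
    identity,
    RNS.Destination.IN,
    RNS.Destination.SINGLE,
    "example_app", "inbox",   # app name and aspect (invented here)
)
destination.set_packet_callback(lambda data, packet: print("got:", data))
destination.announce()        # tell the mesh how to reach us
```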

How about a tiny self-hosted hypermedia browser/editor/server app whose instances can connect to each other with a handshake, letting you exchange messages, programs, objects and environments?

The Reticulum Community has their own text-based browser that uses its own home-grown markup language called “micron”: GitHub - markqvist/NomadNet: Communicate Freely
Interestingly enough, peer & content discovery on Reticulum is done primarily by attaching small amounts of data to the announcement packets that the underlying mesh routing algorithm uses to organize the network, so if you connect to a community network and listen for a day or two, you’ll come back with a list of sites that others have announced, without needing a central index or search engine.
Right now you can only really exchange micron and text messages between NomadNet sites, but there’s no reason someone couldn’t hook up a Smalltalk environment over Reticulum and use it to send objects and/or VM images over the wire. Overall I think RNS offers an interesting networking base to build an alternative “smol web” or similar.

The other project, which more directly tries to address the security concerns of p2p, is Spritely Goblins, which is trying to build a capability-based collaborative object system that will also work with the web. Allowing direct connections from the public modern web is scary, but I really do think capabilities, paired with modern authentication & encryption primitives, give us a strong tool for doing this safely. For further discussion on how capabilities can provide this, there is a great talk by Mark S. Miller covering exactly this: Architectures of Robust Openness – Mark S. Miller keynote at ActivityPub 2019

7 Likes

Thanks @avon for bringing up Reticulum. Do you know if it requires adjusting any existing firewalls a computer may already have?

1 Like

Also, do you happen to have any pointers to documentation about NomadNet’s markup language? I’d love to read more about it, but can’t find anything in the repo beyond the mention in the Readme.

Thanks @avon for bringing up Reticulum. Do you know if it requires adjusting any existing firewalls a computer may already have?

The unsatisfying answer to this is that it depends on the underlying point-to-point connections you use between nodes. You could use a packet radio to connect to a nearby node, and in theory that needs only a serial connection. You can use traditional TCP or UDP, and for that you (usually) need to do the painful port forwarding/firewall setup.
An interesting option that Reticulum also supports is I2P, which will be less performant than TCP or UDP but in theory allows users to connect to each other over the internet without needing to set anything up manually.

Tor and I2P are actually great at doing automatic NAT traversal and giving users a relatively safe way of exposing resources on a device to the global internet. For example, right now I can use OnionShare to host a little HTML file on my laptop, and it will give me a long string of letters and numbers that anyone in the world will be able to visit via the Tor browser. No firewall configuration needed.

If trading off privacy for better performance is deemed acceptable, you could also potentially use a project like Iroh or Yggdrasil to do the p2p connections. In this same vein of “clearnet p2p” I think it’s worth mentioning the now-defunct Beaker Browser and its spiritual successor, Agregore. Both are very cool projects trying to use p2p tech to make browsers content-creation machines.

Also, do you happen to have any pointers to documentation about NomadNet’s markup language? I’d love to read more about it, but can’t find anything in the repo beyond the mention in the Readme.

Ah, sorry I couldn’t get back to you sooner; this actually stumped me for ages. The only documentation around for Micron is inside NomadNet’s “Guide” section. If you don’t want to download and install NomadNet, here’s the source code for the guide page where you can kinda decipher the general syntax: NomadNet/nomadnet/ui/textui/Guide.py at master · markqvist/NomadNet · GitHub

And here’s the micron parser code if you’re interested: NomadNet/nomadnet/ui/textui/MicronParser.py at master · markqvist/NomadNet · GitHub

2 Likes

Thanks @avon for all those pointers to the Reticulum universe, of which I had never heard before!

Looks interesting in many ways, but also a bit scary, because all of that infrastructure software is written in Python and thus likely to break frequently.

2 Likes

Meshtastic, a LoRa mesh radio project.

An open source, off-grid, decentralized, mesh network built to run on affordable, low-power devices

I’d read about it briefly before, but today I learned how this technology was useful during the 2025 Iberian Peninsula blackout, which affected Spain and parts of Portugal and France.

Someone from Portugal posted on a forum about preppers and digital preparedness.

No electricity (still none now), and for a long time we had zero cell reception — even now it’s patchy and unreliable.

The Meshtastic community absolutely came through for us: people shared real-time updates, advice, and positive vibes. It made a HUGE difference for our safety and peace of mind. Honestly, we felt connected even when everything else was down.

Another commented:

When electricity, internet, and phone lines failed, Meshtastic still ran strong, since it is a completely standalone over-the-air radio mesh network. Everyone who takes part in the mesh helps strengthen and extend it, so if one node goes down, the mesh ‘rebuilds’ itself to find alternate paths.

There are public channels, great for getting info like severe weather alerts, major traffic issues, and daily weather forecasts; private channels for select family/friends; and direct private messages. All of these can be passed along the mesh to their destination while retaining 256-bit encryption. No license of any kind is needed for them.

It’s still a sort of “hobbyist” thing, but it’s very quickly becoming popular, and during the European blackout, Meshtastic proved itself to be a great tool to pass along real-time updates, advice, and help.

Apparently Ukraine has one of the biggest mesh networks. A link to a Grafana interface: Meshtastic Ukraine.


That sounds like an alternative infrastructure similar to the web, but more decentralized and distributed. Though with its very low data bandwidth, it’s for small messages, not for web-like usage.


@avon Thank you for suggesting Reticulum Network and Spritely Goblins. Aah yes, down the rabbit hole I go.

3 Likes

Mark S. Miller, I learned about him from the talk linked above.

Brilliant phrase, robust openness.

He is known for his work as one of the participants in the 1979 hypertext project known as Project Xanadu; for inventing Miller columns; and as the open-source coordinator of the E programming language.

Prescient in many ways. Also known for his work in object capabilities, and he’s been involved in the development of WebAssembly.

Miller columns (also known as cascading lists) are a browsing/visualization technique that can be applied to tree structures. The columns allow multiple levels of the hierarchy to be open at once, and provide a visual representation of the current location.

It is closely related to techniques used earlier in the Smalltalk browser, but was independently invented by Mark S. Miller in 1980 at Yale University. The technique was then used at Project Xanadu, Datapoint, and NeXT.

Browsing tree structures.. I wonder what navigating a tree structure of local and remote content would look like, maybe similar to mounting a file system over SSH.

Miller’s research has focused on language design for secure open systems. At Xerox PARC, he worked on Concurrent Logic Programming systems and Agoric Open Systems. At Sun Labs, he led the development of WebMart, a framework for buying and selling computing resources (network bandwidth, access to a printer, images, CD jukebox etc.) across the network.

It seems like a precursor to ideas in the blockchain and cryptocurrency scene.

..Miller has been pursuing a stated goal of enabling cooperation between untrusting partners. He sees this as a fundamental feature required to power economic interactions, and the main piece that has been missing in the toolkit available to software developers.

His most prominent contributions have been in the area of programming language design, most notably, the E Language, which demonstrated language-based secure distributed computing.

“Object capabilities” sounds like a fine-grained permission system where object instances start out in a secure sandbox and are gradually given access to the outside world (memory, file system, network, I/O of any kind)..

WebAssembly has an architecture like this, on a per-module basis. Or rather, WASI (the WebAssembly System Interface) does.
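
Here’s a toy illustration of the object-capability idea in Python (my own sketch, not WASI’s actual mechanism): code starts with no ambient authority and can only use the capability objects it is explicitly handed.

```python
from pathlib import Path

class ReadOnlyDir:
    """A capability granting read access to one directory, nothing else."""
    def __init__(self, root: Path):
        self._root = root.resolve()

    def read(self, name: str) -> str:
        path = (self._root / name).resolve()
        if not path.is_relative_to(self._root):  # can't escape the grant
            raise PermissionError(name)
        return path.read_text()

def untrusted_component(docs: ReadOnlyDir):
    # This code can read from docs, but it has no handle on the rest
    # of the filesystem, the network, or any other I/O.
    return docs.read("notes.txt")

# Grant access to ./public only (assumes ./public/notes.txt exists).
untrusted_component(ReadOnlyDir(Path("public")))
```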

2 Likes

That’s definitely what I dream of too. And I worry about how to make those “dynamic executable messages” secure on tiny home machines with untrained system administrators.

In the corporate world, the rise of ransomware in the last five years has led to absolute paranoia and a centralized surveillance culture about anything executable. Compilers are blacklisted. EXEs require code certificates to run. Scripting languages are being blocked. All email attachments are monitored. All SSL encrypted “private” web requests are being intercepted and decrypted and deep-packet-inspected by the corporate firewall. All process activations, all command lines, and sometimes all keystrokes are being centrally collected and stored as evidence of intrusion. (Often, all this intimate data is being stored in foreign countries by foreign-owned corporations!) All passwords are also being stored centrally, often also in foreign countries. All personal privacy is gone, just vaporized.

It’s an absolute repressive crackdown, and will have major anti-democratic and authoritarian implications on a planetary scale, but it’s being driven by the utter failure of the “lolz no responsibility for malware sux to be u i guess” security culture of desktop operating systems over the last 30 years. And the “ship before it’s done and just keep on fixing it in post” mentality.

I wish we could have some kind of tiny, understandable, and verifiably-not-insane object model that home users could run where you know that in the worst case, if a particular software object is subverted or runs wild, there are hard limits on how much of your entire life it can destroy.

We used to have such hard limits, by accident, back in the days of removable floppy disks. You ran a virus? It can only touch the media you’ve inserted, and a hard boot + a read-only tab gives you a trusted known-good OS.

We need something like that again, but in a form that can scale.

Edit: There are two main ways malware can destroy your life, which a hypothetical tiny home machine needs to protect against:

  1. By deleting / modifying local data or code. Protected against with “virtual removable media”, i.e., sandboxing read and write permissions, read-only / copy-on-write files, and automatically doing backups/versioning (for data that is safe to persist).
  2. By transmitting secret data. Protected against with “virtual removable cables”, i.e., sandboxing network read/write permissions.

Fundamentally, we need to start with filesystem and network permissions being something you can grant on a per-folder or per-channel level, without an app’s knowledge or involvement. Ideally the network would be something like a distributed filesystem, so the two would be one concept.

4 Likes

OLPC XO-1, Children’s Machine, $100 laptop. Circa 2007-2012, RIP.

..low cost laptop computer intended to be distributed to children in developing countries around the world, to provide them with access to knowledge, and opportunities to explore, experiment and express themselves (constructionist learning).

The rugged, low-power computers use flash memory instead of a hard disk drive (HDD), and come with a pre-installed operating system derived from Fedora Linux, with the Sugar graphical user interface (GUI).

Mobile ad hoc networking via 802.11s Wi-Fi mesh networking, to allow many machines to share Internet access as long as at least one of them could connect to an access point.

This is (would have been) a realization of Alan Kay’s Dynabook concept with (at the time) modern technology choices and evolved conception.

Whenever the laptop is powered on it can participate in a mobile ad hoc network with each node operating in a peer-to-peer fashion with other laptops it can hear, forwarding packets across the [local] cloud. If a computer in the cloud has access to the Internet—either directly or indirectly—then all computers in the cloud are able to share that access.

I wonder why this project didn’t blossom - I guess not enough government funding or public support. And why no other large project emerged as a successor to carry on the concept. What were the flaws in this project? Maybe just economics.

These days schools can provide consumer laptops, likely with Windows and built-in surveillance, or expect every student to carry a mobile phone with enough computational freedom to do their work, such as accessing ChatGPT.

I hope there are people advocating for healthier technology for education, with free and open-source software, GNU/Linux et al, that respects privacy and security, with locally running AI/LLM, etc.


Cyberdecks. There’s a hobbyist interest in building your own retro-futuristic personal computing device.

These are based on components like ESP32, Arduino, Raspberry Pi, or Intel NUC. With wireless data communication (WiFi and Bluetooth), sometimes long-range radio (LoRa messenger), built-in GPS, connector to FlipperZero, SMS/cell phone, solar energy, etc.

I haven’t heard about inter-connecting these cyberdecks, but surely it’s possible by integrating with a local area network or other protocols, such as device-to-device communication with ESP-NOW - a sketch below.
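
For example, a hypothetical sketch in MicroPython on an ESP32, following the documented espnow module (details vary by port and firmware, and the peer MAC address here is made up):

```python
import network
import espnow

# ESP-NOW rides on the Wi-Fi radio, but needs no access point.
sta = network.WLAN(network.STA_IF)
sta.active(True)

e = espnow.ESPNow()
e.active(True)

peer = b"\xaa\xbb\xcc\xdd\xee\xff"   # the other deck's MAC address
e.add_peer(peer)
e.send(peer, b"hello from my cyberdeck")

host, msg = e.recv()                 # block until a message arrives
print(host, msg)
```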


There’s a picture I saw from the recent blackout in Spain, where a crowd is gathered around a battery-powered radio, listening for the news.

No cell phone reception, no Internet. People had to resort to a more primitive and reliable technology like the radio.


What I’d love to see is more grass-roots tech, hardware and software with hand-made quality, local first, small-scale design. And it’s happening, thanks to the proliferation of affordable high-tech components. (Which does depend on global infrastructure, robust but still vulnerable. Nobody can produce a CPU in a home lab.)

If I were to build my own cyberdeck, I’d think about incorporating radio and Meshtastic somehow. Ideally with both a radio receiver and transmitter. With additional devices like that, an ad-hoc web-like network protocol could be built on top.

4 Likes

We know perfectly well how to do what you describe. It doesn’t happen for socio-economic reasons.

1 Like

Socio-economic reasons are just as real as physical reasons. In other words, we don’t know how to do it. Some of us just have the illusion of knowing how to do it.