The Idea Processor, or "What is the use case?"

Over on the “Next, OOP, WWW, HTML” thread, Akkartik challenged me with the question:

It’s not clear to me what use cases motivate your search.

I’ve been trying to bite my tongue and I’m failing… I feel some impatience with this thread

I’m less interested in coming up with the perfect tools that make doing any abstract thing a pleasure, and more interested in getting on with the doing of concrete things in rigorous even if unaesthetic ways.

And that brought me up short. I hadn’t thought that the “use case” I have in mind and keep sketching out approaches to was either abstract or frustrating and impatience-causing, but apparently it is!

I mean, that’s not wrong. My search is a source of frustration to me personally. As a use case it must seem “abstract”, since nobody seems to have built something that implements it yet; I keep looking at lots of software and going “no, that’s not it, that’s maybe part of it, but it’s not it.” And it is important.

At least that’s my feeling. It is hard to translate that feeling into “the doing of concrete things”, yes. Or even into a list of desirable qualities. And my own sense of frustration at my own inability to do that translating maybe shows through, where perhaps it shouldn’t.

So yeah. What is the magical it that I’m looking for - the use case?

And why did the Commodore PET 2001 in particular - more than other 1977 8-bit machines - spark partial recognition of that magical it for me?

The short answer, I think, is that what I’m looking for is an “idea processor”.

An idea processor might not be what everyone else interested in “malleable software” is looking for, which is probably why there’s confusion over why I want this very specific, perhaps very odd, and probably very hard to obtain, list of wants. Why I think this thing that doesn’t exist is so important.

The name “idea processor” came to me, and has stuck in my head ever since (yes, for 40 years), from a January 1984 Byte Magazine article about Thinktank by Living Videotext. That product was essentially just what we now call an “outliner”, a word processing mode that got quickly subsumed into Microsoft Word and, I suppose, Emacs Org Mode:

To date, most personal computer software has enabled us to do familiar tasks more efficiently. With a word processor, for example, we can prepare documents, alter them, and generate clean copies more easily than is possible using a typewriter, paper, liquid correction fluid, and an eraser. While the Block Move commands in word processors make rearranging text simpler and encourage writers to revise, nothing about word processors makes it easier to create a document when the ideas are still in a formative stage or to analyze a document that is already complete.

Spreadsheets are the great exception. They let us analyze related numerical parameters in a way never before available to individuals. Some may dispute this view, saying that spreadsheets just let us look at more alternatives than we could using paper and calculator. The contribution of the spreadsheet program, in this view, is merely an improvement in efficiency over customary methods.

But spreadsheet software contributes much more than increased efficiency. As you watch the effects of changing one figure ripple through a whole model, you are able to think in larger terms than a calculator and paper allow. Furthermore, the visual rippling itself sometimes is tantamount to a graphical simulation of the problem under study. You gain a better understanding of relationships in the model and a feeling for the probable consequences of different kinds of changes. When a change causes startling results, you realize that either the model doesn’t reflect reality or the reality is different in some important way from what you thought. The spreadsheet software provides a ready means for analyzing the startling result.

The time has come for text-editing software to surpass the power of the paper operations it mimics. To date, word processors have taken conceptually complete documents and printed them out neatly. The most interesting and difficult part of preparing many text documents, however, comes much earlier— when concepts are inchoate and their interrelationships are dimly understood. Text editors for programmers have provided “macros” that permit automatic execution of a series of operations, but the operations chained are usually a series of search-and-replace operations designed to achieve a result that conforms to the rigid syntax of a programming language.

Can software help us grope through the earlier stages of text composition? Can it help us analyze finished documents?

Two programs that go beyond word processing are now on the market: Thinktank, from Living Videotext in Palo Alto, California, and Zyindex, from Zylab Corporation in Chicago, Illinois. Both these programs recognize that a collection of words can be much more than a finished document. They give the user new ways of getting hold of information contained in text files.

In the case of Thinktank, available for the Apple IIe for $150 and the IBM PC for $195, you obtain information through a tree structure. A document is a large outline. There are headings and subheadings ad infinitum, but you see only as much of the outline as you wish. The EXPAND and COLLAPSE commands control what you see. Thinktank displays a "+" in front of every heading that has further information under it. You know you have reached the end of a branch when a "-" sign precedes a heading. If you use the EXPAND command on an item that is preceded by a "+", the next level of branches will be displayed. You can expand your way through level after level. When you want to concentrate on the more general levels of the outline, you use the COLLAPSE command to hide the levels that give details. By expanding and collapsing, you can delve into the tree of data or concentrate on only the broad headings. You can also move items about in the structure, promoting them to higher levels or moving them to other parts of the outline. You can insert new items and merge old ones as well.

Thinktank, then, is most useful when a document is in the formative stages. As your ideas change and various aspects of them become more and less important, you can alter the document to reflect the changes. However much rearranging you do, Thinktank leaves you with a neat outline that looks as if you understood the subject when you started out.

In a way, Thinktank is analogous to a spreadsheet. A spreadsheet lets you rearrange numbers and relationships until you find a coherent arrangement that shows how to attain your goals. Thinktank manipulates text in a way that lets you rearrange its internal logical relationships until the structure supports a main idea. A spreadsheet is a number manipulator. Thinktank is a text manipulator.

Living Videotext calls Thinktank an “idea processor” to distinguish it from a word processor. While the term is a bit optimistic, it will have to do until a better one comes along. Thinktank’s specific contribution is to apply tree structures to text. Its more general contribution is to treat a body of text as a database whose parts happen to be text.

Zyindex, too, treats text as a database. This program lets you search a body of text for occurrences of any word, or of any two or more words within a specifiable number of words of each other. What makes Zyindex so much more valuable than the Search function of a word processor is the availability of Boolean operators. You can look for any occurrence of either “Johnson” or “Holmes” within 100 words of “embezzlement.” Or any occurrence of both “Johnson” and “Holmes” within 100 words of “embezzlement.”

If anything is holding back development of software for text manipulation, it is the failure of those of us who create, edit, and analyze text to specify features that would be useful to us. In the age of the typewriter, there was no point in dreaming of such features. In the age of the personal computer, expressing our dreams may result in products that make such features a reality.

So that was 1984. Some of the “idea processor” (the easy parts - the outliner) moved into Word and other word processors. The free-text search part moved into search engines and, basically, became Google. (Because everyone stopped doing any kind of free-text search on their own systems).

LLMs are now threatening to take the “idea processor” idea, combine it with natural language, and bring it fully, lurchingly, to life. I say “threatening” rather than “promising” because the LLM approach deeply disturbs me and is likely to massively change the world for the worse (arguably it already has). LLMs are huge and centralized, and they are both a hallucination engine and a privacy disaster. I want something that does something similar, but on a small, local, and predictable level.

This is why my first attempt in this direction is SUSN, which is both a syntax / datamodel and a tiny JavaScript script for Node.js, which together implement an outliner and a tiny search engine. Some of SUSN’s weird features as a syntax (like why I don’t just use JSON or even Lisp S-expressions - why those aren’t “good enough” for the use case) are there specifically to enable freeform text entry, on multiple devices (a standard Windows notebook and an Android phone), with as little friction as possible.

I’ve used SUSN on a text/database of my own creation, specifically to track semi-structured data which begins as ideas, and I’ve found it very useful. It solves a particular problem that only I have. But it’s not the whole of the solution, because it’s not the whole of the use case. It’s one tiny corner of it.

My idea of an “idea processor” is not the same as Living Videotext’s, which yeah, was just an outliner. I see it as a thing somewhere in the space of Smalltalk and the Dynabook. A little adjacent to the Commodore PET, but not it.

Let me digress first to explain why the PET 2001 in particular sparks a sense of recognition in me. What its use case is and isn’t.

In short, I believe the PET was intended as a Computer Aided Instruction machine for the classroom. There were three contending forces in 1977 for the microcomputer: one, videogames; two, business; three, education. Computers in education, specifically between 1960 and 1984, were seen through the lens of CAI.

The definitive early CAI system was PLATO. Alongside PLATO was PILOT, and then BASIC (which was essentially interactive FORTRAN; the line numbers emulate a deck of punched cards). CAI was based around the idea of educational resources which were programs: which would be both simulations, and tests.

There were dedicated CAI languages, but BASIC was picked because it was also popular in education. Mathematics was in BASIC because it was in FORTRAN. String handling was in BASIC (while it was very minimal in FORTRAN) because the CAI concept was that the machine would prompt a learner with text and receive text answers.

To standard BASIC, the PET added graphical characters and cursor animations - an otherwise strange and unmotivated thing to add - because, I believe, it was intended to do the sort of thing that Macromedia Authorware and Director would do later. Authorware itself was a descendant of the PLATO project.

The PET, then - I think like the Apple, which may have had broader use cases, but which did heavily target education - was intended as a “courseware machine”. Not just “playback” of courseware (simulations and exams), but also its creation. That was another reason for using BASIC - so that teachers could quickly make courseware of their own.

Education wasn’t the only use case. Even by launch time, a name which would have made perfect sense as “Personal Electronic Tutor” became the extremely awkward “Personal Electronic Transactor”, and then Commodore quickly followed the PET with the CBM range, as “Commodore Business Machines”, emphasising a business-friendly typewriter keyboard and printers.

But the education and CAI (and PLATO) DNA remains in PETSCII, those “beautiful” characters. Why are they so beautiful when they don’t have to be? So that it’s quick and simple to create courseware and animations with them.

The early Commodore magazines, like the early Apple magazines, and like Creative Computing and the BASIC scene before them, were full of educational and CAI use-cases. By around 1984, the CAI tide had fully retreated, and gaming and business had taken over. There was still educational software, but it felt “lame” and wasn’t a driving force like it had been in 1977.

But it was that educational vibe of the PET that hit me as a kid, and particularly the sense that a computer could be not just a business or gaming thing, but a true “mental amplifier” or “bicycle for the mind”: not just a distraction or an efficiency machine, but something that could help you explore, develop, amplify and transmit ideas in themselves. Particularly through simulation.

I never had the chance to experience PLATO (only a very few highly privileged college students could, and I was in the wrong country), but I did touch just the edge of that scene through the PET and its associated literature. I could feel the aspirations, if not the concrete achievements, of the CAI drive.

Those aspirations - as well as those of Alan Kay later on - woke something in me which, 40 years later, has never slept. I’m sorry if that something is “abstract”: it has to be, since it’s not yet concretely realised; or it is, but only in small pieces so far. But it’s not vague either. It has a centre and it has edges. It has things it clearly isn’t, and things that it’s more and less like.

CAI itself is not the “idea processor”. CAI (like “courseware” today) was structured around the idea of a machine running “modules” that students would interact with. For that use-case, a standalone machine even with a tape drive was fine. PLATO had a network, but the PET couldn’t afford that. It wasn’t so much about document production. It was about students experiencing an exam or simulation - which could be very much interactive, and a little bit tailored to the student (which is why all those BASIC programs asked you your name), but didn’t really need to exist in a networked environment. The output was the learning the student walked away with.

An idea processor would be more than that. Kay’s Dynabook captures more of that dream. What we have in laptops and mobiles today, however, is still not really even the Dynabook. At best, they are substrates on which a Dynabook could one day be built… if Silicon Valley were not currently determined to destroy the ability to build one.

The specific things I want in an idea processor, and the reasons for needing them, are:

  1. Ideas when they strike us are fleeting and fragile, but nonetheless valuable. We need to be able to capture them quickly and with as little friction as possible. That means we need an instant-on interactive interface. We need something like an “editor” or a “notebook” to be the default mode of interaction. We need to quickly be able to jot down thoughts in their simplest form, unstructured, as they come to us, and expand them later. We need those thoughts to be preserved without loss, without switching or distracting our thoughts.

The idea processor, in other words, needs to capture, enhance, and amplify a person’s “flow state”. That’s why “beauty” (such an abstract principle, yes, but you know it when it’s not there) is required. It’s the elimination of distractions.

That’s why particularly the PET’s “immediate mode” with its “more immediate than immediate mode” ability to just scribble on the screen, resonated. That’s part of that “capture ideas at their source” ability. Smalltalk, I think, should have some of that; I just haven’t been able to make friends with any Smalltalk yet.

Unfortunately, a big requirement of fully doing “no distractions” is managing to somehow eliminate or minimise “modes”. That means creating something that’s like one single app, because every time you switch apps you get that “I just walked through a doorway - what was I doing?” code-switch and memory-blank problem.

And single do-everything apps are hard and probably wrong.

Forth-like OSes, interestingly, don’t give me this problem. Something in the concept of the Forth word resonates with me. Unix scripts are also somewhat similar. Windows Powershell is also a great interface to work in, for the same reason. Emacs probably has this quality, but again, I have failed to make friends with Emacs; it didn’t like me either. Smalltalk again, probably, but again, not quite actually, unless I can somehow find a way in that isn’t filled with distracting complexity.

The idea processor, then, might need to be something like a “shell” in order to minimise distractions and destructive code-switches. This concept of “beauty” is about integration and simplicity (lack of mental complexity in the interaction model) as much as it’s about “visually appealing design”.

  2. Ideas are symbols and are also documents. They can be of any size and complexity. Therefore, we need to be able to start with the smallest possible kind of symbol and cluster, group, and rearrange ideas. There should not be artificial barriers between small ideas and big ideas, other than the inherent complexity of the ideas themselves. That means we shouldn’t have to “code-switch” between multiple languages or multiple applications as we are developing ideas. We should be able to smoothly evolve ideas by adding, combining and linking, until we get a literal network. Because ideas spread through social networks. Ideas which run on machines also implement and augment social networks in an extremely literal fashion.

  3. Ideas include text, numbers, graphics, media, equations, simulations. These are still, after decades of compound documents, mostly scattered and siloed. Text editors can format equations, but they’re terrible at evaluating them. Databases can capture rigidly structured and typed row-column data, but break when dealing with unstructured or recursively structured data. Specialised graphical tools are great at photo collections or “fine art” but aren’t so good for quick sketching, especially for people with little artistic development. Programming languages, even ones used for scientific computing like Python, even in “computational notebooks”, need elaborate setup and are often incredibly dangerous in a cybersecurity sense. We’ve got some tools like Microsoft OneNote, but again this is a proprietary silo.

The “computational notebook” concept, however, combined with “hypertext” and “zettelkasten”, comes the closest so far to what I mean by an idea processor. HyperCard had a lot of the idea processor DNA. So too did the original WorldWideWeb - the one which allowed writing and hosting as part of the same package. The WikiWikiWeb captured that too.

But the wiki still wasn’t quite It. How many people have personal wikis today? How many personal wikis are pleasant to use and don’t distract from your flow state? How many personal wikis can do computation, enforce data constraints, allow searches and textual/relationship analysis, and create simulations? How many can exchange small chunks of your data safely with other people? If a wiki can’t do these necessary things, how can it be nudged in the direction of doing them?

  4. Ideas can contain graphics. And yes, we could just embed a raw bitmap or vector canvas, add some line/box/circle primitives and call it a day. It’s a drawing tool; we can still do a lot with Microsoft Paint. But the reason I think graphics needs to be “structured” (probably a very hard thing to do, maybe impossible at this point with so many standards; definitely a hard ask) is because, well, ideas are structured. Structure is inherent to the idea of the “idea”. (Mental chunking). If we want to “process” graphics as ideas and not just have them be a raw slab of data, we need to be able to pull a graphic apart, label it, change things, etc. The user needs to be able to do this, and do it quickly, as they would with a pencil and paper.

This part seems like it’s super-simple, but it is hard and finding a way to attack it is also hard. It’s hard because we have so much graphics. The art world is broad and intimidating; it’s also a massive attack surface, both technical and social, because there exist very illegal and destructive images and videos the mere sight of which will put you in prison forever. It also crosses over into the equally intimidating (if not quite so legally hazardous) world of book production. PostScript and TeX have been answers in the past; they’re probably required for access to knowledge, but they’re hard to use as a knowledge capturing tool. Markdown is becoming the knowledge capturing tool of choice, but it’s full of unexpected complexity now. Tools like “Lines”, yes, I can see a flash of something powerful and important there: particularly being able to serialise vector graphics into text and then exchange the text.

Premade shapes like emoticons are popular; they don’t scale up and they don’t cluster very well, but you can see the public conversation shaping toward them. “Image macros”, also. Something like Logo’s turtle graphics, I think, is maybe another useful approach. But exchanging documents containing recursive procedures that draw graphics (even though PDF and TeX do this) requires the next item, even harder:

  5. Ideas need to contain “code” in the form of equations, simulations and tools, because that’s the special genius of the computer: it’s a machine for examining and testing ideas. Ideas then aren’t just raw text (although they can start out as raw text) or raw graphics, or raw numbers, but relations between these things. These relations can include constraints (type declarations) and algorithms.

We need to be able to transmit ideas, including algorithms, but we need to be able to do so safely. When an idea “executes” it needs to not damage the computer it “executes” on. I put “execute” in quotes because again, this shouldn’t be about just shipping someone a machine and having that machine run. It should be something the user can fully deconstruct and work with. It’s an idea in the form of a machine; it’s not a machine that is a black box and just sits and does work. Machines which are black boxes belong to the “game” and “business” use cases; they aren’t about education, and they specifically aren’t about examinable ideas.

Basic cyber-safety of sharable documents should have been a solved problem decades ago, given that offices rely on them, and for many use-cases (.doc files, PDF files, email) we thought it was. But we’ve still managed to un-solve that solved problem. It remains an issue that’s high up on any todo list: first, don’t get your users rooted.

  6. Ideas need to be safely transmittable. But they also need to be kept local, secret and uncensored. Ideas can be a threat to the state and to entrenched powers. That means that any “idea processor” which starts out with centralised storage/processing or with submitting anything for evaluation and censorship is dangerous. Information has to be kept at the edges, not routed through a center.

Again, this shouldn’t need to be stated. But sadly email, social media, personal photo libraries, book readers, and most Markdown diaries today all sync to the cloud. That’s dangerous. Combine that with pervasive AI scanning of all documents and with authoritarian regimes, and terrible things will result. Terrible things probably already have resulted - I mean, that’s the whole AI sales pitch. “Do terribly wrong things at scale without understanding what you’re doing”.

There probably are all-local Markdown diaries, which are at least possible contenders for the core of an idea processor. But I feel like Markdown itself as a syntax is going to become a wall rather than a door sooner rather than later. It would be nice to have something like Markdown, but which can scale to carry code within it, rather than being the file you write your readme.txt in while your code is in something else.

  7. We’re going to need the idea processor, if we need it at all, during a time of rapid social change and upheaval - because that’s what our foreseeable future is going to look like for maybe a century. That means that it needs to be able to cope with chaos in networks, power outages, angry online mobs, and hardware shortages, to name a few well-known and immediate problems. We don’t have to panic about this, but it means we do need to be practical and pragmatic. It can’t involve custom hardware unless that hardware is somehow magically cheap and pervasive and user-serviceable. It needs to be cross-platform. It needs to be able to run on the machines we have (assuming that we can remove or bypass most of the worst privacy and censorship risks - that in itself may be a big ask).

Worst case, we can do as we do today: use a mishmash of text files, scripting languages, emulators, and broken social media networks that can’t exchange files, and cobble together personal workflows out of existing pieces. We probably will have to do this anyway, whatever happens.

But AI is out there and it’s hungry for data. AI is linking all the data together in ways that our patchwork mishmash can’t, and authoritarian regimes inside and outside of governments are sucking on that data and doing terrible things with it.

tldr: I feel like we might need something that isn’t AI but can link data together.


Thanks for sharing this! :smile: Quite a lot of it resonates with my personal malleable goals.

At the moment my own “knowledge management” is (unfortunately) trapped in the Notion ecosystem… Their editor happens to work well with the way my brain operates, but of course I’d rather have something truly local-first and with more powerful computation abilities as you suggest so it could go beyond “just” notes and start to be more like true simulations of ideas.

Of the things I know of in the world today, perhaps Glamorous Toolkit and @khinsen’s HyperDoc are the closest match to your ideas, though only some of your requirements are met by those projects. I know you’ve already rejected Smalltalk, but perhaps Smalltalk / Pharo / Glamorous Toolkit is a foundation you could start from and then reshape to better meet your needs…?


A bit of an aside from your own use case, but connected to the quotes you shared…

I do find myself drifting towards building a hybrid (text and visual) spreadsheet-inspired environment lately, so it was quite nice to see those “idea processor” quotes and their reflections on spreadsheets. The free-form nature of spreadsheets, combined with the ease of visualising changes as they percolate through, is quite powerful. I’ll be trying some experiments to see what it’s like to build a more general programming environment around these concepts. (I know that’s still quite vague… This is the first time I’ve tried saying anything about it outside my own notes…!)

We could try to collaboratively create this “idea processor”, at least a primitive prototype or an ongoing experiment.

What languages, tools, libraries, and “Lego pieces” would we need to start building? It could be built with Glamorous Toolkit, Guile Hoot, Freewheeling Apps (Lua/LÖVE), Uxn, from an existing emulator of an 8-bit machine, or from scratch with JavaScript… There’s a wealth of raw material out there.

What resources and references are there with practical tools and building blocks for people who want to build their own malleable system? Which exact tools to use probably depends greatly on a person’s prior experience, aesthetic and subjective preferences, what’s comfortable in their hands.


[screen recording: record-2025-06-18_20.44.41]

Recently I saw this interactive article, an explorable explanation, about the ZX Spectrum.

The technical implementation is a bit weird: an emulator of the ZX Spectrum controlled from a JavaScript host and Z80 assembly. It reminded me of your aesthetic preference @natecull, pretty sure this machine runs BASIC.

I learned BASIC as a child, then later LOGO, Pascal, C (I think C89 or 99 depending on how old I am), and a bit of 8086 assembly. So I have the same retro nostalgia for those machines. (I have a fond place in my heart for the classic Macintosh, and am still trying to recapture a bit of that magic.) It’s strange returning to re-learn C after many years; it’s like a mother tongue long forgotten.

Anyway, I’d love to see a retro virtual (or real) machine in which you can program an “idea processor” in BASIC! Well, first I’ll finish reading your thoughtful post. :upside_down_face: It’s probably not any specific language you want, but that essence of a friendly live computing interface.


..As I continue reading your post, I should have listened more carefully and thought about what you’re saying before rushing to reply. I missed the mark with this comment.

This part reminds me of Workflowy. It’s like an infinitely deep and flexible list of lists, where you can focus on a section of the list as the entire screen, or zoom in/out to navigate, or have different view modes.

Each item/node of the list can be text, images, links to other items/lists, etc.

There have been numerous implementations of this tree-structured hyper-document concept. But I imagine you’d say none of these are quite “it”, they have aspects you like but are still missing essential features for your needs, and not integrated in a way that suits you best. They’re not for me either - I know I have to build it myself to get exactly (or closer to) what I want.

Dendron

CherryTree

Obsidian


Yeah, exactly. My rough worldview has been:

  • I want many things
  • But I also have a life to live that is ticking by at 1 second per second

These days I capture notes using an alternative frontend called capture.love which I bind to a keyboard shortcut in my window manager. This frontend doesn’t show anything when it comes up, just a blank slate so I don’t forget what I was trying to capture.

That’s why particularly the PET’s “immediate mode” with its “more immediate than immediate mode” ability to just scribble on the screen, resonated.

This is why I started this whole chapter of my life with lines.love. I was stuck between drawing on paper books and searching on my computer, and I wanted to do both at once.

Yes. My note-taking system (which I lately have been feeding 10k words into every day, many of them from this forum) lets me see lots of little (or large, but mostly little) rectangles of text next to each other, connect them up with relations, search them and insert hyperlinks into them. Rectangles grow into columns, then knotted graphs of relationships, then sometimes get remixed into large rectangles.

Yes, this is still a problem. I put my line drawings in lines.love, and pensieve.love can also show line drawings. But it can’t show my box and arrow diagrams like I make using snap.love. I’ve long wanted to integrate that as well into pensieve.love, but can’t come up with a good experience for the two different ways to nest boxes of text: as dense structured columns that hide links between boxes and more loose graphs that visually show links between boxes.

The “computational notebook” concept, however, combined with “hypertext” and “zettelkasten”, comes the closest so far to what I mean by an idea processor.

I have a computational notebook, but so far it sits apart from pensieve.love. I can’t view my notebooks right there next to other things. I do want separate windows sometimes, but I wish I could make that choice based on the need of the moment and not the constraints of the past.

Yeah. The biggest thing lines.love has done for me is to make me more aware of how little I still use line drawings in my notes. The experience is still not one that encourages certain states of mind. And I lack the skillset to tack gradually towards the right state of mind. All I can do, it seems, is throw things at the wall and see what sticks.

I think what I would find most helpful is to have someone try something out and suggest gradual tweaks to it. Instead of jumping to talk about Forth, is there one little feature from Forth that we could pull into this implementation of PET graphics that somehow improves the state of mind of using it?

Then again, states of mind are so hard to get into, perhaps it wouldn’t be helpful for me to try to use a PET-based note-taking system. All my notes are in this other system over here, and without all my notes any conclusions about my usual state of mind would be compromised.

Perhaps the ideal topology is we each are creating our private zettelkasten in the platform of our choice, moulding it gradually to our needs, showing each other ideas and little things we add to our systems, and then we can steal such ideas from each other so we can all elevate the states of our minds.

I suspect that’s all we can/should hope for: better states of mind where we can capture a wider diversity of ideas. But there will always be gaps in functionality, seams. Many seams are uneconomic to address. They distract the state of mind more than they elevate.

Another seam. I can create ideas made of code in Lua Carousel. Like this one. Or turtle graphics. But I can’t put such simulations into a living rectangle in pensieve.love, just like I can’t put notebook.love’s ideas made of code into pensieve.love.

And Carousel has its own markup language now, so I can create little help documents for myself that contain nested divs and spans in a grid layout. They integrate boxes and arrows inspired by snap.love. But they’re separated by seams from the live computation available right in Carousel itself.

Exactly. The classic Unix shell environment created a sprawling environment rather than a single app. What it did for line-based text-mode apps with 1D input and output streams, I’m trying to recreate for GUIs. It can’t be a REPL in the classic sense. The best REPL for a GUI seems to be a multi-headed hydra that starts out with 8 little boxes I can type code into. And the number of boxes grows from there.

In any case, like I tried to walk back, my way is by no means the only way. But I have found that if I spread out too thin to look at and try to learn from every imperfect thing that has ever existed, then I forget what it was that I came to this world to do. At some point you have to stop turning, turning, turning – and aim for a target.

Turning and turning in the widening gyre
The falcon cannot hear the falconer;

So what I like to do is alternate periods of turn, turn, turn with dives for a target. And all of you are doing the same thing, I now sense.

I think where I differ from you @natecull is that I don’t think this endless activity will ever change. The seams that are uneconomic to clean up are an essential part of the activity, they are desirable. Trying to eradicate them is a distraction.

I just want to be in control of my seams. No damn fool company should tell me where my seams should go.


That’s a nice description, a kind of modular model of distributed experiments where each person is making tools and modules for their own use, and if any of them look useful, other people can take ideas or code to use in their own system. And all of it in a collaborative creative and supportive environment.

Making code or features easier to “steal”, that’s a real valuable idea I think. Recently I read an insightful comment about “Reusing existing libraries or build your own” where you said:

Use libraries by copying them.

That’s controversial - I didn’t know anyone else who did this, but in my programming career I’ve done it many times - to “adopt” a library into my own codebase, assimilating it, reading and getting to know the code, and digesting it into the larger organism, to make it mine. It has worked out very well, with occasional difficulty when trying to sync changes from/to upstream.

This approach makes me think about how I can write code that is easy to steal, to copy & paste into another project. I want people to copy my code and benefit from whatever nuggets of usefulness I invented, discovered, or stole from elsewhere (with proper attribution hopefully). I think creators of art, literature, music - they all work like this; the best steal from the best.

I’ve found that making code easy to steal often means more manageable code in general - keeping it independent, built from idempotent, deterministic pure functions and plain data, and not too clever. Sometimes I laugh at my own code that looks like a child wrote it - but it’s the result of hard-won lessons from experiencing the other side, building elaborate cathedrals and architectures. I just choose not to anymore; I’d rather have an open toolbox of bite-sized tools and components that are surprisingly simple, robust and scalable, to let other people use them to build their own castles in the sky.

I think we’re gradually working our way there, but what I would love to see is a conversation not only of words but exchanging living examples of what we’re talking about. You’re further along than many of us @akkartik because you’ve been creating experiments for a long time and sharing them with the community. I wish to see myself and others also create working prototypes as a call-and-response and a way to communicate what we mean. “Show me what you got.”

Ahh that’s what I want to do, and I’m not there yet, of building and using my own tool for daily use. I’m working on it though, especially inspired by forum discussions like this.


One clarification: I was only thinking of stealing ideas, not code. It’s nice to steal code when one can, but I don’t spend time mourning when I can’t.


I tend to think “should rhetoric” is unhelpful. You never step in the same river twice, you never exist in the same world twice. This is the nature of humanity and this is the nature of the world. The fact that we don’t have this today, and even more so that we did in the past, is a signal from the cosmos that it is hard!

Like I said, I spent some time in 2022 thinking about sandboxing in the context of arbitrary code execution. But it feels lower priority and so I’ve jettisoned this concern in the interest of making progress. These days when I’m looking to steal some Lua code, I just look at it with my eyeballs[1] and give myself a rough sense of confidence that it is safe. Guarantees are nice, but I don’t need perfection. I just need to outrun most people, and the malicious hackers will leave me alone. (Unless a nation state goes spear phishing. In which case.. I surrender! :smile:)

[1] My heuristics look roughly like:

$ grep '\<load' . -r       # dynamic code loading: load, loadstring, loadfile
$ grep '\<require\>' . -r  # pulling in other modules
$ grep '\<os\>' . -r       # the os library: process execution, file deletion
$ grep '\<io\>' . -r       # the io library: arbitrary file reads and writes

I’m not aware of any way that any Lua code running on an interpreter on my machine that I trust can do malicious things if I pay close attention to the results of these commands.

Absolutely agreed; all my tools above are local. However, this is also the point at which I again start feeling like goalposts are shifting. Is it possible that maybe we have enough hard problems in the first 5 bullets? :smile: I have nothing to say to someone whose zettelkasten syncs to the cloud except, “go in peace.” There are an infinite number of good things to aim for, but we have to draw a line somewhere and this feels clearly lower priority than bullets 1-5.


Let me add an outliner that I think deserves more attention than it gets these days: Leo, which is very old, more than 20 years, and still around and doing well. It’s been accumulating a bit of cruft over time, but less than many more recent ones.

The particularity of Leo is that its tree structure can contain Python code, and that Python code acts on the tree itself. There’s a lot you can do with this idea of “a document tree as a medium”.

That said, I haven’t used Leo for a very long time, roughly since I got a smartphone. I want my personal information graph in my pocket, not only on the desktop.


Your “easy to steal” code reminds me of Donald Knuth’s idea of writing re-editable rather than reusable code (in this interview).


Another one: TreeSheets


@natecull I read the original post in its entirety - many rich ideas in there to chew on. I saved the post in a Markdown file and I’ve been considering how to translate it to a working prototype. The ideas are ambitious and would be challenging to attempt even partially - but the first few steps have already been an interesting learning experience.

THINKER-2000 was my first idea for the project name, a modern re-imagining of the Commodore PET 2001. I’d still like to incorporate that retro aesthetic, but now I’m leaning toward the name Node Book. A “node” being the term for an item in tree-structured data; each node is connected via “branches” to its parent and child nodes. A node can contain any data type and media, including other nodes directly or as references (internal and external links).

I wondered what language and tech stack to use, to make sure every part of it is comprehensible and hackable, easy to copy & paste into other projects. From the SUSN project I saw Node.js is already installed on your system, as it is on mine, so that’s common ground. I’m fond of Bun, a TypeScript runtime with modern features (like built-in native bindings for SQLite) - but I didn’t want to impose my preference as a requirement; Node is more popular and standard. So the application will be written in plain JavaScript. A flawed lingua franca, the American English of programming.

Requiring no build setup felt important, so you can just download the project and run it immediately. I also wanted to have zero dependencies, to avoid needing to install from a remote source like NPM (Node package manager).

I started with an HTML file. No server, just opening the file with a browser. This made me think: maybe HTML is the best portable medium for a cross-platform single-file executable application. Everyone has a browser installed, and they can simply download and open an HTML file that contains the whole application, including CSS and JS. There are known limitations of a static HTML page - such as CORS (Cross-Origin Resource Sharing) for communicating with external servers; and I think running Wasm is prevented - but perhaps these features can be a progressive enhancement, to make the local server optional.
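The skeleton of that idea is tiny. As a sketch of the shape (not the actual prototype file):

<!doctype html>
<meta charset="utf-8">
<title>Node Book</title>
<main id="view"></main>
<script>
  // The whole app lives inline in this one file: no server,
  // no build step, no dependencies. Download, open, run.
  const tree = [{ type: 'text', value: 'Hello, world' }]
  document.querySelector('#view').textContent =
    JSON.stringify(tree, null, 2)
</script>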

The name “idea processor” came to me..from a January 1984 Byte Magazine article about Thinktank.. Essentially an “outliner”, a word processing mode that got quickly subsumed into Microsoft Word and, I suppose, Emacs Org Mode.

A few years ago I made a prototype app called Bonsai, a tree-structured outliner. The drag-and-drop editing of the tree was smooth and keyboard accessible. But it got out of control as I implemented collaborative editing via WebSockets. The ad-hoc conflict resolution of edit operations was fragile, the logic not robust. If I were to revisit that feature, I’d build it on a proper foundation like the Yjs library to handle distributed data types.

Yjs is a CRDT implementation that exposes its internal data structure as shared types. Shared types are common data types like Map or Array with superpowers: changes are automatically distributed to other peers and merged without merge conflicts.

Yjs is network agnostic (p2p!), supports many existing rich text editors, offline editing, version snapshots, undo/redo and shared cursors. It scales well with an unlimited number of users and is well suited for even large documents.
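For the tree of nodes, that might look roughly like this - a hypothetical sketch against Yjs’s documented Y.Doc API, untested:

import * as Y from 'yjs'

const doc = new Y.Doc()
const nodes = doc.getArray('nodes')   // a shared Y.Array

// Local edits are ordinary operations on the shared type...
nodes.push([{ type: 'text', value: 'Hello, world' }])

// ...and every change, local or remote, fires the observer.
nodes.observe(() => console.log('tree is now', nodes.toArray()))

// Syncing means shipping encoded updates over any transport:
doc.on('update', update => {
  // broadcast `update` (a Uint8Array) to peers, and apply
  // incoming ones with Y.applyUpdate(doc, update)
})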

That sounds good, like this could be one of the basic building blocks of malleable software. It reminds me of another library..

Build instant multiplayer webapps, no server required — Magic WebRTC matchmaking over BitTorrent, Nostr, MQTT, IPFS, Supabase, and Firebase.

Trystero manages a clandestine courier network that lets your application’s users talk directly with one another, encrypted and without a server middleman.

The net is full of open, decentralized communication channels: torrent trackers, IoT device brokers, boutique file protocols, and niche social networks.

Trystero piggybacks on these networks to automatically establish secure, private, P2P connections between your app’s users with no effort on your part.
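Pairing the two seems natural: Trystero as the transport, Yjs doing the merging. Continuing the hypothetical sketch above (the appId and action name are made up):

import { joinRoom } from 'trystero'

const room = joinRoom({ appId: 'node-book-demo' }, 'ideas')
const [sendUpdate, getUpdate] = room.makeAction('yUpdate')

// Broadcast local Yjs changes; merge changes arriving from peers.
doc.on('update', update => sendUpdate(update))
getUpdate(update => Y.applyUpdate(doc, new Uint8Array(update)))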


As a mockup of the outliner, I wrote an example list using ul and li.

  • Hello, world
  • List
    • Item 1
    • Item 2

Now how to make this a dynamically editable list/tree..

..Oops I saved and published the post while still in draft. Well, to be continued..

Instinctively what I wanted for the dynamic tree is a React-like data flow, with the view as a function of state.

Given a data model of the tree of nodes:

const tree = [
  { type: 'text', value: 'Hello, world' },
  { type: 'text', value: 'List', children: [
    { type: 'text', value: 'Item 1' },
    { type: 'text', value: 'Item 2' },
  ] }
]

(Looks similar to Freewheeling Hypertext. I have a hunch that HTML is actually a dialect of Lisp.)

I want to render it into an existing DOM element, like:

render(App({ tree }), view)

The app can have “actions” to manipulate the tree, like create/delete/edit/search nodes. Each node could have actions too, like move up/down. And all actions produce new state, which re-renders the application, efficiently updating the changes.

But React has a big interface and file size; it’s not fully understandable by a single person. The compromise I settled on is using Preact and its Signals library.

Signals are reactive primitives for managing application state.

I “prebundled” these (using Bun’s built-in bundler) as a standalone library in a single unminified file, meact.js. That way we don’t take its internal logic for granted; all the code is readable and forkable if needed.

I decided not to use JSX syntax, which requires a build step. Instead I’m writing the view by hand, with h() as shorthand for createElement.

h(tagName, attributes, ...children)

Sure is a Lisp-y looking shape. It may not be as familiar a syntax as:

<tagName key={value}>
  {...children}
</tagName>

But I think any verbosity is worth it for not needing a build step. It’s simpler, more immediate, with fewer layers of abstractions.
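Putting the pieces together, the core loop looks roughly like this - a sketch, assuming meact.js re-exports h, render and signal (the exact exports are my own choice, nothing settled):

import { h, render, signal } from './meact.js'

// The whole tree lives in one signal; replacing .value re-renders.
const tree = signal([
  { type: 'text', value: 'Hello, world' },
])

// An action: produce new state rather than mutating in place.
const addNode = value => {
  tree.value = [...tree.value, { type: 'text', value }]
}

// The view as a function of state, written with h() by hand.
const Node = node =>
  h('li', null, node.value,
    node.children && h('ul', null, ...node.children.map(Node)))

const App = () => h('ul', null, ...tree.value.map(Node))

render(h(App, null), document.getElementById('view'))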


So now there’s a minimal dynamic self-editing tree with view and actions to modify its own state.

Just the first few steps already needed design decisions that are subjective, opinionated, arguable and questionable. It kind of feels like I’m doing things with one arm tied behind my back, intentionally limiting my usual toolbox to a small set of portable tools.

Next I want a tiny Markdown parser (that also supports a subset of HTML) as a single standalone library.
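To give a sense of the scale I’m imagining, a hypothetical starting point - it handles only #-headings, paragraphs, and a little inline markup, and raw HTML simply passes through unescaped:

export function miniMarkdown(md) {
  const inline = s => s
    .replace(/`([^`]+)`/g, '<code>$1</code>')
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
    .replace(/\*([^*]+)\*/g, '<em>$1</em>')
  return md.trim().split(/\n{2,}/).map(block => {
    const m = block.match(/^(#{1,6})\s+(.*)/)
    return m
      ? `<h${m[1].length}>${inline(m[2])}</h${m[1].length}>`
      : `<p>${inline(block)}</p>`
  }).join('\n')
}

The real thing would need lists and code blocks at least, but the single-file, zero-dependency shape is the point.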


I’ve been playing with OPFS (Origin private file system), a web API for a virtual file system in browser local storage.

This would be convenient for local-first data, but requires a local server (or remote one with HTTPS). It won’t work with an HTML file opened directly.

localStorage probably works with no server - but it’s more primitive and slow. So again progressive enhancement would be a good approach.
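Roughly, the two storage paths would look like this (standard web APIs; the fallback strategy is a sketch, not a settled design):

// Save the tree: OPFS where available, localStorage otherwise.
async function saveTree(tree) {
  const json = JSON.stringify(tree)
  if (navigator.storage?.getDirectory) {
    // OPFS: needs a secure context (localhost or HTTPS)
    const root = await navigator.storage.getDirectory()
    const file = await root.getFileHandle('tree.json', { create: true })
    const writable = await file.createWritable()
    await writable.write(json)
    await writable.close()
  } else {
    // file:// fallback: slower, size-limited, synchronous
    localStorage.setItem('tree', json)
  }
}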

txiki is a tiny JavaScript runtime based on QuickJS and libuv, an async I/O library. It’s mostly written in C, with bindings for JS. It supports a subset of the web platform API like streams, JSON, fetch, WebSocket, WebAssembly, Workers.

It can run JavaScript on the server side, or compile it (and the runtime) into a single-file executable binary, about 5MB as a baseline. (Compared to 90MB for a Bun-built executable, which is not bad either for what you get out of the box.) In a way it reminds me of LÖVE, a small runtime that packs a punch, bundling a curated set of features. I’m learning a lot from its C codebase; I’ll probably borrow some ideas for my micro-Lisp adventure.

It supports TCP/IP but not HTTP yet.

My current state of mind is that of going with libwebsockets + mbedtls.

i think civetweb (GitHub: civetweb/civetweb, an embedded C/C++ web server) is a good fit for this project. it meets all the requirements discussed earlier: MIT license, websocket client + server, mbedtls support, and small size

Performant HTTP server · Issue #44 · saghul/txiki.js · GitHub

This could be a way to package and distribute a tiny local server for NodeBook. That way the application can be opened in a browser, and have a minimal backend to serve and save files. And all standalone with no dependencies, not even Node.js. It could even build new variations of itself.

Well, to be honest I don’t know how far I’ll take the prototype, I may lose interest sooner or later. But at least I’m gathering and leaving breadcrumbs, if anyone else wants to build it themselves. It sure is a fun exercise though; the healthy constraints are just the right amount of challenging, right on the edge of what’s technically possible.
