Wells describes his vision of the World Brain: a new, free, synthetic, authoritative, permanent “World Encyclopaedia” that could help world citizens make the best use of universal information resources and make the best contribution to world peace.
Plans for creating a global knowledge network long predate Wells. Andrew Michael Ramsay described, c. 1737, an objective of freemasonry as follows:
… to furnish the materials for a Universal Dictionary … By this means the lights of all nations will be united in one single work, which will be a universal library of all that is beautiful, great, luminous, solid, and useful in all the sciences and in all noble arts.
In 1895, Paul Otlet and Henri La Fontaine began the creation of a collection of index cards, meant to catalog facts, that came to be known as the Répertoire Bibliographique Universel (RBU). By the end of 1895 it had grown to 400,000 entries; later it would reach more than 15 million entries.
In 1904, Otlet and La Fontaine began to publish their classification scheme, which they termed the Universal Decimal Classification, originally based on Melvil Dewey’s Decimal classification system. It was a bibliographic and library classification representing the systematic arrangement of all branches of human knowledge organized as a coherent system in which knowledge fields are related and inter-linked.
I see this as the historical context for the idea of the URI (Uniform Resource Identifier): “a unique sequence of characters that identifies an abstract or physical resource, such as resources on a webpage, mail address, phone number, books, real-world objects such as people and places, concepts.”
In 1926, extending the analogy between global telegraphy and the nervous system, Nikola Tesla speculated that:
When wireless is perfectly applied the whole earth will be converted into a huge brain … Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.
In a series of lectures in 1936-37, Wells begins with the observation that the world has become a single interconnected community due to the enormously increased speed of telecommunications. Moreover, energy is available on a new scale, enabling, among other things, the capability for mass destruction. Consequently, the establishment of a new world order is imperative.
..Without a World Encyclopaedia to hold men’s minds together in something like a common interpretation of reality, there is no hope whatever of anything but an accidental and transitory alleviation of any of our world troubles.
..I am speaking of a process of mental organisation throughout the world which I believe to be as inevitable as anything can be in human affairs. The world has to pull its mind together, and this is the beginning of its effort. The world is a Phoenix. It perishes in flames and even as it dies it is born again. This synthesis of knowledge is the necessary beginning to the new world.
These chapter titles are from Stanisław Lem’s Summa Technologiae (1964):

1 - Dilemmas
2 - Two Evolutions
3 - Space Civilizations
4 - Intellectronics. The day will come when machine intelligence will rival or surpass the human one. Moreover, problems facing humankind may surpass the intellectual abilities of flesh and blood researchers. What shall we expect (or fear) in this conception of the future?
5 - Prolegomena to Omnipotence. Technological evolution keeps expanding our abilities; taken to its limit, sometime in the future we should be able to do anything at all.
6 - Phantomology. Speculations on what is now known as virtual reality.
7 - Creation of Worlds. Might it be that, instead of painstaking research, we could “grow” new information from available information in an automatic way? Starting from this question, Lem develops the concept all the way to the creation of whole new universes, including the construction of a heaven/hell/afterlife.
8 - Pasquinade on Evolution. Can humanity engineer its own evolution?
9 - Art and Technology
I feel like we maybe need a word that leans into the “art, skill, craft” aspects of τέχνη. We have a lot of “technology” today which actually de-technes its users. But some technology up-technes us.
Help: A Minimalist Global User Interface by Rob Pike
Oh! That reminds me of the late-1980s through early-1990s moment, after windowing had been demonstrated on the Macintosh and Amiga and Atari, and well before the Web, when the PC world was still just starting to do pop-up windows with text. And “hypertext” was the new buzzword. Everyone was making their own proprietary hypertext systems. I forget most of them, but a very early use was for documentation systems. Netscape had (or licensed) “Folio Views”, which it used for its system documentation well into the 1990s. Microsoft Windows (at least by 1993) had “Hypertext Help”. The GNU Unixy ecosystem got “info” at some point - when?
When HTML appeared it just instantly destroyed almost all of these proprietary hypertext systems, even though it was worse in many ways, for two reasons: one, it was open and free, and two, it was network-aware, with DNS creating a global filesystem, which was the killer app that melted the whole world (and created a million literary allusions to Neuromancer’s “cyberspace”). But even without TCP/IP (which even Windows 95 didn’t ship with by default), you could still use an HTML browser for a local documentation set.
tldr, my guess is that Plan 9’s “Help” was conceived as a hypertext system at its core, with an initial use-case of browsing system manuals, which would explain the name despite it sounding more like something out of Emacs. Am I anywhere near right?
Plan 9 is a fascinating fork in the path and I wish I knew more about it. I was for a while intrigued by its follow-on Inferno - but I’m pretty disappointed to hear that Go is considered to have come from there. Wasn’t the whole point of Inferno that it was a live, Smalltalky, VM? And not just yet another C-like dumb compiler that takes text blobs to native-machine-code binary blobs?
to furnish the materials for a Universal Dictionary
Isaac Asimov presumably knew about the Masonic and utopian interest in this too, because that’s how his 1940s “Foundation” begins - at least with the cover story of “a project to build a galactic Encyclopedia”.
Encyclopedia Galactica is the name of a number of fictional or hypothetical encyclopedias containing all the knowledge accumulated by a galaxy-spanning civilization, most notably in Isaac Asimov’s Foundation series.
Theodore Wein suggests the Encyclopedia Galactica may have been inspired by a reference in H. G. Wells’s The Shape of Things to Come (1933). The future world envisioned by Wells includes an “Encyclopaedic organization which centres upon Barcelona, with seventeen million active workers” and which is tasked with creating “the Fundamental Knowledge System which accumulates, sorts, keeps in order and renders available everything that is known”.
Earlier, around the turn of the 20th century, Western esoteric philosophers like Rudolf Steiner were speaking of the Akashic records:
The Akashic records are believed to be a compendium of all universal events, thoughts, words, emotions, and intent ever to have occurred in the past, present, or future, regarding not just humans, but all entities and life forms.
The concept of a book or library that contains a universe of knowledge is probably ancient.
The apocryphal Book of Jubilees speaks of two heavenly tablets or books: a Book of Life for the righteous, and a Book of Death for those that walk in the paths of impurity and are written down on the heavenly tablets as adversaries of God.
The origin of the heavenly Book of Life is sought in Babylonia, where legends speak of the Tablets of Destiny and of tablets containing the transgressions, sins, wrongdoings, curses and execrations of a person who should be “cast into the water”; that is, blotted out.
A strange piece of religious imagery I remember is a monk or saint eating a book. ..Ah yes, it was a woodcut by Albrecht Dürer, from his Apocalypse of St John series.
What does it mean to eat a book? The saint is consuming angelic knowledge and making it a part of his being.
The Argentinian writer Jorge Luis Borges has a short story called The Aleph, published in 1945. It’s not exactly about a book, but a kind of multidimensional crystal.
The Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping, or confusion.
On the back part of the step, toward the right, I saw a small iridescent sphere of almost unbearable brilliance. At first I thought it was revolving; then I realised that this movement was an illusion created by the dizzying world it bounded. The Aleph’s diameter was probably little more than an inch, but all space was there, actual and undiminished. Each thing (a mirror’s face, let us say) was infinite things, since I distinctly saw it from every angle of the universe. I saw the teeming sea; I saw daybreak and nightfall; I saw the multitudes of America; I saw a silvery cobweb in the center of a black pyramid; I saw a splintered labyrinth (it was London); I saw, close up, unending eyes watching themselves in me as in a mirror; I saw all the mirrors on earth and none of them reflected me..
And we’re back to Herbert George Wells. He’s like a Bell Labs of writers. History may not have a coherent narrative structure, but it sure knows how to rhyme.
And that Borges story must be why the MacGuffin (spoilers for a 37-year-old book) at the end of William Gibson’s 1988 “Mona Lisa Overdrive” is… a giant hard drive called an “Aleph” which contains a copy of the entire Internet.
The concept seemed a lot cooler and more mystical in 1988, before search engines, web-scrapers, and the Internet Archive. Not that the Archive isn’t cool! It’s just, Bill, did you somehow never hear of floppy disks? You do know that data can be copied, right?
Maybe the concept was that the Aleph in MLO was some weird quantum-mechanical thing that was a live image of all Cyberspace data via spooky entanglement. If so, that wasn’t ever really explained.
My guess about Rob Pike’s Help being specifically hypertext was wrong. It is from 1991, so the right era, but no: not “help” as in lookup of textual manuals, but “help” as in a generalised assistant.
This is a revision of a paper by the same title published in the Proceedings of the Summer 1991 USENIX Conference, Nashville, 1991, pp. 267-279.
It’s inspired by Niklaus Wirth’s Oberon specifically.
The inspiration for help comes from Wirth’s and Gutknecht’s Oberon system [Wirt89, Reis91]. Oberon is an attempt to extract the salient features of Xerox’s Cedar environment and implement them in a system of manageable size. It is based on a module language, also called Oberon, and integrates an operating system, editor, window system, and compiler into a uniform environment. Its user interface is disarmingly simple: by using the mouse to point at text on the display, one indicates what subroutine in the system to execute next. In a normal Unix shell, one types the name of a file to execute; instead in Oberon one selects with a particular button of the mouse a module and subroutine within that module, such as Edit.Open to open a file for editing. Almost the entire interface follows from this simple idea.
The user interface of help is in turn an attempt to adapt the user interface of Oberon from its language-oriented structure on a single-process system to a file-oriented multi-process system, Plan 9 [Pike90]. That adaptation must not only remove from the user interface any specifics of the underlying language; it must provide a way to bind the text on the display to commands that can operate on it: Oberon passes a character pointer; help needs a more general method because the information must pass between processes.
I wish we had something like help on Windows or Linux today. We’ve got copy-paste of text and that’s all. And on the other end, we’ve got insanely over-engineered IDEs which are all universally terrible to use.
There is a reference to hypertext, though!
Help is similar to a hypertext system, but the connections between the components are not in the data — the contents of the windows — but rather in the way the system itself interprets the data. When information is added to a hypertext system, it must be linked (often manually) to the existing data to be useful. Instead, in help, the links form automatically and are context-dependent. In a session with help, things start slowly because the system has little text to work with. As each new window is created, however, it is filled with text that points to new and old text, and a kind of exponential connectivity results. After a few minutes the screen is filled with active data. Compare Figure 4 to Figure 11 to see snapshots of this process in action. Help is more dynamic than any hypertext system for software development that I have seen. (It is also smaller: 4300 lines of C.)
The next question I have though is: what was Xerox Cedar, the system that inspired Oberon? It was a language and programming environment at PARC, developed from a thing called Mesa (the systems language of the Alto and Star). Mesa inspired Modula-2, so I guess that’s the Wirth connection.
In 1976, during a sabbatical at Xerox PARC, Niklaus Wirth became acquainted with Mesa, which had a major influence in the design of his Modula-2 language.[10]
Java explicitly refers to Mesa as a predecessor.[11]
Also! Ah, this is why Smalltalk’s assignment syntax is weird!
Due to PARC’s using the 1963 variant of ASCII rather than the more common 1967 variant, the Alto’s character set included a left-pointing arrow (←) rather than an underscore. The result of this is that Alto programmers (including those using Mesa, Smalltalk etc.) conventionally used camelCase for compound identifiers, a practice which was incorporated in PARC’s standard programming style. On the other hand, the availability of the left-pointing arrow allowed them to use it for the assignment operator, as it originally had been in ALGOL.
Fun fact: “PETSCII” on the Commodore PET and C64 also followed that 1963 ASCII standard, to the extent of having that weird left-arrow key in prime real estate on the keyboard. It used to baffle the heck out of me, because BASIC didn’t use that symbol. Nothing ever used it! It wasn’t backspace/delete, it wasn’t left-cursor, it just printed an utterly useless arrow! And it took up a whole key! On the C64, right at the top left where Escape should be! Madness!
But presumably the reason was that Commodore thought “an assignment symbol” would be super important for programming languages (because ALGOL, I guess). Instead, everyone except Xerox PARC just used = or := … and PARC just removed themselves from the entire OS and programming-languages conversation.
Bret Victor has a copy of the 1984 “Cedar Midterm Report” describing the Cedar Programming Environment, which looks very Macintosh-y. (I mean obviously the arrow of causality from Xerox to Apple goes the other way, but by this point Cedar is out of the critical path; it’s after GUIs have been commercialised and PARC is losing its influence.)
In 1978, the computing community at PARC consisted of three distinct cultures: Mesa, Interlisp, and Smalltalk. Both the Smalltalk and Mesa communities programmed primarily on the Alto, a small personal computer that had been developed at PARC [32]. The Interlisp programmers continued to operate on a time-shared, main-frame computer called MAXC, a home grown machine that emulated a PDP-10 and ran Tenex.
But there were aspects of the Alto design that did not work out well. In particular, the limitations on the size of the address space and on the amount of real memory were serious. As a result, a great deal of time was spent squeezing software into the limited space available. However, we did not take this as an indictment of personal computing, but merely an indication that we had to think bigger.
This memory-size thing is my big worry with Uxn. It’s super cute! But if we get married to Uxn, or anything else 8/16 bit shaped, we end up hitting the same wall Xerox hit in 1978. Do we really want to do that?
After much painful deliberation, we decided to design a new machine, the Dorado, to overcome these obstacles. We intended that the Dorado would provide the hardware base for the next generation of computer system research at PARC. … Most Dorados currently have 2 megabytes of main storage expandable to 8 megabytes. (The Alto had 128K bytes of memory, later expanded to 512K via additional memory banks.)
The EPE working group recommended that “CSL should launch a project on the scale of the Dorado to develop a programming environment for the entire laboratory starting from either Lisp [Interlisp] or Mesa” [8]. We also concluded that the laboratory could support only one major programming environment.
Interesting that Smalltalk wasn’t even in the running! At all!!! So much for the myth of Smalltalk being PARC’s secret sauce. They were actively marketing it as the Next Big Thing - but weren’t relying on it for internal infrastructure.
Between Mesa and Lisp, Lisp lost (and there’s another big fork in the path!) for social rather than technical reasons.
The rest of Xerox had a fairly large and growing commitment to Mesa, and none to Interlisp.
Remaining largely compatible with the rest of the corporation had both advantages and disadvantages, but the advantages predominated. With respect to research communities outside of Xerox, either choice would reduce communication with important (but different) research communities in the outside world: Lisp favors the AI community, Mesa favors the programming language and systems programming community.
Although the efforts required were about the same, a somewhat larger number of qualified people were available to work on a Mesa-based EPE. It was noted, however, that if Mesa were chosen, some effort would be needed to ensure that those members of the Lisp community concerned with programmer assistance, programs as data bases, and integrated sublanguages were able to provide enough input to ensure that Cedar would be of use and attractive to them.
We wanted to keep abreast of developments in computer science in the world at large. Most of the knowledge representation, expert systems, automatic programming, and programmer’s assistant type of work is done in various Lisp dialects. However, much of the formal specification and verification work was directed towards Pascal dialects, and would therefore be more easily applicable to Mesa. Also Ada, being based on Pascal, is much more similar to Mesa than to Lisp, so that work on Ada environments would be highly relevant.
Ironic in retrospect that Pascal was the ALGOL-derived language that was being considered the industry standard in 1979, with C not even seen as a speck on the horizon. I guess there was just a massive gap between PARC and Bell Labs culture.
We wanted to move implementors into the project easily, and were even more concerned that it be easy for users to convert to the use of Cedar. Within CSL, there were roughly comparable numbers of hard core users in each camp, so that the issues of migration seemed roughly symmetric. However, within Xerox, Mesa was much more widely known and used. Outside of Xerox, generic Lisp was more widely known than Mesa, but generic Pascal was more widely known and used than Interlisp.
As a result, the decision was made in early 1979 to launch a project to build Cedar, an experimental programming environment, starting with Mesa.
and later on, we see the result of this choice:
The principal shortcomings of Cedar are in the area of providing support for various aspects of the Lisp style of programming (and to a lesser extent Smalltalk), and can be attributed to the selection of Mesa as a starting point for Cedar and the fact that the overwhelming majority of the Cedar implementors and users came from the Mesa community. These shortcomings include not reaching Cedar’s original goals with respect to: fast turnaround for small program changes, support for wide range of (i.e., late) binding times, easy use of programs as data, and inheritance/defaulting (Smalltalk subclassing). In short, with respect to the fundamental principle stated in the EPE report [8] that “the present Lisp, Mesa, and Smalltalk programming styles all must be supported in a satisfactory way,” it is fair to say that Cedar has not (yet) succeeded.
The rest is a description of the Cedar environment, which is basically a whole windowing system of its own, that’s also an IDE and also a language. It’s got a bit of a Windows 95 feel by way of late-1990s XWindows in that there’s a reserved icon row on the bottom of the screen (a bit like the Taskbar), except that it’s just a chunk of the desktop; but “maximised” windows don’t obscure it.
imo the concept of a “Desktop”, which would immediately and always be obscured by actual working windows, was a huge mistake. Is it too soon to say that? I’m saying it.
What’s wrong with a desktop? For a “moving files around” type operation I like that I have a scratchpad to dump a file and have it be in the exact same place x & y when I come back to look for it.
In my experience two things are wrong with the “Desktop” concept:
It’s often not treated as well as a real folder is. Even if the Desktop is mapped to a literal actual disk folder, it’s often fragile. In Windows particularly, the Desktop has usually been the first folder to get corrupted.
If you maximise windows - and you almost always want to maximise windows because screen real estate is valuable - it’s always hidden. Even if you have a weirdly small working window, much of your desktop is still obscured. To actually use a desktop as a scratchpad, you have to fully minimise all your working windows. If you’re very lucky there’ll be some kind of half-hidden gesture to do this for you, but you can’t just “bring the Desktop to the front” like you can a normal folder window.
An always-visible icon bar like a Menu, Launcher, Dock or Taskbar, on the other hand, is extremely useful. I think if we were doing say the Macintosh or Windows 95 from scratch today, a smarter solution than a special-case, constantly hidden Desktop would be a button on the icon bar which opens a normal folder (the same one each time) as a Scratch area.
The Desktop was imagined as a sort of launcher / home screen, where you’d only work on one or two windows/apps at once and then you’d close them again, returning you to the Desktop. In practice, we just don’t use GUIs like that. We have zillions of open windows and they’re usually fullscreen (because they don’t work if they’re not; I try to at least have two tiled half-screen web browser windows so I can use my 16:9 landscape screen, but sometimes even that breaks layouts) and we never close them, we switch between them. So the Desktop never gets a chance to become visible.
Oh, that’s interesting! Yes, the “tooltray” seems like exactly that kind of UI design.
I keep running up against that RAM/storage limit though. With a 256K data limit I still can’t code even my personal “tiny, toy” Javascript apps. (A Chinese language translator, and a star map based on the AT-HYG dataset.) My AT-HYG JSON data file is 38 MB and my Unicode Chinese database is 45 MB. I can’t make those datasets any smaller because those are the datasets! They’re reduced subsets as it is! The raw Unicode CJK files are 169 MB. The full AT-HYG is 190 MB. The datasets are the whole reason why my Javascript apps exist, to search them.
I get wanting to start again with a clean slate. I get that. I get wanting to be “small and simple”. I have that desire myself. I get the hunger for the approachable ROM based retro systems we had in the 1980s. I grew up with them. I want that feeling back too.
But arbitrarily restricting a toy machine’s RAM/disk size, so that we deliberately can’t put real-world datasets and workloads in… that’s not how we get actual simplicity. With a well-designed object or module model, a VM ought to be able to remain safe and usable as a unified information space right up to the size of a modern desktop, and beyond. I mean multiple gigabytes of RAM and terabytes of disk space. That’s a modern cheap home machine’s capacity. I want a VM I can put all my stuff in.
Remind me, what was the deal-breaker for you with my LÖVE-based approach? I kinda went through a similar thought process before I settled on it, of not wanting to artificially limit RAM.
Yeah, my impression is that it’s not meant to be a usable desktop for anything other than making small-to-medium-scoped video games.
They do plan to release the runtime as source-available in the coming years, so that more serious projects can edit the restrictions if needed; but then you’d lose the ability to share it on their BBS, and the joy of having no external dependencies.
One of many projects I have that’s not going anywhere is to port their devtools (all written in Lua, with editable source) to Arcan, another Lua-based desktop engine with a focus on network transparency and being a “real deal” daily driver.
This video (which I like, but it’s long and not that relevant to malleable computing, so I’m not suggesting you watch it) covers the motivation for PICO-8, the even smaller predecessor of Picotron.
The reason given for the super-tight constraints was explicitly to take away decisions about scope and art style and distribution etc., and to give creators something to blame for why their game is so tiny. The point being to make a platform for games that’s inspiring and cozy and fun. And for that, I think it works great.
I’ve been messing around with Picotron this week and I already want to make a flappy bird clone, while figuring out what game I want to make with Android Studio feels overwhelming. There are so many options, and being able to make something big makes me feel kind of sad at myself if I don’t.
I think to have a similar sort of experience for non-videogame software you’d pick different limitations, and which ones that would be fun and inspiring and which would be frustrating and pointless depend totally on what sort of thing you’re making.
And of course for a real general computer I don’t think I’d want any artificial limits like that.
Recently I’m thinking about how to gradually replace my existing personal computer usage with a homebrew language and software environment, starting with a tiny Lisp VM that can fit on a microcontroller with “as little as 2 Kbytes of RAM and 32 Kbytes of program memory”.
That smallness and simplicity is so satisfying and refreshing compared to the conventional modern computing stack. It’s cute, cozy, comfortable - and most of all, conceptually beautiful. At that primitive level of computing, it’s how I imagine the mystic mathematicians of the Pythagorean school envisioned the underlying design of the universe. It’s how I feel about Lambda calculus.
I ported/forked the ESP32 version of the Lisp VM to C99, so it can be compiled to target WebAssembly, which has a 32-bit architecture and is limited to 4 GiB of memory. (I see there’s also a Wasm64 target, but I haven’t seen anyone using it, and whether it’s supported probably depends on the Wasm runtime or host.) So far I’m still learning how to be fluent in the language, and to do practical things with it. By default the VM is allocated 64K of memory, and I haven’t hit the limit yet in the small programs I’m writing.
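(For flavor, here’s the kind of trick that makes such a hard ceiling natural in a tiny Lisp. This is a hypothetical sketch in C99, not the actual VM’s code: the whole workspace is one statically sized array of cons cells, so the memory limit is a compile-time constant.)

```c
#include <stdint.h>

/* Hypothetical sketch, not the real VM: a 64K workspace carved into
   8-byte cons cells, with a free list threaded through the cdr fields. */

typedef struct { int32_t car, cdr; } Cell;   /* tagged values or cell indices */

#define WORKSPACE_BYTES (64 * 1024)
#define NCELLS (WORKSPACE_BYTES / (int)sizeof(Cell))

static Cell workspace[NCELLS];
static int32_t freelist = -1;                /* index of first free cell */

static void ws_init(void) {
    for (int32_t i = 0; i < NCELLS; i++)     /* thread the free list */
        workspace[i].cdr = (i + 1 < NCELLS) ? i + 1 : -1;
    freelist = 0;
}

/* Returns a cell index, or -1 when the arena is exhausted - which is
   exactly the point where a real VM would run garbage collection. */
static int32_t ws_cons(int32_t car, int32_t cdr) {
    if (freelist < 0) return -1;
    int32_t i = freelist;
    freelist = workspace[i].cdr;
    workspace[i].car = car;
    workspace[i].cdr = cdr;
    return i;
}
```

Running out of memory then becomes a well-defined event inside the VM’s own universe, rather than a host OS failure.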
PICO-8 and Picotron are designed to be small, with their limited capacity defined in the rules of their universe. But I think Uxn was created for practical daily computing, albeit in an exotic environment. If there’s a need for larger memory or disk storage, I bet there’s room for someone to develop a solution, maybe based on historical examples from 8- and 16-bit computers, like paging or swapping.
With the continuing miniaturization and commodification of computing and electronic components, I feel like we’re in an era similar to the days of the Homebrew Computer Club and the Creative Computing magazines you mentioned. And, as it turned out with personal computers, it has big potential not only for hobbyists but for “serious” purposes like R&D in academia and business. I love that it gives a new generation of people a chance to revisit history and re-experience some of the limitations of past computers, but this time in an even more micro form factor and with the benefit of hindsight - to do things better, and take that other fork in the road if we wanted.
I’m curious to see how practical it is to try and move my computing needs to a small malleable system built from scratch. Ideally I want to live in it, to use it as my daily driver - which suggests a homebrew VM/OS/IDE with code editor, terminal shell, file system, hypertext browser, media player, recorder, tools and libraries to make music, art, animation.. Since I already have a working system built up from components made by companies and other people, that more or less meets my needs - the “reconquest” of my software tools and environment is going to be partly a process of deconstruction, and rebuilding it part by part, a spaceship of Theseus.
Typically a retro fantasy console like PICO-8 has limited screen resolution in pixels. For my use case, I was wondering about a (virtual) display device based on vector graphics that can adapt to any available or desired resolution. For example, to print the output of a program on large-format paper or canvas; or to connect the real-time output to a light projector and cast it on a screen or the wall of a building.
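The way I picture it (a purely hypothetical sketch, all names invented): the VM’s “display” isn’t a pixel buffer but a list of drawing instructions in abstract coordinates, and rasterization happens at the edge, in whatever device the host hands the list to.

```c
/* Hypothetical resolution-independent display device. The VM emits
   commands with float coordinates in an abstract unit square; the
   host rasterizes at whatever resolution the output actually has -
   a window, a large-format printer, or a projector. */

typedef enum { OP_MOVETO, OP_LINETO, OP_CURVETO, OP_SETCOLOR, OP_STROKE } DrawOp;

typedef struct {
    DrawOp op;
    float  x1, y1, x2, y2, x3, y3;   /* up to three points, for curves */
} DrawCmd;

typedef struct {
    DrawCmd *cmds;                   /* a frame is just an instruction list, */
    int      count;                  /* so scaling is entirely the renderer's job */
} DisplayList;
```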
Similarly with audio, I don’t want to be limited to a fixed “resolution” (sample rate, bit depth, channels) due to the design of the VM. What’s the audio equivalent of vector graphics? (LLM tells me “pure tone” or “sinusoidal waveform”, like analog synthesizers. I’ll need to think on this further..)
What I was picturing is, just like SVG (and web Canvas API) is made up of graphics instructions, a data format to describe audio and musical operations. Something like MIDI or OSC (Open Sound Control). I know MIDI values (notes, velocity) are integers in the range 0~127, so OSC seems better suited - I see it supports float32 values.
Open Sound Control (OSC) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance or show control.
So it’s a general-purpose messaging protocol, and unlike MIDI, it doesn’t specify any message types of its own - it’s up to the implementation what the messages mean.
OSC is sometimes used as an alternative to the MIDI standard, when higher resolution and a richer parameter space is desired.
OSC messages are transported across the internet and within local subnets using UDP/IP and Ethernet. OSC messages between gestural controllers are usually transmitted over serial endpoints of USB wrapped in the SLIP protocol.
Gestural controllers.. First thing that comes to mind is a theremin, but also wearable computers/sensors.
(I like how she talks about “pitch, yaw, and roll” of hand movements, like an aircraft or ship.)
OSC messages consist of an address pattern (such as /oscillator/4/frequency), a type tag string (such as ,fi for a float32 argument followed by an int32 argument), and the arguments themselves (which may include a time tag).
Address patterns form a hierarchical name space, reminiscent of a Unix filesystem path, or a URL, and refer to “methods” inside the server, which are invoked with the attached arguments.
It’s like a one-way remote procedure call. I wonder, can the receiver return values? ..No, as far as I can see, OSC is unidirectional (one-to-one, or one-to-many broadcast), but bidirectional messaging can be achieved by each participant implementing both a client and a server.
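The wire format is simple enough to encode by hand. Here’s a sketch in C of the message layout quoted above (helper names are mine; the layout follows the OSC 1.0 spec: NUL-terminated strings padded to 4-byte boundaries, then big-endian arguments):

```c
#include <stdint.h>
#include <string.h>

/* Copy a string including its NUL, zero-padded to a 4-byte boundary. */
static size_t osc_pad_string(uint8_t *buf, const char *s) {
    size_t n = strlen(s) + 1;
    memcpy(buf, s, n);
    while (n % 4) buf[n++] = 0;
    return n;
}

/* Encode an OSC message with a single float32 argument: the address,
   then the type tag string ",f", then the big-endian float. */
static size_t osc_float_message(uint8_t *buf, const char *addr, float value) {
    size_t n = osc_pad_string(buf, addr);
    n += osc_pad_string(buf + n, ",f");
    uint32_t bits;
    memcpy(&bits, &value, 4);        /* reinterpret the float32's bits */
    buf[n++] = (uint8_t)(bits >> 24);
    buf[n++] = (uint8_t)(bits >> 16);
    buf[n++] = (uint8_t)(bits >> 8);
    buf[n++] = (uint8_t)bits;
    return n;                        /* total length of the message */
}

/* Usage: uint8_t msg[64];
   size_t len = osc_float_message(msg, "/oscillator/4/frequency", 440.0f);
   ...then send the whole buffer as one UDP datagram. */
```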
Applications of OSC include real-time sound and media processing environments, web interactivity tools, software synthesizers, programming languages and hardware devices.
The protocol has achieved wide use in fields including musical expression, robotics, video performance interfaces, distributed music systems and inter-process communication.
Cool! I’ll have to explore this deeper.
Speaking of protocols, I’ve been trying to achieve the seemingly simple task/challenge from early in this thread: to exchange messages peer-to-peer, from one instance of a VM to another over the network, directly without going through anyone else’s server.
I want to use WebRTC (real-time communication) as one of the supported data channels.
..It allows audio and video communication and streaming to work inside web pages by allowing direct peer-to-peer communication, eliminating the need to install plugins or download native apps
But it’s been difficult to use, especially from plain C. Partly it’s my lack of knowledge and experience with C, but the design of the protocol is complicated, I suppose due to the nature of networking in the modern age. As part of the initial negotiation, peers need to consult a STUN server to discover their own public addresses, and, when direct connection fails, fall back to relaying through a TURN server (Traversal Using Relays around NAT).
A complete solution requires a means by which a client can obtain a transport address from which it can receive media from any peer which can send packets to the public Internet. This can only be accomplished by relaying data through a server that resides on the public Internet. Traversal Using Relays around NAT (TURN) is a protocol that allows a client to obtain IP addresses and ports from such a relay.
As I’m learning, most example applications demonstrating the use of WebRTC (like the official examples, which are great otherwise) have a hardcoded list of “public” STUN servers, the top ones being Google’s unofficial servers, which apparently everyone decided were good enough for demo purposes - and I’m certain that countless production/commercial applications with WebRTC are using them too. There’s even a list of “publicly available” STUN servers, refreshed every hour with automated testing: valid_hosts.txt. Except it’s about 90 domains of questionable origin - including Google, Cloudflare, SignalWire, NextCloud, and a picturesque stun.ru-brides.com.
To self-host a TURN server, the most popular and comprehensive solution is coturn. I also found a minimal implementation called violet, also written in C11. The latter is great for studying how it works - and maybe integrating into my Lisp VM. Its author is a guy at Netflix who’s written other networking libraries like:
libdatachannel: WebRTC and WebSockets C/C++ standalone library
datachannel-wasm: C++ WebRTC Data Channels and WebSockets for WebAssembly
“I’ve also been contributing to WebTorrent, and libtorrent, in which I added WebTorrent support. This first native WebTorrent implementation opens exciting possibilities for Peer-to-Peer file exchanges between Web browsers and native clients!”
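From what I’ve gathered of libdatachannel’s C API so far, the data-channel happy path looks roughly like this (a sketch based on my reading of rtc/rtc.h, so treat the details as unverified; the STUN URL is a placeholder, and the signaling step - shuttling the SDP strings between the peers - still has to happen over some out-of-band channel, even copy-paste):

```c
#include <rtc/rtc.h>
#include <stdio.h>
#include <string.h>

/* Hand this SDP to the remote peer over your signaling channel, then
   feed theirs back with rtcSetRemoteDescription(pc, sdp, type). */
static void on_description(int pc, const char *sdp, const char *type, void *ptr) {
    printf("local %s:\n%s\n", type, sdp);
}

static void on_open(int dc, void *ptr) {
    rtcSendMessage(dc, "hello from C", -1);   /* -1 means NUL-terminated */
}

static void on_message(int dc, const char *msg, int size, void *ptr) {
    printf("peer says: %s\n", msg);
}

int main(void) {
    rtcConfiguration config;
    memset(&config, 0, sizeof(config));
    const char *ice[] = { "stun:stun.example.org:3478" };  /* placeholder server */
    config.iceServers = ice;
    config.iceServersCount = 1;

    int pc = rtcCreatePeerConnection(&config);
    rtcSetLocalDescriptionCallback(pc, on_description);

    int dc = rtcCreateDataChannel(pc, "chat");  /* kicks off negotiation */
    rtcSetOpenCallback(dc, on_open);
    rtcSetMessageCallback(dc, on_message);

    getchar();                                  /* wait while callbacks fire */
    rtcDeletePeerConnection(pc);
    return 0;
}
```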
I’m learning a lot in the process. For now I’ve encountered an obstacle: I typically use NGINX as reverse proxy and load balancer for server applications; however, it turns out a TURN server requires a whole block of ports, and by its nature it’s best run on a host of its own. Which I’m willing to invest in, because I’ve always wanted to set up and self-host a P2P video chat app, like Jitsi but ideally hand-rolled.
..Well, it will be an ongoing process to reconquer my computing stack; I’m only on the first few steps. I see it will require a kind of stubbornness, long-term determination, and willingness to sacrifice some convenience in exchange for the greater good and independence. Like being a vegetarian, or refusing to drive automobiles, or use phones, social media, AI, even the internet or computers altogether like the Amish. It has parallels to off-grid living: it’s accepting constraints by choice and design.
It renews my admiration for people like Linus Torvalds and Richard Stallman, who are demonstrating how to build and use your own tools for personal computing. It’s not for everyone, but there’s something important at the bottom of it, the act of questioning the given social/cultural/personal situation, to understand what layers and components it’s built on, and to learn how to be more independent in thinking and living, even at the expense of convenience.
I’m honestly not sure, but downloading it again today, I guess LÖVE is very heavily oriented towards graphics and games, which isn’t quite where my head’s at. It’s probably a very good game engine! My toy stuff is just text though. I’d honestly just run them in web browsers if I could open files, but browsers now go out of their way to not allow people to access their own filesystem from web pages running on their own machine.
Lua for Windows would seem to be the more “normal scripty text” kind of Lua environment, but it a) seems to come with a whole wall of added libraries that I’d rather not have, and b) has been abandoned since 2018, so I assume all those libraries are on fire and full of security vulns?
Otherwise it sort of seems like to run Lua in a simple text console I need to be running Linux, or figure out how to write a C wrapper and compile it with Visual Studio. I guess I’ll be running Linux again soon enough since I have to get off Windows 10 by October (my hardware can’t run 11 and I see no reason to e-waste it).
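(For what it’s worth, the C-wrapper route is smaller than it sounds: the canonical minimal console host for Lua is just the standard embedding boilerplate below, linked against the Lua library.)

```c
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* Minimal console Lua host. Build with something like:
   cc host.c -llua -lm   (or add lua.h/liblua to a Visual Studio project) */
int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s script.lua\n", argv[0]);
        return 1;
    }
    lua_State *L = luaL_newstate();   /* fresh interpreter state */
    luaL_openlibs(L);                 /* standard libraries: io, string, math... */
    if (luaL_dofile(L, argv[1]) != LUA_OK) {
        fprintf(stderr, "%s\n", lua_tostring(L, -1));  /* error left on the stack */
        lua_close(L);
        return 1;
    }
    lua_close(L);
    return 0;
}
```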
But if I keep running Node, I get cross-platform-ness and a text console for free. I just can’t easily do graphics if I need it. It seems hard to bridge both worlds.
Edit: Been playing with LÖVE. It reminds me a lot of P5.js, except in its own window. One thing I really want, though, and it seems like I can’t have it in a canvas-based system, is copy-paste. I really, really like being able to generate text and be able to copy it out of a window.
Oh, it absolutely is. At least to think about. I spend far too much time dreaming about fantasy tiny computer environments just because it feels so good to be able to understand the basic workings of a machine, and trust that nothing is happening that you didn’t make happen.
I still have a fondness for the late-DOS-era “windowing using IBM text box characters” system. Visual BASIC for DOS - that was a thing! I loved it so much. And fixed font text. You knew exactly how to put stuff precisely where you wanted it. No mess, no fuss. The Windows graphics toolkit was a nightmare and even HTML table layouts felt like quicksand.
I don’t want to take over this thread, but maybe give my old talk transcript a quick look. In brief, LÖVE is not perfect but it’s available now and (along with a few other things, but just focusing on it here) has many good attributes while we come up with something better. It does do stuff I don’t understand or use yet, but it’s only 5MB so the “bloat” seems bounded. I didn’t realize this when I started using it but its rep on the street seems to be that it has better text primitives than most game engines. I use it mostly for text-based applications and have increasingly been liking having an escape hatch for graphics. My editor supports (just text) copy-paste, and is extremely reliable at this point with a 1200-line core. And lack of releases is a strong point in my book. Lua has very few bugs, and LÖVE has had 3 releases since 2018.
It does require a computer from say the last 10-15 years. Not very power-efficient. Also, zero support for screen readers. I think about that a lot but have made zero headway there.
Anyways, if it seems interesting we should probably take further conversation about it over to the thread on Freewheeling Apps.
I feel like the thing that’s the trickiest today with all our software fragmentation is answering: how does data get in, and get out, of an individual app, or a tool, or a component? How do things connect? What’s the “bus”?
Is it “standard input and output”? Is it “the filesystem”? Is it “Berkeley sockets”? Is it an “event queue”? Is it “the stack”? Is it hooking up listeners and callbacks to an in-process object system? Or an inter-process object system? Can callbacks be hooked and unhooked at runtime, or does the tool/app/component need an external installation routine? Is the communication medium a database, and if so, is there a whole complicated setup dance that needs to be orchestrated to get database permissions, user accounts, etc? Is it a “framework” that the component needs to be registered with, like an OS system service dispatcher or a large application or a web server? Or on the other end of the complexity scale, is it raw memory access? Is it raw CPU interrupts (including OS call interrupts)? Are there firewall permissions, DNS or perhaps entire Ethernet subnets needed? Is there a hypervisor, container, VM or runtime API, or a software-defined network, simulating or virtualizing any and all of the above? How does debugging access work with any of these comms channels, and how can it be made safe and secure so components can’t elevate and steal debugging permissions without the user’s awareness?
I know it’s easy to say “there are too many communication standards, let’s add one more to be a universal one”, but… something just doesn’t feel right here. It just all feels like epicycles and we’re overdue a Newton.
Of these, stack and standard I/O (pipes) feel conceptually simplest. But even stack or pipe systems seem to also need a storage system (RAM or the Unix filesystem), and a lot of complexity gets shunted off to there.
Fragmentation is a huge problem, at all levels of our software stacks. I suspect it’s an expression of software people being idealists rather than pragmatists. They see what’s imperfect in someone else’s work and aim to do better, mostly because they feel they can. What’s missing is a shared value of interoperability that is strong enough to make people actually invest effort to make it happen.
My one-line summary of @akkartik’s freewheeling apps is “LÖVE as a dependency is a lesser evil compared to a browser.” Which I agree with.
Still, there’s a deal breaker for me in that LÖVE doesn’t support network access, which I need for most projects I could imagine using LÖVE for. Looking forward to a future version that might have what I need.
I’ve downloaded Lua Carousel and I’m playing with it now. It’s a lot of fun! Reminds me very much of the old BASIC machines. Maybe I can use this to bootstrap my Lua skills from Javascript. Also looking at the source code of Lines to see how you do copy/paste in it.