NeXT, OOP, WWW, HTML

love.system - LOVE has the scoop.

1 Like

Local-First Conf 2025

Learn from engineers, designers, academics, startups, and indie developers who are putting local-first into practice and reaping the benefits of a cloud-optional architecture.

The conference happened last week in Berlin. I just learned about it, I think because some folks at Ink & Switch presented there - and I’ve been enjoying the series of talks about local-first software. Here’s a playlist:

How does local-first architecture relate to malleable systems? It’s about shifting priority and primacy toward local data under the user’s control. As Illich put it, to “invert the present deep structure of tools” in order to “give people tools that guarantee their right to work with independent efficiency.”

I get the feeling local-first is only a piece of the puzzle, though an important and fundamental one. There is commercial closed-source software touting local-first, often combined with interoperable formats like Markdown or JSON - which is a valuable feature - but the system itself is not malleable; only the user data can be liberated.

It seems there are degrees of malleability, from the data layer (local-first, decentralized), to style/interface/behavior settings, to scripting and APIs, down to the application layer if it’s open to be modified by the user. (And further down to the operating system and hardware..) Open source addresses that lower layer, but requires familiarity with the programming language it’s written in, as well as good-enough documentation, code organization, comments..


Rethinking Programming “Environment” – Technical and Social Environment Design toward Convivial Computing

Computers have become ubiquitous in our life and work, and the way they are programmed needs fundamental improvements. The prior effort often aims at improving programming experience for people with specific technical backgrounds (e.g., programmers, end-users, data scientists), respectively. In contrast, throughout this paper, we discuss how to make programming activities more inclusive and collaborative, involving people with diverse technical backgrounds.
We rethink the programming environment from both technical and social perspectives.

First, we briefly introduce our previous technical effort to share the programming environment between the developers and users of the programs, eliminating the distinction between programming and runtime environments and fostering communication between them. Second, we introduce our social effort to support people with visual impairment to implement customized smart glasses that read words with a camera and speakers. We design their programming environment to consist of a software/hardware toolkit and engineers with domain expertise called evangelists.

Learning from these experiences, we discuss several perspectives on convivial computing. To conclude, we argue that both technical innovations on user interfaces for programming and understandings on the socio-technical aspect of domain-specific applications are critical for the future of programming environments, and accordingly, convivial computing.


How A Blind Developer Uses Visual Studio


The Lost Ways of Programming: Commodore 64 BASIC

By the same author, a Czech professor of programming languages and systems:

Write your own tiny programming system(s)!

Hands-on introduction to fundamental programming language techniques, algorithms and systems.

A series of hour-long lectures (video):

  • TinyML: Tiny functional programming language interpreter
  • TinyBASIC: Tiny imperative interactive programming system
  • TinyHM: Tiny Hindley-Milner type inference algorithm
  • TinyProlog: Tiny declarative logic programming language
  • TinySelf: Tiny prototype-based object-oriented programming system
  • TinyExcel: Tiny incremental spreadsheet system

What a world it would be if we could build actually practical software with the same ease as building these toy systems for educational purposes. Maybe what’s missing are robust building blocks for malleable systems that are easy enough for non-technical people to use.

The C language is a supremely portable and malleable computational medium. The more I study the language and its ecosystem, the more I realize how foundational it is. If only it were a Lisp, the entire elaborate edifice of modern computing might have been more malleable all the way down to the bottom.


I’ve been vaguely thinking how writing is an amazingly malleable technology for thinking made visible, with programming languages being an outgrowth of it. Also, spoken language as a communication protocol, and how thinking in words is related to the development of rational logic. Whereas before words, there was thinking in images, a different mode of visualization and dreaming.

..And how design is about a visual way of thinking that intersects with software and computers, like human-centered design or user-experience design. And how malleability is about the design of tools, not only about the technical aspects, which exist to serve that design. I think there’s a tendency to get lost in purely technical aspects and implementation details, without stepping back to think through the design.


From “The Lost Ways of Programming: Commodore 64 BASIC”:

  1. I believe that how we interact with a programming environment when programming is more important than the specific programming language that we are using.
  2. This has never been widely studied and we have interesting things to learn from past systems, including Commodore 64 BASIC.
  3. We should look at the history and recreate past programming experiences in order to learn from them, following a method that a historian of science, Hasok Chang, calls complementary science.
  4. Reading about interactions is not enough. To get a sense of how the interaction worked, you need to experience it yourself, at least in a limited form. This is best done with an interactive article.

Complementary science

From History and Philosophy of Science: The Myth of the Boiling Point

This paper on the fickleness of the boiling point illustrates the potential of what I call “complementary science,” which contributes to scientific knowledge through historical and philosophical investigations. Complementary science asks scientific questions that are excluded from current specialist science. It begins by re-examining the obvious, by asking why we accept the basic truths of science that have become educated common sense. Because many things are protected from questioning and criticism in specialist science, its demonstrated effectiveness is also unavoidably accompanied by a degree of dogmatism and a narrowness of focus that can actually result in a loss of knowledge.

History and philosophy of science can ameliorate this situation, and seek to generate scientific knowledge in places where science itself fails to do so; I will call this the complementary function of history and philosophy of science, as opposed to its descriptive and prescriptive functions.

3 Likes

I love the Commodore interface (because it was my first experience of computing), but playing with it again as an adult I realise just how terrible BASIC was as a programming language. No names! Line numbers were okay, but a decent language would have had named functions. A late-1970s hobbyist BASIC social ecosystem running on Forth instead would have been an interesting fork in the path. Would it have made us smarter, faster, less eager to jump to compilers and user/programmer walls? Would it have kept the habits of open access to the machine around a bit longer? Or nah?

The Commodore thing where you could move the cursor around the screen and pressing Enter would execute the line the cursor was under - there was a magic in that.

There was something even about just the cathode glow of the TV that drew you in.

Also the graphics symbols - just shift a letter and you instantly have the ability to draw. We lost a lot when we lost that. When I moved “up” from Commodore to IBM, I grieved those beautiful lines and cross-hatches for years. When I moved further “up” from DOS to Windows, I grieved the loss of even the chunky IBM box chars. We can get them back again, now, I guess, if we have the right Unicode font installed. How many gigabytes of font packs does that need?

I’ve been vaguely thinking how writing is an amazingly malleable technology for “thinking made visible”, with programming languages being an outgrowth of it.

Yes. The post-1945 computing culture, I believe, was glued together by Telex, even more than by punched cards. The keyboard, teleprinter and cuttable, spliceable punched paper tape was the universal human-computer interface… up to and including Microsoft BASIC on the Altair. And that’s why we have text consoles and text editors even today.

Screens are nice but in so many ways, they’re still an “output-only” technology without keyboards. Or without some way of clustering strokes into glyphs, and glyphs into larger glyphs.

(Our current typewriter keyboards always annoy me: I wish we had regular rectangular arrays of keys, then we could supply our own keycaps/images and maybe build our own languages that way.)

The PET 2001 keyboard wasn’t all that great, but still, just look at that bad boy. You felt you were touching something unthinkably futuristic and alien. It breathed graphics: it wanted you to draw on it, not just type.

We got so close to having a more literate, shareable medium with early-1990s compound documents! I thought copyright was the problem. When StarOffice was open-sourced, I really thought we might get that happening again. But nobody in the 2000s-2010s Linux scene was interested in documents as interfaces. It was all a rush to apps, apps, apps - and then to sandbox those apps and sell them as cloud services.

Mitch Kapor was interested in going beyond the app model - there was something very cool in Lotus Notes (even if it became a terrible IBM Enterprise behemoth), and he at least tried to get something happening on Linux with Chandler. But even he couldn’t sell the dream or explain why it was important.

1 Like

I wanted to export this forum thread, as I’d like to return to the many topics mentioned and study them further - including replies, suggested paths, and ideas that have gone unanswered so far. I learned Discourse has a /raw path.

https://example.com/raw/topic-id
https://example.com/raw/topic-id?page=2

From this current URL:

https://forum.malleable.systems/t/next-oop-www-html/332/83

The last route part, 83, is the comment ID within the thread, and 332 is the topic ID. So this link renders the entire thread in Markdown:

https://forum.malleable.systems/raw/332
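For archiving the whole thing, something along these lines might work - a hedged sketch, assuming LuaSec’s ssl.https is available and that Discourse serves an empty body past the last page (both assumptions, not verified here):

-- Sketch: fetch every /raw page of a topic and concatenate the Markdown.
-- Assumes LuaSec (ssl.https); 332 is this thread's topic ID.
local https = require("ssl.https")

local function fetch_raw_topic(base_url, topic_id)
  local pages = {}
  local page = 1
  while true do
    local url = string.format("%s/raw/%d?page=%d", base_url, topic_id, page)
    local body, code = https.request(url)
    if code ~= 200 or not body or body == "" then break end  -- past the last page
    pages[#pages + 1] = body
    page = page + 1
  end
  return table.concat(pages, "\n\n")
end

print(fetch_raw_topic("https://forum.malleable.systems", 332))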


The Role of the Human Brain in Programming


Creating a Robot for My Childhood Self

3 Likes

I don’t follow this! Could you elaborate?

1 Like

Sounds much like the Jupiter Ace. Which was a commercial failure, so there never was much of an ecosystem around it.

1 Like

I mean the graphical characters (usually above 128) from Commodore’s proprietary PETSCII code, which were not representable in ASCII or IBM Extended ASCII, and until very recently were not a part of Unicode either. They are now, but still might not be representable on the system you are using to view this. They happen to be viewable on my Windows 10 notebook right now. They might not be viewable on my Android (I’ll check).

You could type a PETSCII graphics character (if the machine was in “Uppercase Mode”) by simply shifting a letter on the keyboard. No messing around with a character selection interface, etc, etc. It made drawing on the screen really easy, and then you just had to insert 10?" in front (? being the BASIC shorthand for PRINT) and you had a BASIC line that would print your image.

These things:

Ax NBSP 🮏 🮇
Bx 🮈 🮂 🮃 🭿
Cx ♠ 🭲 🭸 🭷 🭶 🭺 🭱 🭴 🭼 🭽
Dx 🭾 🭻 ♥ 🭰 ♣ 🭵 ♦ 🮌 π
Ex NBSP 🮏 🮇
Fx 🮈 🮂 🮃 🭿 π
2 Likes

Ah, ASCII art! I see.

Kind of, yeah! In that it was art constructed of sequences of symbols. But because PETSCII had actual thought put into its choice of graphical symbols, the art you could make with it was so much more beautiful than what could be achieved with 7-bit ASCII, which was primarily intended for compatibility with Telex and typewriters, not art.

The other 8-bit micros all had their own graphical characters. The TRS-80 didn’t have the beautiful PET diagonals and lines, but it had 2x3 pixel-grid characters which allowed for a “super lo-res graphics mode”. The BBC Computer had Teletext/Videotext-compatible graphics, including colour codes, and that could make some pretty awesome and really memory-cheap splash screens. And of course the IBM PC had characters mostly intended for drawing boring corporate box charts, but when ANSI colour sequences finally hit, it was good enough to rock the BBS world. Just in time for the Web to make BBSes obsolete a few years later.

The top of the line though was the C64’s completely reprogrammable character set, which Varvara does a pretty good job of emulating. Of course, you couldn’t transmit those to any other system.

And looking at this page on Firefox on my Samsung Galaxy A12: about half of those PETSCII characters are visible, the other half are gray boxes. It seems random as to which are which. So, we’re still not anywhere near pervasive rollout of these ancient 1977 characters in Unicode across all devices.

1 Like

I think I’m more of a graphical rather than ASCII supremacist at this point, after spending my youth on the other side of the fence. No matter how good a character set is, it’s just weird to draw using a grid of characters, depend on them to be monospace, etc. Then you’re editing it in a line-oriented editor, and drawing downward requires the smarts to insert lots of spaces, moving things to the right gets complicated, etc., etc., etc. Just use a vector-based representation like SVG, I say.

So my ideal is the computers from this era (I don’t remember which; I didn’t actually live through it) that would let you type in a statement of BASIC right after boot up and draw a line on screen – in the same space as the text. That’s what the Carousel experience aims to replicate and improve on.

2 Likes

Yeah, I’m not saying that those of us on PETs didn’t look enviously over at the C64 or PC with CGA which had full bitmap graphics. Of course we did.

It’s just that once you were finally staring at a bitmap screen and you found that the only primitive you had was LINE(x1,y1,x2,y2) and maybe FILL() if you were lucky… Well, it felt very intimidating. You couldn’t just easily doodle with premade shapes. It was like going from building with Lego, to being handed a blank block of granite and a chisel.

There were lots of things that could have filled that gap: tile-based systems, libraries of vector shapes, etc. But mostly that all happened in the “application development” world, as for-pay libraries, or in large desktop publishing packages. I guess a lot of it got folded into the game development world, and then into 3D engines. Blender’s still terrifying to me, though.

So an interesting experiment might be: an 80s-style bitmap machine, and then some kind of BASIC or Forth-like interactive programming system, with a constructive graphics interface of some kind. A tile or shape library, maybe. And see where that might have gotten us.

Probably SVG is the closest thing to that today. I dunno if you could build a whole desktop rendering system on it, on a small machine.

Depends on what you mean by “desktop rendering system”. For me a couple of answers seem fairly adequate for a system that rewards curiosity without selling its soul to capture the incurious:

  1. When I launched Lua Carousel one of the example screens was this set of abbreviations:
-- Some abbreviations to reduce typing.
g = love.graphics
pt, line = g.points, g.line
rect, poly = g.rectangle, g.polygon
circle, arc, ellipse = g.circle, g.arc, g.ellipse
color = g.setColor
min, max = math.min, math.max
floor, ceil = math.floor, math.ceil
abs, rand = math.abs, math.random
pi, cos, sin = math.pi, math.cos, math.sin
touches = love.touch.getTouches
touch = love.touch.getPosition
audio = love.audio.newSource
-- Hit 'run'. Now they're available to
-- other panes.

In particular: pt, line, rect, poly, circle, arc, ellipse, each with an optional fill mode. That seems fine for doing most things with? I don’t know if you consider this Lego blocks or granite-and-chisel. (There’s a small doodle using them after this list.)

  2. I’ve mentioned LuaML before, and that hypertext browser that is now also bundled into Carousel for its help system is a thousand lines or so to implement, augmenting the above graphical primitives with hierarchical blocks of text arranged in rows and cols, with the ability to configure colors and fonts. Again, I don’t know if you consider this Lego blocks or granite-and-chisel.
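And the doodle promised above - a minimal sketch, assuming the abbreviations screen has been run so the short names are defined:

-- Doodle using the Carousel abbreviations (assumes they're defined).
color(0.9, 0.3, 0.2)
circle('fill', 200, 150, 40)      -- filled circle at (200,150), radius 40
color(0.2, 0.4, 0.9)
rect('line', 120, 80, 160, 140)   -- rectangle outline: x, y, width, height
line(120, 300, 280, 300)          -- a plain horizontal line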

Both these, to me, feel like I already have the “80s-style bitmap machine with some kind of BASIC or FORTH-like interactive programming system” that I want. But I’m looking forward to understanding more about what you mean by these phrases that I’m not thinking about..

From personal experience as a kid: Yes, I consider line, rect, poly, circle, etc to be very much granite-and-chisel graphics primitives compared to a character generator or font system.

Once you’ve got named recursive functions (but SAFE recursive functions! i.e., functions that don’t automatically get access to all of your program’s environment space! which even languages like Lua still don’t give us!), then we’re getting a bit closer to something like what I mean. Add Logo-style relative-positioning turtle graphics, so everything can be repositioned on a screen. Then on that, build font systems and libraries of premade scalable graphics blocks. We’d probably also need a bitmap/blit/sprite kind of functionality as well, starting with a way of representing bitmaps in text/code, and probably some kind of “responsive” business to detect the size of variable-sized viewports and adjust (that’s a whole pile of pain right there).

Finally, we’d also need not just graphics primitives that can be “executed” and are then immediately forgotten with only the bitmap remaining (remember what I said about a bitmap screen being an “output-only” device compared to a text console? This is exactly what I mean by that) - but we’d want some kind of graphics buffer that stores the instructions themselves. I mean as glyph shapes, or functions, but not just one-way compiled - retaining all their human-readable names. Something like Display PostScript is probably what it would be like.
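Perhaps something in the spirit of this sketch - hypothetical, in Lua/LÖVE terms since that’s the system already in this thread - where the instructions stay live as data and rendering is just one consumer of them:

-- A "graphics buffer" that keeps the instructions, not just the pixels.
-- The list can be edited, serialized and exchanged; names stay readable.
local display_list = {
  {op = "color",  r = 1, g = 0.5, b = 0},
  {op = "circle", x = 80, y = 60, radius = 25},
  {op = "line",   x1 = 0, y1 = 0, x2 = 160, y2 = 120},
}

local render = {
  color  = function(c) love.graphics.setColor(c.r, c.g, c.b) end,
  circle = function(c) love.graphics.circle("line", c.x, c.y, c.radius) end,
  line   = function(c) love.graphics.line(c.x1, c.y1, c.x2, c.y2) end,
}

function love.draw()
  for _, cmd in ipairs(display_list) do
    render[cmd.op](cmd)  -- replay the retained instructions each frame
  end
end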

The idea being that in order to have a literate, symbolic usage of graphics - not just a one-way outputting of graphics symbols to bitmaps - those graphics need to be structured into glyphs and clusters/sequences of larger glyphs. And that structure needs to be preserved, and that structure needs to be what’s input and output, not just that bitmap. That structure is the “language”. I don’t mean in the sense of a “programming language” that’s just immediate-mode instructions for a machine; I mean a language in the sense of a structured series of symbols with meaning attached, that can be parsed equally by humans or machines.

Am I making sense? I know this is a complicated idea to convey, because it’s not really how we’re used to thinking about “graphics”. And specifically, because this idea is something that’s been missing in our computing experience for decades. So since it’s not implemented, all I can point to is the vague outline of the hole where this thing that should be, isn’t. And some of the things that are a little like it, but aren’t quite it, because we didn’t get it. To the extent that we “got something like it”, we got either a very simple beginning (line/box/circle functions) or very large, highly developed, but mostly proprietary/business-based systems of apps/objects/APIs (everything from windowing widget sets through PostScript/PDF through Office through CAD systems and 3D modelling/gamedev systems) that don’t really decompose or integrate with each other.

One place where I think the path forked is that, as we moved from “characters” to “bitmaps”, we lumped together “glyphs/shapes” and “behaviour” - and that happened roughly alongside Smalltalk, which actively encouraged mixing the two. Ideally, I think, we would have kept the graphics and the behaviour separate. But for example, see what happened with Digital Research’s Graphics Kernel System, which started out as something like what I’m talking about, but quickly became merged into the API of a windowing system, and that was the end of any chance of it becoming a “language” between systems.

I’m not sure I’m getting it yet. But let me try this: a while ago I shared a 60-line prototype that I consider to be the core of what I’ve learned building a text editor atop LÖVE. The core is a way to track the glyphs I draw on screen in a way that lets me position the cursor on them, select them with the mouse, copy them to the clipboard, etc. Took me a while to converge on the right approach, and I’m sure it’s not novel, but now that I have it it’s trivial in both dev time and run time.
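The gist of that approach, as a hedged sketch (hypothetical names, not the actual prototype):

-- Record every glyph as it is drawn, so the screen stays queryable
-- rather than collapsing into pixels.
local glyphs = {}  -- array of {x=, y=, text=}

local function draw_text(text, x, y)
  love.graphics.print(text, x, y)
  glyphs[#glyphs + 1] = {x = x, y = y, text = text}
end

-- Hit-testing a mouse click against what was drawn:
local function glyph_at(mx, my)
  local font = love.graphics.getFont()
  for _, g in ipairs(glyphs) do
    local w, h = font:getWidth(g.text), font:getHeight()
    if mx >= g.x and mx < g.x + w and my >= g.y and my < g.y + h then
      return g
    end
  end
end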

Also, my original editor has always supported picking up the shapes in a drawing and moving them around. That part was easy to come up with as I recall.

So I guess it doesn’t feel like that much of a leap for me between a) what seems to you like the granite and chisel of my vector graphics, and b) what seems to me like your notion of symbolic usage. The key is that I never track the bitmaps that result from my vector drawing primitives, and I never relinquish the vector primitives that lead to a drawing.

2 Likes

functions that don’t automatically get access to all of your program’s environment space! which even languages like Lua still don’t give us!

I don’t quite follow this either… I do have closures in Lua, if that’s what you mean.

Yes, scripting languages like Lua and Javascript have closures. But that’s not the quality of “safety” that I’m referring to, and which I think is essential for any language which can be considered suitable for sending information between mutually untrusting systems. For example, for the use case of serialising graphics or hypertext documents.

The quality I mean is: it needs to be absolutely and provably impossible, within a phrase of such a language (let’s say a function, though phrases still might not always be functions), to access any variable which is not either explicitly passed to that phrase as a parameter, or explicitly passed into the environment used to evaluate that phrase.

If we’re going to exchange information between systems about structured graphics and we’re going to do it as executable Lua or Javascript code, then that code must run in an absolutely tight sandbox. It absolutely MUST NOT be able to access the builtin “system” or “OS” or even “current webpage” variable which is usually available on these scripting systems. It obviously must NEVER IN A BILLION YEARS be able to access raw RAM, the filesystem or the Internet! It also must not be able to find out what’s on the call stack, what other tasks are running, or any debugging states. It probably also shouldn’t get access to a system clock at high precision, read the keyboard or mouse, read the contents of the screen or the object structures used to draw the screen, and it must not be able to enter an endless loop or allocate RAM to exhaustion. It would be best if it just can’t access anything other than a few highly specified graphics primitives and maybe some control/looping constructs.

Maybe the general-purpose scripting language of your choice, like Lua, can do all these things if the code is evaluated through a custom environment. Maybe Lua in particular has been battle-tested in multiplayer gaming. Hopefully. But scripting languages are often not our friends in the security battle; they often “helpfully” provide access to far too many objects. That’s what created Office macro viruses - and why all corporations now warn their staff never to open PDF or DOC attachments from unknown senders.

We don’t want to become Microsoft Office. Even PDF I think removed some Turing-completeness from Postscript in order to become a somewhat safe graphics/document exchange format. Some kind of hard limit on the depth of recursion or stack/RAM allocation is probably a requirement in this kind of case.

1 Like

Ah I see. Yes, Lua doesn’t provide this level of safety by default. But it is possible to sandbox Lua code. (That repo uses Lisp syntax, but it gets translated fairly straightforwardly to Lua, so the capability of capabilities (heh) is available to Lua as well in principle. A more basic example in straight Lua.)
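To make that concrete, here’s a basic sandbox sketch in Lua 5.2+, where load() takes an explicit environment table, so the untrusted chunk can only reach what we put there:

-- The untrusted code sees only these drawing primitives:
-- no io, no os, no require, no love, no _G.
local sandbox_env = {
  line   = function(x1, y1, x2, y2) love.graphics.line(x1, y1, x2, y2) end,
  circle = function(x, y, r) love.graphics.circle("line", x, y, r) end,
}

local untrusted = [[
  line(0, 0, 100, 100)
  circle(50, 50, 25)
]]

-- "t" restricts the chunk to source text (no precompiled bytecode).
local chunk, err = load(untrusted, "user-drawing", "t", sandbox_env)
if chunk then
  local ok, runtime_err = pcall(chunk)
end

Note this only bounds which names the code can reach, not CPU or RAM; for the endless-loop and allocation concerns above you’d still need something like an instruction-count hook (debug.sethook) or an OS-level limit.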

3 Likes

I spent a while thinking about sandboxing for my Teliva project in 2021/2022. I gave up on it when I realized the hard part is not technical but social. A scripting language can absolutely be made rock solid at running untrusted code. But nobody in today’s society is interested in using something that rock solid, or in putting up with the level of inconvenience it entails. In some ways Freewheeling Apps are my attempt at carving out a stepping stone to a more mature society of laypeople who internalize why security matters.

1 Like

Yeah, the

setfenv(user_script, env)

thing in Lua is part of the answer, I think.

Ideally there’d just be an “eval(string, env)” and you couldn’t blanket “eval” something in the current environment, because 999 times out of 1000 in scripting, that’s a terrible life decision which you will regret.

However I think something even stricter might be good in a language. Something like, all names inside a function/method/block/namespace are local, and you literally cannot access any variables in the enclosing scope unless you do it through a single reserved “global” name. This wouldn’t just be for safety/security, but also to make it easy to define domain-specific languages which can also be used as interactive control languages. Smalltalk maybe has this property?
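In Lua terms, that stricter discipline can be approximated today - a hedged sketch where any undeclared name raises an error and the enclosing world is reachable only through a single reserved global table:

-- Every name must be local, a parameter, or reached via `global`.
local function strict_env(world)
  return setmetatable({global = world}, {
    __index = function(_, name)
      error("undeclared name: " .. tostring(name), 2)
    end,
    __newindex = function(_, name)
      error("undeclared name: " .. tostring(name), 2)
    end,
  })
end

local chunk = load([[
  local x = 1 + 2
  global.print(x)   -- fine: goes through the reserved name
  print(x)          -- error: undeclared name
]], "demo", "t", strict_env({print = print}))
pcall(chunk)

(Lua 5.2’s load(string, name, mode, env) is essentially the “eval(string, env)” wished for above; setfenv is the 5.1 spelling of the same idea.)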

1 Like