UXN and Varvara

Hi all. Been reading for a while, but this is my first post here.

I wonder if anyone has come across the Hundred Rabbits “UXN/Varvara” virtual machine, and what your thoughts might be on it as a platform for malleable software?

My feelings are:

  • It’s basically an idealised 6502 (UXN) plus an idealised Atari 800 / Commodore 64 (Varvara) to run on it.
  • It’s designed for super-small 8-bit binaries for the arts and demoscene.
  • It’s really fun to play with, but the 64K limit means it’s not actually that practical.
  • Like all Forthlike systems, it doesn’t do any memory protection beyond the “entire machine” level.
  • One could potentially build a personal local cloud of a whole bunch of UXN machines, and build a malleable OS that way, but again, the 64K-per-machine wall seems like a bit of a nonstarter.

I like what UXN is doing, I like the minimalist simplicity vibe, but sadly because of these limitations I don’t think it’s a potential candidate for a “100 year computer” for personal data and software preservation. But it might be worth examining for ideas.

2 Likes

Hi Nate! I’m very happy to see you here.

I totally agree with all your bullets – except the word “practical”. It seems plenty practical for the (surprisingly large number of!) people (that doesn’t include me, though I hang out on Merveilles.town) who use it for various purposes. You’re probably right that it can’t be someone’s sole daily driver and home for all their personal data.

On the software preservation front – it seems really easy to preserve? It probably doesn’t help preserve other platforms – but it also doesn’t dig us deeper into the problem. That feels like a useful contribution.

Malleability certainly doesn’t depend on daily driver use cases or even software preservation, so I tend to consider Uxn (and PICO-8 and other hobbyist projects like that) to be de facto malleable. They’re so small, and the development experience is so intertwined with the user experience, that they easily fulfill the definition of Malleable Systems.

If I might attempt a provocative punchline: one way to make software as easy to change as it is to use is to… make it harder to use :smile:

2 Likes

I think we need to ban from our minds the idea of “sole” whatever in the realm of software. A malleable software toolbox will always contain multiple tools. A more interesting question about UXN (which I admire but don’t use) is: how well does it play with “outside” technology? Can you make small tools in UXN that interface with non-UXN tools in a sufficiently convenient way?

That’s actually something I say a lot, though in less provocative flavors. “Developer” and “user” roles need to meet halfway. And malleable software needs to provide a good learning path, from occasional user to power user.

2 Likes

I think we need to ban from our minds the idea of “sole” whatever in the realm of software. A malleable software toolbox will always contain multiple tools.

I’m inclined to agree, but one note of caution: every new tool will tend to push someone away.

Can you make small tools in UXN that interface with non-UXN tools in a sufficiently convenient way?

Devices are how Uxn creates extensible interop. Someone has to build a new device in C++ or whatever the host language is, but there’s a reasonable variety to choose from at this point. Uxn can use up to 16 devices at a time. I believe programs need some out-of-band way to specify the devices they require, which has a cost in portability. (I can imagine that the GBA port of Uxn might not support every possible device.)
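
To make that concrete, here is a minimal sketch of host-side device dispatch, loosely in the style of the C emulators; the names and signature are illustrative, not the actual uxn API. The VM latches the written byte into device memory, then lets the host react to the port:

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t ram[0x10000]; /* 64K of working memory */
    uint8_t dev[0x100];   /* 16 devices x 16 ports of device memory */
} Uxn;

/* The VM calls this after a DEO instruction writes `value` to `port`. */
void emu_deo(Uxn *u, uint8_t port, uint8_t value)
{
    u->dev[port] = value;
    switch (port & 0xf0) {          /* high nibble selects the device */
    case 0x10:                      /* Varvara's Console device */
        if ((port & 0x0f) == 0x08)  /* port 8 is Console/write */
            fputc(value, stdout);
        break;
    /* a new host-side device is one more case in this switch */
    }
}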

1 Like

First impression: a small-scale implementation of the Plan 9 idea of “everything is a file”. Interesting!

In software, yes. I think that’s something that should change. New tools are scary because of complexity, dependencies, and churn. People do adopt physical tools with much less hesitation.

First impression: a small-scale implementation of the Plan 9 idea of “everything is a file”. Interesting!

Kind of, yes. More precisely, like “everything is a combination of up to 14 6502 INP/OUT ports and one hardware interrupt vector”.

Some UXN devices just transmit one byte of data at a time; others take a RAM address and memory-map up to almost the entire 64K space (i.e., the memory extension device).

2 Likes

Hi all,

It’s nice seeing a Uxn thread on here. :slight_smile:

I like what UXN is doing, I like the minimalist simplicity vibe, but sadly because of these limitations I don’t think it’s a potential candidate for a “100 year computer” for personal data and software preservation. But it might be worth examining for ideas.

I’d like to share Kragen’s critique of Chifir, which informed the design of Uxn:

Bootstrapping instruction set

Chifir is inspirational. It’s the first archival virtual machine good enough to criticize. You can implement the CPU in a page of C (my version is 71 lines and took me 32 minutes to write), and then all you need is screen emulation, which took me another 40 lines and 28 minutes.†

Archival virtual machines have applications beyond archival. Independent implementations of the virtual machine defeat Karger–Thompson attacks at the virtual machine level and potentially the hardware level as well. And a stable deterministic virtual machine enables the publication of reference implementations of other virtual machines and deterministic rebuilds of derived files, such as compiler output, database indices, or raw pixels output from an image codec.

Alas, Chifir is not good enough to actually use as a general-purpose archival virtual machine, which of course is not its intended purpose. Any of the following problems would by itself be a fatal flaw (for a general-purpose archival virtual machine):

  1. The specification does not include test cases, so there is no way to debug a candidate implementation, and in particular to easily disambiguate the natural-language specification of the instruction set. In fact, as far as I can tell, not even the Smalltalk-72 image described in the paper has been released. This is important because when I added screen output to my implementation of Chifir, my first test program was a single-instruction infinite loop, which had a bug and also found a bug in my implementation of Chifir.
  2. Also, the instruction set contains unspecified behavior, including access to out-of-bounds memory addresses, execution of any of the 4294967280 undefined opcodes, division by zero, the appearance of pixels that aren’t all-zeroes or all-ones, and arguably the rounding direction of division. Unspecified behavior is a much worse problem for archival virtual machines than for the systems we’re more familiar with, because presumably the archaeologist implementing the virtual machine specification has no working implementation to compare to.
  3. The straightforward implementation of the instruction set using for (;;) { ... switch(load(pc)) {...} } is unavoidably going to be pretty slow, because the instructions only manipulate 32-bit quantities. Getting reasonable performance would require a JIT-compiler implementation, which is complicated by the need to invalidate and recompile self-modifying code.
  4. Input is blocking; it is impossible to read keyboard input, which is the only kind of input, without blocking. That means that Chifir will, at any given time, be either unresponsive to the user or unable to execute any code until the user types a key.
  5. The instruction set has absurdly low code density, requiring 128 bits per instruction. This is a big problem for archival virtual machines because archival media are many orders of magnitude lower density than media customarily used for software interchange.
  6. Although the specification specifies the bit order of the image when stored as individual bits (as in their proposed physical medium), it does not specify an image file format as a series of bytes. In my implementation, I chose to represent the image in binary big-endian format, with the most significant byte first, but a different implementor might make a different choice, such as representing the image in a sequence of the ASCII digits “0” and “1”. Such differences would produce unnecessary incompatibility at the file level between implementations, if not at the micro-engraved disc level.
  7. Chifir has no provision for access to storage media; the only I/O is the screen and keyboard.
  8. Because Chifir has no clock, no timer interrupts, and no keyboard interrupts, there is no way within the Chifir system to regain control from an accidentally-executed infinite loop.

However, it’s worth pointing out some things Chifir got right:

  1. Unlike Lorie’s UVC, the Chifir virtual machine has well-defined arbitrary implementation limits and overflow behavior, so it should be straightforward for all implementations to have the same arbitrary limits. This will avoid incompatibilities where a virtual machine image works on one implementation of the virtual machine (perhaps the one used by an archivist) but fails for obscure reasons on another (perhaps the one written by an archaeologist.)
  2. Like BF, and unlike most virtual machines, Chifir does succeed in being implementable in an afternoon, being a “fun afternoon’s hack”. In fact, it vastly overshoots this goal: an afternoon is three to eight hours, depending on your definition, and it took me 32 minutes to implement Chifir without a display and an hour to implement Chifir with a display. (The COCOMO model used by David A. Wheeler’s ‘SLOCCount’ instead estimates it would take a week, but it’s usually wrong in such a way for such small programs.) Chifir could be about three times more complicated without overshooting the one-afternoon budget, especially if the implementor has known-good test programs to work with instead of having to concurrently write and debug their own. Maybe a complexity budget of 512 lines of C, a bit under eight pages, would be a reasonable absolute maximum.
  3. Chifir’s CPU lacks many edge-case-rich features present in other CPUs, which are common compatibility stumbling blocks. For example, it doesn’t have floating point, flag bits, signed multiplication and division, different memory access modes, different instruction formats, different addressing modes, for that matter any general-purpose registers, memory protection, virtual memory, interrupts, different memory spaces (as in modified Harvard architectures like the Hack CPU). Similarly, its I/O specifications are ruthlessly simple, leaving little room for compatibility problems. It’s very likely that a straightforward implementation of the specification will be, if not absolutely right, at least almost right.
  4. In part because Chifir lacks such corner cases, nothing stops you from writing a multithreaded program or JIT compiler in Chifir code.
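
To make point 3 of the flaws concrete, this is roughly the shape of such a straightforward dispatch loop; a minimal C sketch with invented opcodes, not Chifir’s actual instruction set:

#include <stdint.h>

/* Invented opcodes for illustration; not Chifir's actual encoding. */
enum { OP_HALT, OP_ADD, OP_JMP };

void run(uint32_t *mem, uint32_t pc)
{
    for (;;) {
        switch (mem[pc]) {   /* fetch and decode paid on every step */
        case OP_ADD:         /* mem[a] = mem[b] + mem[c] */
            mem[mem[pc + 1]] = mem[mem[pc + 2]] + mem[mem[pc + 3]];
            pc += 4;
            break;
        case OP_JMP:
            pc = mem[pc + 1];
            break;
        case OP_HALT:
        default:             /* undefined opcode: halt, don't spin */
            return;
        }
    }
}

Every 32-bit operation round-trips through that fetch/decode overhead, which is why a plain interpreter stays slow without a JIT.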

Uxn Design

I’ll put some notes down for anyone reading who’s interested in why the design looks the way it does.

First, Uxn doesn’t have a limit of 64KB (for example, Oquonie is 500KB). It does have a bound of 64KB of addressable working memory, but that’s only a stretch of fast memory: it can read and write massive files (my wiki, for example, is hundreds of pages to parse in uxn), and most implementations have at least 16 pages of RAM, even on small devices.
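
To illustrate how massive files coexist with 64KB of working memory: a hedged sketch of a host servicing a File-device read (invented names, not the actual Varvara implementation). Only the chunk being parsed has to fit in RAM, never the whole file:

#include <stdint.h>
#include <stdio.h>

/* Host side of a read request: the rom asked for `len` bytes at
   ram[addr]. Successive calls walk through the file, so a rom can
   parse a multi-megabyte file through a buffer of a few hundred
   bytes. (Bounds checks omitted for brevity.) */
uint16_t file_read(FILE *f, uint8_t *ram, uint16_t addr, uint16_t len)
{
    return (uint16_t)fread(ram + addr, 1, len, f);
}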

Uxn has no undefined behaviors, a massive test suite, and a self-hosted assembler. The entire spec fits on a single piece of paper (double-sided, perhaps), and the cell length is specified and kept small so we can target a variety of consoles.

I wrote a devlog which goes into detail about type checking, devices and other things for anyone who’s interested to learn more.

4 Likes

@neauoire I tend to think of Uxn as having a 64KB limit to the heap. Perhaps you’d even agree that Uxn discourages a heap?!

1 Like

I wouldn’t discourage a heap; I think some programs do well using something like that if they need to. What I normally use doesn’t need it, so I’ve never had to mess with it. The more canonical usage of memory with uxn, to me, would be something like Porporo, which gives each uxn instance some memory to do its little tasks, and then takes it back.

I like to think of a computing system that would make use of hundreds of uxn instances, where each has limited memory to do a task, like cells in a larger organism; each uxn cannot interfere with the memory of the others, like a bit of protected state.
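
A sketch of that shape in C, with invented names rather than Porporo’s actual code: the host lends each instance a private bank and reclaims it when the task is done, so no instance can reach another’s memory:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BANK 0x10000 /* each instance gets a private 64K bank */

typedef struct {
    uint8_t *ram; /* invisible to every other instance */
    /* ...stacks, program counter, device state... */
} Uxn;

/* Lend a fresh bank to a new instance and load its rom at 0x0100. */
Uxn *spawn(const uint8_t *rom, size_t len)
{
    Uxn *u = calloc(1, sizeof *u);
    u->ram = calloc(1, BANK);
    memcpy(u->ram + 0x0100, rom, len); /* assumes len <= BANK - 0x0100 */
    return u;
}

/* Task finished: take the memory back. */
void reclaim(Uxn *u)
{
    free(u->ram);
    free(u);
}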

4 Likes

For sure, heaps are not always needed or a good idea. And we tend to not look for alternatives if it’s really easy to apparate a literal object in the midst of our code. So it’s an excellent creative constraint.

1 Like

That view invites a comparison with Alan Kay’s ideas that led to Smalltalk. I don’t know enough about uxn to do this comparison myself, but I can comment on where Smalltalk falls short of the analogy with cells in an organism (beyond the fundamental issue of design vs. evolution):

  • Smalltalk objects have no membrane. Only the state of the object is protected against direct access from the outside, but the internal machinery, i.e. the code that defines the object’s behavior, is fully accessible and discoverable from outside.
  • Synchronous communication prevents the massive parallelism of cells in organisms.
  • Smalltalk objects permit arbitrary connectivity, as opposed to cells which communicate in a very limited spatial environment.

From my limited knowledge of uxn, I’d say it does a lot better on the first point, but is equal in terms of connectivity. As for parallelism, I can’t say. Does that look like a fair comparison?

2 Likes

Porporo kinda reminds me of something Alan Kay said about how computers should be subdivided into smaller computers instead of weaker things like functions and data structures.

I’ve recently been thinking that the Smalltalk Image is the membrane, or in other words, is an Actor of some kind. Building software out of collaborating Images/Actors where objects are just what the Image/Actor is made from is something that interests me.

Interesting. Seen from the outside, a running Smalltalk image is just another OS process. Which specific Smalltalk features do you plan to harness to do something better than interprocess communication at the OS level?

An interesting one would be serialization via Fuel. You can send arbitrary object graphs from one image to another, and that works surprisingly well in practice.

Development tools could also be extended to clusters of images, using something like TelePharo.

1 Like

I’ve been looking at a few possibilities but one reason I’m looking at Smalltalk is that it could be made recursive. Images can be actors and can contain actors. Similar to the Vat idea in E-lang. A bit like how Porporo is sorta like a rom that contains roms.

Thanks for the pointer to TelePharo, I hadn’t seen that.

Another reason is, of course, that I really like Smalltalk :slight_smile:

2 Likes

(How did I end up accidentally deleting that? Oops!)

Hi! It’s great to see the UXN developers here! And UXN is one of the nicest little minimalist VMs I’ve seen so far.

I don’t quite understand the claim that UXN “doesn’t have a limit of 64kb”, though. Following this logic, wouldn’t we have to say that the Commodore 64 in 1982 was not a 64kb machine because you could plug a disk drive into it and implement a virtual memory paging system? And many 1980s programs, such as the Infocom games with their ZIL interpreters, did do this in order to fit their program inside the RAM limit. Yet we do in fact say that the Commodore 64 was a 64k machine, and that RAM size limit did complicate programming the Commodore a lot. I remember having to use disk files to fit the text descriptions of an adventure game in interpreted GW-BASIC on the IBM PC, because that had a hard 64K limit too (and it also didn’t have any decent data literal support). It wasn’t much fun to do that, and I don’t miss that part of 1980s programming.

I love UXN for what it is. It’s a much better 6502 than the 6502! But the 64k literal-RAM limit, and what I would need to do to work around it, has personal significance to me because the two current malleable projects I’m hacking on involve databases which, just in the size of the data, are much larger than 64K.

These two projects are: a Chinese language dictionary (consulting the Unicode Unihan database, Babelstone character decomposition database and CC-CEDICT word dictionary: 45 megabytes of JSON just in data), and a 3D star map (consulting the AT-HYG dataset: 330k stars, 64 megabytes CSV in its smallest version, which is already a tiny subset of the full 2.5 million stars database).

And these are “toy” datasets that I’m playing with interactively, building visualisations for my personal pleasure and learning. My malleable data exploration platform of choice is currently Node.js, because it can suck this amount of data into RAM as a single array of objects and just query it instantly with one-liner functions. Node’s not great, but it’s the best simple and widely available tool that I feel that I have right now. I suppose if I knew Smalltalk I might be using Pharo for this kind of dataviz (and that’s a project I might look at to learn me some Pharo).

If I were to try to put this amount of data into UXN… I feel like I would be spending a good half of my time at least coding up some kind of RAM paging layer and database, rather than just exploring my data. If UXN didn’t have the 64k RAM limit, and I could just whoof it all into contiguous RAM in one go, that would be one whole problem I just wouldn’t have to face.

Maybe Porporo could help with this scenario? Or would I still have to design a database of some kind?

I had a few specific applications in mind when I designed Uxn; it’s definitely not a good host platform for doing number crunching, big JSON databases, AI, blockchain, cryptography, and that sort of thing. You’re better off with 64-bit systems if you want that kind of contiguous memory.

There’s someone who built a kind of UTF-8 support for Varvara, but I can’t imagine using it myself. Uxn is quite incapable: it has no signed integers, no floats, no registers, no overflow flags. It’s really just two stacks and a PC. There’s a depth of tar there that just won’t accommodate this high-level programming stuff you’re interested in; it’s best to stick to JavaScript for that.

A friend of mine built a more capable VM aimed at doing more standard computing, called UVM. You might enjoy it.

1 Like

I went and downloaded the latest Uxn32.exe, which now seems to run Drifblim in command line mode ok. So that’s good! I guess I was more than six months out of date before. Which is okay, I haven’t been trying to do anything with UXN in the meantime, and writing my own assembler was fun.

But now that I’m up to date, I’m getting assembly errors when I use macros.

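( hello.tal: print the letter A to the Console device via the COUT macro )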
%COUT { #18 DEO }
|0100
#41 COUT
BRK

with
uxn32.exe drifblim.rom hello.tal hello.rom

usage: drifblim.rom in.tal out.rom
!! Error: Reference %COUT in RESET
!! Error: Reference COUT in RESET
Listening..

Are macros still even a thing? Has the macro syntax changed completely, now that lambdas are using curly braces? I’m not imagining things, there used to be macros that started with percent signs and used curly braces, and lambdas didn’t exist six months ago… right?

Drifblim is a bit more limited than uxnasm in terms of pre-processor runes (no macros and limited includes). Anything that compiles with Drifblim will compile with uxnasm, but not the other way around.

The Drifblim limits (token length, etc.) were chosen to be as portable as possible.

1 Like

Aha! So no macros in Drifblim. That explains that then.