Reversibility

One aspect that comes up a lot, at least in the circles I lurk in, is reversibility: being able to undo computation. Some models of computation are better at achieving it than others, and I think a malleable system would benefit from some form of it.

If we believe that a future computer is one that more closely follows the laws of the natural world, in which nothing is created and nothing is lost, then this future computer might have this particularity.

Reversible computing is a model of computation in which the computational process is, to some extent, time-reversible.

As far as anyone knows, the laws of physics are reversible: that is, you can run them in reverse and recover any earlier state of the universe. This means that from a physical perspective, information can never be destroyed, only shuffled around. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic.


Reversible Logic

The first condition for any deterministic device to be reversible is that its input and output be uniquely retrievable from each other. This is called logical reversibility. If, in addition to being logically reversible, a device can actually run backwards then it is called physically reversible and the second law of thermodynamics guarantees that it dissipates no heat.
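As a sketch of what logical reversibility looks like in code (Python here, with names chosen only for illustration), the Toffoli gate below is bijective on its three input bits, so applying it twice recovers the original state, and it can still compute an irreversible function like NAND by carrying the extra bits along:

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c when both a and b are 1.
    It is its own inverse, so the inputs are uniquely recoverable."""
    return a, b, c ^ (a & b)

# Applying the gate twice returns the original bits.
for bits in [(0, 0, 1), (1, 1, 0), (1, 0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

# NAND, normally irreversible, can be computed reversibly: start c at 1
# and the NAND of a and b lands in c, while a and b pass through unchanged.
a, b = 1, 1
_, _, nand_ab = toffoli(a, b, 1)
print(nand_ab)  # 0
```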

Energy Consumption of Computation

Landauer’s principle holds that any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment.

Microprocessors which are reversible at the level of their fundamental logic gates can potentially emit radically less heat than irreversible processors, and someday that may make them more economical than irreversible processors. The Pendulum microprocessor is a logically reversible computer architecture that resembles a mix of PDP-8 and RISC.

The promise of reversible computing is that the amount of heat loss for reversible architectures would be minimal for significantly large numbers of transistors. Rather than creating entropy (and thus heat) through destructive operations, a reversible architecture conserves the energy by performing other operations that preserve the system state.

An erasure of information in a closed system is always accompanied by an increase in entropy, and therefore by a minimum amount of energy dissipated as heat.
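For a sense of scale, here is a back-of-envelope evaluation of the Landauer bound (kT·ln 2 per erased bit) at room temperature, written as a small Python sketch rather than a claim about any particular hardware:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

landauer_bound = k_B * T * math.log(2)   # minimum heat per erased bit
print(f"{landauer_bound:.2e} J per erased bit")            # ~2.9e-21 J
print(f"{landauer_bound * 1e9:.2e} W at 1 Gbit/s erased")  # ~3e-12 W
```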

Linear Logic

A linear system excludes combinators that have a duplicative effect, as well as those that have a destructive effect. A linear logic computer language avoids the need for garbage collection by explicit construction and destruction of objects. In a reversible machine, garbage collection for recycling storage can always be performed by a reversed sub-computation. The “dangling reference problem” cannot happen in a linear language because the only name occurrence for an object is used to invoke its destructor, and the destructor doesn’t return the object.

Most Forth operators take their operands from the top of the stack and return their values to the top of the stack. The language’s absence of variable names is characteristic of [combinators](file:///home/neauoire/Git/oscean/site/logic.html); programming in Forth can therefore be seen as the construction of larger combinators from smaller ones. A Forth which incorporates only stack permutation operations like swap, rotate, and roll must be linear, because it has no copying or killing operators.
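To make the linearity point concrete, here is a minimal, hypothetical stack-machine fragment in Python that offers only permutation operators; because nothing is copied or dropped, every word is a bijection on the stack and has an obvious inverse:

```python
def swap(stack):
    # ( a b -- b a ): its own inverse
    *rest, a, b = stack
    return [*rest, b, a]

def rot(stack):
    # ( a b c -- b c a ): undone by applying it twice more
    *rest, a, b, c = stack
    return [*rest, b, c, a]

def unrot(stack):
    # inverse of rot: ( b c a -- a b c )
    return rot(rot(stack))

s = [1, 2, 3, 4]
assert swap(swap(s)) == s
assert unrot(rot(s)) == s
# There is deliberately no dup or drop: copying and killing operators
# are exactly what would break linearity (and reversibility).
```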

  • NREVERSAL of Fortune, Henry Baker on the thermodynamics and meaning of garbage collection in Lisp systems.

I’m also into reversibility. Both in terms of “undo” operations as well as for more domain specific reasons.

I think tracking bijective functions, similar to reversible logic, would at least help in making decisions based on whether computations are reversible or not. Reversibility could be extended further by lifting non-bijective operations into reversible ones: for example, you could preserve the original inputs during computation, enabling a reversible path even for operations that typically lose information.
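One way to read “preserve the original inputs” is the standard lifting trick: pair a lossy function with its input so the pair becomes bijective. A minimal Python sketch (names are illustrative only):

```python
def lift(f):
    """Lift a possibly non-bijective f into a reversible operation
    by carrying the original input along with the output."""
    def forward(x):
        return (x, f(x))
    def backward(pair):
        x, _ = pair
        return x
    return forward, backward

# int() loses information on its own...
forward, backward = lift(lambda x: int(x))

# ...but the lifted version can always be undone.
state = forward(3.75)        # (3.75, 3)
assert backward(state) == 3.75
```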

I’d love to see more emphasis on relations rather than just forward computation. Reversible systems should be able to handle cases where you know the output and want to infer possible inputs. It would be interesting to see systems that let you specify an “output” and compute backwards to explore which inputs could have generated it. I think this is useful in a malleable context, where you may have part of a problem defined but not strictly in an operational order.
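A crude way to play with this today is to treat a function as a relation and enumerate preimages over a finite domain; the sketch below (plain Python, purely illustrative, and clearly not scalable) shows the “given the output, infer the inputs” direction:

```python
def preimages(f, domain, output):
    """Enumerate every input in `domain` that f maps to `output`."""
    return [x for x in domain if f(x) == output]

# Which pairs of small naturals sum to 7?
domain = [(a, b) for a in range(10) for b in range(10)]
print(preimages(lambda p: p[0] + p[1], domain, 7))
# [(0, 7), (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1), (7, 0)]
```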


Reversibility is very interesting, but I have one very large worry about it (and the same problem applies to pure functional systems): How does it handle secrets, like encryption key material and authorization codes?

Because those are pieces of data which absolutely must be erased during the process of safe handling. Any malleable computing system which does not allow secrets to be irreversibly erased is going to have a really bad time when it comes into contact with today’s Internet.

Multi-level caches also make my eyes water for the same reason. Any time there’s a cache, there are probably secrets inside being inappropriately stored. And yet for legitimate performance reasons, our entire computing ecosystem is a series of caches, from the CPU and RAM up to the Web and Cloud, and I’d generally like there to be more and more permanent caching (especially at the personal / home level) rather than less.

I feel like “secrets” are almost something that needs a very low-level type tag in the VM to say “yeah, don’t allow these to be stored or copied past a certain boundary in space or time”.

Reversible computation does not imply full disclosure of the results. If you perform an encryption reversibly, you get two results which together permit reconstruction of the input. You are free to send only one of them to the outside world. Communication isn’t going to be reversible anyway.
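A toy illustration of “two results which together permit reconstruction”, assuming a one-time-pad style XOR in Python (a sketch of the idea, not a usable cipher): only the ciphertext needs to leave the system, while the retained key is the other half of the reversible output.

```python
def encrypt(plaintext: bytes, key: bytes):
    # Reversible step: the pair (ciphertext, key) determines the input.
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = encrypt(b"hello", b"\x13\x37\x42\x99\x01")
# Send only `ciphertext` to the outside world; keep `key` local.
assert decrypt(ciphertext, key) == b"hello"
```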

Thanks for this nice overview of the current state of reversible computation!

Just a minor nitpick:

As a physicist, I just have to disagree. It’s the laws of mechanics and the laws of electromagnetism that are reversible. But there’s also thermodynamics that very explicitly states that in a large system, nothing is fully reversible.

Yes, this does imply that mechanics and thermodynamics are in contradiction. A lot has been written about this. My point of view is that the contradiction comes from the unjustified belief that physical laws are universally and exactly true. All we know is that they are true to the level of precision of our instruments, and for the phenomena on which we can test them. The set of testable phenomena for mechanics and thermodynamics have almost no overlap.


Have a look at this video, which explains how dynamical systems that try to achieve this reversibility actually introduce constraints that are not part of the physical system.

On the other hand, reversibility is very useful and at the same time, caring for the environment is a very important cause, so I appreciate the motive.


ah! thank you, yes for some reason I’ve always considered the laws of thermodynamics as part of fundamental physics, but maybe they’re not part of physics, just taught in physics classes ^^;

In any case, IO is not reversible, and communication is IO. The communication’s reagents would just be part of the pool of resources during the reversibility process.

Not to say that it’s not susceptible to side-channel attacks or leaking credentials, but I think you can often parametrize your functions on these secrets. So you could have the secrets be necessary to run in the forward and backward directions but not present in the persisted or visible data.
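A sketch of “parametrize your functions on these secrets” (hypothetical names, Python): the secret is an argument to both directions of the computation, so it never has to appear in the persisted, reversible state.

```python
MODULUS = 2**32

def forward(state: int, secret: int) -> int:
    # A reversible transformation keyed on the secret.
    return (state + secret) % MODULUS

def backward(state: int, secret: int) -> int:
    # The exact inverse, also keyed on the secret.
    return (state - secret) % MODULUS

secret = 0xC0FFEE               # held outside the reversible store
persisted = forward(42, secret)
assert backward(persisted, secret) == 42
# Only `persisted` is kept in the reversible history; the secret itself
# is a parameter and can be erased through ordinary, external means.
```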


As a non-physicist - I agree that physical laws are only “fundamental” insofar as we can empirically verify them, as just a baseline definition of science. Although I wouldn’t go so far as to say that the universality of laws/principles is unjustified. I would characterize the consensus view as fair: simple principles formalized and tested several centuries ago have held up to much more sophisticated inquiry to a shocking degree, justifying high degrees of confidence.

But the questions of what is or isn’t reversible and consequently whether theories are in contradiction seem totally unclear. Physicists still don’t agree on why wave functions collapse, or even if collapse is real, and collapse appears fundamentally irreversible.

Are there even half-baked prototypes of reversible computing? (And no, QCs do not count)

One would expect a huge number of practical challenges to reversible computing. The physical elements already seem effectively impossible. On top of that, transforming traditional programs naively would also be impossible, given the amount of copying/erasing involved.

The incentive for (near-)zero-power computing is obviously in the trillions of dollars, so I would not expect the absence of success to be a simple miss by PhDs and investors…

Multiset rewriting is my favourite way of playing with reversible computing.
Try the playground here.

The laws of physics are intrinsically reversible at a microscopic level, in contrast to most models of computation, which are intrinsically irreversible. The discrepancy between the laws of physics and our computer models might be an indication that we are doing things wrong.

AC 19, fruit-cake
02 19 Ă— 119/19 = 119, apple-cake fruit-salad
01 119 Ă— 715/17 = 5005, apples apple-cake oranges cherries
00 5005 Ă— 30/7 = 21450, apples apples flour sugar oranges cherries

Fractran operators are reversible, meaning that some programs can run backward to their original state. Evaluation is undone by testing and applying rules with their right and left sides inverted. For a program to be reversible, two rules may not share the same right side.

:: a > c :: a b > c

The program below mirrors the issue: a compiler may throw a warning when the same left side is found in multiple rules.

:: a > b :: a > b c
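For anyone who wants to poke at this without the playground, here is a small, hypothetical multiset rewriter in Python: it applies rules forward, undoes them by swapping the sides, and checks the condition above, that no two rules share the same right side.

```python
from collections import Counter

def applicable(state: Counter, lhs: Counter) -> bool:
    return all(state[token] >= n for token, n in lhs.items())

def step(state: Counter, rules, backward=False) -> Counter:
    for lhs, rhs in rules:
        if backward:
            lhs, rhs = rhs, lhs            # undo: swap the sides
        if applicable(state, lhs):
            return (state - lhs) + rhs     # remove left side, add right side
    return state                           # no rule applies

rules = [(Counter("ab"), Counter("c"))]    # :: a b > c
state = Counter("aab")
after = step(state, rules)                 # {'a': 1, 'c': 1}
assert step(after, rules, backward=True) == state

# Reversibility check: no two rules may share a right side.
rights = [tuple(sorted(rhs.elements())) for _, rhs in rules]
assert len(rights) == len(set(rights)), "ambiguous backward step"
```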

Reversible computing is so far a mostly theoretical field. As far as I know, there are no low-energy computing devices yet that implement these ideas. Not even lab prototypes. The fight against entropy increase is much easier in theory than in practice.

Wave function collapse is just a fancy term for the well-known fact that any measurement perturbs the quantity being measured. In classical physics, that perturbation can in theory be made as small as you like, and therefore it is not considered a relevant problem by theoreticians. In quantum physics, the perturbation has a lower bound, so theoreticians have to deal with it. And that means that it needed some fancy jargon.

Maybe I wasn’t clear, I meant an actual hardware processor that performs physically reversible computation. The only benefit of reversible computing seems to be power efficiency, so only real hardware can prove its value.

To me, philosophical parsimony is far too subjective to be a convincing argument in favor of a computational model. Computers are no less physical than, say, trains.

Well, that’s what I thought, but I wouldn’t be surprised to have missed something. Hardware startups are popping up like weeds these days, many of them claiming to make some exotic computer or other.

My understanding of wave function collapse is that it’s not about perturbation, which I take to mean “sufficiently small interaction,” since the Schrödinger equation (or more generally the Standard Model) handles those just fine. Rather it pertains to the resolution of a probability distribution into a singular state.

It seems like a well-known fact that physicists absolutely do disagree on which interpretation of QM is true. There also appears to be no consensus on how important it is, i.e. whether there are experimental differences large enough to be measured and/or care about.


The discussion about the meaning of quantum mechanics started with quantum mechanics itself, a good century ago. Back then, everyone shared the view that quantum mechanics was the future theory of everything. The universe was supposed to be governed by a huge wave function. It was therefore philosophically important to understand its nature, irrespective of practical issues such as measurement devices, which were expected to evolve and improve.

The discussion became a lot more productive when people started to look at it pragmatically, in terms of what can be observed in experiments. The most important concept emerging from this shift was decoherence. But physicists never managed to disentangle this point of view from the continuing philosophical debate about the true nature of the wave function.

The main issue in the pragmatic debate is the nature of measurement. Measurement requires interaction between the system under study and some measurement instrument. This interaction must transfer sufficient information about the system, and that sets a lower bound to the perturbation implied. But then the agreement ends. Some say we could look at the system plus the instrument as a bigger quantum-mechanical system, and study its combined wave function. Which, unfortunately, nobody has been able to actually do for some real measurement device. In theory, this approach eliminates collapse as something fundamental. At the other extreme, some maintain that measurement requires a conscious observer, and that implies such a huge bunch of atoms that studying its wave function is not even envisageable.

Oh, do you mean like the Pendulum chip?


In a distributed setting, I think it is impossible to have reversibility: for one input you can have many outputs. You have to specifically design your system to abstract away the entropy of the system, so that any order of computation will eventually result in a single value.
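A common way to get that property is to make the merge operation commutative, associative, and idempotent, so every delivery order converges on the same value; here is a minimal Python sketch of the idea (not a full conflict-free replicated data type):

```python
from functools import reduce
from itertools import permutations

def merge(a: frozenset, b: frozenset) -> frozenset:
    # Commutative, associative, idempotent: set union.
    return a | b

updates = [frozenset({"x"}), frozenset({"y"}), frozenset({"z"})]

# Every delivery order yields the same final state.
results = {reduce(merge, order) for order in permutations(updates)}
assert len(results) == 1
```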


Interesting! Was this ever followed up by something like actually making and using such chips?

I agree. More generally, it is not obvious that for the tasks we use computing for today, entropy generation can be reduced significantly. On the other hand, the contrary isn’t obvious either, as long as nobody has made a serious effort trying. Put differently, this is an open research question.

This paper only describes the architecture. The hardware is (intentionally) completely omitted:

“Questions of implementation details, such as SCRL circuit realizations of functional blocks, clearly present substantial opportunities for future work, but are beyond the scope of this thesis.”

By hardware, I mean hardware. Measured performance per watt.

At a bare minimum - I would not be surprised if some research has achieved this - a simple logical circuit implemented with a reversible alternative to modern transistors.

To compare to “real” processors, I’d pick some commercially successful benchmark - e.g. the ESP32, which draws on the order of 1 W, was released in 2016, uses a 40 nm node which was state of the art in 2008, and does ~100 MFLOPS. Conservatively, let’s say 10 MFLOPS/W.

Modern consumer CPUs hit >1 GFLOPS/W (see “The GFLOPS/W of the various machines in the VMW Research Group”); the most efficient in 2024 are probably closer to 10 GFLOPS/W.
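To put those figures next to the theoretical floor discussed earlier, a rough comparison (assuming, very loosely, a few hundred bit erasures per floating-point operation) suggests how many orders of magnitude separate current hardware from the Landauer bound:

```python
import math

k_B, T = 1.380649e-23, 300.0
landauer = k_B * T * math.log(2)         # J per erased bit, ~2.9e-21

gflops_per_watt = 10e9                   # optimistic 2024 figure from above
joules_per_flop = 1 / gflops_per_watt    # 1e-10 J

erasures_per_flop = 300                  # assumed, order-of-magnitude guess
floor = erasures_per_flop * landauer     # ~8.6e-19 J

print(f"today: {joules_per_flop:.1e} J/FLOP, Landauer floor: {floor:.1e} J/FLOP")
print(f"gap: roughly {joules_per_flop / floor:.0e}x")   # about 1e+08
```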