History as a First-class Citizen

Interesting ! I would like to hear how your teacher responds to this project. :smiley:

1 Like

crossposting my ethics comment here

When you first posted vnstore, I felt something was still missing. So I created a versioned, collaborative authoring setup with transclusion and bidirectional reference (IPv6 no longer required). (What I cannot create, I cannot understand. Plus, I need a framework for my new website.)

Here are a few observations:

  • When we talk about history, we actually mean branching semantics as in version control. Not literal “branches” in Git vocabulary, but simply having more than one independent successor after a certain state. This forms a tree. If we want it to be useful, we would support merging, so it now forms a lattice. Along an ascending path in this lattice, everything cascades. And of course, the user must be able to refer to a past state of any state, i.e. one on its descending path in this lattice.
  • If you restrict the number of branches on any state to be at most 1, you get the usual linear undo history: making an edit (branch) on a past state clears the redo list (replaces any existing successor).
  • You specified cascading semantics with your natural language specifications. But there is one problem: when you add or del (make any change) on a past timestamp, you create a branch implicitly, and now multiple latest states exist. So, add and del should be defined to accommodate this.
  • vnstore supports version reference, and implicitly, branching. Branching is not in your syntax, and merging is not built-in (but can be defined).
  • The store (data structure) in vnstore does not do much. The complexity is completely in the function calls.
  • My website above supports branching through Git. But visitors can only see one path that I choose. Also, no past version of any content can be referred to within the system, though it is possible to implement this. The goal of my website is not a nonlinear one. I only wanted clear references.
  • Do we have conceptual holes in our discussion? What must we consider before history? For example, should we argue that a mutability scope must be defined, before proceeding to history? In vnstore, the graph is versioned. But are objects and attributes versioned? Additionally, are objects named after natural language phrases? If so, and if a name gets created, then deleted, and then created again, will it be the same object? On my website, any object identifier has at most one life. Recreating a dead object is prohibited. Example (IPv6 no longer required).
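The branching semantics above can be sketched in a few lines. This is a toy of my own (hypothetical names; not vnstore’s actual API): each state points to its parent, editing any past state implicitly creates a branch, and the “latest states” are simply the leaves of the tree.

```python
# A minimal sketch of history-as-a-tree (hypothetical API, not vnstore's):
# each new state points to its parent, and editing a past state
# implicitly creates a branch, so multiple "latest" states can coexist.

class State:
    def __init__(self, data, parent=None):
        self.data = data          # snapshot of the store at this state
        self.parent = parent      # None for the root state
        self.children = []        # more than one child == a branch

class History:
    def __init__(self, data):
        self.root = State(data)

    def edit(self, state, data):
        """Derive a new state from any past state; branching is implicit."""
        child = State(data, parent=state)
        state.children.append(child)
        return child

    def heads(self):
        """All 'latest' states: the leaves of the tree."""
        stack, leaves = [self.root], []
        while stack:
            s = stack.pop()
            if s.children:
                stack.extend(s.children)
            else:
                leaves.append(s)
        return leaves

h = History({"a": 1})
v1 = h.edit(h.root, {"a": 2})
v2 = h.edit(h.root, {"a": 3})   # edit on a past state -> a second head
assert len(h.heads()) == 2
```

Restricting each state to at most one child recovers the familiar linear undo: an edit on a past state would first discard the existing successor, i.e. clear the redo list.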

The paper got rejected (3 rejects, 1 neutral). I’m sending my reflections.

Most reviewers advised me to improve the writing. This included:

  • more explanation of the (non)linearity concept,
  • making the whole thesis less abstract,
  • pointing out related works (at least NixOS, which everyone knows),
  • correcting the inaccurate claim that “space in any environment is fundamentally fixed, unless explicitly allocated” in light of functional programming environments like Lisp. (Intriguingly, reviewers understood my Backus reference, yet still made this comment.)
  • and making it clear that the scope is limited to personal computing (which would require introducing the personal-nonpersonal computing demarcation). I am not sure how this could have been done in 3 pages, but I am not limited to conferences.

Some reviewers were worried about the absence of deletion. Deletion arrived in the last post. Reviewers, like some forum members here, referred to GDPR. I would like to clarify that GDPR does not apply to personal activity. This is a consequence of me not stressing enough that the scope is limited to personal computing. (It was written exactly twice and without explanation, so skipping it is understandable.)

Art. 2 GDPR
2. This Regulation does not apply to the processing of personal data:
(c) by a natural person in the course of a purely personal or household activity;

Other laws may apply, though.

There is one serious problem: the lack of argument, and especially the lack of qualification of existing arguments, around the connection between nonlinearity and intelligence. Because intelligence is such a hot and fascinating topic, everyone wants to comment on it, which the reviewers did, ending up in what I perceive as misunderstandings. The one non-technical argument I made was that (rewritten here with the benefit of hindsight) because intelligent creatures have a nonlinear capability, it is only reasonable for their tools to have a nonlinear capability too. Perhaps a philosophical argument was the wrong choice. My overarching argument is not very philosophical, although the philosophy is necessary to motivate it.

1 Like

I remember reading a hypothesis that creativity is intelligence. It’s been observed that creativity often involves non-linear thinking (not sequential or directly logical), making connections between distant concepts, recognizing emergent patterns. It’s related to inductive associative thinking.

Creativity involves associative thinking: linking concepts from memory. Computational models of semantic memory allow researchers to quantify associative thinking as movement through a semantic space of concepts.

Associative thinking reflects a search process operating on a semantic memory network structure. Highly creative people travel further in semantic space, switch between more semantic subcategories, and make larger leaps between associations.

Associative thinking at the core of creativity. Trends in Cognitive Sciences, Volume 27, Issue 7, 2023.

Importantly, in this model, the creative sequence does not start with inspiration; it starts with preparatory groundwork. In the epigraph, Milne—through Pooh—argued the same point: The author must first go where the ideas can find her. This place, one could argue, is a state of mind rather than a geographical location. It is this state of mind—or aspects thereof—that the present paper aims to explore.

This study on 138 undergraduate students used path analysis to investigate the relationship between creativity (interest, measured by a creative activities survey; and ability: fluency, originality, and elaboration) and different aspects of thought patterns presumed to influence the preparation and illumination phase of the creative process: habitual patterns of thought (ruminative brooding, ruminative self-reflection), thought suppression, thought intrusion, mind wandering, and associative ability.

Such relationship was hinted at in Wallas’s classical model of the creative sequence, but is rarely investigated. We found that creative behavior/interest was driven by self-reflection, thought intrusion, and the lack of a need for thought suppression; creative ability was fueled mainly by associative ability.

On Being Found: How Habitual Patterns of Thought Influence Creative Interest, Behavior, and Ability

That leads me to the following book, apparently a classic on the subject.

In The Art of Thought, Graham Wallas drew on the work of Hermann von Helmholtz and Henri Poincaré to propose one of the first complete models of the creative process, as consisting of the four-stage process of preparation (or saturation), incubation, illumination, and verification), which remains highly cited in scholarly works on creativity.

Poincaré and Helmholtz are giant polymath philosophers of the old school whose research covered a wide range of subjects. Much of their work is at the foundation of modern science, including: chaos theory, non-Euclidean geometry, theory of special relativity, radioactivity, gravitational waves, quantum mechanics; theories of vision and perception of space, colors, conservation of energy, thermodynamics, acoustics and aesthetics.

As an aside, that unbalanced closing parenthesis after “verification” is bothersome, I wonder how hard it is to edit this Wikipedia article.. Wow, I was able to do it without creating an account, and the change is already published. Before:

image

After:

image

That is so satisfying, malleability for the win.

Back to Wallas’ “classical model of the creative sequence”.

Helmholtz said that after previous investigation of the problem “in all directions… happy ideas come unexpectedly without effort, like an inspiration. So far as I am concerned, they have never come to me when my mind was fatigued, or when I was at my working table…. They came particularly readily during the slow ascent of wooded hills on a sunny day."

Helmholtz here gives us three stages in the formation of a new thought.

  • The first in time I shall call Preparation, the stage during which the problem was ‘investigated… in all directions’;
  • the second is the stage during which he was not consciously thinking about the problem, which I shall call Incubation;
  • the third consisting of the appearance of the ‘happy idea’ together with the psychological events which immediately preceded and accompanied that appearance, I shall call Illumination.
  • And I shall add a fourth stage, of Verification, which Helmholtz does not mention here.

It seems the role of non-linear thinking in creativity concerns the second stage - the ability to let your mind wander and make free associations - which leads to the illumination, the synthesis and crystallization of insight.

As computers and software are tools for thought, their design is meant to support the thinking process involving creativity and intelligence. That means recognizing and supporting the diverse ways of thinking that people utilize or prefer, as different approaches are useful at various stages of the creative process. Non-linearity is valuable in the incubation stage, but more rigorous logic, sequential argument and formality is necessary for the verification stage.

1 Like

This is an interesting observation, because I’ve seen something like this mentioned in ‘esoteric’ writings. Basically, the idea that to ‘get an answer from the Universe’ for a creative problem, one should first think hard about the problem and THEN ‘let go’ (relax and forget about it) and let the unconscious mind provide the answer in its own time. Failing to ‘let go’ and continuing to try to beat at it with the conscious mind apparently blocks ‘receiving an answer’. Interesting that the creative process appears to be the same even when viewed from different perspectives.

1 Like

Interesting to consider the opposite of rumination and illumination. It sounds like writer’s block, about which pages have been written describing the torment of not being able to write, or even think one’s way forward.

It seems to be about the integration, or communication between the conscious and sub/unconscious layers of the mind. About different modes of thinking, and states of consciousness. “Write drunk and revise sober,” Hemingway might have said.

Here’s a thing I read recently, about how having a concrete goal or objective can be counter-productive when searching for a solution to open-ended questions and research.

By synthesizing a growing body of work in search processes that are not driven by explicit objectives, this paper advances the hypothesis that there is a fundamental problem with the dominant paradigm of objective-based search in evolutionary computation and genetic programming: Most ambitious objectives do not illuminate a path to themselves. That is, the gradient of improvement induced by ambitious objectives tends to lead not to the objective itself but instead to deadend local optima.

Indirectly supporting this hypothesis, great discoveries often are not the result of objective-driven search. For example, the major inspiration for both evolutionary computation and genetic programming, natural evolution, innovates through an open-ended process that lacks a final objective. Similarly, large-scale cultural evolutionary processes, such as the evolution of technology, mathematics, and art, lack a unified fixed goal.

In addition, direct evidence for this hypothesis is presented from a recently-introduced search algorithm called novelty search. Though ignorant of the ultimate objective of search, in many instances novelty search has counter-intuitively outperformed searching directly for the objective, including a wide variety of randomly-generated problems introduced in an experiment in this chapter.

Thus a new understanding is beginning to emerge that suggests that searching for a fixed objective, which is the reigning paradigm in evolutionary computation and even machine learning as a whole, may ultimately limit what can be achieved. Yet the liberating implication of this hypothesis argued in this paper is that by embracing search processes that are not driven by explicit objectives, the breadth and depth of what is reachable through evolutionary methods such as genetic programming may be greatly expanded.

Novelty Search and the Problem with Objectives. Genetic Programming Theory and Practice series IX. Genetic and Evolutionary Computation.
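The core loop of novelty search is small enough to toy with. Here is my own much-simplified sketch of the idea (not Lehman & Stanley’s implementation): individuals are scored not by an objective but by how far their “behavior” lies from its nearest neighbors in the population plus a growing archive of past novelties.

```python
# A toy sketch of novelty search's core idea (my simplification, not the
# paper's implementation): select for behaviors far from their k nearest
# neighbors in population + archive, with no objective at all.
import random

def novelty(b, others, k=3):
    dists = sorted(abs(b - o) for o in others)
    return sum(dists[:k]) / k          # mean distance to k nearest

def novelty_search(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [0.0] * pop_size             # all behaviors start identical
    archive = []
    for _ in range(generations):
        scored = [(novelty(b, pop + archive), b) for b in pop]
        scored.sort(reverse=True)
        archive.append(scored[0][1])   # archive the most novel behavior
        parents = [b for _, b in scored[: pop_size // 2]]
        pop = [p + rng.gauss(0, 1) for p in parents for _ in (0, 1)]
    return archive

arch = novelty_search()
# the archive spreads outward rather than converging on any fixed target
assert max(arch) - min(arch) > 0.5
```

With nothing to optimize, the search pressure alone pushes the archive to cover ever more of the behavior space, which is exactly the “illumination without an objective” the paper argues for.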


In a round-about way, I encounter the other side of history: novelty. The edge of now, breaking forth from the past and reaching for a future. It’s history in the making, the creative act in process.

Here, I think nonlinearity can be expressed in other words, to explore the question deeper and define it in more detail. I see it’s trying to talk about the role of creativity and novelty in the process of thinking. “Linear” is hinting at logical, sequential and formal process of thought by following rules, taking steps toward an objective; and “nonlinear” is about other modes of thinking, such as dreaming, visualizing, brainstorming, taking a shower, walking up a hill on a sunny day. Nonlinearity implies there’s a jump or break in the line of thinking. A critical threshold is crossed, the rules are being broken intentionally to achieve a higher order of harmony and organization.

History is a nightmare from which I am trying to awake.
– James Joyce in Ulysses

How can we dig deeper into what that break in linearity means. It’s a liberation from history and the past, that leap of faith, the strike of inspiration, the spark of creativity that is the soul of life and art.

Emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole.

Emergence plays a central role in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry and physics.

Alright, here are concepts like phase transition and self-organized criticality. That’s starting to describe more concretely the nonlinear leaps of logic involved in creativity and novelty.

Spontaneous order, also named self-organization in the hard sciences, is the spontaneous emergence of order out of seeming chaos.

Examples of systems which evolved through spontaneous order or self-organization include the evolution of life on Earth, language, crystal structure, the Internet, Wikipedia, and free market economy.

Self-organized criticality is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality.

It is typically observed in slowly driven non-equilibrium systems with many degrees of freedom and strongly nonlinear dynamics.

The concept was put forward by Per Bak et al. (“BTW”) in the paper:

Self-organized criticality: An explanation of the 1/f noise

The development of flicker noise (1/f noise) in dynamical systems with extended spatial degrees of freedom is investigated analytically. A natural evolution of self-organized critical structures of states which are barely stable is demonstrated, and the results of numerical simulations are presented graphically. The implications of the present analytical approach for phenomena related to 1/f noise (such as the self-similar fractal structure of spatially extended objects and the development of turbulence) are discussed.
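The model behind that abstract, the BTW sandpile, is delightfully small. A sketch (my own minimal version of the standard model, open boundaries): grains are dropped one at a time, any site holding 4 or more grains topples one grain to each neighbour, and the resulting avalanches come in every size without tuning any parameter.

```python
# The BTW sandpile, a minimal sketch of self-organized criticality:
# any site with >= 4 grains topples, shedding one grain to each
# neighbour; grains falling off the edge are lost (open boundary).
import random

def drop(grid, n, x, y):
    """Add a grain at (x, y) and relax the grid; return avalanche size."""
    grid[y][x] += 1
    size = 0
    unstable = [(x, y)] if grid[y][x] >= 4 else []
    while unstable:
        cx, cy = unstable.pop()
        if grid[cy][cx] < 4:
            continue                  # already relaxed via a duplicate entry
        grid[cy][cx] -= 4
        size += 1
        for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
            if 0 <= nx < n and 0 <= ny < n:
                grid[ny][nx] += 1
                if grid[ny][nx] >= 4:
                    unstable.append((nx, ny))
    return size

n = 11
grid = [[0] * n for _ in range(n)]
rng = random.Random(1)
sizes = [drop(grid, n, rng.randrange(n), rng.randrange(n))
         for _ in range(5000)]
# after the transient, avalanches of many different sizes occur
assert max(sizes) > 10
```

The system drives itself to the critical density; no knob is turned, yet the avalanche-size distribution develops the heavy tail characteristic of criticality.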

Oh, 1/f noise, I love going down that rabbit hole.

Pink noise, or 1/f noise, has been discovered in the statistical fluctuations of an extraordinarily diverse number of physical and biological systems.

Examples of its occurrence include fluctuations in tide and river heights, quasar light emissions, heart beat, firings of single neurons, resistivity in solid-state electronics and single-molecule conductance signals resulting in flicker noise.

Press, W. H. (1978). “Flicker noises in astronomy and elsewhere”. Comments in Astrophysics.

General 1/f α noises occur in many physical, biological and economic systems, and some researchers describe them as being ubiquitous. In physical systems, they are present in some meteorological data series, the electromagnetic radiation output of some astronomical bodies. In biological systems, they are present in, for example, heart beat rhythms, neural activity, and the statistics of DNA sequences, as a generalized pattern.
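You can generate an approximation of 1/f noise yourself. This is a stdlib-only sketch of Voss’s algorithm (one classic approximation, not the only one): sum several white-noise rows updated at halving rates, so slow rows contribute the low-frequency power.

```python
# A stdlib-only sketch of Voss's pink-noise algorithm: sum `rows`
# white-noise sources where row k is re-rolled only every 2**k samples,
# giving an approximately 1/f power spectrum.
import random

def pink_noise(n, rows=8, seed=0):
    rng = random.Random(seed)
    values = [rng.uniform(-1, 1) for _ in range(rows)]
    out = []
    for i in range(n):
        for k in range(rows):
            if i % (1 << k) == 0:      # row k updates every 2**k steps
                values[k] = rng.uniform(-1, 1)
        out.append(sum(values) / rows)
    return out

samples = pink_noise(1024)
assert len(samples) == 1024
assert all(-1 <= s <= 1 for s in samples)
```

Listening to the three noises side by side makes the families obvious: white noise hisses, brown noise rumbles, and pink noise sits in between, which is perhaps why it keeps turning up in natural signals.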

Aperiodic components of neural activity, characterized by endogenous 1/f noise dynamics, are hypothesized to support the emergence of large-scale cortical order and cognitive flexibility. Here, we combine computational modeling and human brain stimulation to elucidate the role of 1/f noise in modulating neural synchrony.

Neural synchrony.. Earlier there was mention of swarm behavior and collective intelligence, so there’s a connection with how groups of individual organisms, or neural patterns, synchronize their behavior in a self-directed manner, acting as one.

Using a coupled oscillator model, we demonstrate that ubiquitous 1/f noise does more effectively enhance phase synchrony than spectrally flat (white) noise. Crucially, we identify a competitive synergy between noise intensity and the 1/f spectral exponent: starting from optimal white noise-induced synchrony, increasing the 1/f exponent while decreasing noise intensity leads to a further enhancement of synchrony, which peaks at a specific parameter regime before diminishing. To experimentally validate these findings, we developed a transcranial 1/f noise stimulation (tFNS) system and applied it to human subjects.

Enhancing Neural Synchrony with Endogenous-like 1/f Noise Stimulation

In the above illustration, they label the brain as a “non-linear dynamical system”. Here the word “nonlinear” has a mathematical meaning that differs from our intended aim of describing a mode of thinking.

Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.

Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
Stanisław Ulam


Following the concepts of criticality, phase transitions, and complex systems.. The Chemical Basis of Morphogenesis by Alan Turing. Patterns in nature: symmetry; trees, fractals; spirals; chaos, flow, meanders; waves, dunes; bubbles, foam; tessellations; cracks; spots, stripes.

Self-organizing-Mechanism-for-Development-of-Space-filling-Neuronal-Dendrites-cycle

O-hoh, a new word I learned, vermiculation.

Vermiculation naturally occurs in patterns on a wide variety of species, for example in the feathers of certain birds, for which it may provide either camouflage or decoration.

It also appears in architecture as a form of rustication where the stone is cut with a pattern of wandering lines. In metalwork, vermiculation is used to form a type of background found in Romanesque enamels.. In this case the term is used for what is in fact a dense pattern of regular ornament using plant forms and tendrils. In Ancient Roman mosaics, opus vermiculatum was the most detailed technique, and pieces are often described as “vermiculated”.

An on-going special interest I have is dithering algorithms. Recently I saw @neauoire post somewhere (maybe hundredrabbits.itch.io), and this patterned texture of the wing caught my eye.

I’ve been playing with oekaki (お絵描き) and noodle, drawing programs on Varvara/Uxn. The source code is tiny but dense. I’m learning how to wield this power of dithering and sweet noise.

Later I noticed the brush-y lines from Phinxel’s Phield Notes made in Decker, also on itch.io.

Some time ago I ported the brush engine of MyPaint to WebAssembly (brushlib-wasm). It was compute-intensive and slow, but from a dozen or more brush types, my favorite was this pencil-scratch effect.

In Japanese calligraphy there’s a technique called Kasure 渇筆 (かすれ), “dry brush”. People practice for years to get the right amount of streak and fade to let the paper’s rough texture show through.

So I started a little experiment, dither-draw, to explore this effect. One thing I knew I wanted to achieve was, when I move the brush back and forth, a consistent pattern would appear instead of each stroke overlapping noisily. Hard to describe because I haven’t yet grasped the math behind it, but I found a technical post that explains exactly how such dither stability can work.

It goes into the nitty gritty advanced stuff, the trials and errors, until there’s a dither pattern mapped to the inside of a sphere, fixed to the world geometry as the camera moves.

Dither2-CameraSphere2

Then I found an even more innovative algorithm called surface-stable fractal dithering, see it in action to appreciate how it feels.

Source files for further study: Dither3D
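The stability property I was after in dither-draw can be illustrated with plain ordered dithering (a standard technique, much simpler than the fractal method above, and not the code from those projects): each pixel’s on/off decision depends only on its canvas coordinates, via a fixed Bayer threshold matrix, and the ink level, never on the order in which strokes arrived.

```python
# Why ordered (Bayer) dithering is "stable" under repeated strokes:
# the decision at (x, y) is a pure function of position and ink level,
# so painting the same region twice yields the identical pattern.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def dither_pixel(x, y, ink):
    """ink in [0, 1]; returns True if the pixel is drawn."""
    threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16.0
    return ink > threshold

once  = [[dither_pixel(x, y, 0.4) for x in range(8)] for y in range(8)]
twice = [[dither_pixel(x, y, 0.4) for x in range(8)] for y in range(8)]
assert once == twice                      # back-and-forth strokes agree
# and roughly 40% of pixels are on, matching the ink level
assert abs(sum(map(sum, once)) / 64 - 0.4) < 0.1
```

The surface-stable trick above is, as I understand it, the same idea pushed further: anchor the threshold pattern to the world (or the surface) rather than to screen coordinates, so it stays put as the camera moves.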


The surrealist Max Ernst had a technique called decalcomania, incorporating naturally forming patterns made by squishing paint with a flat surface and lifting it slowly.

Leonardo da Vinci talks about letting the mind wander and actively hallucinate patterns emerging from chaos, as a creative technique for stimulating the visual imagination.

Look at walls splashed with a number of stains, or stones of various mixed colours. If you have to invent some scene, you can see there resemblances to a number of landscapes, adorned with mountains, rivers, rocks, trees, great plains, valleys and hills, in various ways. Also you can see various battles, and lively postures of strange figures, expressions on faces, costumes and an infinite number of things, which you can reduce to good integrated form. This happens on such walls and varicoloured stones, (which act) like the sound of bells, in whose pealing you can find every name and word that you can imagine.

Do not despise my opinion, when I remind you that it should not be hard for you to stop sometimes and look into the stains of walls, or the ashes of a fire, or clouds, or mud or like places, in which, if you consider them well, you may find really marvelous ideas. The mind of the painter is stimulated to new discoveries, the composition of battles of animals and men, various compositions of landscapes and monstrous things, such as devils and similar things, which may bring you honor, because by indistinct things the mind is stimulated to new inventions.

– From A Treatise on Painting

This is a psychological phenomenon known as pareidolia.

Pareidolia is the tendency for perception to impose a meaningful interpretation on a nebulous stimulus, usually visual, so that one detects an object, pattern, or meaning where there is none. Pareidolia is a specific but common type of apophenia, the tendency to perceive meaningful connections between unrelated things or ideas.

So it relates to the subject of associative thinking, letting your mind meander in non-linear paths, to seek out moments of serendipity as “active luck”.

When effective, I think it can go deeper than the personal mind, to something like the “collective unconscious” as Carl Jung conceptualized. Jung believed that insight into the collective unconscious could be gleaned primarily from dreams and from active imagination, a waking exploration of fantasy.

..[The anima and animus] evidently live and function in the deeper layers of the unconscious, especially in that phylogenetic substratum which I have called the collective unconscious. This localization explains a good deal of their strangeness: they bring into our ephemeral consciousness an unknown psychic life belonging to a remote past. It is the mind of our unknown ancestors, their way of thinking and feeling, their way of experiencing life and the world, gods, and men. The existence of these archaic strata is presumably the source of man’s belief in reincarnations and in memories of “previous experiences”. Just as the human body is a museum, so to speak, of its phylogenetic history, so too is the psyche.

Or an even lower layer than psychology, closer to the metal, down through biology, chemistry, physics - taking advantage of the natural tendency of swarms (of particles, liquid, neurons) to form patterns on their own.

Self-organization relies on four basic ingredients:

  1. strong dynamical non-linearity, often (though not necessarily) involving positive and negative feedback
  2. balance of exploitation and exploration
  3. multiple interactions among components
  4. availability of energy, to overcome the natural tendency toward entropy

Like with the Kasure brush technique, it’s a way of losing control in a controlled manner, to let the materiality of the ink and paper behave naturally, drip, splash, bleed into clear water, or drying along the brush stroke until it fades out, leaving delicate patterns. It’s about letting go of the small self that wants complete conscious control; to let unconscious forces of nature work with you and through you, to express itself as a whole.

1 Like

You see why I am not happy with “linear” software: resources that are type-compatible cannot be composed, because no workflow was defined that way.

2 Likes

Of Clouds and Clocks: An Approach to the Problem of Rationality and the Freedom of Man. Chapter 6 of the book, Objective Knowledge: An Evolutionary Approach, by Karl Popper.

In his essay Of Clouds and Clocks, included in his book Objective Knowledge, Popper contrasted “clouds”, his metaphor for indeterministic systems, with “clocks”, meaning deterministic ones.

My clouds are intended to represent physical systems which, like gases, are highly irregular, disorderly, and more or less unpredictable. I shall assume that we have before us a schema or arrangement in which a very disturbed or disorderly cloud is placed on the left. On the other extreme of our arrangement, on its right, we may place a very reliable pendulum clock, a precision clock, intended to represent physical systems which are regular, orderly, and highly predictable in their behaviour.

There are lots of things, natural processes and natural phenomena, which we may place between these two extremes.. The changing seasons are somewhat unreliable clocks, and may therefore be put somewhere towards the right, though not too far. I suppose we shall easily agree to put animals not too far from the clouds on the left, and plants somewhat nearer to the clocks.

Speaking of plants being like clocks, I’m discovering rich potential in using algorithms related to growth and forms toward artistic expression. A direction I’m exploring is a new micro-language elsys, “extended Lindenmayer system”, to describe structures as higher-dimensional sculptures, or hyperobjects. Time as a first-class citizen.

4

They’re generated by LOGO-like programs, consisting of an initial string (axiom) and one or more rewriting rules, often recursive. Forward, rotate, push/pop state.. More on this later. The above recording is from:

L : SYS
S : F|+[F->Y)[-S]]Y-!Y
Y : [|>F-F)++(Y]
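The rewriting engine underneath is tiny. A generic sketch of a plain Lindenmayer system (not the extended elsys syntax above): every symbol with a rule is replaced in parallel at each generation.

```python
# A generic L-system rewriter: apply every rule in parallel, once per
# generation; symbols without a rule are copied through unchanged.
def expand(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system:
algae = expand("A", {"A": "AB", "B": "A"}, 4)
assert algae == "ABAABABA"   # string lengths follow the Fibonacci sequence
```

A turtle interpreter then reads the expanded string, treating symbols like F as “move forward” and +/- as rotations, with [ and ] pushing and popping the turtle’s state.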

From another angle, I’m pursuing an inquiry into fluid dynamics and brush strokes. This is closer to clouds, as fluids are a nonlinear system. Here I attempt to write malleable on a web canvas element.

2025-12-22-23-12-25-03

Back to Popper.

The arrangement I have described is, it seems, quite acceptable to common sense; and more recently, in our own time, it has become acceptable even to physical science.

It was not so, however, during the preceding 250 years: the Newtonian revolution, one of the greatest revolutions in history, led to the rejection of the commonsense arrangement which I have tried to present to you. For one of the things which almost everybody thought had been established by the Newtonian revolution was the following staggering proposition:

All clouds are clocks - even the most cloudy of clouds.

This proposition, ‘All clouds are clocks’, may be taken as a brief formulation of the view which I shall call ‘physical determinism’.

The physical determinist who says that all clouds are clocks will also say that our commonsense arrangement, with the clouds on the left and the clocks on the right, is misleading, since everything ought to be placed on the extreme right. (Newton himself was not among those who drew these ‘deterministic’ consequences from his theory.)

He will say that, with all our common sense, we arranged things not according to their nature, but merely according to our ignorance. Our arrangement, he will say, reflects merely the fact that we know in some detail how the parts of a clock work, or how the solar system works, while we do not have any knowledge about the detailed interaction of the particles that form a gas cloud, or an organism. And he will assert that, once we have obtained this knowledge, we shall find that gas clouds or organisms are as clock-like as our solar system.

Newton’s theory did not, of course, tell the physicists that this was so. In fact, it did not treat at all of clouds.

He goes on to talk about Heisenberg’s Uncertainty Principle, determinism and free will.

..Now the tables were turned. Indeterminism, which up to 1927 had been equated with obscurantism, became the ruling fashion; and some great scientists, such as Max Planck, Erwin Schrödinger, and Albert Einstein, who hesitated to abandon determinism, were considered old fogies, although they had been in the forefront of the development of quantum theory.

..The amoeba solves some problems (though we need not assume that it is in any sense aware of its problems): from the amoeba to Einstein is just one step. But Compton tells us that the amoeba’s actions are not rational, while we may assume that Einstein’s actions are. So there should be some difference, after all. I admit that there is a difference: even though their methods of almost random or cloud-like trial and error movements are fundamentally not very different, there is a great difference in their attitudes towards error. Einstein, unlike the amoeba, consciously tried his best, whenever a new solution occurred to him, to fault it and detect an error in it: he approached his own solutions critically.

I believe that this consciously critical attitude towards his own ideas is the one really important difference between the method of Einstein and that of the amoeba. It made it possible for Einstein to reject, quickly, hundreds of hypotheses as inadequate before examining one or another hypothesis more carefully, if it appeared to be able to stand up to more serious criticism.

As the physicist John Archibald Wheeler said recently, ‘Our whole problem is to make the mistakes as fast as possible’.

We can say that the critical or rational method consists in letting our hypotheses die in our stead: it is a case of exosomatic evolution.

The soap bubble consists of two subsystems which are both clouds and which control each other: without the air, the soapy film would collapse, and we should have only a drop of soapy water. Without the soapy film, the air would be uncontrolled: it would diffuse, ceasing to exist as a system. Thus the control is mutual; it is plastic, and of a feed-back character. Yet it is possible to make a distinction between the controlled system (the air) and the controlling systems (the film).

Comparing the bubble with a ‘hardware’ system like a precision clock or a computer, we should of course say (in accordance with Peirce’s point of view) that even these hardware systems are clouds controlled by clouds. But these ‘hard’ systems are built with the purpose of minimizing, so far as it is possible, the cloud-like effects of molecular heat motions and fluctuations: though they are clouds, the controlling mechanisms are designed to suppress, or compensate for, all cloud-like effects as far as possible. This holds even for computers with mechanisms simulating chance-like trial-and-error mechanisms.

It’s like he’s arguing that all clocks are clouds. Even computers, which seem as clock-like as they get, are fundamentally built on an indeterminate, uncertain cloud of probabilities.

We may perhaps here look back for a moment to the problem of physical determinism, and to our example of the deaf physicist who had never experienced music but would be able to ‘compose’ a Mozart opera or a Beethoven symphony, simply by studying Mozart’s or Beethoven’s bodies and their environments as physical systems, and predicting where their pens would put down black marks on lined paper. I presented these as unacceptable consequences of physical determinism. Mozart and Beethoven are, partly, controlled by their ‘taste’, their system of musical evaluation. Yet this system is not cast-iron but rather plastic. It responds to new ideas, and it can be modified by new trials and errors, perhaps even by an accidental mistake, an unintended discord.

He adds a footnote as a tangent.

See Ernst Mach, Die Principien der Wärmelehre (“Principles of the Theory of Heat”), 1896, where he writes: ‘The history of art .. teaches us how shapes which arise accidentally may be used in works of art. Leonardo da Vinci advises the artist to look for shapes of clouds or patches on dirty or smoky walls, which might suggest to him ideas that fit in with his plans and his moods .. Again, a musician may sometimes get new ideas from random noises; and we may hear on occasion from a famous composer that he has been led to find valuable melodic or harmonic motifs by accidentally touching a wrong key while playing the piano.’

Funny I was recently thinking of Max Ernst the painter, and here’s Ernst Mach the physicist. And about Leonardo hallucinating at the clouds in the sky; and a musician hearing new melodies from accidental noise. I wonder how he relates it to thermodynamics.

I have several favorite Ernsts, like Ernst Haeckel, the zoologist known for Kunstformen der Natur (“Art Forms in Nature”).

And Ernst Fuchs, the Austrian painter known for the Vienna School of Fantastic Realism and the Mischtechnik, using transparent layers of egg tempera, oil paints and resins. Note the wallpaper pattern.


We have seen that it is unsatisfactory to look upon the world as a closed physical system, whether a strictly deterministic system or a system in which whatever is not strictly determined is simply due to chance: on such a view of the world human creativeness and human freedom can only be illusions. The attempt to make use of quantum-theoretical indeterminacy is also unsatisfactory, because it leads to chance rather than freedom..

I have therefore offered here a different view of the world, one in which the physical world is an open system. This is compatible with the view of the evolution of life as a process of trial and error-elimination; and it allows us to understand rationally, though far from fully, the emergence of biological novelty and the growth of human knowledge and freedom.


Epicurus argued that as atoms moved through the void, there were occasions when they would “swerve” (clinamen) from their otherwise determined paths, thus initiating new causal chains. These swerves would allow us to be more responsible for our actions, something impossible if every action was deterministically caused.

This swerving, according to Lucretius, provides the “free will which living things throughout the world have”.


At the end of his treatise on harmony Arnold Schoenberg makes a wry but revealing admission: Mit mir nur rat ich, red ich zu dir (In speaking with you, I am merely deliberating with myself). This entire Harmonielehre (“A Theory of Harmony”), completed on July 1, 1911, is presented as though it were nothing more than an internal conversation. This “teacher” is a pupil pursuing his own instruction, perhaps grappling with problems that do not even allow for a solution. If he voices his uncertainties publicly, it is certainly not in order to persuade anyone. It is to put others in a similar situation. His method, he notes, is like shaking a box to get three tubes of differing diameters to rest inside each other. One does it in the belief that “movement alone can succeed where deliberation fails.” And the same applies to learning of every type. “Only activity, movement is productive.” The teacher’s first task “is to shake up the pupil thoroughly.” His internal unrest must infect his students, “then they will search as he does”.

What goes for the teacher also goes for the artist. In Schoenberg’s own phrase, the music he composes in the years surrounding the Theory of Harmony “emancipates dissonance” from the rule of consonance. Consonance, a pleasing resolution of clashing tones, is like comfort. It avoids movement; it “does not take up the search.” Schoenberg’s compositions have more faith in disquiet than rest, uncertainty than knowledge, difficulty than ease. All good art, in Schoenberg’s view, plays out an unfinished intellectual quest. Aiming only to make things clear to himself, the artist pursues clarity in open confusion.

Emancipation of the dissonance


So the question of nonlinearity relates to the “cloud” - indeterminacy, disorder, noise, complexity, entropy, unpredictability - and to the “clock”: the precise, controlled, logical mechanism we expect computers and software to be.

As we design software as tools for thought, how can they better serve the full spectrum of the creative process, the cloud and clock-like modes of thinking?

Part of the answer seems to be providing a controlled space for chaotic behavior, disorder and chance, to let you make mistakes fearlessly in an evolutionary search over open-ended questions. And on the other hand, the ability to work with that raw material, to extract the best ideas, to give it structure and logic, to organize it into coherent forms.

5 Likes

I use Pharo and GToolkit, both current re-interpretations of (and deviations from) 70’s Smalltalk, the second built on the first. While they preserve image-based orthogonal persistence, including storing some passwords (for example, for your Git repositories), as a general practice you don’t share your development image: you load the code from Git into a fresh image to produce the customized image you want to share. That makes things pretty reproducible, without major privacy concerns (that was what I did in my Panama Papers as reproducible research prototype).

I agree with you on the idea of having history controlled locally and erasable, with enough friction before it is shared externally that we are aware of what we are sharing (as happens with Distributed Version Control Systems like Fossil or Git). But, in my experience, I don’t think that the image-based orthogonal persistence of current Smalltalk variations/deviations makes it easier to share passwords unwillingly.

1 Like

This is something I like about this forum: I see it as a place for engaging, friendly conversations among peers without all the bureaucracy and stiffness of classical peer review. That’s why I proposed in another thread to have some kind of malleable publication, born from what we have shared in this forum, expanded to include voices from the social sciences, arts and humanities.

At the moment it is just a dream (but I think a necessary one). But who knows, maybe some of our malleable systems will allow us to collect and comment on the knowledge that we index and reflect here… ahhh someday, someday… :relieved_face:

3 Likes

Just a comment on @khinsen’s mention of Miller Columns as a restricted view of temporality in computer interaction (file browsing). I have also experienced this in GToolkit and in web browsing via TiddlyWiki themes, like those used on our TTRPG wiki and the one for (part of) a Colombian Amazonas linguistic revitalization project. After having this temporal dimension, even in a limited form, interactions that don’t account for some kind of temporality seem lacking.

And I think that some kind of mapping of time into space can be achieved by learning from how the comic medium arranges card-like interface elements on a 2D canvas, organizing some kind of time interaction into space(s), as Miller Columns do. The metaphor can be taken further, as some narrative card games do, or as proposed in conceptual interfaces like those of OLLOS, WonderOS or MercuryOS. And, as TiddlyWiki and HyperCard before it have shown, maybe those card-like interfaces (I call them Interfaz Tarjetual, playing with some computing terms and Spanish) are powerful enough to allow high composability unavailable in current apps, while providing a simple enough tactile metaphor for improved learnability.

I would like to explore deeper those card-like interfaces in my next malleable wiki engine.

^up: A Firelights solo play, taken from https://youtu.be/mwqP3EhVX3M

^up: OLLOS: An itemized personal computing timeline.

^up: From Hello, Operator, WonderOS operator manual.


^up: From MercuryOS.

4 Likes

I received an objection that is focused on space usage:

6000 × 4000 × (10 bits) ≈ 28.61 MiB

is the unprocessed size of an image from a modern 24 MP camera, assuming 10 bits per color sample. If you know more about cameras, you can correct me.

Another estimate, for a worse camera, is:

4096 × 3072 × (8 bits) = 12 MiB

And somehow Apple says its 12 MP ProRAW image is 25 MB (unclear whether that’s SI or JEDEC units), well over my estimate.
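For what it’s worth, the two estimates above can be reproduced with a few lines of Python. This is just a back-of-the-envelope check, assuming one sample per pixel (as in a Bayer raw readout); the function name is mine, not from any camera API:

```python
def raw_size_bytes(width, height, bits_per_sample):
    """Uncompressed sensor readout: one sample per pixel."""
    return width * height * bits_per_sample // 8

MIB = 1024 ** 2  # JEDEC mebibyte

# 24 MP sensor at 10 bits/sample -> ~28.61 MiB
print(raw_size_bytes(6000, 4000, 10) / MIB)
# ~12.6 MP sensor at 8 bits/sample -> 12.0 MiB
print(raw_size_bytes(4096, 3072, 8) / MIB)
```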

These sizes are prohibitively large—I agree. Since these are nominally “input” they will be saved and take up a lot of space.

So, here is an amendment to clarify the case of signal processing. You can probably see that this whole thesis is designed for symbolic data, and encoded analog information is an edge case. Consider an alternative design of the camera hardware, where denoising is done with analog circuits. It will be impossible to update, but will otherwise function in the same way. Now, the digital input suddenly becomes very compressible, but this change doesn’t involve the principle of automatic persistence in any way. So, I would like to say that even if denoising is done in software, it will not be considered a breach of this principle of automatic persistence.

The real power of this thesis is progression—being able to use the most complete history as building blocks in new things—and not precise measurement. Every frame can arrive at any level of fidelity, and progression will still function.

Continuing that idea, I now propose a principle for signal inputs:

Each channel is normalized to a constant bitrate. (This is applied before inter-frame compression; inter-frame compression will try its best to further reduce size.)

This forces the system to be used through progression only.
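As a rough illustration of what “normalized to a constant bitrate” could mean (the principle above doesn’t prescribe a mechanism, and everything named here is hypothetical), each incoming frame could be re-quantized until it fits a fixed per-frame byte budget before any inter-frame compression runs, so fidelity varies while stored size does not:

```python
def normalize_frame(samples, budget_bytes):
    """Hypothetical sketch: drop bits from a frame of 16-bit integer
    samples until the frame fits a constant per-frame byte budget.
    Fidelity varies; the size the system sees does not."""
    bits = 16
    while len(samples) * bits > budget_bytes * 8 and bits > 1:
        bits -= 1  # crude fidelity reduction: one bit at a time
    shift = 16 - bits
    return bits, [s >> shift for s in samples]

# A 4-sample "frame" forced into a 3-byte budget: 4 samples x 6 bits = 24 bits.
bits, quantized = normalize_frame([65535, 32768, 1024, 0], budget_bytes=3)
```

A real system would re-encode at a lower sample rate or use a proper lossy codec rather than shift bits away, but the invariant is the same: every frame arrives at some fidelity, and the constant size is what progression can rely on.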

I would support a malleable publication. People talked about something similar in the Substrates 2026 workshop, imagining that it will be powered by a nice enough substrate…

I like this summary. Yes.

Thanks to @natecull for pointing out the last paragraph of your post. I am also interested in the question of how we model complex systems: things that cannot be explained from their parts alone.

When I wrote about having a modeling language, I was thinking of those complex systems. We could describe them without knowing the exact behavior of a part.

If our modeling language is also a programming language, then we could describe both clouds / complex systems and clocks / programs.

Alan Kay also wanted that. My answer to this is a type system that checks correctness at runtime. Then you can play as much as you want, and the type system will stop you if the behavior goes outside the controlled environment you have defined.
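The commenter’s system isn’t specified, but the idea of runtime checks that fence in a “controlled environment” can be sketched generically, for example with a Python decorator (the decorator and predicates here are purely illustrative, not anyone’s actual design):

```python
def contract(pre, post):
    """Illustrative runtime check: reject any call whose input or output
    leaves the environment the two predicates define."""
    def wrap(fn):
        def inner(x):
            if not pre(x):
                raise TypeError(f"precondition violated: {x!r}")
            result = fn(x)
            if not post(result):
                raise TypeError(f"postcondition violated: {result!r}")
            return result
        return inner
    return wrap

@contract(pre=lambda x: isinstance(x, int) and x >= 0,
          post=lambda r: isinstance(r, int) and r >= 0)
def half(x):
    return x // 2
```

`half(10)` returns 5, while `half(-1)` fails at the call site, before the out-of-range value can propagate: you can play freely, and the check stops you at the boundary.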

(Very good article; when I have time, I will try to have a more thorough look.)

1 Like

That’s exactly it. The different electrons and their behavior inside the microchips act like clouds. What we have done is define properties that describe the cloud as a whole, not a specific electron: the notions of current and voltage.

On top of this cloud, we have a deterministic system, a clock.

We seem to forget that: some clouds can be modeled into clocks.

My understanding of complex systems is very limited. I wonder if there are peculiarities that need to be taken into account when doing such modeling.

2 Likes

You may want to look at Matteo Mossio’s notion of autonomy and the role of self-referentiality in non-determinism.

In fact, we want systems to be non-deterministic, because this is the only way to allow a community to have agency. Imagine if the choices of a community did not matter and everything were decided by the input; it would be a disaster. This is why we have to introduce self-referentiality.

In my contemporary dance classes, we sometimes use the noises of the nearby street to dance. But the non-determinism is not due to the random noises; it is because each dancer subconsciously interprets the noise in a different way.

1 Like