Modules vs Centers: Thinking about Christopher Alexander's Thought in Software

So I’ve been reading some Richard Gabriel (Richard P. Gabriel - Wikipedia and Dreamsongs Site History). He’s an MIT AI Lab / Common Lisp person with PARC connections, and so stands at the intersection of Lisp and Smalltalk; he is also one of the few people with links to the “Software Patterns” movement who actually reads and understands the late Christopher Alexander, the inventor of the Pattern Language concept.

I’ve read some raw Alexander, too (although I’ve only skimmed rather than deep-dived the four volumes of The Nature of Order), and so I have Some Questions about Alexander’s approach to architecture/design, as opposed to how the Software Patterns people interpreted his work.

Oh, I also used to hang out in the mid-2000s on the Portland Patterns Repository Wiki (the original Wiki, Ward Cunningham’s one), which led to me getting very confused about Best Practices in Software Development. (The PPR Wiki community, paradoxically, was full of people who were very critical of the Patterns movement and of Object Oriented Programming itself.)

Anyway. Here’s the deal. Here’s one of the things that I’m currently very confused about:

The Software Engineering community, including the Object Oriented movement and the Patterns movement that arose from it, likes to think in terms of what I’m going to call modules. (The module concept, I believe, is much older and broader than the object concept; it was Wirth’s big thing, for example). A Module is a sealed box with clearly defined interfaces. The wisdom everyone agrees on is that a System must be built from Modules. Modules should have clean separation of concerns; if the modules are Objects then they should have high internal cohesion and low external coupling (if I remember my terms right). The Software Components movement was different from the Modules movement, taking its inspiration from electronics, but expressed roughly the same idea. The Containers and Microservices movements (the current hotness) are Modules applied to the Cloud. In networking protocols, we talk about Layers, but they seem to be just Modules at a slightly different level of abstraction.

Even good old 1970s-era C can do Modules, in terms of compilation units (source or object files). Lisp has been doing Modules as closures for decades; there’s probably a paper somewhere arguing that “Lambda is the ultimate Module”, and if not there should be. Node.js literally implements its module system via JavaScript closures.
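The closure-as-module trick is easy to sketch. Node.js does essentially this under the hood: each file’s code is wrapped in a function, and only what it attaches to `module.exports` becomes public. A minimal illustration (all names here are invented for the example):

```javascript
// Closure-as-module: private state lives in the enclosing function's scope,
// and only the returned functions form the public interface.
function makeCounterModule() {
  let count = 0; // private - invisible outside the closure

  return {
    increment() { count += 1; return count; },
    current() { return count; },
  };
}

const counter = makeCounterModule();
counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(counter.count);     // undefined - the implementation is hidden
```

The sealed box is the closure itself: there is no way to reach `count` except through the exported functions.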

Modules might or might not have anything to do with the Types beloved of the Haskell/FP world, but if you have types, then a Module should be able to define and export them without exposing all of the implementation. And whether or not there are Types, Modules can and should be tested as units, and then tested again in integration tests.

So we’re all agreed. Functional and Object-Oriented people, Static and Dynamic evangelists, Late-Binding or Early-Binding enthusiasts - everyone agrees that Modules, modularization, and cleanly separable interfaces are where it’s at in software engineering.

Great. But here’s Christopher Alexander, not a software engineer (although he actually was a mathematician, and a programmer before and during being an architect-turned-mystic), whose ideas were adopted by the Object-Oriented crowd (a subset of the Modules movement).

And the thing is, Alexander’s thinking is all about how Modules Are Not Where It’s At. He wants us to think in terms of something entirely different: Centers.

Alexander’s first draft of his theory is “A City Is Not A Tree” (1965), spinning out of his “Notes on the Synthesis of Form (1964)”. Richard Gabriel has some thoughts on Notes here:

Essentially, as I understand it, Alexander in 1964 was trying to act like a good Engineer, decomposing a physically-built city or a community (an Indian village, in one example) into what I would call Modules and what he calls Trees: cleanly separated clusters of tightly cohesive concerns, loosely coupled to each other…

And he finds he can’t do it. What he finds instead - and the thought that powers the entire rest of his career - is “deep interlock and ambiguity at all levels”. He finds not trees but “semilattice” structures - what Ted “Xanadu/hypertext” Nelson would call “intertwingling”. He looks for a precise method to design and while he summons lots of mathematics (fractals, chaos theory) the best he can do is to call us to feeling and “life” and “The Quality Without A Name”.

In Alexander’s later works, he manages to focus the ambiguous intertwingling-ness of Good vs Bad architecture into something he calls a Center.

A Center is the closest thing in Alexander’s thought to what Alan Kay would call an Object. Yet I believe a Center is not an Object and it is certainly not a Module.

A Center in Alexander’s thought might have a boundary, and is “strengthened by” having boundaries, but it is not defined by its boundary. The boundary is ambiguous and permeable. If the boundary is completely sealed, the system dies. There might be interfaces, there probably are, but the life of the system is somewhere else. And there is deep ambiguity, everywhere, about what is or is not part of a Center. Centers overlap, fundamentally. Their beingness is shared. Without this sharing, the whole cannot exist.

This doesn’t seem to be how Modules - or systems built from Modules - work, at all. A system built from Modules - a “clean”, “secure”, “clearly separated”, “well-designed” system in 2020s Software Engineering terms - seems to be exactly what Alexander (and all modernist architects) thought of as good architecture in the early 1960s, but what, by the end of that decade, Alexander had recoiled from in horror as the ultimate in Non-Life.

And so here’s my confusion. As we continue to try to apply what we think of as “good software architecture”… are we actually doing very bad architecture?

Alan Kay’s thought almost approaches Alexander’s in how mystical he actually is, when you read him directly. Both Kay and Alexander draw deep inspiration from biology, especially the biological cell. Judging from the constant sense of frustration and disappointment in his 2020s writing, Kay’s actual, inner, not-clearly-expressed vision of an “Object” probably is closer to a Center than to whatever it was that an Object became in Smalltalk and C++. But Alexander’s Centers seem much more themselves, a much more unique and worked-out vision, even than Kay’s Objects.

The thing that gets me the most is, where’s the ambiguity and overlap in Software Objects?

Obviously we don’t want a certain kind of “ambiguity” - we have a trust and security crisis in software as it is, we don’t need our software to be less trustworthy! Our current situation is a bit like our actual bricks randomly exploding: we don’t need more of that. But what is it that Alexander’s Centers are about, how do we “feel the sense of life” in a software system, what kind of software entity is it that can interpenetrate and overlap itself, that is strengthened by boundaries yet not defined by them, that can be both positive and negative space at once?

The thought that strikes me the most about a Center is that it’s more like a temporary yet self-reinforcing solution in design space than it is a physically existing thing. A Center is something like a “habit” or “tendency” that expresses itself in a built environment. It’s a synergy, a sweet spot, a confluence of forces. It’s something you notice only from close observation and from living in a system (“eating your own dogfood” in software terminology). It’s a gameplay loop, in game design. It is possibly something along the lines of “a definition that can fit on a page”, or a repeated interaction motif. A Pattern, yes, but not something that can be reduced to a Pattern.

Does Kay’s vision of Objects lend itself to creating strong Centers in a built software system? Does something other than Objects lend itself better to creating strong Centers? Do Data-Driven, Functional, or Logic Programming metaphors possess more of that unnameable quality of “deep ambiguity and overlap”, the “messiness” that pervades Alexander’s sense of Life in a built space?

I feel that Malleability is very close to the idea of the Center, as well. Alexander’s sense of building (as well as Kay’s) seems to be strongly about the user changing the residence from within. So there must be some connection between Kay’s Objects and Alexander’s Centers - even though the Center does not, itself, appear to be a Module.

This is the question that bugs me at the moment.


This is very interesting. Without all the rigorous background that you bring, and also adding non-IT-related examples, I’d like to muse about some similar thoughts that fascinate me.

First, a bit on OOP. The OOP paradigm in theory allows one to design very elegant systems - representations of the domain good enough to support its needs appropriately. Yet how does the paradigm scale, or facilitate long-term evolution of the system? First of all, a thorough understanding of OOP is needed for that elegance; even experienced OOP developers often don’t apply the principles as Alan Kay and others intended. Then, all abstractions are leaky, and the simplifications of the well-defined object interfaces start to compound as the system grows. Either the system becomes more rigid, constrained by the original intent of each designed object interface, or it must be refactored continually in all its aspects - and the refactoring must retain the quality of the original design for the system to remain elegant. Those are very tough demands.

Intuitively I’d say that OOP can’t handle well the “messiness” and inherent complexity that comes into play when organic evolution occurs, and the system grows in size, and must support previously unanticipated needs. Modules help, but only to an extent. You need a “God-like” overview and full understanding to perceive where things start to creak and fall apart. It may be that OOP works best when considered in a limited context, within a boundary where the model and abstractions are well manageable.

There’s also an analogy with the Semantic Web. Trying to capture “universal semantics” in global information architecture… it is doomed to fail. One can only approach satisfactory constructs in local contexts. A “concept” only has contextual meaning, same as an “object” in OOP.

In a way an OOP design is an unnatural, artificial construct. It is “a tower”, upheld in spite of nature, and increasing amounts of effort must be put in so that it doesn’t erode and fall apart.

When it comes to large systems, evolving over prolonged time periods and with many people/stakeholders involved, the system becomes far more “organic”, and we might benefit from studying and mimicking natural systems.

I posted on chat some insights I was passed related to “self-organization” and it led me to check out the idea of “Deep ecology”. Here’s the related toot.

So what is the intriguing part here? Well, consider the human body. Very, very complex, having systems and subsystems (modules?), exposing intelligent behavior, etc. Yet there’s no cell in control of all of this. There’s no leader, there’s no hierarchy. Cells aren’t designated a place in the human body… they find their place where they can thrive, and then new cells are born there. Cells exchange proteins with other cells in a kind of “market” of supply and demand. If these have an analogy to messages in a software system, they’d most likely be Events, since a cell, AFAIK (with my very limited knowledge of biology), doesn’t command other cells.

So yeah… both on the level of organization within grassroots movements, as well as the technological support I’m pondering these analogies to Nature.

Alexander with the example of cities is also referring to more organic, messy systems that - by all the unexpected forces that work on them - likely evolve with the mechanics of natural systems.

Maybe for such systems it is not about the elegance of the design anymore, as it is about its general biological “fitness” to cope adequately with the circumstances in which it operates. And that should be something to take into account. “How does my addition impact the fitness and efficiency of the system as a whole?”… Idk, just musing aloud.


Yes, I feel that both Alexander and Kay would agree with that. An ever-changing and local idea of “fitness” - and the ability to evolve the system as the fitness metric changes - seems to be what they both agree on.

It suddenly strikes me that maybe the closest analogy to something like an Alexander “Center” in a software system is a process, or more specifically a Workflow (if we were thinking of it in terms of an interaction with a human).

A Center in Alexander’s thought really is an “object” in three-dimensional space, or a “feature”. Something like “a garden” or “a building” or “a room” or “a doorway” or “a wall”… something that has its own identity, but decomposes into other smaller objects, and crucially these objects overlap at the edges, because they can’t help but do so. A doorway, for instance: is it part of one room, or two? Or is it something in between each? Well, yes. All of the above! It depends! A doorway can be widened and softened into an arch, blocked off into a niche, deepened and strengthened into a corridor, intensified into an airlock… the edges of systems always are overlapping subsystems.

A Workflow (or perhaps a Session) in a software system does have the sort of deep ambiguity and overlap that Alexander points to. Crucially, for a user-initiated and managed workflow, it’s a venue where the user can bring up and combine data from different applications or silos, so there’s the overlap. It can be temporary or have a semi-permanence to it, or be deepened and strengthened into an Application for itself.

Objects in a Smalltalk system do have this quality, I think. I’m not sure about objects in other systems. C++ or Java objects, I feel, have a kind of “deadness” to them which makes it almost impossible for them to be Sessions or Workflows. Not quite from anything inherent in their “objectness” (at a virtual-machine level) but in the tooling around them, the expectations for how they are defined and invoked.

It’s a very subtle thing, this idea of “liveness” which Kay (and Bret Victor) often points to, but it’s somewhat similar to Alexander’s idea of “life”. Something about interactivity and responsiveness, but not just interactivity and not just responsiveness. But being able to be configured on the spot and then also being able to be reified, to be a subject and object and message all at once. To be something both experienced and spoken/reasoned about.

Session/Workflow. Something like that. An “object” or “process” in the sense of something that receives messages and updates state, but that’s almost just the mechanical shadow of what it actually is… “an active idea as interaction-loop that can modify itself as needed”. A boundary perhaps - as required to maintain its self-integrity, and “dumb data structures” are still a perfectly good kind of self-identity - but more important is that it can self-modify?

Well, it’s an idea. Not sure how close I am to the reality. But if it’s embedded in a human-machine interaction loop, I’m sure we can spot the “strong centers” by their feel. If an interaction/component feels good and makes you happy, it’s functioning as a center in that system. If it makes you feel blocked, frustrated, distrustful, frightened, interrupted, unable to express your idea in its most natural terms… it’s not functioning as a center in the sense of a “mechanical proxy for a living thought”. Something about its natural being-ness in that system is being blocked or hidden and needs to be intensified and brought out.

William Gibson made up the concept of “cyberspace” because it provided a sort of “arena” for fictional fight/exploration scenes that wasn’t spaceships or castles, and the idea resonated with a lot of people because it caught something that we felt about how computers maybe weren’t but could be. I feel there is some kind of analogy between a system/session/workflow and an architecture in the sense of it being a “mental space” that we inhabit and put our attention into (in the form of symbols and other thought-proxies), and that despite all the words poured out since the 1990s about “cyberspace”, this fundamental human-machine-interaction-system analogy maybe hasn’t been explored fully yet.

Something that Gibson almost totally missed with his Cyberspace was the idea of “windows”; how we very naturally multiplex our attention into multiple sub-spaces. Related to this, I’m thinking about workflows/sessions being “locally configurable proxies of both remote machine entities and personal subjective thoughts”, and this three-way combination maybe capturing some of that ambiguity/overlap in Alexander’s idea of Center which Ted Nelson also was after with “hypertext” and “applitudes”.

A Center then, as a “message in transit”? No, not quite. “A persistent message in an ongoing system?” Perhaps. Or “a reified interaction as a system itself”. A Center mediates, I think, that’s its essential nature, and that’s where the ambiguity/overlap comes from. A doorway or hallway almost is the iconic Center. So whatever the information-system equivalent of a doorway or hallway is. I think that maybe gets quite closely at Alan Kay’s idea of “object” when he says “C++ isn’t it” and that “messaging” was closer to the intent.

One of the things that always makes me feel a system is “not a strong center” is if it can’t let me reify my interactions; if I can reify them, then it feels stronger. For instance, a REPL with command-line recall (as in GNU Readline) feels like “a strong center”. My commands are real things, they’re my thoughts in flight, and I want the system to store and keep them as objects. The same with events. Many “log viewers”, though, don’t have that same quality; there’s something “not live” about them. There’s a fussiness and formality to them, and a sense that the logs or histories are “dead” (they can’t be re-invoked, for example; I can’t group them and turn them into scripts; I can’t then iteratively deepen that script into an app; these are the sorts of interactions I intuitively expect from “a mechanical reification of a thought”).
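To make the idea of “reified interactions” concrete, here is a hedged, toy sketch - all names and structure are invented for illustration, not any real system’s API. Each command the user runs is kept as an object, so past interactions can be re-invoked and grouped into a replayable “script”:

```javascript
// Each interaction becomes a first-class object stored in a history,
// rather than a dead line in a log.
const history = [];

function run(name, fn, ...args) {
  const command = { name, fn, args, invoke: () => fn(...args) };
  history.push(command); // the interaction is now a real, re-invocable thing
  return command.invoke();
}

// Group a slice of history into a new, replayable "script" object -
// a first step toward deepening it into an app.
function toScript(commands) {
  return { name: "script", invoke: () => commands.map(c => c.invoke()) };
}

run("double", x => x * 2, 21);                     // 42
run("greet", who => `hello ${who}`, "world");      // "hello world"

const script = toScript(history.slice(0, 2));
console.log(script.invoke()); // [42, "hello world"]
```

The point of the sketch is only the shape: history entries that can be invoked again are “live”, in a way that read-only log lines are not.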

Perhaps this means I’m thinking of a Center in an information system as something like a Sign or Symbol. Or an Interface. Not sure quite whether this fits with my contention earlier that a Center is not a Module, but I think maybe it can. It’s not the essence of being a barrier but the essence of being a connection. (A connection can be a barrier, but it also can be “empty space”. A room or courtyard is a valid Center in Alexander’s thinking, just as is a wall or a gate. What’s the software equivalent of “courtyard” or “empty space”? Well, a communication channel, I think. A “byte” and a “memory address” are both very strong centers, as in well-defined, having a clear sense of their own being-ness. These probably aren’t anyone’s idea of a Module though.)

Also, the various contexts of use that a software object exists in are part of the overlap for its nature as a Center. An object’s strength as a Center will depend on how well it fits with each use-case and context. One important such context is how well it can be modified (by its authorised user/owner, of course, not by attackers). To be modified, it needs to be deconstructable into its components or have “source code” available (but preferably the “source code” will be objects and not just dead text… but also, not requiring a running system to exist, so not not text. Ugh. The ambiguity is subtle but I think I almost understand what I mean.)

Maybe the idea of modularity came from Herbert Simon’s 1962 classic “The Architecture of Complexity”. All the people quoted in this topic were probably familiar with it. Simon argues that hierarchical modularity is a key feature of both natural and engineered systems. In the human body, for example, the modules are the organs.

However, Simon also stresses that these modules are only nearly decomposable. They do not form a tree structure. Any kind of connection between modules is possible and often necessary. A module simply has a lot more connections inside than outside. Which makes it not so different from a center.


Interesting! I wonder if I can get hold of a copy of that paper, it sounds very much like Alexander’s thinking. From the abstract it seems to be in the context of “General Systems Theory”, and Alexander is also thinking about systems theory.

I’m almost wondering if in software, a “center” might also be something like “an addressable entity”. Something that can be distinguished or pointed at in some way. That would include everything from memory bytes up to structured records and objects. We can strengthen a center by building a boundary around it, in the same way that we can improve the integrity (self-identity) of a record or object by guarding access to it, but we don’t have to do this; we can leave it open, as in scripting languages. And the concept of a “thing we can think about as a unit” is still there even if it’s latent and distributed, as in data-driven architectures in video games. And we can often change the way we break a data structure into components, much like how Alexander centers in architecture overlap.

So that’s two possible candidates for centers: “workflows” and “addressable entities”. Possibly the second one is more general. We sense frustration with a software system if there’s something that we’re conceptualising in our minds as a unit (whether it’s a task or group of tasks, or a set of data/knowledge) but the software won’t acknowledge it as being a unit. Put another way, that’s because we’re perceiving a center but the software is not letting us reify that center as a machine object.

While there are some deeply helpful insights in the object-oriented tradition, one way that mainstream object-oriented architectures can break down is when they forbid overlap - when they stop the user from reconceptualising the components of multiple objects as another object.

There’s probably no inherent reason, especially in a Smalltalk-type OOP environment, why it should be very difficult to take the parts of multiple objects and group them into a new one to meet a new user need (to reify a new center that the user perceives), but sometimes it is. Sometimes that’s because too deep a focus on “encapsulation” prevents the components that the user sees as separable and reusable from being accessed; sometimes it’s because there’s too much coupling between those components, so that they wouldn’t actually be helpful to expose and would break if they were separated. Sometimes it’s because there’s been too much emphasis on ontologies - classes/types or inheritance - so the user is perceiving something that the class/type system won’t let them express, because an earlier judgement of “this is object A” can’t be overruled into “but this is also pieces of objects B and C”.
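In a dynamic language the “remix” itself is mechanically trivial, which suggests the obstacles really are about encapsulation, coupling, and ontology rather than the machine. A toy sketch (all objects and names invented for illustration) of pulling pieces of two existing objects into a new, user-perceived one:

```javascript
// Two pre-built "applications", each a sealed bundle of state and behavior.
const emailTool = {
  contacts: ["ada@example.org"],
  send(to) { return `sent mail to ${to}`; },
};

const calendarTool = {
  events: [],
  schedule(title) { return `scheduled ${title}`; },
};

// The user perceives a new center - "invitations" - made of a bit of each.
// No class or ontology is declared up front; pieces are simply regrouped.
const invitations = {
  contacts: emailTool.contacts,
  schedule: calendarTool.schedule.bind(calendarTool),
  invite(title, to) { return [this.schedule(title), emailTool.send(to)]; },
};

console.log(invitations.invite("lunch", invitations.contacts[0]));
// ["scheduled lunch", "sent mail to ada@example.org"]
```

The fragility the paragraph describes shows up exactly here: this remix only works if `send` and `schedule` were exposed, don’t secretly depend on each other, and aren’t locked behind a type judgement that says they belong to A and only A.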

But being able to separate an object’s components and remix them into a new object is probably a deep part of capturing some of that Alexander-like overlap/ambiguity, and Kay-like liveness, in the user experience. At least, this is a felt need I often have when I’m interacting with large pre-built software systems: “I don’t want all of this! This is not solving my problem! I want this bit here, and that bit there, and to put them together! And I don’t want a big formal ontology, because I don’t yet know what it is I’m building, I need to see it and play with it first. But safely, of course.”

Smalltalk was born in play - how did OOP lose that? Also, Smalltalk was born in 1972, in an educational context, and on a machine with the capability of 8-bit micros, yet by 1977 it didn’t manage to migrate into the actual 8-bit micros that hit the mass market. Instead, BASIC did (“Fortran but with liveness”), which was much less suitable for the home computer role than even Logo! There was something about Smalltalk, great as it was, that didn’t let it make the jump into the world it created. I still find that strange.

One reason why I’m thinking about “workflows” as an entity that can be reified is that I’m reading this article on the history of the “operating system” as a concept, and how it arose from getting the machine to do setup tasks which programmer/operators had to do. (Setting up batch jobs in the 1950s all sounds very much like the ceremony of modern “cloud deployment”, the complexity of which makes my teeth ache; we seem to be going backwards in time, not forwards to even the 1970s. “Ensquozened” card decks were literally the first version of “minification”. Why! Why are we recreating busywork for ourselves? Why did our tools fail us so spectacularly that we’re back to manual batch job setup and congratulating ourselves for being “devOps”?). The History of GM-NAA I/O and SHARE


There has been some work on Traits in Pharo & Squeak Smalltalk which allows you to build classes from smaller pieces.
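For readers outside the Smalltalk world: a rough JavaScript analogy to Traits (not the real Pharo/Squeak semantics, which also handle method conflicts and exclusions) is mixing reusable bundles of methods into a class’s prototype. All names here are illustrative:

```javascript
// "Traits" as plain bundles of methods, independent of any class hierarchy.
const Comparable = {
  lessThan(other) { return this.value < other.value; },
};

const Printable = {
  describe() { return `<${this.constructor.name}: ${this.value}>`; },
};

class Temperature {
  constructor(value) { this.value = value; }
}
// Build the class from smaller pieces rather than by inheritance.
Object.assign(Temperature.prototype, Comparable, Printable);

const cold = new Temperature(3);
const warm = new Temperature(20);
console.log(cold.lessThan(warm)); // true
console.log(warm.describe());     // "<Temperature: 20>"
```

The appeal is the same as in the discussion above: behavior is composed from separable, reusable pieces instead of being fixed by a single inheritance ontology.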

Also, Smalltalk was on some 8 bit machines. Rosetta Smalltalk (which was never released) and TinyTalk both ran on them.

Edit: Almost forgot. Rosetta was Smalltalk-72 and TinyTalk was Smalltalk-76 based


There has been some work on Traits in Pharo & Squeak Smalltalk which allows you to build classes from smaller pieces.

I think Aspect Oriented Programming in the high Java era of the late 1990s to 2000s was possibly trying to get at something similar, but I never had to deal with it.

I feel like probably a really simple approach to OO using composition instead of inheritance might help. Not sure though.

Also, Smalltalk was on some 8 bit machines. Rosetta Smalltalk (which was never released) and TinyTalk both ran on them.

I’m sure there were - I was no doubt overstating how hard Smalltalk is to get going, at least for actual experts, as opposed to me. Sadly, in the 8-bit era anything that wasn’t built-in tended to be expensive, and since the machines themselves were already pricey, nobody really wanted to take the hit. Logo did get heavily promoted in schools, but this backfired, because then it suffered from a stigma of being an “educational” language - like Squeak, like PLT Scheme before the rebrand to Racket, and like so many “graphical programming” languages which will never, ever, break out of a classroom and will be forgotten immediately.

Somehow BASIC didn’t get this stigma, I think because it had a body of software that wasn’t just games and education but was also used in science and industry like Fortran was, for real data analysis. But it’s a pity that home machines didn’t ship with anything better. I guess that’s why I like looking at retro 8-bit VMs: the simplicity of implementing, but also the “alternate history” idea of what could have been achieved if the Apple II, PET and TRS-80 had given kids a truly state-of-the-art tool? Or if the Amiga, say, had made that jump to being fully live and programmable? We might have been spared decades of Windows. On the other hand, the Jupiter ACE suggests maybe nothing would have happened, but still…

Ok so it looks like William Harrison and Harold Ossher got to this same idea, in 1993. Before the OOP fever had even reached its height! Very early movers.

Subject-Oriented Programming
(A Critique of Pure Objects)
William Harrison and Harold Ossher
IBM T.J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598

Although the theme of subjects in anthropomorphic terms is illustrative, we should not lose sight of its importance in tool and application integration settings. The tree could easily be a node in a parse tree, the bird an editor, the assessor a compiler, and the woodsman a static-semantic analysis tool. Each of these tools defines its own state and methods on the parse-tree nodes, e.g. the editor has display status, the compiler has associated code expansions, and the checker has use-definition chains.

Either the developers of these applications cannot encapsulate their own state and behavior with the parse-tree node to gain the advantages of encapsulation and polymorphism, or the system designer must manage an ever-expanding collection of extrinsic state and behavior becoming part of the intrinsic node. In fact, in the presence of market pressure to adopt applications provided by vendors rather than do all development in-house, the definer of the node faces the impossible task of anticipating all future extrinsic requirements. This burden demands a more powerful model than the classical object model in order to facilitate the development of application suites. We propose subject-oriented programming as such a model.

Subject-oriented programming failed dismally in the marketplace of ideas, of course. But it might be worth looking at. Might have just been too early.

We use the term subject to mean a collection of state and behavior specifications reflecting a particular gestalt, a perception of the world at large, such as is seen by a particular application or tool. Thus, although for smoothness of flow we may occasionally speak of subjects as individuals, they are not the individuals themselves, but the generalized perception of the world shared by some individuals. Similarly, subjects are not classes. They may introduce new classes into the universe, but subjects generally describe some of the state and behavior of objects in many classes.

I mean but can’t we just do this in OO by creating new objects and linking them to others?

We probably can’t do that kind of deep linking if we have too much behaviour attached to objects though. What we probably end up with is very simple core “objects” that are essentially database rows with some type/schema consistency constraints.

Sadly, after such a promising start, from page 4 through 18 the paper devolves into what I consider, looking back, to be general 1990s unworkable Rube Goldberg object-oriented nonsense. The sort of architecture astronautics that made TCP/IP and HTTP look good by comparison. Far too closely tied to the idea of objects and “operations” on them (a sort of generalized distributed method, but still a change occurring in time), rather than to any kind of declarative database model. Activations, instance variables, interfaces, OMG CORBA, and still lots of fiddly interface and implementation hierarchies.

None of this ever had a chance of being rolled out, and if it ever did, it would have broken immediately. It’s an insanely complex distributed-data-rendezvous layer on top of CORBA and object-oriented design, instead of being something small and simple underneath it.

Stefan Lesser has some interesting comments on Alexander:

Coming back to software, another important part — and perhaps the most important one — is the part of the software that presents itself directly to the person using it. If software has anything like geometric properties then they sure occur in the user interface. Deeply interwoven with psychology and embodied cognition, human interface design might as well be the discipline that has most to learn from Alexander (and the least prior art to my knowledge).


Programming is fundamentally about building tools, which is a step removed from the creation process. We’re not creating something directly, but rather create something that others can use to create something else. It is here where Alexander’s main criticism about contemporary architecture applies directly to software: the people using it are not the people designing and building it. As in architecture, we have created a world in software where we, too, discriminate people from experts and have built a whole industry based on that chasm.

If a better future for architecture is rooted in involving everyone more in the building process, so that the people who exactly know or feel what they need and want are empowered to design and build these things themselves or at least are much more involved in the process of creation, then software needs to be headed into the same direction.

Bad software, designed without empathy, that restricts people to only follow strict procedures to achieve one specific goal, has trained millions of people that software is cumbersome, inflexible, and even hostile and that users have to adapt to the machine, if they want to get anything done. The future of computing should be very much the opposite. Good software can augment the human experience by becoming the tool that’s needed in the moment, unrestricted by limitations in the physical world. It can become the personal dynamic medium through which exploring and expressing our ideas should become simpler rather than more difficult.

That phrase designed without empathy resonates very strongly with me. Ironically, I see and feel that lack of empathy most of all in the “design” movement.

The important thing to me is that the user interface truthfully represents both the information domain and the machine which is rendering it. We have a lot of very shiny interfaces driven by design studies, but which separate the user from both control over their data and understanding of their machine.
