@akkartik In my opinion, it is profit-seeking as a social force, one that acts at a level above the individual, that limits diversity and plurality.
I don’t care if an individual decides to profit from something. I care if this is the ONLY way in a STATISTICAL sense.
For the same reason, the state of software as it is today is a product of that force.
My opinion on the matter is captured in this blog post:
On the first, yes, I think this captures exactly what I’ve felt about software as knowledge vs tools, and why (particularly as I get older, and the various personal knowledge bases I keep have outlived multiple computers, operating systems and software stacks) I get frustrated that software isn’t more “knowledge-like”. That it should be constantly accreting - but isn’t, because the tools it’s encoded in keep getting rebuilt - yeah, that’s probably it.
The inherent nature of tools as necessarily having to be constantly rebuilt… that’s an interesting thought that I need to ponder more to see if I agree with it.
Certainly there’s something about “tools” that’s probably more situational than “knowledge”. They’re things that you make in order to make something else, not for their own sake. But do tools have to have the short lifespans we accept in software? A knife or a fire doesn’t constantly change, does it? Or even a socket wrench set? I feel like our computing tools should aspire to be more like a well-made knife that you can use all your life, rather than a single-use widget made out of plastic that will break and get thrown away tomorrow.
Earlier, though, I was complaining about software tools not changing enough, so I’m not sure what I want.
I think maybe what annoys me at the moment is that our software tools seem to churn yet without really developing. Like, someone puts out a slightly improved take on C, or a new flavour of a website-frontend-rendering fabric, or a slightly different non-SQL database, and everyone rushes to tear out and replace their stack with this week’s new hotness… but actually, this week’s new hotness is not really that different in its powers and capabilities from last week’s old hotness, except for the tiny, overlooked downside that replacing it broke everything in the world that was happily still using last week’s old hotness. So we’re getting this massive churn and breakage (losing years and years of old but still good knowledge each time) yet no actual progress. It’s not just that we’re treading water, we’re actively tiring ourselves out with each one of these upgrade spasms, and meanwhile the cybersecurity and centralisation/surveillance sharks are circling closer.
So I guess I’d like tools to be like knowledge: each one a simple, correct, piece of a puzzle - a solved problem - that doesn’t change, but can be combined with others to make more capable tools. (That’s allegedly the Unix Philosophy, though Unix doesn’t really do The Unix Philosophy very well at all unless all you care about is processing lines of text.)
But I can see that single-use tools are also helpful. Being able to just whip up a quick script that you’ll throw away tomorrow, but does the job today, and then having the luxury of forgetting it… that’s also nice.
And the older I get, the more I realise that actually Solved Problems are extremely rare. Almost all “solutions” to problems are situational and temporary and often cause bigger problems down the line. See for example: asbestos, antibiotics, the Green Revolution, the internal combustion engine… And in programming it’s the same, we’re just seeing the results of human hubris faster.
So I’m not quite sure what I want in tools: forever-lastingness, or happy disposability. But maybe something like “a knife, and also some clay” would get us both. We don’t necessarily need landfills of toxic nonrecyclable plastic (or its software equivalent).
On the second point - formal systems not being able to hold large knowledge structures - I think I wouldn’t have agreed with that a few years ago, but now I’m starting to understand what it means and yeah, it is a problem. Formal systems can be amplifiers for human thinking, but they can’t entirely replace it. The squishiness between hard systems is where our intuition, common sense, creativity, etc actually reside. There’s so much tacit knowledge that can’t be formalised because we don’t quite know it ourselves… in the same way that we often have thoughts, or the beginnings of thoughts, that we can’t always put into words or symbols. So lots of small, personalised formal systems that interact through humans make sense, and that seems pretty closely aligned with the concept of “malleability”. Your Digital Scientific Notations sounds like a really interesting project!
I certainly agree about churn. Today’s tools are much more fragile than they should be. But there is also a level of useful and perhaps inevitable obsolescence of tools as they are replaced by better ones.
My 40-year-old collection of screwdrivers still works fine, but for much of what I do today, I need newer Torx screwdrivers and I actually prefer them. That’s real progress. Similarly, nothing prevents me from using my old Epson PX-8 exactly as I did in 1988. It still works fine. But I prefer a modern Linux laptop to WordStar running under CP/M on an 8-line display. What matters is that I can still process the text I wrote in 1988, which is plain text (with LaTeX markup). The scientific content of that text is still relevant today.
It’s good to create new tools. It’s bad to render old tools ineffective. In a world filled with shades of grey, it’s rare to get nice black and white like this. Before computers we just took this for granted. There’s just no way to reach in and make an old screwdriver in someone’s workshop stop working. But we have grown accustomed to this with software.
My perspective is a bit different. Leaving aside software-as-a-service, no one can take your software away from you. You can use it as long as your hardware keeps running. It’s not very different from screwdrivers.
The big difference is the enormous interdependence of software tools. My old PX-8 is almost useless in today’s world. I can still write text on it, in WordStar or as plain text. But getting those files out of the machine is already a challenge: the only channel is an RS232 serial port, and none of my modern computers has one. The useful life of a software tool is defined by the evolution of its environment, and that’s currently dominated by “move quickly and break things”.
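(For what it’s worth, a USB-to-RS232 adapter plus a few lines of Python can bridge that gap. A minimal sketch, assuming the pyserial package and an adapter that shows up as /dev/ttyUSB0; the baud rate and framing here are guesses, not necessarily the PX-8’s actual settings:)

```python
# Capture a text file sent from a vintage machine over RS-232.
# Requires: pip install pyserial, plus a USB-to-serial adapter.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=4800, bytesize=8,
                   parity="N", stopbits=1, timeout=5) as port:
    chunks = []
    while True:
        data = port.read(1024)  # returns b"" once the timeout expires
        if not data:
            break
        chunks.append(data)

with open("recovered.txt", "wb") as f:
    f.write(b"".join(chunks))
```

The point stands, of course: the file survives because it is plain text; the transfer ritual is the part that keeps getting harder.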
Thinking out loud a bit more about this: All software is a plug-in for a computing system. The ground layer is hardware, and then we add software layers. As long as you can preserve all layers below the ones you care about, you can keep using your tools.
The problem with software (as compared to physical tools) is that most of our plug-ins are not useful by themselves, but only in combination with others. Which suggests that the way we divide software into pieces is perhaps not the best one. And that reminds me of this paper by Stephen Kell which I think many of you would appreciate.
A version known as Security Torx, Tamper-Resistant Torx (often shortened to Torx TR) or pin-in Torx contains a post in the center of the head that prevents a standard Torx driver (or a straight screwdriver) from being inserted.
Security Torx has its own set of variations, and many other variations of Torx drives are available in Security or TR versions. These include 5- and 7-lobed TR heads.
I’m deliberately missing the point you were trying to make with screwdrivers. Couldn’t resist, since Torx was already mentioned as an example of real technical progress while at the same time being the poster child for deliberately non-malleable gadgets.
Yes, existing screwdrivers will keep working with existing screws, but as Konrad said, their usefulness does depend on which screws all the new stuff you buy uses…
(Phillips screwdrivers, OTOH, I’m told are technically flawed in several ways, but being roughly compatible across sizes is such a delight!)
One of the things that always worries me a little about the object-oriented premise of sealed modules with “no user-serviceable data inside” is that, sometimes, you really truly absolutely do need one of these little beauties: a security Torx driver that can get past the tamper-proofing.
I’ve heard some pessimistic takes on whether this is possible, given the semantic differences between languages. A lot of the benefit a language provides comes from guarantees like: this variable is immutable, you can assume it won’t change. How can you ensure that whatever you’re calling respects that?
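To make that concrete, here’s a toy Python sketch (untrusted_plugin is an invented stand-in for code you don’t control): a read-only view can turn the guarantee into a runtime check, but only as far as the runtime’s reach extends.

```python
# "Immutability" as a runtime guarantee rather than a static promise.
from types import MappingProxyType

def untrusted_plugin(config):
    config["debug"] = True  # quietly mutates its argument

settings = {"debug": False, "retries": 3}

# Passing the dict directly: the callee silently changes our state.
untrusted_plugin(settings)
assert settings["debug"] is True

# Passing a read-only view: the mutation now fails loudly.
settings = {"debug": False, "retries": 3}
try:
    untrusted_plugin(MappingProxyType(settings))
except TypeError as err:
    print("caught:", err)  # mappingproxy does not support item assignment
assert settings["debug"] is False
```

And even that check evaporates once the callee is native code in another language, which is the pessimists’ point.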
I think this would only be possible within a single paradigm, in the way that the many JVM languages can collaborate. There’s definitely a joy to using a system built entirely in one style, but it would be losing something too.
I too wish that more things were small, reusable components, but my current thinking is that components only make sense in context, and that establishing that shared context is very hard work. Especially for open-source development, where there’s no mandate (see: Open Source Can't Coordinate); even industrial versions of this have failed.
And abstracting early into reusable components that can work well enough in all contexts does have costs, and it’s not yet clear to me whether they’re worth it.
Maybe the problem is in the notion of composability. There are (at least) two very different forms of composability:
1. Composing well-understood components into a larger artifact through top-down design
2. Composing partially understood components into highly situated systems through bottom-up tinkering
Engineers dream of (1) and want to scale it up forever. Which is why (1) is the idea behind composability in software engineering. But the people who actually use software usually find themselves doing (2), because no readily available software fully covers their very specific needs. And often the needs evolve faster than anyone could write them down as a specification, so tailor-made top-down-designed software is not an option either.
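To make the contrast concrete, here’s a toy Python sketch (all names and data are invented). In style (1) the interface comes first and components are written to fit it; in style (2) the components predate the system, so shims grow at every joint:

```python
# (1) Top-down: a shared contract is fixed first, components fill it in.
def parse(line: str) -> float:
    return float(line)

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

print(mean([parse(s) for s in ["1.0", "2.5", "4.0"]]))

# (2) Bottom-up: pieces that were never meant for each other,
# glued together with a situation-specific shim.
import csv, io

legacy_export = "temp;1,5\ntemp;2,5\n"  # semicolons, decimal commas

def shim(raw: str) -> list[float]:
    rows = csv.reader(io.StringIO(raw), delimiter=";")
    return [float(value.replace(",", ".")) for _, value in rows]

print(mean(shim(legacy_export)))
```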
In the non-convivial world of technology we have today, there is the added dimension of power structures. As Seeing like a State explains, scaling up formal rules and automation beyond what is acceptable for the people being managed is an effective way of disempowering them.
The idea is not to compose but to refine, if possible.
We start from an abstraction and try to find a realization, as a series of components.
Components are again abstractions; we do not know whether they can exist or not.
By having runtime types, we can omit the proofs, but we will be notified if something goes wrong.
Types also help you play, by telling you what is to be expected.
So the direction is the reverse one… And this seems to be the philosophy of Rosen, where we start with abstractions and then try to find realizations.
At some point I would like to find out whether this workflow is how physicists/biologists could create their models… I haven’t done any modeling, so I do not know.
(Apart from the mechanists, who start from components and go up…)
(1) At the moment I work on other things, so I cannot double-check what I say.
My experience is that any non-trivial scientific modeling needs to move back and forth between abstractions and realizations. And that requires both composing existing components (unless you wish to reinvent the integers every time) and searching for realizations of hypothesized abstractions. Types are helpful for the former but mostly a hindrance for the latter.
There are people who use theorem provers as a blackboard, meaning that they suppose that some components exist and move on. Then, assuming those components exist, they look at the implications.
One remaining difficulty is to show that an existing component has the specific properties we want. There, types are indeed a hindrance, as you say. This is why I am talking about runtime types: types that produce an error at runtime.
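A minimal sketch of what I mean, in Python (the decorator and the solve component are purely illustrative): we declare the type a component is assumed to have, omit any proof, and get an error the moment the assumption is violated.

```python
# A "runtime type": a checked assumption instead of a proof.
import functools
import inspect

def runtime_typed(fn):
    hints = fn.__annotations__
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # Check each argument against its declared type.
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if name in hints and not isinstance(value, hints[name]):
                raise TypeError(f"{name}: expected {hints[name].__name__}, "
                                f"got {type(value).__name__}")
        result = fn(*args, **kwargs)
        # Check the result against the declared return type.
        if "return" in hints and not isinstance(result, hints["return"]):
            raise TypeError(f"return: expected {hints['return'].__name__}")
        return result
    return wrapper

# A hypothesized component: we assume something with this shape exists.
@runtime_typed
def solve(x: float) -> float:
    return x ** 0.5

solve(2.0)  # fine
try:
    solve("two")  # the assumption fails, loudly, at runtime
except TypeError as err:
    print(err)  # x: expected float, got str
```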
Regarding the modeling workflow, yes, that is what I would expect. I simply haven’t done it myself, and I want to keep an open mind in case there are peculiarities I do not know about.