Syntropize: An Operating Environment for Malleable Software


Tool by Rahul Gupta (2022-present)

Syntropize is a proposal for a malleable software operating environment, our attempt to reclaim the original vision of personal computing as the means to augment human intellect.

Syntropize is predicated on the decentralization of both data and apps. Decentralization allows data from multiple sources to be arbitrarily intertwingled, and media elements to be dynamically combined to represent such intertwingled data. Syntropize replaces applications with unified workspaces that are dynamically generated based on the information's content and context, as well as the purpose for which the information is sought. By simultaneously addressing the twin problems of data silos and walled gardens, we intend to reclaim the agency users should rightfully have over their data and their software.

The Syntropize prototype serves as a proof of concept meant to demonstrate the feasibility of such a malleable software operating environment, and explores a preliminary architecture for its implementation. We have prioritized capabilities that, to the best of our knowledge, are not widely implemented, over features. These choices mean that the prototype, while functional, is rather minimal and keeps to a relatively familiar user experience.


I kinda follow the motivation, but it’s not clear precisely what you’re building and how it’s related to other projects with similar motivation. Some examples might be a good next step.

@akkartik I came to know of malleable software and this community only recently, and am slowly working my way through the various projects, papers, posts, etc. Having said that, Syntropize is based on the architecture described by Doug Engelbart 1; pp 221-222, interpreted in a modern web context. I dare say it is almost scandalous that the catalogue does not include NLS/Augment, Alto/Star, and Xanadu, as these were implementing or describing malleable software concepts in some form even in their time. There is some similarity with the work of Geoffrey Litt, in that Syntropize creates data models in an independent layer, which might be shared amongst different views, which themselves might embed different subviews. Please see the documentation for more details.

With no intention of blaming a reader, I hope you can sympathize when I say that it can sometimes be a little difficult, even frustrating, to explain these ideas because they are unconventional for the world we live in. I have found that it has been easiest to actually show the software in use and then walk folks through the code at a high level. But I recognize that this is not a great way to spread the word. I contribute in the Solid community, and even in that eclectic crowd, it took four years before someone actually got what I was doing and directed me towards malleable software. I am hoping that interactions in this community will help me find simpler ways to at least motivate the benefits of an operating environment for malleable software.


I don’t think it’s so much a question of wording; there are words there (A LOT), but there’s a lack of Show Don’t Tell. As someone who’s no way in hell going to install npm on my work computer, I would like to see how this addresses all these lofty ideals without having to run the app. :slight_smile:

The irony of addressing the twin problems of data silos and walled gardens by installing a wrapper is not lost on me, but I’m curious. A video, a few screenshots, some problem solving demonstrated via the environment would go a long way.


Thanks for your interest. It is not that I do not understand the principles of scientific communication, but it is genuinely hard to “show not tell” architecture. Even I am unhappy with my wordy explanations. It is a bit like trying to show a cross-section of a car engine that is using some experimental form of propulsion. Even showing the car is not much help, especially when it is barely a car. :frowning: I would genuinely like to figure out a way to show people under the hood.

The software feels like any digital garden app with minimal functionality and a few idiosyncrasies (I use it to store short notes sometimes). I am not asking you to install npm. I provide binaries for Windows and tarballs for Linux.

You would need to have some wrapper that gives you a handle over the operating system and networking; unfortunately, right now (for reasons of expediency), that means packaging Syntropize as another app in a browser shell. This is just a proof of concept. What I eventually envision is that we would be able to wholesale ditch the desktop environment (a bit like Chrome OS but with proper offline software). But a lot of work needs to happen to make something like that a reality.


To clarify, I did try to read the documentation before my previous comment. I do sympathize with the difficulty of explaining new ideas.

For triangulation:

Perhaps this will help close the communication gap, if we can discuss similarities and differences, and different approaches to show-not-tell.


@akkartik Appreciate the pointers. I’ll definitely have a look and get back!


Looks good to me!

So, just to ensure I understand where you’re going with it all: there’s no applications, right? Just a workspace for accessing stuff? And that stuff is adjacent to Linked Data, as in, it’s linked-up data, linked across domains with something like URLs? Also, there’s data update propagation. So do you have anything like an observer pattern so that bits of linked-up data can inter-depend, like a spreadsheet, except across hosts across the internet?

@duncan-cragg Since you have prompted me privately, I have been going over your Object Network write-ups as best as I can. The general ideas are very similar, which is not that surprising. What is surprising is how few people think this way, given these ideas have much history behind them.

there’s no applications, right?

Right! (Sticking to 2D spaces.) Views can invoke subviews (and so on), based on how deeply you want to transclude content from linked-up resources.

Just a workspace for accessing stuff?

Yes, for now. (To use your words: “The App Killer”).

An oversight on my part was to not think about multiple workspaces where, say, one shows some transcluded content and another shows the content in its original context. After reading Nelson, it seems an obvious thing to allow for.

And that stuff is adjacent to Linked Data, as in, it’s linked-up data…

Yes. (I very much like the phrase “linked-up” data).

Since I am using Solid for online storage, I am stuck with its container model, which I am trying to remedy. There is also interest in using RDF linked data. But given that I allow data models to be set up independent of resources, you could conceivably use another link-up scheme.

Also, there’s data update propagation…

Again yes! After every resource is fetched, a watcher is set up for updates. If an update is signalled, the resource is refetched. So if an author changes a resource, other readers see the same update. If links change, the branch will repopulate.

For online storage, this uses the Solid Notifications Protocol (though I am using an extremely old version). I am also proposing an extension to HTTP called PREP to allow fetch and watch to happen with one request. (These are the only update protocols we know of that have some form of content negotiation for updates.) Eventually notifications might be replaced by CRDTs (extending them to any media type is non-trivial, but there are proposals); so notifications/auto-refetch it is for now.
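As a rough sketch of the fetch-then-watch cycle described above (this is not Syntropize's actual code; `ResourceServer` and `WatchingClient` are illustrative stand-ins for a Solid pod or PREP-capable server and a client cache):

```typescript
type Listener = () => void;

// Hypothetical in-memory stand-in for a server that signals updates,
// mimicking a Solid Notification / PREP event on each write.
class ResourceServer {
  private resources = new Map<string, string>();
  private watchers = new Map<string, Listener[]>();

  write(uri: string, body: string): void {
    this.resources.set(uri, body);
    for (const notify of this.watchers.get(uri) ?? []) notify();
  }

  read(uri: string): string | undefined {
    return this.resources.get(uri);
  }

  watch(uri: string, listener: Listener): void {
    const list = this.watchers.get(uri) ?? [];
    list.push(listener);
    this.watchers.set(uri, list);
  }
}

// Client-side cache: fetch once, then auto-refetch on each notification,
// so every reader converges on the author's latest version without polling.
class WatchingClient {
  private cache = new Map<string, string | undefined>();

  constructor(private server: ResourceServer) {}

  open(uri: string): void {
    this.cache.set(uri, this.server.read(uri)); // initial fetch
    this.server.watch(uri, () => {
      this.cache.set(uri, this.server.read(uri)); // refetch on update
    });
  }

  view(uri: string): string | undefined {
    return this.cache.get(uri);
  }
}
```

Once a client has opened a resource, a subsequent `server.write` is reflected in `client.view` with no further request from the reader's side.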


Our approaches are conceptually similar, but where I think we might differ is that I am trying to work a lot more with existing infrastructure:

My attempt has been to build over existing protocols, and my eventual idea is to abstract over them. So it does not matter whether data is available via the file system or HTTP (or some ICN-style protocol in the future). My bias here is not to presume too much by making communication protocols from scratch, because I can see that a lot of thought has gone into them. Also, getting folks interested in adopting a new protocol is harder than moving mountains.
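As a sketch of what such an abstraction might look like (all names here are illustrative, not Syntropize's actual API): the environment talks to one uniform interface, and whether a resource lives on the file system, over HTTP, or on a future ICN-style protocol becomes a dispatch detail chosen from the URI scheme.

```typescript
interface Transport {
  fetch(uri: string): string;
}

class FileTransport implements Transport {
  fetch(uri: string): string {
    return `file bytes for ${uri}`; // stand-in for a real file read
  }
}

class HttpTransport implements Transport {
  fetch(uri: string): string {
    return `http body for ${uri}`; // stand-in for a real HTTP GET
  }
}

// The uniform entry point: resolve the scheme, delegate to the transport.
class ResourceLayer {
  private transports = new Map<string, Transport>();

  register(scheme: string, t: Transport): void {
    this.transports.set(scheme, t);
  }

  fetch(uri: string): string {
    const scheme = new URL(uri).protocol.replace(":", "");
    const t = this.transports.get(scheme);
    if (!t) throw new Error(`no transport for scheme "${scheme}"`);
    return t.fetch(uri);
  }
}
```

A new protocol then only needs a new `Transport` registration; nothing above this layer changes.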

Similarly, I allow for the possibility of different data models being constructed for different data types/media types/formats. (Actually, this is the only place I cheat a bit in the present prototype: I just build all the possible data models at once, though still using independent plugins to load them; something I would promptly remedy in a redesign.) In principle, I can simultaneously support RDF and ONF with the right data-model plugins.
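A minimal sketch of this plugin idea (names are hypothetical): each media type registers an independent data-model plugin, and the environment builds a model only for the type actually encountered, rather than building all models at once as the current prototype does.

```typescript
interface DataModelPlugin {
  mediaType: string;
  buildModel(raw: string): unknown;
}

// Two illustrative plugins; real ones would parse properly.
const markdownPlugin: DataModelPlugin = {
  mediaType: "text/markdown",
  buildModel: (raw) => ({ kind: "markdown", text: raw }),
};

const turtlePlugin: DataModelPlugin = {
  mediaType: "text/turtle",
  buildModel: (raw) => ({ kind: "rdf", lines: raw.split("\n") }),
};

// The environment looks up the plugin by media type on demand.
class ModelRegistry {
  private plugins = new Map<string, DataModelPlugin>();

  register(p: DataModelPlugin): void {
    this.plugins.set(p.mediaType, p);
  }

  modelFor(mediaType: string, raw: string): unknown {
    const p = this.plugins.get(mediaType);
    if (!p) throw new Error(`no data-model plugin for ${mediaType}`);
    return p.buildModel(raw);
  }
}
```

Supporting RDF and ONF simultaneously would then just mean registering both plugins.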


Also, we (read: some contributors to Solid) are currently working on drafting a proposal for Malleable Software with Solid/RDF and will be posting it on the forum here for feedback, hopefully soon.


@akkartik I tried to look at the links that you posted and here is my preliminary impression:

What you are doing appears to me to be malleable software from the bottom up: allowing users to make programmatic or pretty fine-grained changes to the software (simultaneously with using it?).

OTOH, what I am exploring is a top-down approach to malleable software, where data + links in data and some basic expression of intent drive what software gets constructed. Users will have the flexibility to change parts of views/subviews, but the components that create the workspace are expected to be coded externally (at least for now).

I do not see these two techniques as opposed but as complementary points on a spectrum. Bottom-up malleability will provide users with greater agency to play with the space when they need to, whereas top-down malleability allows the computer to start you off with a good convention for viewing some data with minimal input (for example, when navigating to a new page). Both are necessary for software malleability to be fully realized.

Thanks so much for that detailed response! Much to digest, but meanwhile, I also asked: “So do you have anything like an observer pattern so that bits of linked-up data can inter-depend, like a spreadsheet, except across hosts across the internet?” Hope you’ll forgive the gentle prompt! If you don’t understand what I’m asking, I imagine it would involve the resource notification triggering an update of another resource, potentially itself notifying-on to other dependents.

If my explanation of notifications did not clarify, I am probably not sure what you mean by “observer pattern”. Modifying, say, a link in a collection would prompt a client to reload the collection and then, on finding that a specific link has changed, the contents from that link. But in what circumstance might changing a resource cause another to change automatically? (The only way I can conceive of is if another client, via its business logic, were programmed to trigger another change.) A concrete example might help!

I’m following a train of thought from “no applications, just the data” to how to restore what was lost in disposing of applications: the functions and behaviours. In my world-view, that’s where data chunks (your “resources”? my “data objects”) are “live” or have “internal animation”: their state is, spreadsheet-like, dependent on other resource state, thus on resources they’re observing.

[Update: I see you have things like business logic and model-view layers mentioned in the docs; so if a remote resource updates, you may need to trigger some logic or update a view. That’s the kind of thing I’m discussing here. Then in turn, once the business logic has run, it may itself update a published resource that remote logic is watching, and so-on. The point is that interfaces - both for inter-domain interaction and user interaction - are essentially driven by the dynamic data, there are no APIs, services or any type of data wrapped in functions or functionality]

Our approaches are conceptually similar, but where I think we might differ is that I am trying to work a lot more with existing infrastructure:

Ah yes, I explicitly call my project a “research lab”. I spent over a decade trying to work within the constraints of existing systems, using Java, Android, HTTP, REST (original Fielding REST not Web Service “REST”!) and JSON amongst others (see my FOREST publications and NetMash code), but ultimately I got bogged down with the constant pressure to fit in to or swim along with deeply broken approaches, where people didn’t understand because they couldn’t, because I was simultaneously proposing a revolution and an evolution!

What I’m proposing is nothing less than an “Inversion” of the entire tech stack, so I decided that I have to start from the metal and build up from scratch.

Having done that, I can work backwards to legacy, probably simply via open protocols and file formats alone.

My attempt has been to build over existing protocols, and my eventual idea is to abstract over them. So it does not matter whether data is available via the file system or HTTP (or some ICN-style protocol in the future). My bias here is not to presume too much by making communication protocols from scratch, because I can see that a lot of thought has gone into them. Also, getting folks interested in adopting a new protocol is harder than moving mountains.

Again, I agree but this is more along the lines of accepting that my project simply /is/ moving mountains. I just have to get on with it and fully expect to convince no-one and have zero regular users!

Similarly, I allow for the possibility of different data models being constructed for different data types/media types/formats

Again, I’ll work backwards to legacy, but I need to get clarity in my approach first, so other things will be fitted in and around later on, rather than in any way breaking the core concepts I’m after.


While I need to think about it, my general sense is that there should be a hard separation between data and functionality. The desire to mimic the von Neumann ISA on the web, that is, data also serving as code, is a mistake that makes the web a security nightmare. When I reread Fielding’s thesis, the only thing that seems unnatural as part of his conceptual model is code-on-demand.

This is not to say that the operating environment cannot fetch programs from the web, only that data may suggest its intended processing mechanism but not provide an executable. It should be the prerogative of the operating environment to fetch the program from its trusted sources and load the content (or not).
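As a sketch of that separation (the registry and viewer types are hypothetical): data may carry a *suggestion* of how it should be processed, here a media type, but the executable is resolved only from the environment's own trusted registry, never taken from the data itself.

```typescript
type Viewer = (body: string) => string;

// The operating environment's trusted sources: only handlers it has
// explicitly admitted can ever be run.
class TrustedRegistry {
  private viewers = new Map<string, Viewer>();

  trust(mediaType: string, viewer: Viewer): void {
    this.viewers.set(mediaType, viewer);
  }

  // The OE honours the data's suggestion only if a trusted viewer exists;
  // otherwise it refuses to load the content.
  open(resource: { suggestedType: string; body: string }): string {
    const viewer = this.viewers.get(resource.suggestedType);
    if (!viewer) {
      throw new Error(
        `refusing to load untrusted handler for ${resource.suggestedType}`
      );
    }
    return viewer(resource.body);
  }
}
```

The data's suggestion is advisory; the refusal path is what keeps an executable from ever riding in with the content.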

So it would be the client or peer that is responsible for the cascade of changes and data is passive! [EDIT: I see that agents might pick up some of this automation].


Yeah code-on-demand always grated with me too! I silently ignore that when talking about (real) REST!

So it would be the client or peer that is responsible for the cascade of changes and data is passive!

OK, but there’s still no outside APIs beyond a uniform interface - a consistent way to fetch data that’s in standard formats? And you allow ongoing updates. So from the outside, you could indeed see the data in Syntropize as magically internally-animated, especially since you don’t need to poll, you get notified.

You don’t need to throw out that single final step towards a great, spreadsheet-like, end-user-friendly programming model! It would be based on what you call “Application Hooks” as I understand it.

[Update: your approach would mesh quite well with Alexander Obenauer’s front-end designs; here’s a page that’s maybe somewhat adjacent to Application Hooks]

Thanks for linking Obenauer’s page (his notes have been on my unreasonably long to-do list).

The thing about application hooks as I had conceived them is that they allow additional actions on data loaded by the OE into the data model. My original use case was to print something that has been loaded (and a print function could conceivably be shared across views). The implication is that application hooks would not have direct access to the network or the disk, only the data model. This is to prevent any backdoor being created through the hook (as is potentially the case with every web app today). If you want to write data, say, to another domain, you have to go through the OE stack (which could potentially refuse the transfer).
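That hook boundary could be sketched roughly like this (the names `AppHook`, `DataModelView`, and the print hook are illustrative, not Syntropize's actual interfaces): the hook is handed only a view of the loaded data model, never a network or disk handle.

```typescript
// The only surface a hook ever sees: a read-only view of the data model.
interface DataModelView {
  read(): string;
}

type AppHook = (model: DataModelView) => string;

// A hypothetical "print" hook: it can act on what the OE already loaded,
// and nothing else; there is no handle through which to open a backdoor.
const printHook: AppHook = (model) => `PRINT: ${model.read()}`;

class OperatingEnvironment {
  constructor(private content: string) {}

  runHook(hook: AppHook): string {
    const view: DataModelView = { read: () => this.content };
    return hook(view);
  }
}
```

Any write to the network or disk would have to be requested through the OE stack rather than performed by the hook directly, which is where the OE gets the chance to refuse.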

The business logic associated with the data model is responsible for triggering updates to adjacent resources, whether directed by human or computer. (Again, the data model is forced to use the OE’s uniform interface to the network, though this interface could include not just data but remote computation.) Users can always modify the domain-layer plugin. I suspect, though, you might not prefer this?