Hi! I'm finding this thread really fun to read, and I'm convinced that these forums are made up of some smart people.
My own thoughts on the subject include a few questions that might be interesting to ask as a kind of basis for the discussion:
- What is a "good programming paradigm"?
- What is "good software"?
- What is a "good software developer"?
I think the reason we see exclamations like "OOP is a trend, not a real paradigm" or "great software doesn't restart", among other commonly repeated opinions, is that businesses differ fundamentally in how they answer these questions, and we can't seem to find a way to unify a few fundamental principles even as we try to, as is our human tendency.
Is it possible to have two pieces of software written in two different paradigms interact? I think it certainly is! "But why would you do that?", you may ask, since it seems to cause a lot of undesirable difficulty and inconsistency. The simple answer is that the real-world problems each piece of software is trying to solve can be very different, so there is a gain in using a different paradigm for each, yet they may still want to interoperate quite closely.
As an example, take a GUI application. It's a run-of-the-mill graphical form-like application with some textboxes, some graphs, and buttons for various functions and navigation. Looking at this GUI application from a practical point of view, it's going to need to keep some state in memory while the user is using the interface. This state is everything needed to draw the interface on the screen, plus the current state of the form data that the user can change in this part of the interface, which is the "input". When the user changes the input, some calculations run and the content of the graphs changes too, i.e. the state also has an "output" part.
We can tell from this description that there are incredibly many points from which the user can change the input; as many as there are textboxes, in fact. Also, every time a textbox is changed, only a small part of the input changes, but we still need to collect all of the current input from the other textboxes and call the calculation function all over again in order to get an updated output state.
Of course, the part of the state that doesn't really need to change is even bigger. There's probably some layout system that we fed with data so that our textboxes end up where we want them on the screen. There's a whole plethora of parameters we've given the rendering engine to make sure everything is the correct color, the lines have the right thickness, et cetera. Only a small piece of this information actually needs to change as the user edits the value in a textbox, and that's the piece that controls the graphs.
But FP, in its orthodox extreme, says that there is only ever input and output. So it becomes incredibly difficult to define what we want here using FP, at least all the way through. The best we can do is define the calculation as a single pure input-output function taking all of the input data, and then have a piece of software that is not FP-strict assemble that input data structure and call the function every time we detect that "the user has changed the value of a textbox".
It would be completely nonsensical and a huge waste of energy to recreate essentially the whole GUI description every time the user changes the value of a textbox.
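To make that hybrid a bit more concrete, here is a minimal sketch (the names like `computeGraphData` and `FormShell` are made up for the example, and the calculation itself is just a placeholder): a pure FP-style core, wrapped in a stateful, imperative shell that re-collects the input and calls the function whenever a textbox changes.

```typescript
// Pure, FP-style core: all input in, graph data out, no side effects.
interface CalcInput {
  readonly values: ReadonlyArray<number>;
}

interface GraphData {
  readonly points: ReadonlyArray<{ x: number; y: number }>;
}

function computeGraphData(input: CalcInput): GraphData {
  // Placeholder calculation: plot a running sum of the inputs.
  let sum = 0;
  const points = input.values.map((v, i) => {
    sum += v;
    return { x: i, y: sum };
  });
  return { points };
}

// Imperative, stateful shell: widgets are mutable objects that persist
// between events; only the graph part of the state is recomputed.
class FormShell {
  private textboxes: HTMLInputElement[] = [];
  private graphData: GraphData = { points: [] };

  onTextboxChanged(): void {
    // Re-collect *all* of the current input, even though only one box changed...
    const input: CalcInput = {
      values: this.textboxes.map(tb => Number(tb.value) || 0),
    };
    // ...and call the pure function again to get a fresh output state.
    this.graphData = computeGraphData(input);
    this.redrawGraphs(); // layout, colors, line widths stay untouched
  }

  private redrawGraphs(): void {
    /* push this.graphData to whatever draws the graphs */
  }
}
```

The layout and rendering parameters live in the mutable shell and are never touched by the calculation; only the graph data is replaced on each change.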
This is a property inherent to the problem we want to solve: a GUI application that consistently shows the user a graphical interface. It makes sense to, say, model our graphical components as "objects" that keep state and are mutable, because that is essentially what the application needs to do.
On the other hand, the calculation function, defined at a reasonable level where it takes the input it needs and outputs the data needed to present the graphs, would be terribly awkward to define using OOP. That's because the input data we're dealing with has been reduced to what should reasonably be needed to solve the problem, and that data doesn't have to be used for anything other than producing the graph output. So why would we store it in mutable "objects"? In this context, it's only confusing and difficult to keep track of all the different states those objects might end up in. We're going to have a much better time with our code if we use invariants and immutable values that enforce a certain order in which data is transformed: the code becomes easier to read, the number of states the data can take on is predictable, and so on; essentially all the benefits of functional programming. So in this case we may even be better off using a programming language that supports the FP view.
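As a rough illustration of that FP side (again, the types and steps are invented for the example), the calculation can be written as a chain of pure transformations over immutable values, so the data can only move through it in one order:

```typescript
// Raw form input as collected from the textboxes.
interface RawInput {
  readonly fields: ReadonlyArray<string>;
}

// Invariant: a ValidatedInput can only come out of validate(),
// so every number downstream is guaranteed to be finite.
interface ValidatedInput {
  readonly values: ReadonlyArray<number>;
}

interface GraphSeries {
  readonly points: ReadonlyArray<{ x: number; y: number }>;
}

function validate(raw: RawInput): ValidatedInput {
  const values = raw.fields
    .map(f => Number(f))
    .filter(n => Number.isFinite(n));
  return { values };
}

function toSeries(input: ValidatedInput): GraphSeries {
  return {
    points: input.values.map((v, i) => ({ x: i, y: v })),
  };
}

// The whole calculation is just composition; nothing mutates, so the
// states the data passes through are exactly: raw -> validated -> series.
const calculate = (raw: RawInput): GraphSeries => toSeries(validate(raw));
```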
So, what is a good software paradigm? I think it depends on the problem you are trying to solve. I also think that "problems", as software developers understand them, are best defined as data transformations, including the context of where the data comes from before being transformed and where it ends up afterwards (in essence: what the various memory locations are actually used for).
On the topic of the other questions, I've noticed that there is some confusion around what makes "good software" and a "good software developer". In fact, a lot of software businesses, and by unfortunate extension some software developers, seem to believe that "good software" is not just software that solves a problem well. Some pervasive beliefs seem to be that "good software" is software that is widely used, long-lived, and supports the livelihood of those who create and maintain it. In reality, a piece of software could be used by only a few people, be abandoned a few months after its creation, and never earn its creators a single penny, yet still be of immense value to its users.
I'm not saying that the incentives above are inherently bad, but I do think they have, let's say, "muddied the waters" a bit in the discussion of what "good software design" really is. As has already been stated in this thread, we love to seem perfectly objective in our statements, yet much is still decided by cognitive bias and our real-world conditions.
The truth is, software does nothing on its own. It is not good by virtue of existing. It is good because the bits and bytes it handles are at some point shot back out into the world through some hardware and deliver a useful impact on the real world.
So, what problems are we trying to solve? Should we even use digital devices to solve them? If so, maybe we should start by thinking about the hardware that collects information from the real world and digitizes it, and what that process looks like. Once we understand the data our software works with and where it comes from, and have an idea of how its output is used, we should probably decide on the set of hardware devices we intend our software to run on, and understand how that hardware really works. What's costly on that hardware, and what isn't? Once we have all of this knowledge combined, I think that's when we can actually start to think about software design seriously.