# thinking-together
m
In the past year or two, I got interested in OO, and I find even its basic form within a mainstream language very powerful. Yet when I try to advocate for using polymorphism in almost any context, I get "this is too unfamiliar" / "that's not how we do things in framework X", with people advocating for switch statements or similar instead. And it really gets me thinking: if we don't even understand OO after 30 years of mainstream adoption (in some form or another) and are still doing "structured programming" with lambdas & objects, is generic code just hopeless in practice? Is there an education problem? Is it just indicative of how poor standards are, that few people have the privilege to do any amount of design for their systems before developing? If so, how does the "future of coding" even matter if any form of real adoption feels impossible?
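To make the contrast concrete, here's a minimal TypeScript sketch of the two styles (the shape names are just hypothetical stand-ins):

```typescript
// Switch-based style: one function that branches on a type tag.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius ** 2;
    case "square":
      return shape.side ** 2;
  }
}

// Polymorphic style: each type carries its own behaviour behind an interface.
interface HasArea {
  area(): number;
}

class Circle implements HasArea {
  constructor(private radius: number) {}
  area(): number {
    return Math.PI * this.radius ** 2;
  }
}

class Square implements HasArea {
  constructor(private side: number) {}
  area(): number {
    return this.side ** 2;
  }
}

// Callers depend only on the interface, so adding a new shape
// requires no edits here.
function totalArea(shapes: HasArea[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}
```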
j
I can only guess, but I suspect you will find that people don't do polymorphism because it solves writing problems they don't have, and creates reading problems they don't want. Generally, there's no reason for despair that bad ideas aren't widely adopted, particularly if you judge ideas by their adoption.
j
There are many kinds of OO, but the kind one mostly encounters is quite bad. Those who have programmed using Java/C++-style OO, or in languages that should be able to do better but are poisoned by Java/C++ culture (Ruby, for example), often develop antipathy to the whole idea.
m
I was being too abstract, and unfortunately this topic sparks too much controversy to be useful, but still thought I'd try. Yes, I'm talking about Ruby & JavaScript. @Jack Rusher that makes sense. What's really bothering me here is a bigger question: if people are so resistant to change, & you propose a non-trivial idea which might actually get adoption, how do you get people to not just use it like their previous thing? Like, to me the Java/C++ thing is that they are structured (C/BASIC/etc.) programmers who do things within objects now; in that case, setting up & programming an interface is such an immense pain that few people do it enough to get the epiphany. What does it say about us that even an idea which had so much going for it almost completely missed in practice? Was Smalltalk in particular marketed poorly, made to seem more approachable than it was? Was it purely the confusion with Simula, which shared the term "OO" & which Bjarne cites as the primary influence on C++?
j
I see loads of objects in Ruby and JS, just mostly for the worse. Can you give an example of the kinds of things you're proposing that aren't being taken up?
m
Not just objects, but actually doing the work to define good interfaces. I've yet to see it in either. The closest actually-good example in Rails is people creating custom form objects for forms that don't work exactly like the built-in ones, i.e. don't map exactly to one model. This is just looking at an existing interface & slotting something in, and in practice it's quite good. I'm not just talking about using classes or objects, but actually spending the time to think about & design interfaces to make good generic code. I fear I'm having trouble making my point; there is too much nuance in this topic for me to speak clearly about it. It goes beyond OO, but I think that's lost. I apologize. Having a tough week, I need some time to think about how to phrase my thoughts on this.
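Roughly what I mean, sketched in TypeScript rather than Ruby (the FormObject interface and method names here are hypothetical, not the actual Rails API): you study the interface the framework's generic code already consumes, then build a custom object that honours it.

```typescript
// Hypothetical stand-in for the interface a framework's form helpers consume;
// the real Rails contract (to_model, persisted?, etc.) differs in detail.
interface FormObject {
  fields(): string[];
  value(field: string): string;
  validate(): string[]; // returns error messages
  save(): boolean;
}

// A custom form spanning two models, conforming to the same interface
// so the framework's generic form-rendering code works unchanged.
class SignupForm implements FormObject {
  constructor(private email: string, private companyName: string) {}

  fields() {
    return ["email", "companyName"];
  }

  value(field: string) {
    return field === "email" ? this.email : this.companyName;
  }

  validate() {
    const errors: string[] = [];
    if (!this.email.includes("@")) errors.push("email is invalid");
    if (this.companyName === "") errors.push("company name is required");
    return errors;
  }

  save() {
    // Would create both a user record and a company record here.
    return this.validate().length === 0;
  }
}
```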
j
API design is hard in every language/paradigm, yeah!
e
with the caveat that I've worked in very few code bases that are really "all in" on OO, the ones that I've seen come closest to making it a good thing (good read here as "helpful") are the teams that did the work ahead of time to model their domain, to define what an "object" was, and didn't sort of back into modeling objects just based off of some real-world things like a lot of books suggest. Doing that, I've found, can dig ya into a wicked deep hole. I think OO, like a lot of design philosophies, can be a powerful tool, but can't be applied without doing the design work, too, and not all devs are comfortable with, or know how to do, that level of design.
a
I think OO is pretty good as long as you layer on a couple of additional dogmas:
• Keep data and behaviour separate
• Everything is either abstract (designed for extensibility) or final (doesn't allow extensibility)
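A quick TypeScript sketch of the second dogma (TypeScript has no `final` keyword, so here it's only a convention noted in a comment; the Codec names are made up):

```typescript
// Abstract: explicitly designed as an extension point.
abstract class Codec {
  abstract encode(data: string): Uint8Array;
  abstract decode(bytes: Uint8Array): string;
}

// "Final" by convention: a concrete leaf, not meant to be subclassed.
// (In Java/Kotlin you would mark this final; TypeScript can't.)
class Utf8Codec extends Codec {
  encode(data: string): Uint8Array {
    return new TextEncoder().encode(data);
  }
  decode(bytes: Uint8Array): string {
    return new TextDecoder().decode(bytes);
  }
}
```

Everything in between, a concrete class that is also open for subclassing, is where the fragile-base-class trouble tends to live.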
s
it's interesting that you bring up polymorphism...I've kind of been on this kick for a while too 🙂 There's this talk from a while back that I really like

The Soul of Software

where Avdi Grimm mentions this breakdown from Object Thinking of programming having two schools of thought, the Formalists and the Informalists (Hermeneuticists)... and Avdi makes a comment that if you were taught inheritance before polymorphism, you were taught by a formalist, but if you were taught polymorphism before inheritance, you were taught by an informalist... Ultimately, what I've found, and what they mention in the book, is that programming/engineering is heavily dominated by the formalist world... so it can be hard if you find yourself as the only person on the team with the opposite view of programming. The way I see the answer to your original question, though, is that the pendulum kind of swings back and forth over time, with one side being frustrated while the other is in their element and highly productive
like...Ruby in the late 00s/early 2010s was dominated by this culture of throwing off the constraints of Java and having complete freedom... which led to a huge explosion of new concepts and tools and ideas hitting the mainstream... (at the same time making it possible to build horrible abominations; I was at Groupon in 2013). And now you see those concepts and tools shifting more into the formal realm, where variation and stability and correctness have become more important, because the patterns that work for the types of apps people are building have been found
m
Yes, inheritance will paint you into a corner, but you don't need to use inheritance all the time. Delegation, interfaces, etc. Rust traits, Go interfaces, Haskell typeclasses: polymorphism without the inheritance. Also related, of course, is the expression problem when talking about OO vs non-OO. https://www.giacomodebidda.com/posts/3-ways-to-solve-the-expression-problem
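For anyone who hasn't seen it, here's a compressed TypeScript illustration of the trade-off the expression problem names (my own toy example, not taken from the linked post): the data-oriented style makes new operations cheap and new types expensive; the object-oriented style is the reverse.

```typescript
// Data-oriented: adding an operation is one new function,
// but adding a type means editing every existing switch.
type Expr =
  | { kind: "lit"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

function evaluate(e: Expr): number {
  switch (e.kind) {
    case "lit": return e.value;
    case "add": return evaluate(e.left) + evaluate(e.right);
  }
}

function show(e: Expr): string {
  switch (e.kind) {
    case "lit": return String(e.value);
    case "add": return `(${show(e.left)} + ${show(e.right)})`;
  }
}

// Object-oriented: adding a type is one new class,
// but adding an operation means editing every existing class.
interface ExprObj {
  evaluate(): number;
  show(): string;
}

class Lit implements ExprObj {
  constructor(private value: number) {}
  evaluate() { return this.value; }
  show() { return String(this.value); }
}

class Add implements ExprObj {
  constructor(private left: ExprObj, private right: ExprObj) {}
  evaluate() { return this.left.evaluate() + this.right.evaluate(); }
  show() { return `(${this.left.show()} + ${this.right.show()})`; }
}
```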
s
@Marcelle Rusu (they/them) You're touching on something that resonates a lot with me. However, I'm unsure if you're talking about what I think you are, or if I'm just reading into your post what I want it to say. Let me try a few seemingly random questions to tease out what you are trying to point at:

1. The original Gang of Four Design Patterns: do they exemplify what you are trying to point at? Or are they irrelevant to it? Or are they perhaps even a counterexample?
2. Some ontologies try to categorize everything in one huge tree (e.g. biological species). Others just try to paint a comprehensible picture of complex local relationships. One could say the former is more concerned with identifying all the nodes while the latter is more concerned with identifying all the edges. Would you agree that the former feels misguided or irrelevant for what you mean, and the latter is closer to it? Does the former remind you of inheritance?
3. Does a mathematical structure from abstract algebra like a monoid (not a monad; although that's adjacent, I'm deliberately trying not to go there) feel related to your idea of the power of polymorphism? For instance, adding integers and concatenating strings feels somehow similar, yet is also clearly different. Does that map to what you have in mind? And furthermore, would you agree that it's not about the formalism (that we call it a "monoid" and can precisely describe what we mean), but about the intuition we can develop for it ("Ah, it's the same thing! It works for integers, strings, and now I see how I can transfer it to this other type, and it's beneficial to see the connection and treat it the same way.")?
4. When you described that scenario where polymorphism is replaced with "a switch statement", did you feel like the other person just wasn't "getting it"? Did you feel like your polymorphic way was simpler and more elegant, while the other person clearly thought it was more complex and argued that it's hard to beat the simplicity of a switch or if statement? Do you often come across a different understanding of what is simple and what is complex?
5. That kind of polymorphism you think of: how does it relate to beauty? Would you say it's beautiful? Does that question even make any sense to you at all, or do you think I'm taking it somewhere weird now?

Sorry if I throw around concepts you're not familiar with. I'm just trying to cover some area, hoping I hit enough overlap with your experience to find out if we think about the same thing or not.
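(To pin down question 3, here's a tiny TypeScript sketch of that "same thing" intuition; the Monoid interface and the instance names are mine, purely illustrative.)

```typescript
// A monoid: an associative combine operation plus an identity element.
interface Monoid<T> {
  empty: T;
  combine(a: T, b: T): T;
}

const sum: Monoid<number> = { empty: 0, combine: (a, b) => a + b };
const concat: Monoid<string> = { empty: "", combine: (a, b) => a + b };

// One generic fold works for integers, strings, and any future instance.
function fold<T>(m: Monoid<T>, xs: T[]): T {
  return xs.reduce(m.combine, m.empty);
}

fold(sum, [1, 2, 3]);     // 6
fold(concat, ["a", "b"]); // "ab"
```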
s
@Marcelle Rusu (they/them) I’ve had the same experience since I began my career in the 90s. I'm reminded of a story of a large company that moved their COBOL system to Smalltalk by rewriting it as one giant class with all of the old COBOL functions as class methods. Unhappy with the result not being any simpler, more understandable, or more maintainable, they concluded OOP was a bad idea. The criticisms I've heard of OOP tend to be based on similar misunderstandings or lack of sensible coding practices and conventions. As they say, "There are no technology problems, only people problems."
r
Present object-oriented languages can be put into two categories: those without type systems and those with bad ones. This makes programming in the correct, interface focused style awkward: either interfaces are implicit, like in the original Smalltalk, or they are difficult to adhere to in a static system that limits subclassing. Solutions like f-bounded polymorphism and matching never made it into industry languages due to the Java effect. Single inheritance makes it more difficult to take advantage of polymorphism, while most cases of multiple inheritance are poorly thought out and lack niceties like method combination. CLOS comes close [sic] but its multimethods probably add more complexity than they are worth. What I'm arguing here is that the problem with contemporary OO is that the languages and contexts it is used within do not engender good ways of "object thinking" (I recommend the book). Programmers often find it easier to revert to more primitive methods, or seek out new ones on the opposite end of the spectrum, than to fight against their tools.
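For readers who haven't met f-bounded polymorphism, here's the shape of it in a TypeScript sketch (my own minimal example, not drawn from the original papers): the type parameter is bounded by a type expression that mentions itself.

```typescript
// F-bounded polymorphism: T is constrained by a type that refers to T,
// so compareTo's argument is the implementing type, not a loose supertype.
interface Comparable<T extends Comparable<T>> {
  compareTo(other: T): number;
}

class Version implements Comparable<Version> {
  constructor(private major: number, private minor: number) {}
  compareTo(other: Version): number {
    return this.major !== other.major
      ? this.major - other.major
      : this.minor - other.minor;
  }
}

// Generic code can now require "orderable among its own kind".
function max<T extends Comparable<T>>(a: T, b: T): T {
  return a.compareTo(b) >= 0 ? a : b;
}
```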
s
You know...this thread has also kind of reminded me of this passage from Patterns of Software by Richard Gabriel, which also gives another perspective/answer to the original question. Page 20, "Abstraction Descant":

> This implies that abstractions are best designed by experts. Worse, average programmers are not well-equipped to design abstractions that have universal usage, even though the programming languages used by average programmers and the programming language texts and courses average programmers read and attend to learn their trade emphasize the importance of doing exactly that. Although the designers of the programming language and the authors of texts and course instructors can probably design abstractions well, the intended audience of the language -- average programmers -- cannot and are therefore left out.
>
> That is, languages that encourage abstraction lead to less habitable software, because its expected inhabitants -- average programmers working on code years after the original designers have disappeared -- are not easily able to grasp, modify, and grow the abstraction-laden code they must work on. Not everyone is a poet, but most anybody can write usable documentation for small programs—we don’t expect poets to do this work. Yet we seem to expect that the equivalent of poets will use high-level programming languages, because only program-poets are able to use them.
>
> In light of this observation, is it any wonder that abstraction-poor languages like C are by far the most popular and that abstraction-rich ones like Lisp and Smalltalk are niche languages?
s
@Scott I've had similar thoughts but I would describe them a bit differently. In the early days of computing, a "full stack engineer" would be expected to design and build a computer, and then write a language and operating system for it before starting to write an actual app, such as an accounting system. Later, the hardware was reused but each app would write its own OS (particularly in games); then OSes got reused but each app would write its own UI framework, etc. Each prior stage seemed crazy from the perspective of the next because people had come to appreciate the complexity of those components enough to see reuse as the only viable approach. I think what we lack now is appreciation for the complexity of the things we currently keep rewriting, which will instead be reused in the future, and this is why most "future of computing" projects tend to be focused on making it easier to rewrite the same old things over again, instead of trying to make it easier to reuse them. So the problem isn't that we need super engineers who can do everything. It's that we need to accept that doing anything right is very, very hard, and find ways to minimize doing it repeatedly.
s
Let me throw in a wholeness perspective on that: If it’s generally true that we can better use a system if we understand how it works, how can we expect to design better systems, if we keep ignoring how large parts of them work? We designed ourselves into a corner by gradually building up complexity and hiding it behind abstract interfaces so that we no longer need to understand how they function as long as they keep functioning according to their interface contract. That’s a good thing, because that is how we scaled up to where we are today. But I’d also suspect that it keeps us from inventing significantly better ways of improving the whole stack, because it doesn’t fit into a single individual’s mind anymore.
s
@Stefan I like that perspective as well and would agree that there are times to reorganize and simplify multiple layers, but would add that the best opportunities for that may be when we abstract it such that all the value shifts to the higher level. e.g. the iPhone was an opportunity to get a very different OS on a phone because all that mattered to the end user were the apps. Avie Tevanian (one of the creators of the Mach kernel that iOS uses) once said (paraphrased) "OSes don't matter anymore because the users never observe them directly".
m
Could we say that browser engines are now at their peak, that we likely won't ever implement another from scratch? At least not in their current form (HTML/CSS/JS). MS Edge now uses the Google Blink engine, and I foresee others dropping their engines for Blink because of the sheer amount of work involved in keeping them up to date. Maybe Firefox will some day drop Gecko and use Blink. In the end, does everyone win when there is only one choice? I'm not saying that's a bad thing, it's just a philosophical question.
j
We all lose if the only remaining engine is built by an advertising company.
e
Monocultures are pretty much never good for anything
s
@Mike Austin The current direction (e.g. Figma) is WASM and custom rendering engines, which effectively sidesteps that whole stack.
m
@Steve Dekorte That may be the trend for serious apps, but what about average sites/pages/webapps? For WASM, are we then re-implementing the DOM? Back to Firefox... I really do think they will switch to Blink at some point in time; it just takes too much effort, money, and time to keep up with never-ending standards. MS probably held off for a long time before pulling the trigger.
s
@Mike Austin Personally, I currently target DOM/JS to leverage its powerful cross-platform layout and font rendering features, but I have abstractions that will make it easier to move to another rendering framework if/when that would be worthwhile. IIRC, Figma needed pixel-accurate rendering, which the DOM does not currently provide. Btw, I've run across lots of macro layout differences between browsers and versions that the web standards folks don't seem to be bothered by, kind of like the undefined behaviors of C/C++ compilers.
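A sketch of the kind of abstraction I mean (TypeScript, with a made-up Renderer interface; not my actual code): app code talks to a small interface, so the DOM backend could later be swapped for a canvas- or WASM-backed one.

```typescript
// A made-up minimal rendering interface the app code targets.
interface Renderer {
  drawText(x: number, y: number, text: string): void;
  drawRect(x: number, y: number, w: number, h: number): void;
}

// Today's backend: absolutely-positioned DOM nodes.
class DomRenderer implements Renderer {
  constructor(private root: HTMLElement) {}

  drawText(x: number, y: number, text: string) {
    const el = document.createElement("div");
    el.style.position = "absolute";
    el.style.left = `${x}px`;
    el.style.top = `${y}px`;
    el.textContent = text;
    this.root.appendChild(el);
  }

  drawRect(x: number, y: number, w: number, h: number) {
    const el = document.createElement("div");
    el.style.position = "absolute";
    el.style.left = `${x}px`;
    el.style.top = `${y}px`;
    el.style.width = `${w}px`;
    el.style.height = `${h}px`;
    el.style.border = "1px solid black";
    this.root.appendChild(el);
  }
}

// A CanvasRenderer (or WASM-backed renderer) could implement the
// same interface later without touching application code.
```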
m
Got it. I need precise rendering for writing apps, but not necessarily pixel-precise layout, so I can see the use case. Browsers are sooo much better than they used to be, but there are some corner cases, and the implementations differ. I'm going to look for an article about Figma and rendering; sounds interesting.