# linking-together
k
A nice talk about Conway's law:

https://www.youtube.com/watch?v=5IUj1EZwpJY

s
Excellent find. This deserves reposting. This explains why we can't have good software. The first thing it makes me think of is Chromium. Chromium has a recently formed sidebar team that, by accident, has the privilege to push a new, poorly-thought-through feature of the month to a billion users with every release, without checking whether any of the other teams has thought of a better way to do it in the last 15 years of Chromium. The second thing it makes me think of is Gnome Text Editor, the default text editor in Ubuntu. It's built on the faulty idea that one team can make a shared library of GUI components that are universally applicable to all apps, and other teams can build apps by composing those shared components rather than making their own. As a result, Gnome Text Editor is based on GTK's source view component, which is terribly buggy, and the bugs can't be fixed, because the text editor team and the GTK team don't communicate with each other.
d
the privilege to push a new and poorly thought through feature of the month to a billion users with every release
Is this a reference to something specific? Because I definitely have something specific that is happening to our app in Chrome at the moment that really feels like this 😛
s
@Daniel Buckmaster The Chromium sidebar has:

• A reading list, which does almost the same thing as bookmarks, so now we have two features for the same thing, with different advantages and disadvantages, because the teams didn't communicate.

• A bookmark manager, because the reading list alone wasn't enough, so now we have two different bookmark managers designed by two different teams, with two different UIs to learn.

• Browsing history with the new idea to group entries by context, but without communicating with the team that made the previous history UI, so now we have two different history UIs.

• Reading mode, because someone just happened to be working in the sidebar team, and it wasn't part of the job to think outside the box of the sidebar, so now we have a useless reading mode inside a little sidebar next to the full page we're reading. A reading mode could also never be the responsibility of the right team that could make a good one, because a reading mode is just a better ad blocker, which Google doesn't like.
d
Oh I didn't realise "sidebar" was literal in your original comment. I thought it was a metaphor for something tacked on without thought or care. Which... sounds apt.
c
I feel like he kind of buries the lede. If the limiting factor is human understanding, then encapsulation is good. I would guess that it is almost always the limiting factor, and encapsulation is the thing that has let monkey brains get to the moon. LLVM is a marvel to me: this decoupling of the compiler architecture has been an enormous net benefit. It allows me, a half-interested amateur, to design a performant language...! The implicit assumption of his criticism of the Operator class diagram is that there is some simpler way of doing it. Is there?
s
@Chris Knott The presentation could have done a better job of explaining why the operator class hierarchy is not considered ideal. Among some people, inheritance is entirely out of fashion in favor of composition. When I've made compilers, I've never felt the need for class hierarchies. To me there are just nodes, and each node has a pointer to the node type followed by pointers to the nodes for the operands.
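A minimal sketch (my own, not from the thread) of the flat-node style described above: instead of an `Operator` class hierarchy, every node is one record holding a type tag plus references to its operand nodes, and dispatch is on the tag.

```python
from dataclasses import dataclass

@dataclass
class Node:
    kind: str        # the "node type" tag, e.g. "add", "mul", "const"
    operands: tuple  # child Node objects (or a literal value for "const")

def evaluate(n: Node):
    """Walk the tree with a single dispatch on the tag -- no subclasses."""
    if n.kind == "const":
        return n.operands[0]
    if n.kind == "add":
        return evaluate(n.operands[0]) + evaluate(n.operands[1])
    if n.kind == "mul":
        return evaluate(n.operands[0]) * evaluate(n.operands[1])
    raise ValueError(f"unknown node kind: {n.kind}")

# (2 + 3) * 4
expr = Node("mul", (Node("add", (Node("const", (2,)), Node("const", (3,)))),
                    Node("const", (4,))))
print(evaluate(expr))  # 20
```

Adding a new operation over the tree (pretty-printing, constant folding) is then just another function over the tags, rather than a new virtual method on every class.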
k
I'd say encapsulation is good for 100% waterproof implementations of commodity abstractions, i.e. abstractions that are well understood by domain professionals. The equivalent of screws or cables in the physical world. Complex or leaky abstractions are a risk factor. I can't say where LLVM should be situated on those scales. Does every language designer understand the abstractions on which LLVM is based? Is the implementation free of bad surprises?
s
When talking about “encapsulation is good”, it’s important to differentiate for what. It’s almost always good for scale. Either scaling up your own productivity by doing something just by yourself with libraries and frameworks that encapsulate stuff you don’t need to bother with. Or for teams and companies to split up and parallelize work. And we love scale! So much so that some people probably don’t even get why I suggest differentiating.

Encapsulation is hardly ever good for coming up with good design. Most branches of the design space have already been cut off for you, and you most likely don’t understand even a fraction of the inner workings of the encapsulated code you use, precisely so you don’t have to. Without access to all that explicit and tacit knowledge of how it all works together, there’s an almost zero chance of coming up with a better design. I think that comes across very well in the talk and in the paper.

Counterintuitively, I’d say that “encapsulation is what let monkey brains get to the moon” is catchy, but misleading. The Apollo Guidance Computer is a good example of something that was not created with scale in mind; everything — not just the hardware but even the material science of it — was meticulously crafted and deeply understood and then put together in the best way they could at the time. Which also seems to apply to almost all tech we still build on today that came out of ARPA, Xerox PARC, etc. At some point we decided, “Ok, that’s good enough, screw looking for better designs. Let’s monetize the sh*t out of this!” And we haven’t looked back since.
c
I don't deny that good encapsulation is hard and bad encapsulation is common and harmful, but for the design space to even be anywhere near human comprehension, we're already assuming a massive amount of complexity being abstracted away. All software is built on encapsulating the physics of electricity into things like "clock cycles" and "bytes", abstractions you never really think about because they leak so rarely (Rowhammer-type bugs excepted).
My point about LLVM is that its main innovation was introducing a well defined interface down the middle of compiler infrastructure that GCC etc didn't have. This turned out to meet a massive unserved need that allowed the value of low-level optimisations to be much more widely spread.
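To make that "interface down the middle" concrete, here's a toy sketch (all names are invented for the example; this is not LLVM's actual API): once frontends agree to lower into one shared IR, a single optimization pass serves every language that targets it, which is exactly the value that gets widely spread.

```python
def frontend_calc(src: str) -> list:
    """A trivial 'frontend' that lowers 'a+b' into a shared IR of
    (op, dest, args) triples -- the agreed-upon interface."""
    a, b = src.split("+")
    return [("const", "t0", int(a)),
            ("const", "t1", int(b)),
            ("add", "t2", ("t0", "t1"))]

def constant_fold(ir: list) -> list:
    """One optimization pass, shared by every frontend that emits the IR.
    It never needs to know which language the code came from."""
    env, folded = {}, []
    for op, dest, args in ir:
        if op == "const":
            env[dest] = args
            folded.append((op, dest, args))
        elif op == "add" and args[0] in env and args[1] in env:
            env[dest] = env[args[0]] + env[args[1]]
            folded.append(("const", dest, env[dest]))
        else:
            folded.append((op, dest, args))
    return folded

print(constant_fold(frontend_calc("2+3"))[-1])  # ('const', 't2', 5)
```

A second frontend for a different source language could emit the same triples and get constant folding for free; that decoupling is the unserved need the comment above describes.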
What's funny about this https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition is that the encapsulation is so obviously unnecessary. But sometimes it isn't...!