https://florentcrivello.com/index.php/2019/09/04/...
# linking-together
w
a city is not a tree
👍 1
d
I love this quote:
Whenever we have a tree structure, it means that within this structure no piece of any unit is ever connected to other units, except through the medium of that unit as a whole.
s
Ok, so, can we talk about this for a bit? If you subscribe to the theory laid out in those two articles (unsurprisingly, much better in the one by Christopher Alexander), and if you believe that it can be applied to software, how does this change the way you design (as in architect — but that’s not a verb, as Alexander would say) systems? In essence one possible conclusion is that we’ve been doing software architecture all wrong for decades, trying to cleanly separate everything into distinct layers or subsystems with as little overlap as possible, modeling everything as a tree, when it should be a semi-lattice.
✔️ 1
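Alexander's condition is concrete enough to check in code. A minimal sketch in Python, with hypothetical compiler-ish module groupings standing in for the "units":

```python
# A collection of groupings forms a tree (in Alexander's sense) iff any
# two groups are either disjoint or nested; a semi-lattice also allows
# partial overlap. Module names below are hypothetical.
tree = [
    {"parser", "lexer"},        # frontend
    {"optimizer", "codegen"},   # backend
]
semilattice = [
    {"parser", "lexer"},
    {"optimizer", "codegen"},
    {"lexer", "codegen"},       # a cross-cutting grouping that overlaps both
]

def is_tree(groups) -> bool:
    """Alexander's condition: every pair of groups is disjoint or nested."""
    return all(
        a.isdisjoint(b) or a <= b or b <= a
        for a in groups for b in groups
    )

assert is_tree(tree)
assert not is_tree(semilattice)
```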
k
d
I think microservices architecture (optimized for decoupled deployments) directly counters the tree trend (SOA, optimized for minimal overlap)
k
@Don Abrams can you elaborate?!
d
Sure: In SOA everything had its neat little box where there were hierarchies of responsibilities and each service handled some (sub)responsibility that would never overlap with another. In a microservices architecture -- where any service can talk to any other service -- the tree model doesn't exist anymore. Instead you have the possibility that anyone can depend on you (and vice versa). It's way, way harder to operationalize because you can't visualize the entire system.
(FYI: I'm purposely not referring to the hybrid/tiered microservices architectures, aka distributed SOA, because they are mostly Frankenstein monsters born of a lack of tooling/understanding of the tradeoffs before starting -- my perspective matches the one described here: http://nealford.com/downloads/Evolutionary_Architecture_Keynote_by_Neal_Ford.pdf)
k
I'm still a little confused because I always thought microservices were SOAs. Ignoring labels, I think you're comparing the ideal of one side with the reality of the other. We were creating overly complex spaghetti architecture diagrams long before either of these terms was coined. But I think I'm still not understanding you, so I would appreciate further elaboration.
d
The Christopher Alexander essay is amazing. But no, we are not designing software wrong. This is explained by a section of Alexander's essay, starting with "The tree is accessible mentally and easy to deal with. The semilattice is hard to keep before the mind's eye and therefore hard to deal with." My point is that source code needs to be organized in a way that the human mind can comprehend; otherwise it is not maintainable. On the other hand, it is fine if the output of an optimizing compiler is a big mess that you could not reasonably maintain by hand. In the first article, consider that photo of the topologically optimized machine part. Do you want your source code to look like that? If the requirements change, can you modify that structure to meet the new requirement? Of course not, but that structure is the output of an optimizing "topology" compiler. The source code used to generate it is quite a bit simpler and more orderly.
k
I mostly agree; you don't have to see reality "as it is" to reason about it. It's just worth remembering that hierarchical decomposition is an approximation, and that other approximations are possible (see my links above). All too often the response to a failure of hierarchy is to add more hierarchy. It's easy to mistake the map for the territory.
s
@Kartik Agaram Having read both of your articles you linked earlier, I think you're onto something… I'm just not sure if I understand it fully yet. Can you try to explain what you mean by tests being "like a Fourier transform" — is that your way of highlighting the property that reduces the granularity from individual atoms (all possible program states) to slightly higher-level and neatly organized composites? I'm also not quite sure how to close the loop from there to the trees vs. semi-lattices argument, and would love to hear more about where you see the connection.
There is something to microservices (although I have no experience there and would've never made the connection), or the actor model, or Alan Kay's original idea of object orientation, that seems to at least point in an interesting direction. While it could easily be interpreted as just another form of modularization, it has a different quality to it, although I'm having a hard time describing it more precisely. I believe it has something to do with what we see in biological systems, a strong inspiration for Alan Kay when designing object-oriented systems, and looking at nature, it is a de-facto proven way of reliably organizing much more complex systems. Biological systems clearly exhibit strong hierarchical tree structures, although part of why those are so clearly visible to us is likely that we're hardwired to see them (see the paper The Architecture of Complexity). They also exhibit much higher connectivity, which yields their emergent properties. And that, I believe, is the part that's missing in software. So far we have mostly designed pure trees, and not just used trees as a way to understand a much more connected semi-lattice; perhaps using several different trees to model different aspects of the same system.
We also usually build software from the bottom up, assembling smaller components. Christopher Alexander describes his design process as the opposite: always starting from the whole and unfolding structure within it. I have thought long and hard about what such a process could look like for software, but haven't had an epiphany yet… I'm convinced that more iterative design and development processes, which loop around a working product and improve it in small increments, are usually more successful because they at least somewhat leave the space of purely bottom-up assembly of components behind and come a bit closer to an unfolding process. But I doubt it's anywhere close to what Alexander has in mind.
d
@Stefan "Christopher Alexander describes his design process as the opposite, always starting from the whole and unfolding structure within it. I have thought long and hard about what such a process could look like for software, but haven't had an epiphany yet…" -- Isn't this just Top Down Design? Also known as Stepwise Refinement. https://inf.ethz.ch/personal/wirth/Articles/StepwiseRefinement.pdf To use this approach practically while coding and writing tests, the lower level modules start out as stubs, which can be refined into partial solutions that can pass some unit tests but don't fully solve the problem (because you haven't figured out how to solve the complete problem yet). This means your dependencies may change as you replace a stub or trial solution of a submodule with a more final implementation that meets all the requirements. It is tempting to let your dependencies determine your design, or to let whatever code is easiest to write determine your design. That's bottom up thinking. To break out of this, I have to alternate between sessions of coding, and sessions of top down thinking where I don't write code, and instead think about how my present code fails to implement my vision and goals.
s
@Doug Moen Ah, that's interesting. My initial reaction is "of course it's not", but I haven't read this particular paper of Wirth's yet. 😉 I don't think the top-down nature of Alexander's approach is the important piece, but what he calls unfolding in The Nature of Order — uncovering the structure of the whole while looking at all dimensions at once. We certainly know how to develop software in small increments, but that usually means we're still trying to reach a more or less well-defined goal and just cut the path towards it into manageable pieces, maybe even allowing slight diversions along the way. The process, however, is still primarily goal-driven. What Alexander describes sounds a lot more like the process of a sculptor carving pieces of material from a block until an image manifests itself, "discovering the sculpture that's been hiding in the block", but the sculptor wouldn't be able to tell you in the beginning what they're going to end up with. I'll give Wirth's paper a read; he's usually worth the time. Thanks for making the connection to it and sharing a link!
d
I haven't read the Nature of Order, so I'll look that up. "Uncovering the structure of the whole while looking at all dimensions at once" is something I am trying to do in my project. Since I've taken on a project at the limits of my abilities, I can only use an iterative approach, where I build a trial solution, and then iterate as new problems and challenges come to the forefront. Pure top-down design (using a single iteration) doesn't work for me unless the problem is trivial or well understood (by me). I have to write code to make progress and get a fuller understanding of the problem, but with new knowledge and insight comes the need to refactor the design and even change module boundaries (aka uncovering the structure of the whole). My brain isn't big enough to look at all dimensions at once, so I keep copious design notes to record my past insights and decisions, and work on a few dimensions at a time during each major iteration. Quoting Wirth, programmers "must learn to weigh the various aspects of design alternatives in the light of these criteria. They must be taught to revoke earlier decisions, and to back up if necessary, even to the top."
s
@Doug Moen Well, if you end up reading The Nature of Order, please enlighten us. It's four books with over 2000 pages total. You might want to read posts about it instead. I haven't read it either, but read several summaries and posts that focus on the unfolding process (which I believe is in book 2 or 3). I don't have the time right now to find the links, but I'm pretty sure you'll find some of them here somewhere…
Well, I guess I've got better at this note taking thing. Here's something to start with:
• http://iamronen.com/quality/christopher-alexander-the-nature-of-order/
• http://www.livingneighborhoods.org/library/empirical-findings.pdf
• http://www.permacultureproject.com/wp-content/uploads/2015/02/Alexander-as-phenomenology-of-wholeness-dec-081.pdf
• http://jomardpublishing.com/UploadFiles/Files/journals/NDI/V2N2/SalingarosN.pdf
• https://arxiv.org/pdf/1303.7303.pdf
• http://zeta.math.utsa.edu/~yxk833/life.carpet.html

https://youtu.be/98LdFA-_zfA

I wouldn't expect going down this rabbit hole to yield any immediate and practical conclusions for what you're trying to do. You will also see that this is a strong departure from our analytical world, talking a lot about abstract emotional concepts like beauty and our capability to "see" or feel these properties indirectly. I'm not surprised that Alexander ultimately jumps off a cliff of spirituality and religion, which isn't even close to what I am looking for in studying his work. But who knows, maybe he just reached enlightenment and has finally figured it out…
d
Volume 2, The Nature Of Order (Wikipedia synopsis):
> "Complex systems do not spring into existence fully formed, but rather through a series of small, incremental changes. The process begins with a simple system and incrementally changes that system such that each change preserves the structure of the previous step. Alexander calls these increments "structure-preserving transformations," and they are essential to his process."
That's what I was trying to explain earlier.
s
> Quoting Wirth, programmers "must learn to weigh the various aspects of design alternatives in the light of these criteria. They must be taught to revoke earlier decisions, and to back up if necessary, even to the top."
I think the underdeveloped skill in our industry, one that could also potentially be augmented with better tools, is our capacity to work with several dimensions (different tree approximations of the same semi-lattice) at once, as well as how easily we can jump back and forth between different levels of complexity (the depth within those trees), up and down the ladder of abstraction.
> Wikipedia synopsis
Well, the rabbit hole just looks like any other from up there… 😉
d
"I'm not surprised that Alexander ultimately jumps off a cliff of spirituality and religion" -- I think that Alexander's notion of "life" is a very desireable property for an FoC system to have, even if the concept is slippery. Donald Norman jumped off the same cliff, I think. He wrote "The Design of Everyday Things" -- which has lessons for programming language design -- and later followed up with "Emotional Design: Why we Love or Hate Everyday Things" (which I haven't read yet). Norman recanted his previous belief that emotion has no place in design (and revised The Design of Everyday Things to remove this assertion).
from my notes, derived from Christopher Alexander: Quality Without a Name:
• Usability - Will the feature make Curv more usable for novices? Is the feature something that developers will enjoy using? Would either group miss it if it was no longer available?
• Readability - Is the intent of the feature clear and well presented?
• Configurability - Can the user adapt the feature to his or her needs?
• Profoundness - Does the feature strike the user as special or unique, but at the same time, insightful and correct?
s
I fully agree that what Alexander calls "life" in one period and "wholeness" or "quality without a name" in others is a very valuable concept, and we could certainly use more of it — I'm just less interested in the spiritual conclusion of his late period, which leaves me somewhat unsatisfied. On the other hand, the extraordinary body of work he produced over the several decades after his Pattern Language, which is what most programmers know him for, shows what a huge amount of ideas is still ripe to be adapted to our industry (and others that involve a lot of design).
@Doug Moen I like your list of how you adapted his principles directly to software design. Would love to read more about it. In particular I would find your take on the 15 fundamental properties of wholeness adapted to software design very interesting — not sure if that is already what you based those four on, or if he published that later. It's definitely in Nature of Order and I think that's in book 1.
It sounds like you break it down into features and then apply the principles to those features individually — do I understand that correctly? Do you also look at the system as a whole and how those properties are preserved across individual features?
15 properties of wholeness:
• http://www.tkwa.com/fifteen-properties/
• https://blog.p2pfoundation.net/the-fifteen-geometric-properties-of-wholeness/2014/03/01
• https://www.archdaily.com/626429/unified-architectural-theory-chapter-11/
Forget all the links above and look at these first — I think this is potentially the most useful for us to adapt in software design. Alexander's unfolding process is what he came up with after identifying the 15 properties and doing a ton of empirical research to show that people can generally sense the presence or absence of these properties (in architecture of course). His process then makes sure that you create and preserve the properties. I'm particularly after Levels of Scale as one of the properties we don't really follow in software, where the "size" or scale of our abstractions can be completely arbitrary — everything is just a function call (or a method on an object, if you're into that kind of thing), but we have no sense for which level of scale we're operating on, mixing and matching low-level algorithms in high-level structures and vice versa. Ah, that just makes me realize that I really need to stop posting here and get my act together and write some proper blog posts about this.
👍 1
d
I spend most of my time thinking about individual features, so the QWAN is easiest to apply in that context. I try to periodically look at the system as a whole and see how the QWAN applies to the gestalt, but it's more difficult. I can occasionally see design alternatives where, if I modify multiple features at once, I jump to a different part of the design landscape where the fitness of the design is improved along certain axes.
k
@Dan Cook had a great thread about Christopher Alexander back in March that I got a lot out of, but.. it turns out Slack at some point added a restriction on free plans. It doesn't just keep us from searching for old threads if we don't pay, it also refuses to show them. Which means that what I thought of as a permalink for a thread isn't really all that permanent. That is some bullshit. Sign me up to move elsewhere. Anyway, where was I? Unfolding wholeness. It's a dashed subtle idea, and I never grokked it until March, and now I fear I've forgotten a lot of the nuance yet again. But it's not the same as iterative refinement. One very concrete sentence that particularly stood out for me because it fits my prejudices:
> In order for code to be living structure, even the tools used to make the code need to be living structure.
@Stefan
> Can you try to explain what you mean by tests being "like a Fourier transform..?
It's very easy to think of a program as the sum of its source code. Subsystems, functions, lines of code. This decomposition is often useful. I think of it as the time domain in my analogy. An alternative worldview is to think of a program as a space of inputs that it handles, decomposing into different regimes. Within a regime behavior changes smoothly/continuously. Across a regime boundary behavior changes abruptly/discontinuously. I find this decomposition more useful, partly because it's not reified anywhere in the source code and so easy to forget. I consider this the frequency domain in my analogy. One place where you can see the frequency domain if you squint, and if a program is written in a certain way, is by staring at its tests. Usually you'll have one test per regime, and areas around the test will tend to behave similarly. One could imagine trying to add multiple tests per regime to help nail down the boundary more carefully and so reify the frequency domain in the limit. If we did that, a whole new universe of tools would open up. But it's definitely still an open problem.
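A minimal Python sketch of the "one test per regime" idea; the function and its regimes are hypothetical:

```python
# Hypothetical function whose input space splits into three regimes:
# invalid, flat-rate, and per-kg. Behavior changes smoothly within a
# regime and abruptly at the boundaries.
import pytest

def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1.0:
        return 5.0                       # flat-rate regime
    return 5.0 + (weight_kg - 1.0) * 2   # per-kg regime

# One test per regime -- the "frequency domain" view of the program:
def test_invalid_regime():
    with pytest.raises(ValueError):
        shipping_cost(0.0)

def test_flat_rate_regime():
    assert shipping_cost(0.5) == 5.0

def test_per_kg_regime():
    assert shipping_cost(3.0) == 9.0

# Adding tests near 1.0 kg (e.g. 0.99 and 1.01) would nail down the
# regime boundary more precisely, reifying it in the limit.
```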
s
@Kartik Agaram let's see if I grokked this: I think I understand why you picked tests, and it certainly makes this easier to understand than what I’m about to try, but what still puzzles me a little is this: how the program is written is highly subjective, so one domain is based on an arbitrary factor — which makes sense, it's about how we design things after all. But then I would rather try to base the second domain on a different property dependent on the same arbitrary design, but tests are a completely different beast and also completely arbitrarily designed. I know we’re talking about a metaphor here and that's probably not the point, but I would’ve rather picked a property of the program design to stay within the same level of arbitrariness… I don’t know, encoded invariants maybe? Assuming a tester would try to hit all encoded invariants through something like designing the unit tests based on cyclomatic complexity, trying to exercise all possible code paths. Anyway, the metaphor works and I think I understand that part. Would you agree that the connection to trees and lattices then is this: time domain or decomposition in subsystems (program structure) is one tree approximation, frequency domain or decomposition in regimes (invariants?) is a different tree approximation, but both trees stem from the same program, therefore likely a more complex lattice structure?
k
> Would you agree that the connection to trees and lattices then is this:

I hadn't thought about it quite that far 😄 Originally I linked my posts as examples where I point out flaws in one (dominant) approach to tree-based decomposition in software. So I was agreeing with you that we've been doing things wrong for decades. But perhaps my approach is also tree-based, so it isn't completely relevant to this particular thread. As Doug Moen pointed out above (at least in my interpretation of https://futureofcoding.slack.com/archives/C5U3SEW6A/p1574212062223500?thread_ts=1574069038.213900&cid=C5U3SEW6A), trees are valuable approximations or abstractions even if they don't capture every last nuance about cities or software.

> how the program is written is highly subjective, so [the time] domain is based on an arbitrary factor.. I would rather try to base the second domain on a different [independent] property..

You're absolutely right! Congratulations, you found a soft spot in my argument I've waited years for someone to point out ❤️ I'd like the way we visualize the domain of a problem to be independent of how our code happens to be written. Unfortunately tests don't do that. They co-evolve with the code. So two programs written by different people for the exact same domain could end up having incompatible tests, if they choose a fundamentally different approach to solving it, and their internal data structures are different, and there are cascading effects throughout tests of different granularities everywhere. In defense of tests:
• A lot of times there's one obvious architecture. Compilers have a certain flow of parsing, optimization and code-generation that is fairly timeless.
• Having a frequency domain to visualize is super helpful even if it's only for your program's architecture. It's better than nothing, until we come up with something better.
• I've been writing tests for years in a way that tries to mitigate this problem as much as possible. Rather than have tests run sub-components and make assertions on their behavior, I always run the whole program, emitting a trace of domain-specific events (facts deduced by the program), namespaced by different conceptual sub-components. Then different tests make assertions on the state of the trace. Fine-grained unit tests may focus on just traces pertaining to a specific namespace, while coarse-grained integration tests may look at a different namespace. I call this approach white-box testing (http://akkartik.name/post/tracing-tests) and it isn't a complete solution to the problem, because it is possible to imagine a program so radically different that it doesn't even share the same coarse sub-components. But for the most part, in practice, white-box tests help because they simplify the problem of compatibility to just having the same namespace names and trace format.
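A rough Python sketch of the trace-based approach in the last bullet above, not the implementation from the linked post; all names are made up:

```python
# Rough sketch of trace-based ("white-box") testing; see
# http://akkartik.name/post/tracing-tests for the real thing.
trace = []

def log(namespace: str, fact: str):
    trace.append((namespace, fact))

# The whole program runs and emits domain-specific facts as it goes.
def tokenize(src: str):
    tokens = src.split()
    for t in tokens:
        log("tokenize", f"token: {t}")
    return tokens

def run(src: str):
    trace.clear()
    tokens = tokenize(src)
    log("run", f"token count: {len(tokens)}")
    return tokens

# Tests never call sub-components directly; they run the whole program
# and assert on slices of the trace, selected by namespace.
run("a b")
assert ("tokenize", "token: a") in trace   # fine-grained "unit" test
assert ("run", "token count: 2") in trace  # coarse-grained test
```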
s
@Kartik Agaram I read your A new way of testing and I think you’re onto something that I can make a connection to in a totally different context but it feels like the same pattern to me. What you see in “white-box testing” is what I see in declarative frameworks like SwiftUI or React, or in the application of monads (as in Haskell’s IO), or in effect systems: I think it’s also what you call “obvious architecture” above. It’s not obvious at all, but I think I know what you mean. :-) I see a subtle trend in programming uncovering distinct layers, I would call them layers of scale in the Alexander sense, and designing more and more systems around those, by increasing the expressiveness within each of these layers with algebraic composition. When you declaratively specify a UI with SwiftUI, you don’t tell the system what to do directly (that’s the declarative part, but that’s not quite my point here), but you create a data structure that will be “parsed” and transformed into the view hierarchy at runtime. You’re kind of programming within programming, but not just in an eDSL sense, but more in an Inception sense. There are two distinct scales here, although you don’t really see them if you don’t look for them. When you use the IO monad in Haskell, you don’t cause the side effects to happen directly, but you assemble a data structure that will eventually become the “real program” and then reduce down to calculating all the side effects. Again, two separate scales. Both concepts look very similar to me. As does what you do with your “trace-based tests”. You’re separating levels of scale, slicing the system into different domains, but not in an elaborate manual way, designing each layer individually, but by exploiting a structure that already exists. I find it hard to describe, even though the pattern makes total sense in my head… we look for something like a pivot point to rotate the whole system around and at certain angles everything aligns neatly and we can see different aspects of the same thing clearly. It’s like splitting your system into distinct layers, but simpler — a lot less effort than designing the layers explicitly — and yet more powerful. These are the tree approximations derived from a lattice structure. They are not designed, but derived. They are already there and therefore don’t have to be designed, just discovered. And if we could understand better how to discover them, we could cope with much more complex systems. I find as we tumble down our rabbit holes and get quite deep into them, it becomes harder and harder to convey what we learn as what makes sense to us is just totally incomprehensible to others who have not been down the same rabbit hole as far as we have. Words with meanings obvious to us lead others down a completely wrong path. I felt like this when trying to decipher your Fourier transform analogy, and I wouldn’t be surprised if you will feel like this with what I just wrote.
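A minimal Python sketch of the two-scale pattern described above: one level builds a plain description, a second level interprets it. The names are hypothetical stand-ins for SwiftUI views or Haskell IO values:

```python
# Level 1 builds a plain data structure describing the work;
# level 2 interprets that description at runtime.

def print_line(s):            # a description, not an action
    return ("print", s)

def sequence(*steps):         # algebraic composition of descriptions
    return ("seq", list(steps))

program = sequence(
    print_line("hello"),
    print_line("world"),
)                             # nothing has happened yet

def interpret(node):          # the second level of scale
    tag = node[0]
    if tag == "print":
        print(node[1])
    elif tag == "seq":
        for step in node[1]:
            interpret(step)

interpret(program)            # only now do the effects occur
```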
https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh This is pretty much how I feel about Christopher Alexander — he’s telling me to “look up”, and I do but I still can’t quite see what he means because there’s nothing above the menu bar on my screen… :-)
k
Yeah, that's indeed my reaction. I'll try to reflect more on your comment.
d
You still want trees and categories, but you also need them to overlap sometimes. This can be as simple as using interfaces instead of abstract classes
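A minimal sketch of that point in Python, using structural protocols as the "interfaces"; all names are hypothetical:

```python
# With single inheritance (a tree), a class sits under exactly one
# parent; with interfaces/protocols, categories can overlap,
# giving a semi-lattice.
from typing import Protocol

class Serializable(Protocol):
    def to_json(self) -> str: ...

class Cacheable(Protocol):
    def cache_key(self) -> str: ...

# One unit belongs to both categories at once:
class UserProfile:
    def __init__(self, user_id: str):
        self.user_id = user_id
    def to_json(self) -> str:
        return f'{{"user_id": "{self.user_id}"}}'
    def cache_key(self) -> str:
        return f"profile:{self.user_id}"

def save(obj: Serializable): ...   # accepts UserProfile
def cache(obj: Cacheable): ...     # also accepts UserProfile
```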
m
[OT] What a great thread! Sadly most people on FoC will miss it, and we collectively will miss it in a few months. This looks really valuable and reminds me of reading back-and-forths in the c2 wiki 🙂 What I would love in a tool (of thought 😛) is the ability for the people in this conversation to go back, summarize, and highlight parts of the conversation, so people coming to it later can get a gist or a refinement of the conversation after it happened, since I think most of the participants have a clearer idea at the end than at the beginning, and the clearest idea just after finishing the conversation; after that the subtleties will fade out (like tears in the rain 😛). I've been thinking about a slack/wiki hybrid where you start from threads and refine them into articles; you can still jump back to previous iterations of the article and the raw conversation, but the entry point is an organized summary.
❤️ 2
k
Make it happen, @Mariano Guerra!
m
it may be my next (or next next) project if no one solves it by then 🙂