# thinking-together
i
Let's talk about the positive benefits of accidental / incidental complexity in this thread.
Paging @Kartik Agaram and @ibdknox
k
For starters, some examples would be good.
i
As a caveat, I've read Silver Bullet, Tar Pit, etc. only a handful of times, so I might not have internalized all the nuance of incidental vs. essential. So these examples are based on my naive view. Type systems, syntax highlighting, keyboard shortcuts, choice of language, literate programming, and all other programmer affordances are at their core incidental to the problem that the software you write needs to solve. They're also incidental to the act of programming. You feel this when you fiddle with your linter instead of creating a better abstraction or implementing a feature. For that matter, abstractions are incidental complexity. Again, they're an affordance for thought that isn't strictly necessary. Using a mouse to move the insertion point. The ability to delete characters. Agile. If you stood in the middle of your programming memory palace, spun around in a circle, and pointed at a random object... chances are good it represents something incidental. Something meant to make your job easier, or even humanly possible. Now, this viewpoint is taking Brooks (etc.) past the point of usefulness, far into absurdity. But the reason I like to do that is to establish that the split between essence and incident is not binary. It's not even a linear spectrum. Many kinds of incidental complexity are deliberate and ergonomic. Some kinds of incidental complexity are simply emergent. In both cases, you might prefer to have them. As I like to say, incidental complexity is a resource to be spent, not an evil to be purged.
s
This is not a fully formed thought but... When making something is hard, you think about what's really valuable. E.g. older movies without CGI would have to rely more on the storyline. Many newer ones are more about special effects.
💡 2
j
I’d argue that abstraction and duplication are both incidental complexity. The debate over them is about in which circumstances choosing one or another minimizes incidental complexity. Which would lead to the argument that you’re still trying to purge incidental complexity, it’s just that all of our approaches leave some incidental complexity in place.
k
@Ivan Reese @Justin Blank This is a kind of mind-blowing idea. I'm not a scholar of those papers either, but it never occurred to me to interpret 'accidental complexity' so broadly. Regardless, it feels like a very fertile line of thought.
j
To be fair, I’ve read Brooks twice and while I’ve read the intro to Out of the Tar Pit many times, I think I’ve only read the whole thing once. It’s possible that I’m not using the authors’ own interpretations of what the terms meant.
@Ivan Reese “Many kinds of incidental complexity are deliberate and ergonomic. Some kinds of incidental complexity are simply emergent. In both cases, you might prefer to have them.” This is a really great few sentences.
❤️ 2
m
This is a bit of a tangent, but regarding abstraction I’m with Zach Tellman, who argues that we can only talk about the usefulness of it in a given context. Highly recommend his book http://elementsofclojure.com (which might as well have been called Elements of Software) or his recent appearance on the CoRecursive podcast: https://corecursive.com/042-zach-tellman-software-in-context/ Thinking the same might be true for incidental complexity – you have to talk about the context.
❤️ 4
j
Is it really incidental though? If I have a text editor that allows me to move forward and back, and I’d like to move to the beginning of the document, I have a procedure for getting there. I press back many times until I get there. But there is a cost to doing this (time, keypresses, cycles in evaluating 1000 `move-back` commands). So we introduce a home button. It fully and minimally captures the action I was trying to take. We have made the keyboard more complex, along with every layer of software in between that needs to care about this keycode, but is that complexity really incidental? It is perfectly and minimally in service of a real™️ problem.
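A tiny sketch of the tradeoff (names made up, just to make the cost concrete):
```python
# Toy cursor model: home() is redundant given move_back(), but it
# captures a real action in one idea instead of 1000 keypresses.
class Cursor:
    def __init__(self, position: int):
        self.position = position

    def move_back(self):
        if self.position > 0:
            self.position -= 1

    def home(self):
        self.position = 0

cursor = Cursor(position=1000)
cursor.home()  # vs. calling cursor.move_back() 1000 times
```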
j
My take: it’s essential complexity for a document editor, but the idea of editing files is incidental complexity for programming.
💯 2
s
This topic reminded me of this essay on complexity: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1578176757362700?thread_ts=1578176757.362700
we can then start to analyze things like the fact that I want to "simplify my life". On its own, the statement is meaningless because we do not know which aspect of my life I would like to make easier. If I buy more appliances, to reduce my working time -- say a food processor, for example -- I save time in chopping. But it comes at the expense of having to buy the machine, washing it afterward, and occasionally performing some type of maintenance to keep it in working order.
i
I'll echo what @Ivan Reese said with a simple example from Eve, and add another perspective as well. From Eve: It's not essential in a relational language to have removal; instead, the simplest mechanism is to assert a retraction. As a person though, that's pretty mind bendy. It's much easier to think of removing something as an action, rather than an addition of a fact to the world. Having removal or set (remove the current, add this new one) is purely incidental, but as Ivan pointed out, extremely useful.
❤️ 4
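A minimal sketch of the idea (illustrative only, not Eve's actual syntax or semantics):
```python
# A fact store where "removal" is itself just another assertion:
# retracting a fact adds a retraction fact to the world.
facts = set()

def assert_fact(fact):
    facts.add(fact)

def retract(fact):
    facts.add(("retracted", fact))   # nothing is ever deleted

def holds(fact):
    return fact in facts and ("retracted", fact) not in facts

assert_fact(("ball", "color", "red"))
retract(("ball", "color", "red"))
print(holds(("ball", "color", "red")))  # False, yet the set only grew
```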
The other perspective that underlies Ivan's points I think: programming is a process performed by people and as a result we have to deal with things like inferential distance (as the example above shows). What we forget, though, is that it's not just about making things simple/intuitive/easy to work with, it's just as much about crafting the entire process to manage emotional context. If something is objectively better, but feels bad, then it's just going to largely get binned as bad. When we remove all incidental complexity from the process, there's a second order effect to the pacing: you've now moved all of the essential (and likely hard) parts of the problem right next to each other.
The effect is that you've made it feel much harder than it used to be, because the complex bits are no longer spread out among more mundane considerations. Monotony was breaking up the difficulty.
Pacing and emotion management are very important aspects of the problem to consider and are well understood in other domains. E.g. you can't watch a movie that runs at 110% the whole 2 hours, it's emotionally exhausting and you start to tune out at some point. In the programming context, you get one of two effects: either you cause long thinking pauses in between the actual "points of progress," which makes it feel stuttery and like it never flows. Or you end up just getting distracted, because it's much easier to do something else than consider all the difficult things about your problem.
i
Beautifully illustrated, Chris. To zoom back in from my reductio ad absurdum to what Silver Bullet / Tar Pit actually meant — when looking at the frustrating cruft that builds up in, say, a tower of abstractions, each piece of cruft was originally created as an affordance for, say, legacy or convenience or familiarity or flexibility. Taking my absurd examples in one hand, and these practical examples in the other, you can see that it's simply a case of: one person's needless complexity is another person's accessibility.
i
Another interesting effect: if you make things simple enough, people then think they're trivial and ignore them 🙂 We've struggled a lot with that in various contexts over the years.
💯 2
s
objectively better, but feels bad
Sometimes I prefer to drive on town streets that are always moving instead of the freeway, which has a single stretch of very slow congestion. The freeway might actually be faster overall, but it's more painful.
💯 2
s
Reading through this thread I get the impression that essential/accidental complexity is conflated with good/bad user experience or usability and accessibility. I’m still pondering how I feel about that, but felt like just pointing this out could be useful by itself…? My initial feeling is that these things are neither orthogonal nor directly related but somewhere in between (yay, what a safe position to take… ;-). It should be possible to remove accidental complexity in extreme ways and then still provide a good experience. Although some might interpret that as adding accidental complexity back in for the sake of experience. 🤔 The other observation I want to make is that we (including myself) prefer talking about removing accidental complexity. It’s always easier to identify things we don’t want. It’s much harder to identify what we want. I wonder if we’d be better off thinking about what essential really means other than that convenient “necessary for the problem at hand” definition.
k
Yeah, these great examples reinforce my sense that essential/accidental is a useful dichotomy -- and we humans are terrible at distinguishing the two. Our math sides push us too far towards declarativity IMO, and anything imperative is treated as accidental complexity. But anything that exists in users' mental models a priori is, I think, by definition essential. As an example, I manage a deployment system in my day job, and it operates as a convergence engine: you give it the desired state (package versions, number of hosts, etc.) and it takes the steps to get from here to there. Once we released it, however, we uncovered several cases where people care how we get from here to there: a) Sometimes there are hosts that we think can be reused but customers don't want to allow, for reasons our tooling can't see (and there'll always be some of those reasons: outdated secrets, corrupted data, etc.) b) Sometimes our customers want a specific cluster to not be modified, while another subset is. Rather than have to painfully specify the desired state to be identical to the existing state in those cases, they'd rather just get a checkbox that says "hands off this cluster!" Both these changes have been difficult because of the deep architectural division between ends and means in our tool.
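Roughly the shape of it (a hypothetical sketch; none of these names are from our actual tool):
```python
# Hypothetical convergence loop with the two escape hatches above.
def plan(desired, actual, frozen_clusters, vetoed_hosts):
    steps = []
    for cluster, want in desired.items():
        if cluster in frozen_clusters:   # "hands off this cluster!"
            continue
        have = actual.get(cluster, {})
        if have != want:
            # drop hosts the customer won't let us reuse
            hosts = [h for h in want.get("hosts", []) if h not in vetoed_hosts]
            steps.append(("converge", cluster, {**want, "hosts": hosts}))
    return steps

steps = plan(
    desired={"a": {"version": "2.0", "hosts": ["h1", "h2"]}},
    actual={"a": {"version": "1.0", "hosts": ["h1", "h2"]}},
    frozen_clusters=set(),
    vetoed_hosts={"h2"},
)  # -> [("converge", "a", {"version": "2.0", "hosts": ["h1"]})]
```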
Anecdote regarding putting things too close together: we've been trying to onboard teams in my company to a new tool. Our good intentions originally were to do all the legwork and present them with a single go/no-go decision. And we kept finding that people would put off the decision. One of the things we discovered (this is an ongoing issue) was that our decision to show all the services owned by a team in a single document was counter-productive. Talking about each service separately helped people feel less overwhelmed.
j
I think things that give the user a good experience can be essential complexity, but it’s more debatable. We know that not crashing or corrupting data is essential, but you can always argue (and sometimes you’ll be right) that a particular thing that makes an application feel nice to use isn’t really essential. (Also, what’s essential now isn’t the same as what was essential in the past. At lunch, I watched half of Modern Compiler Construction, where Anders Hejlsberg compares a 32kb compiler to a modern compiler--what users want has changed).
k
Yeah, user mental model feels like a sharper scalpel than user experience for teasing apart the two categories. Experience may be at a local optimum. But what's in the user's head you can never get away from.
j
I guess I’m assuming that if something doesn’t match your mental model, you’ll probably have a bad experience.
j
I think we’re describing a property of a system. Since users are different, we can’t design something that will seem essential to all users, but I think we can design systems that empower users to strip away the pieces that they feel are inessential. Programming itself, I believe, has this property. I can choose to use a high-level library, or I can look at its dependencies and use those, or I can re-implement the small piece I need from scratch. Emacs, to the extent that its functionality is implemented as modes on top of a fairly bare-bones editor, also has this property.
i
With respect to mental models... I have a blog post draft I've been sitting on for over a year. It's about the positive value of flawed mental models, and by extension the harm that can come from us toolmakers being too concerned with instilling correct mental models in the users of our tools. A motivating example:
Hammers can hit nails. That's their very purpose. But they can also hit screws, which is a great way to make a screw stay put while you reach for the screwdriver. They can also dent and deform sheet metal, which is useful for crafting a steel drum. They can knock loose a stuck fitting or lid, especially when hitting the free end of a long wrench on a stuck nut. They can punch a hole in drywall, making it easier to tear down. They can also smash your hand.
Hammers are tools for working with nails. This is a conceptual constraint placed on hammers by their designers. Hammers are designed with this specific intent in mind. But sometimes, hammers are just tools for amplifying the force of your arm. Sometimes, hammers are but tools for surviving a forceful impact.
The [unfinished] draft, if you're interested: https://ivanish.ca/mental-models/
❤️ 1
So to that end, I don't think mental models are a superior scalpel — if these mental models are what drive the distinction between essential and accidental, that's going to force us to be even more careful about mental model correctness, and then lose the benefit of seeing our tool users as unreliable narrators, a font of happy accidents, or intentional creative misusers.
j
I think that’s right. I think it matters when the mental model has too much slippage. Or when the mental model and the underlying model differ in ways that result in the user being surprised/stuck.
i
Surprise can lead to excitement (further to what Chris Granger was talking about above), and being stuck can lead to creative alternatives (like my hammer examples). I see both outcomes constantly from the artists on my team.
k
@Ivan Reese I totally agree with that! By "mental model" I mean what people come to the computer with (which is usually about a problem), and not what the computer tries to 'instill' in them (which I think of more as the information architecture of a solution)
j
I don’t know how seriously we’re disagreeing. If you’re just saying that can sometimes be true, I imagine so. But if you think it’s the norm, I’m going to disagree and say it’s “man bites dog”.
b
I'm not sure I agree with "For that matter, abstractions are incidental complexity. Again, they're an affordance for thought that isn't strictly necessary." I always understood accidental/incidental complexity as relative to a problem that a human is trying to solve. So the problem and the fact that a human is trying to solve it are important in order to consider whether an abstraction is essential or not. You will always express your computation using some kind of abstraction. If your solution is expressed in the language of the domain, I would consider that there is little accidental complexity. But as soon as those abstractions get in your way and you see yourself doing more and more housekeeping, then you have accidental complexity and a different abstraction is probably necessary to get rid of it.
And that's no different from scientific theories. Take the famous example of Ptolemaic epicycles. If you want to predict the position of a planet in our solar system, a heliocentric model is not "strictly necessary". It is just much easier because you have a lot less accidental complexity. What is also interesting in this example is that the abstractions of the geocentric model are easier than the heliocentric ones. They just don't scale as well to solve actual problems. Illustrating that what seems harder at first can make things simpler later.
☝️ 1
i
abstractions of the geocentric model are easier than the heliocentric ones
I don't follow. Would you mind explaining that point a bit more? Do you mean that we developed the geocentric model first, and thus it was "easier" to discover than the later, more elusive heliocentric one?
b
Easier because circles are easier to describe than ellipses. A circle is just x² + y² = r², one shape parameter; an ellipse needs two axes (x²/a² + y²/b² = 1), and tracking a planet along one over time means solving Kepler's equation.
They're neat and simple compared to an ellipse.
d
I don't agree with Ivan's definition of complexity. The first sentence of Out Of The Tar Pit is "Complexity is the single major difficulty in the successful development of large-scale software systems." The goal of this paper is to explore ways of reducing complexity, so as to make software engineering less difficult. Ivan said "abstractions are incidental complexity. Again, they're an affordance for thought that isn't strictly necessary." That can't be right. Programming without abstractions is virtually impossible. The only "programming languages" that lack abstraction mechanisms are mathematical formalisms like Turing machines and the SKI combinator calculus, and a few esoteric languages like Brainfuck. If you eliminate Ivan's kind of complexity from programming, then you are reduced to programming in Brainfuck. Brainfuck programs are not less complex than, say, well-written Lisp programs, in any way that I can see. They are actually far more complicated, using any reasonable metric for measuring the complexity of a program. Abstractions are our principal tool in eliminating complexity from software. Tar Pit advocates the use of functional programming and the relational model to reduce complexity, and these are methodologies for constructing software abstractions. Whatever complexity Ivan is talking about, it's not the kind of complexity that Tar Pit is teaching us to eliminate from software.
i
Here's how I think about complexity. How many different ideas do you need to know in order to understand what a program does? The second-least complex way to write a piece of software would be to push a button and have the exact program you need be spit out. You hardly need to know any ideas in that case — push button, get program that magically does what you want. All the baggage and pain and coordination and effort of software development is gone, like magic. Every time you introduce a new idea that needs to be understood — whether it's functions, function composition, currying, partial application, the existence of one particular function in your codebase, the history of that function and previous issues it had that are now avoided by doing things in a slightly surprising way, the syntax for the documentation explaining that change, the fact that the change was made to bring the resulting program closer in line with the end user expectations for what this program does — you add an additional complexity that you, the programmer, need to deal with to work on this program, which you wouldn't need to deal with in the magical example above. All of this is incidental complexity. My whole point is that Out of the Tar Pit is wrong. They say that reducing incidental complexity is crucially important. I say that it's not — instead, you want to be empowered to choose exactly what incidental complexity you need to deal with. When you draw a line and say "These ideas are incidental complexity because they meet criteria X, and these ideas are not because they don't," you're making a value judgment. When you read Tar Pit with that lens, it's painfully clear what values the authors hold ("Mutable state is evil"), and they of course base their selection criteria for what is incidental on those values. Hogwash. Incidental complexity, by their own definition, is broad enough to include mutable state, immutability, and even the purely theoretical notion that state and change are concepts that exist. (The absolutely least complex way to write a piece of software is to not have to do anything. Hat tip to Steve Jobs, paraphrased: the line of code that has the least bugs, runs the fastest, requires the least documentation, etc. etc. is the line you never wrote.)
k
I'm rereading Tar Pit for the umpteenth time. Never liked it, and jeez it has more italics than a Robert Ludlum novel.
@Ivan Reese My a priori biases are very similar to yours. However when I read your last comment my first reaction was, "that's not what they mean by incidental complexity." So I went back to reread the paper. You're right, they do say "accidental complexity is all the rest" except the essence of the problem as seen by users. Which is identical to what I was saying above, and opens them up to your interpretation. And that seems like the most acute criticism I've seen so far of this paper: the terms are so broad that they're very open to interpretation, and the authors' interpretation serves only to reveal their biases. The authors' "ideal world" is one where computation has no cost, but social structures remain unchanged, with "users" having "requirements". But the users are all mathematical enough to want formal requirements. They don't seem to notice that the arrow in "Informal requirements -> Formal requirements" may indicate that formal requirements are themselves accidental complexity.
💯 5
To push back on one tiny part of your comment, though:
Every time you introduce a new idea that needs to be understood — whether it's functions, function composition, currying, partial application, ...
These seem like accidental complexity for a music-creation app, but essential complexity for a programming language.
...the existence of one particular function in your codebase, the history of that function and previous issues it had that are now avoided by doing things in a slightly surprising way, the syntax for the documentation explaining that change, the fact that the change was made to bring the resulting program closer in line with the end user expectations for what this program does — you add an additional complexity that you, the programmer, need to deal with to work on this program...
These seem like accidental complexity for the users of a program, but potentially essential complexity for someone working with it given our current context (the world already exists, the codebase already exists in the world, etc.) So the context seems crucial here. What is the system under consideration, who is the target audience, all this affects where the line is. The further to one extreme you draw the line, the less useful the distinction becomes.
💯 1
Perhaps my fundamental criticism of this whole dichotomy is that it treats "users" as immutable rather than capable of learning. As @Doug Moen points out, if you treat the user's goals as gospel (say that they want to make this one specific app), then maybe teaching them programming shouldn't include functions. The insanity of that proposition puts this whole framing in sharp relief for me. To some extent someone wanting to do something in a new area needs to learn from its traditions.
s
Trying to wrap my head around the different perspectives. Consider 'writing' and 'science'. Is writing essential for science? Is it accidental complexity? Let's break it apart a bit more... it's not just the written form, but what the form represents - some abstract ideas (formulae, functions) - are these abstractions essential? If I am the science itself then I wouldn't need to understand it, I'd just be it. But I'm not science. There is something out there and science gives me a model to grasp that external something (presumably) and fit it internally - something I can understand. This gap is covered by a series of 'mediums' and 'messages' for lack of better words. 'Symbols' are one primitive medium (the idea that 'signifiers represent the signified'), we learn this medium very early in childhood. A message in this medium is 'tree' (doesn't look like a tree, but makes you think of one). The 'written medium' is quite useful too - signs themselves now have physical forms. Sentences are perhaps the next level of medium (a composition of symbols that represents a composite idea). Further, from maths we have variables, propositions, formulae, functions - these are all higher level mediums. We express science in these mediums (e.g. `f=ma` is a message that requires us to become the medium first and then absorb its meaning). What is the end goal in this endeavor? We're trying to predict something perhaps, or design something for a purpose - these are our goals, our vision of the purpose. Now a twist: how do we represent our purpose? The purpose is just another medium. When I put pen to paper, I'm using the physical medium to affect the written medium. Is pen and paper incidental? Yes, ideally I'd just think the words and they'd appear on paper. But follow that chain - why am I creating the written words anyway - maybe I'm distributing an idea across. So writing is just incidental - the greater purpose is distributing the idea. Indeed I could make a video instead. (So why spread an idea... the chain keeps growing) So what's incidental depends entirely on where we 'cut off' our context. I think what Ivan is saying is that there are multiple alternative paths of medium/message stacks to get to our goals. All of these are 'incidental complexity'. The simplest solution is having the button with 'my goal' on it, that I push. (Well technically that still has incidental complexity because you'd have to grasp the medium of buttons - push, cause, effect etc.) But anything else on this path from the vision of purpose to purpose fulfillment is incidental. I've taken the position that 'programs' are a red herring. I think it's based on the fundamental position that all these abstractions, ideas, mediums and messages of thought are incidental and we can choose different ones.
🍰 1
💯 3
❤️ 1
b
@Ivan Reese "How many different ideas do you need to know in order to understand what a program does?" If that was a good description of complexity then we would all be programming with turing machines. You have very few ideas in a turing machine and they're very easy to understand. They theoretically let you compute anything. We don't do it because it doesn't scale to build entire programs. I understand where you're coming from. There are a lot of bad and over-engineered abstractions out there that are often unnecessary to solve the problem. There is also an advantage of having no learning curve when approaching a new program. It's really a fundamental question of design. 1) Do I put together existing (familiar) building blocks and end-up with a codebase that is easy to understand locally but bigger than the other option and not necessarily easier to understand in its totality. 2) Or do I create more appropriate building-blocks (not necessary familiar) to solve my problem and end up with a smaller codebase which might have a steeper learning curve but which will pay-off later. You can easily err on either sides. But overall as a discipline, I think we're erring towards the first approach. I'm also in disagreement with the second part when you say that an idea is either incidental or essential complexity. It doesn't make sense to me because whether something is incidental or accidental depends on the problem. An idea X can be incidental to solve problem A and essential to solve problem B.
👍 1
j
“How many different ideas do you need to know in order to understand what a program does?” If that was a good description of complexity then we would all be programming with turing machines.
I don’t think that’s true. I don’t think a human could be presented with a complex program expressed as a turing machine and understand what it does without developing, either internally or externally, abstractions over those operations. I think it may still be a useful description of complexity if we take into account that limited humans are the ones doing the understanding.
b
I think we agree. The goal should not be to reduce the number of ideas but to come up with the ideas that make the problem understandable/solvable.
👍 1
s
"How many different ideas do you need to know in order to understand what a program does?" If that was a good description of complexity then we would all be programming with turing machines.
But programming with Turing machines is complicated and programming with programming languages is easier? So maybe there is a cost/benefit aspect to each new idea? Each abstraction/concept has a cost and a benefit. The deeper question is this: is there any concept that is fundamentally essential to the problem being solved? E.g. are pure functions fundamentally essential and stateful objects fundamentally unessential to programming the game of Pong? My interpretation of Ivan's position is that it's all incidental complexity - and computing is a game of choose your poison.
👍 1
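To make the Pong question concrete, here are the two poisons side by side (a deliberately tiny sketch, not an endorsement of either):
```python
# The same Pong ball two ways; choose your poison.

# Stateful: the ball is an object you mutate in place each frame.
class Ball:
    def __init__(self):
        self.x, self.vx = 0.0, 60.0

    def step(self, dt):
        self.x += self.vx * dt

# Pure: each frame is a function from old state to new state.
def step(state, dt):
    x, vx = state
    return (x + vx * dt, vx)

ball = Ball()
ball.step(1 / 60)                # mutation: the old state is gone
new = step((0.0, 60.0), 1 / 60)  # new value: the old state survives
```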
b
I think you mean "fundamentally essential" in the sense that you could not solve your problem without it. If so, this is the realm of the theory of computation/universality... But the terms essential/accidental are in the realm of software engineering (made by humans). A lot of our languages (abstractions) are universal and this is totally fine. This is not accidental complexity. You have many languages to understand and solve different kinds of problems. The same way you have many sciences to understand different parts of the world. Should physicists tell biologists to stop creating their own abstractions because physics is "enough" to explain biology?
I wanted to comment also on the idea of pushing a button to solve a problem. I understand it is an extreme example but it illustrates exactly the same point regarding scaling. Yes, of course, if you want to solve only one problem the push button is the simplest approach. The reality is that people don't have only one problem to solve. And if they had to learn the location of a button for each problem they have, it becomes extremely complex. And if they don't have the button they will need to ask a programmer to make it for them. If instead you provide this user with building blocks they can rearrange to solve a wide range of problems, then you design a robust and simple solution. The problems for end users are exactly the same as for the programmers. And the state of UX (regarding building blocks) is even worse 🙂
i
@Benoît Fleury I think you took my argument the wrong way — but we are both agreeing on the conclusion.
"How many different ideas do you need to know in order to understand what a program does?" If that was a good description of complexity then we would all be programming with turing machines. You have very few ideas in a turing machine and they're very easy to understand.
I'm not advocating that we should be minimizing the number of different ideas. I'm just trying to establish that the number of ideas is how you measure complexity. Take that together with my overall assertion that complexity isn't evil, and you'll see that I don't think we should be programming with Turing machines. I think we should have richer, more complex programming machinery than we have today. --- You win — surpassing Out of the Tar Pit — when you recognize that inessential complexity is not binary. It's not even scalar. There are different flavors of incidental complexity. What kind of incidental complexity you want, or don't want, depends on a lot of factors. It depends on the essential complexity of the problem you're solving. It depends on what ideas you and the people around you know and can work with. @shalabh’s comment that starts "Trying to wrap my head around the different perspectives" absolutely nails it. This is not just about software engineering. This is about epistemology.
j
I think one point that has come up repeatedly though, is that what is inessential is a matter of context. I suspect that what is being thought of as useful or good inessential complexity is just essential complexity from a different context.
👍 2
i
Yes, though I think it's better if you say it the other way around: What is being thought of as essential complexity is just useful or good inessential complexity from a different context. All complexity is only as essential as your context requires — and from a god's-eye view, you have total control over your context. Also — I really want to avoid conflating essential and good. I think the good stuff should be called preferential complexity, because then it feels selected to suit your context, and not imposed by the universe. For instance, there's great harm in making your starting position, "I am a programmer," because that includes a whole boatload of unchecked assumptions about what your context is. (This is the sin that Out of the Tar Pit commits.) If you disavow yourself of that perspective, you'll do a better job of seeing certain bad inessential complexity that might otherwise be seen as essential, and you'll be better equipped to figure out what preferential complexity you should be working with. Again, @shalabh nailed it:
Now a twist: how do we represent our purpose? The purpose is just another medium. When I put pen to paper, I'm using the physical medium to affect the written medium. Is pen and paper incidental? Yes, ideally I'd just think the words and they'd appear on paper. But follow that chain - why am I creating the written words anyway - maybe I'm distributing an idea across. So writing is just incidental - the greater purpose is distributing the idea. Indeed I could make a video instead. (So why spread an idea.. the chain keeps growing) So what's incidental depends entirely on where we 'cut off' our context.
and
computing is a game of choose your poison.
So, I have 2 beefs with the Tar Pit. 1) "Incidental / accidental complexity is bad and should be reduced to the absolute minimum" — this breaks down as soon as you ask questions like, "Is my type system essential or incidental?" 2) "Essential complexity is that which remains when you strip a problem to its most minimal essence" — this breaks down when the above breaks down, because as you approach the most minimal essence of a problem, stripping away complexity after complexity, it's going to look wildly different depending on who does the stripping — and you won't reach truly the most minimal essence until you've reduced the problem to total black nonexistent nothingness.
❤️ 1
b
If your type system gets in the way and prevents you from doing certain things, you will have to work around it; this is accidental complexity. You didn't need to solve those extra problems caused by the type system to solve the original problem. But if your type system works and helps you, then it is not a complexity at all. We do not have to put everything in either bucket. I think the point of accidental/incidental complexity is to figure out whether the problems you're having are related to the actual problem or caused by your tools. I don't see why you would want to try to have problems caused by your tools. So saying "there are certain kinds of inessential complexity that are good" seems to me to be a contradiction in terms.
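For instance, in typed Python you can end up doing work to appease the checker rather than to solve your problem (a trivial, hypothetical sketch):
```python
from typing import cast

def first_port(config: dict[str, object]) -> int:
    raw = config["port"]
    # We know this is always an int, but the checker can't see it,
    # so we add a cast: effort spent on the tool, not the problem.
    return cast(int, raw)
```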
i
Exactly. And that's why Out of the Tar Pit is wrong — their definition of accidental complexity sucks, as @Kartik Agaram pointed out here: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1580796314493500?thread_ts=1580751504.462400&cid=C5T9GPWFL Where this becomes a problem is that the Tar Pit definition of accidental / incidental complexity has come to be seen by programmers writ large as a bad thing. Everyone has a loosey-goosey feeling about what is meant by "inessential complexity", but I think there are a lot of good kinds of complexity being caught in that net. I'm not saying we should actually "strip everything to zero" or "embrace magic" or "don't use type systems" or anything like that. Those are all just examples to push your perspective to the places where it's easiest to see the flaw in the Tar Pit argument.
👍 1
b
I see. Thanks! I didn't understand tar pit that way. I will reread it to clarify. It doesn't make sense to me to talk about complexity for things that help you solve your problem. So I don't see a type system or a user interface that matches your mental model more closely as incidental complexity.
i
And I don't see mutable, stateful objects as incidental complexity — I see them as tools, good for some jobs and bad for others. Tar Pit's authors present their personal values as empirical truths, as @Kartik Agaram said.
👍 2
s
It doesn't make sense to me to talk about complexity for things that help you solve your problem
I think it's that any solution also brings its own complexity.
👍 1
I'm just now reading Tar Pit, I've only ever skimmed parts of it earlier. Already have a few things to say! For instance, isn't there a lack of imagination here:
There are two widely-used approaches to understanding systems (or components of systems):
Testing: This is attempting to understand a system from the outside — as a “black box”. ...
Informal Reasoning: This is attempting to understand the system by examining it from the inside....
Of the two informal reasoning is the most important by far.
So reading the code and testing is it? For a while I've thought 'reading code' isn't really going to scale. How about "querying the system"? Can I say "given these kinds of conditions show me how _these kinds of outcomes arise_"? Of course we can't ask this of a program, but if you consider a whole programming system, it may have an abstract evaluator that can try and figure this from the model (whether it's code or something else). It would then give you 'abstract execution traces' showing the internals of the system. But doing this means you have to first design the system and model with this use case in mind and not presume a workflow of programming exists.
i
Is it not the case that property-based testing gives you that same information? If not, I don't follow what you mean and would love more explanation.
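For reference, by property-based testing I mean roughly this kind of thing (a minimal sketch using the hypothesis library, with a stand-in property):
```python
# Property-based testing: the framework generates many concrete
# inputs and actually runs the code on each one.
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_is_idempotent(xs):
    assert sorted(sorted(xs)) == sorted(xs)
```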
s
It’s been too long since I read Tar Pit and reading some comments here I’m not sure if I want to invest the time to read it again. I do find a distinction between accidental and essential complexity useful. Of course, for it to be useful we need to (a) consider some context and (b) agree on what these mean in such context. If we don’t, we talk past each other. And “we” includes Tar Pit authors. For instance, if I put my mathematician hat on, then of course stuff like available registers and memory and the time it takes for each instruction to execute are incidental complexities that I don’t need to describe efficiently what the essence of computation is and a Turing Machine is a beautifully simple model to cover all cases of what can be computed. Now, if I take that hat off I’m just an engineer and I’m furiously angry at that mathematician who clearly has never built anything useful in their life because then they would’ve noticed that a Turing Machine is a piece of crap that makes even the most basic calculation way too complicated to express. And how can you possibly do anything useful without considering the engineering challenges of building a real computation machine? Execution speed and memory and instruction sets and architecture and freaking laws of physics are clearly essential components of such a system. And let’s not even get into what happens when I put on my UX designer hat… or my business founder hat… let’s say they don’t get along that well either. It’s almost like a little FoC Slack just inside my head… TL;DR: Wear more 🎩!
👍 2
🧢 1
s
Is it not the case that property-based testing gives you that same information?
There's an overlap but not quite. I mean specifically abstract interpretation (property testing does actual execution with a large number of inputs). For example, if you have a chunk of untyped Python code you can informally reason about the types of values flowing around by reading and simulating in your head. An abstract interpreter (pytype) will actually evaluate the code in terms of types (not values) and can show you the predicted types of various parameters and locals. It can get much further than mental simulation, because it can evaluate much larger chunks of the code. Technically this might belong in formal reasoning, which the paper mentions in the following paragraph:
The bottom line is that all ways of attempting to understand a system have their limitations (and this includes both informal reasoning — which is limited in scope, imprecise and hence prone to error — as well as formal reasoning — which is dependent upon the accuracy of a specification)
I think the abstract interpretation approach could be extended so you "feed in scenarios", e.g. the user says "what if the local `a` here is an integer between 0 and 1000 and `b` is an empty list" and the system does abstract interpretation (specifically one execution and not 1000 different executions) to find other properties of an execution under that scenario - dead code, exceptions, and notes "`c` will be `a+20`" etc. A more apt name for this kind of approach might be computer aided reasoning - we're not reading static code on paper and we're not writing complex types and having the system prove something, but we're simply asking targeted questions. I'd love to ask the system "show me why this dependency is invoked when this kind of request arrives" and then follow up by zooming into a part of the abstract execution trace. A related idea is "program slicing" - point to a value and have the computer tell you the subpart (slice) of the program that affects that variable. I think these are all good ideas to make state trackable, a different angle than going state-free and aiding informal reasoning. Even with the most 'readable' code, I'll note that reading doesn't scale very far - how much can you read in a day anyway? We might climb out of a small tar pit only to fall into a larger one. But targeted questions in a query language might be able to handle very large programs and even large systems with multiple programs! They'd have to be built using a model that is designed for something like this and scales up.
👍 1
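To gesture at the flavor (a toy sketch, definitely not pytype, just evaluating over types instead of values):
```python
# Toy abstract interpreter: "run" an expression once in the domain
# of types rather than values. Expressions are tuples, e.g.
# ("add", ("var", "a"), ("const", 20)).
def abstract_eval(expr, env):
    kind = expr[0]
    if kind == "const":
        return type(expr[1]).__name__
    if kind == "var":
        return env[expr[1]]  # the type we assumed in the scenario
    if kind == "add":
        left = abstract_eval(expr[1], env)
        right = abstract_eval(expr[2], env)
        if left == right == "int":
            return "int"
        raise TypeError(f"cannot add {left} and {right}")

# Scenario: "what if `a` is an int and `b` is a list?"
env = {"a": "int", "b": "list"}
print(abstract_eval(("add", ("var", "a"), ("const", 20)), env))  # int
# abstract_eval(("add", ("var", "a"), ("var", "b")), env) raises TypeError
```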
Here's a related tweet thread: https://twitter.com/chatur_shalabh/status/1126201095636652032. Take the simplest of programs and compose them in a small distributed system, and you'll see informal reasoning ability disappear. How about _expect complexity and design to handle it_ rather than avoid complexity.