Future of Coding • Episode 61 Peter Naur • Programming as Theory Building
# share-your-work
i
Future of Coding • Episode 61 Peter Naur • Programming as Theory Building 🫶 https://futureofcoding.org/episodes/061 What a bop! This paper offers a compelling explanation for many of the difficulties we encounter when maintaining large, long-lived programs. It makes us question the value of source code notation, commenting, documentation, and other artifacts of the programming activity, especially when it comes to communicating the ideas behind a program between different people working on it. When we work on our FoC projects, are we even understanding the right misunderstandings of our understandings? To make sense of that nonsense, we turn to the master of terrible bulls#$% (Ivan's take, not Jimmy's) — Gilbert Ryle! Next month, we're honouring the recently-deceased Fred Brooks by dancing on (or, perhaps, with) Mythical Man-Month and No Silver Bullet.
p
I just started the download. I can't wait to have time to listen!
This was a great listen, and I think it's your best episode since the reboot. I've seen this paper and its thesis mangled many times, and I think you both did a good job of putting in the work to understand it and find ways to explain it. My boring response to Ivan's question about why one would treat a theory as a single object is that it's easier to talk about that way when trying to explain the concept to someone who hasn't heard it before, much like all the weightless, massless pulleys one encounters in a physics class. As usual, the best parts are those where you are working to understand and/or explain the author's intent, and the worst parts are those where you criticize the author. Nobody's perfect, but very few of us who study the very best works of the brightest minds in our field are qualified to distinguish between an error and a point that we do not understand. Your discussions are both more enlightening and more respectable when you approach these masters from the perspective of students, rather than critics.
c
I'm curious how the use of programming paradigms relates to theory building. Paradigms (particularly combinations of them) seem to be part of the foundation on which we build more domain-specific theories. 🤔
p
@Christopher Shank I think that's a good example of Ivan's point about the theory of a program not being a single object, but rather an amorphous and growing mass, assuming I understand him correctly. I would speculate that the theory of any given program incorporates within it, to varying degrees, some portion of the theory of the associated language, paradigms, operating system, and operating environment, and, to whatever extent the program maps to objects in the real world, the theory of manipulating those real-world objects.
w
The way I write, after a while, the code comes to hold more of the theory than I keep in my head. The source eventually mirrors the conceptual model. Comments explain the whys, with links to the spec/issue/conversation that led up to the solution. And automated tests cover the fiddly bits that are liable to go wrong when making changes. In that sense, I practice TDD: Test Driven Debugging.
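As a minimal sketch of that pattern (the function, the numbers, and the issue link below are all made up, not from any real project): the "why" sits in a doc comment that points back to the conversation, and a small test pins down the edge case that's liable to regress.

```rust
/// Delay (in milliseconds) before the nth retry.
///
/// Why cap at 30s? Uncapped exponential backoff made recovery after long
/// outages take far too long; see the (hypothetical) discussion at
/// https://example.com/project/issues/142 that led to this cap.
pub fn retry_delay_ms(attempt: u32) -> u64 {
    const BASE_MS: u64 = 100;
    const CAP_MS: u64 = 30_000;
    // Saturating math: once 2^attempt no longer fits in a u64, the factor
    // pegs at u64::MAX and `min` brings it back down to the cap, so large
    // attempt counts can never wrap around to a tiny delay.
    let factor = 2u64.checked_pow(attempt).unwrap_or(u64::MAX);
    BASE_MS.saturating_mul(factor).min(CAP_MS)
}

#[cfg(test)]
mod tests {
    use super::*;

    // The fiddly bit that tends to break during refactors: very large
    // attempt numbers must stay at the cap, not overflow to a tiny delay.
    #[test]
    fn large_attempts_stay_capped() {
        assert_eq!(retry_delay_ms(0), 100);
        assert_eq!(retry_delay_ms(3), 800);
        assert_eq!(retry_delay_ms(20), 30_000);
        assert_eq!(retry_delay_ms(1_000), 30_000);
    }
}
```

The test exists less to prove correctness than to keep that bit of the theory from silently eroding when someone refactors the code later.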
k
A more horrifying implication of theories being amorphous: bad decisions can become part of the theory once accepted. As more people use the bad idea it becomes increasingly adaptive to understand it. Put that in your pipe and smoke it before you undertake your next code review. No pressure.
After listening to the first 30 minutes I realize I dislike most philosophical work because it doesn't know when to stop. Jimmy's Ryle has a central nugget of enormous value, but it's lost in the angels-on-pinheads discussion of whether thoughts exist. (Though it did present the fun daydream of a Ryle cagematch with Julian Jaynes, he of the obsession with the inner experience of consciousness.) Anyways, I'm glad Naur did the work to bring it down to my level.
m
“When the things you do to try to make things better actually make things worse.” If I got this right, this is an indication that you don't have the right theory? Seems pretty applicable to most of the field of programming these days. A lot of the things that should make things better actually make things worse. And things that are horribly wrong somehow make things better (or are better… yes, Worse is Better). Also, the whole “theory as being able to do” idea seems to relate closely to the concept of pragmatics (vs. semantics) in linguistics. And maybe information theory too?
i
I can't speak to most of that (it's above my philosophical pay grade 😅), but I'd agree that much software is created with an absence of the "right" theory.
m
Ahh… Whisper AI is pretty cool. The original text from Naur is “If our understanding is inappropriate we will misunderstand the difficulties that arise in the activity and our attempts to overcome them will give rise to conflicts and frustrations.” You guys said it a bit more clearly: “But we will actually misunderstand the difficulties that arise. And then our attempts to overcome them will be wrong because we're based on an incorrect theory of what we're doing.” (Around the 11:40 mark.) Resonated with me in the sense of “we'll fix this with an abstraction layer” — oh, but what if our theory about what abstraction mechanism to use is wrong?
e
I’ve been obsessing over this paper for a bit now. Thanks so much for this great episode! Coming from Ryle, theory building seems to invite an issue of relativism — where, by being so rooted in behaviorism (having had to experience a thing to know a thing), classes of folks are totally pushed out from being able to build/contribute to theories. To push against that, though, I wonder about being able to generate theory off of an artifact — like, I can’t see dinosaurs, but I can still develop a theory at a remove by experiencing what we’ve got. While not a theory about dinosaurs directly, it’s a theory of its own about bones and geology and movies. Similarly, with software, sometimes you don’t have access to the team that did it first, but you’ve got what they’ve left behind. If that is a viable way to build theory, could teams intentionally generate artifacts to help new folks build their own theories? Like, what is a theory-building approach to documentation? Is that sort of what teaching to a canon does? Or, like, I think about Talmudic commentary, where there is a core text (executable program) but then layers and layers of commentary on that text by various folks who’ve worked with the core text.
The place where I eventually land is that our tooling sucks. And that everything being text files is really boring.
Like, a Rust dev might scoff at me writing Ada, but we’re both just text files.
p
I think the word "theory," when talking about dinosaurs, is a different word than the word "theory," when talking about a theory of juggling or the theory of a program. I think the words "skill" or "ability" or "capability" might be easier for some people to grasp when talking about the theory of a program, and I usually use those words when trying to explain this type of theory.
e
Yeah, you are right. I’ve used a bad example. What I meant is building a theory from the leavings, through experiencing what you can access, even if it is different from what the initial folks experienced.
p
Is it possible what you are describing might also be described as a model, since it's about modeling what was, rather than predicting something or gaining an ability to modify something or perform an action?
e
Perhaps — my core question is less whether what I’m describing is a thing unto itself, and more whether it is something a team could plan for. Could a team produce documentation that invites future folks to develop their own theories? Like, could a team leave behind a curriculum to follow?
p
I'm going to say yes, but it may not be the same original theory, and it typically takes so much work that it's not economical. Some people have brought long dead programs back to life, but all of the examples I can think of are hobbyist projects where ROI doesn't matter.
e
> but it may not be the same original theory
Yeah! That, to me, is sort of what is exciting about this — it invites mutation, inviting each generation of folks encountering a thing to develop their own theory. Ideally, I think they would then contribute back into the documentation, so you’d end up generating this layered onion around the program as subsequent teams come to their own theories.