# share-your-work
i
Future of Coding • Episode 68 Philip Wadler • Propositions as Types 𒂶 https://futureofcoding.org/episodes/068

The subject of this episode’s paper — Propositions as Types by Philip Wadler — is one of those grand ideas that makes you want to go stargazing. To stare out into space and just disassociate from your body and become one with the heavens. Everything — life, space, time, existence — all of it is a joke! A cosmic ribbing delivered by the laws of the universe or some higher power or, perhaps, higher order.

Humanity waited two thousand years, from the time of the ancient Greeks through until the 1930s, for a means to answer questions of calculability, when three suddenly arrived all at once:

• General recursive functions by Gödel in 1934, with functions of sets of natural numbers.
• Lambda calculus by Alonzo Church in 1936, with anonymous single-variable functions.
• Turing machines by Alan Turing in 1937, with a process for evaluating symbols on a tape.

Then it was discovered that these three models of computation were, in fact, perfectly equivalent. That any statement made in one could be made in the others. A striking coincidence, sure, but not without precedent. But then it was quietly determined (in 1934, again in 1969, and finally published in 1980) that computation itself is in a direct correspondence with logic. That every proposition in a given logic corresponds with a type in a given programming language, every proof corresponds with a program, and the simplification of the proof corresponds with the evaluation of the program.

The implications boggle the mind. How could this be so? Well, how could it be any other way? Why did it take so long to discover? What other discoveries like this are perched on the precipice of revelation?

Philip Wadler is here to walk us through this bit of history, suggest answers to some of these questions, and point us in a direction to search for more. And we are here, dear listener, to level with you that a lot of this stuff is miserably hard to approach, presented with the symbols and language of formal logic that is so often inscrutable to outsiders. By walking you through Wadler’s paper (and the much more approachable Strange Loop talk), and tying it in with the cultural context of modern functional programming, we hope you’ll gain an appreciation for this remarkable, divine pun that sits beneath all of computation.
j
I have been blasphemed! I did not need to be censored! @Ivan Reese is lying to you with his edits. Do not believe him. Hear with your own ears:
i
Look at how he manipulates us all! Do not be fooled into action by his duplicitous, underhanded provocations. See clearly that he has played me like a puppet, and performs for you even now. Pull the wool from your eyes, he is more than the creator of sufficiently evil compilers, he is… a neurolinguistic programmer!
c
I saw this idea first in https://ncatlab.org/nlab/show/computational+trilogy from the category theory people on nCatLab. Besides type theory & programming languages, it adds the category/topos theory aspect to this general computation context. It was crucial for me to understand the difference between classical computing, parametrized computing (effects), and quantum computing. I hope it's helpful to you too 🙂
f
Regarding the ask at the end of this episode: I think Eugenia Cheng might cover category theory for a lay person? https://en.wikipedia.org/wiki/Eugenia_Cheng
p
I found the discussion of the history of logic fascinating, but I didn't really follow what the actual correspondences were between propositions and types, proofs and programs, and reduction and execution. Did you actually explain how these correspondences work and I missed it?
i
To @Csongor Bartus and @FreeMasen — Thank you! @Personal Dynamic Media, no, you didn't miss anything. We didn't explain it. I don't even understand it, and I've now twice almost finished reading the paper, and watched the talk a handful of times (which certainly doesn't try to explain it). I would at least be curious to know how subtle or obscure the explanation is. Like, would an undergraduate-level understanding of logic suffice?
j
Maybe I'm remembering the conversation rather than the edit, but I'm fairly sure we give Wadler's example correspondence between Gentzen's Natural Deduction and the Simply Typed Lambda Calculus. Disjunction is the disjoint sum, conjunction is the cartesian product, and implications are functions. Of course the correspondence doesn't stop there; there are tons of logics for which you can do similar things. Here are the two little snippets from the paper. The one with red and blue is Lambda Calculus, the one with only blue is Natural Deduction, and you can see the direct correspondence.
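For anyone following along, here's a rough sketch of that correspondence in Haskell (a toy illustration of my own, not a snippet from the paper). Each connective becomes a type former, and a well-typed program is a proof of the proposition its type spells out:

```haskell
-- Conjunction A ∧ B  ↔  product type (a, b)
-- Disjunction A ∨ B  ↔  sum type Either a b
-- Implication A ⊃ B  ↔  function type a -> b

-- ∧-elimination: from a proof of A ∧ B, conclude A.
andElim1 :: (a, b) -> a
andElim1 (a, _) = a

-- ∨-introduction: from a proof of A, conclude A ∨ B.
orIntro1 :: a -> Either a b
orIntro1 = Left

-- ∨-elimination (case analysis): given A ⊃ C, B ⊃ C, and A ∨ B, conclude C.
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f _ (Left a)  = f a
orElim _ g (Right b) = g b
```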
i
Yeah, but how does the correspondence work? Like, what's the proof that they're equivalent? I've just been taking it as granted because I have little hope of understanding the explanation.
E.g. — "disjunction is the disjoint sum" — why? How? What other things might disjunction correspond to elsewhere, say in physics or geometry, and how do we prove that?
p
@Ivan Reese @Jimmy Miller I guess I need to watch the talk and read the paper. At a very high level, I can grasp how propositions might correspond to types, since a proposition defines the set of values for which it is true and a type is a set of values that are allowed. I can also very roughly grasp how evaluation and simplification might be related to each other, but not well enough to explain it. I definitely don't understand how a program corresponds to a proof. Given an arbitrary program, what does that program prove? Is it something trivial like "The answer this program calculates can be calculated"? Or is it something more meaningful that I don't get?
j
@Ivan Reese I don't understand the question. We can make a mapping (that is structure preserving) between disjunction and disjoint sums. Think of it as a recursive function that just pattern matches on the bits. We are able to make a function from Lambda -> NaturalDeduction and vice versa. I mean, you can walk through the formal proof of it. But I'm guessing that isn't what you are looking for.
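To make "a recursive function that just pattern matches on the bits" concrete, here's a hypothetical toy sketch in Haskell (the Type and Prop grammars are my own encodings, not anything from the paper):

```haskell
data Type = TVar String      -- base type
          | Type :->: Type   -- function type
          | Type :*: Type    -- product type
          | Type :+: Type    -- sum type

data Prop = PVar String      -- atomic proposition
          | Prop :=>: Prop   -- implication
          | Prop :/\: Prop   -- conjunction
          | Prop :\/: Prop   -- disjunction

-- The structure-preserving mapping: each type constructor maps
-- directly onto the corresponding logical connective.
toProp :: Type -> Prop
toProp (TVar x)   = PVar x
toProp (a :->: b) = toProp a :=>: toProp b
toProp (a :*: b)  = toProp a :/\: toProp b
toProp (a :+: b)  = toProp a :\/: toProp b
```

The inverse translation is the same function read in the other direction, which is what makes the mapping structure preserving.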
@Personal Dynamic Media Definitely recommend reading the paper to understand more. Here's the example of a proof being simplified, and a program being executed.
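In code, the same idea in miniature (my own toy example, not the paper's figure): a detour in a proof is a reducible expression in a program, and simplifying the proof is evaluating the program.

```haskell
-- ∧-introduction immediately followed by ∧-elimination is a detour
-- in the proof; in the program it is a reducible expression.
detour :: a -> b -> a
detour x y = fst (x, y)   -- build a pair, then immediately project from it

-- Simplifying the proof removes the detour; evaluating the program
-- reduces the expression. Both arrive at the same normal form.
simplified :: a -> b -> a
simplified x y = x
```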
p
Also, thanks for the reference to the incredible proof machine. That was a pleasant way to kill a few hours so far, and I've still got a lot of proofs left. 🙂
j
As for the arbitrary program, you could find its correspondence by converting it to lambda calculus and then converting that to a proposition. If the program is expressible in the simply typed lambda calculus, you can directly see the proposition it is a proof of. Of course the simply typed lambda calculus isn't the most expressive, so people usually add more. For example, this was a proof I wrote forever ago in Idris, whose type system is based on this stuff. https://github.com/jimmyhmiller/PlayGround/blob/master/idrisTests/src/AbstractAlgebra.idr (You can ignore the elab stuff, which is more traditional proof-system stuff. The code below is really just functional programming that gives you proofs.)
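To give a flavour of what an ordinary program proves (toy Haskell examples of my own, not from that Idris file), read the type as the proposition and the body as its proof:

```haskell
-- Proves A ∧ B ⊃ B ∧ A: conjunction commutes.
swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

-- Proves (A ⊃ B) ∧ (B ⊃ C) ⊃ (A ⊃ C): implication is transitive.
compose :: (a -> b, b -> c) -> (a -> c)
compose (f, g) = g . f
```

So it's rarely something trivial; the types of everyday programs are already meaningful logical statements, just usually statements about simple propositions.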
p
@Jimmy Miller Thank you for the examples. I will have to try and understand them after watching the talk and reading the paper.
j
@Ivan Reese
> What other things might disjunction correspond to elsewhere, say in physics or geometry, and how do we prove that?
I don't understand the physics part. Physics just uses math. As for geometry, there is a bunch of complicated stuff like homotopy type theory that deals with that kind of thing. But as you mentioned in the episode, there are unions and intersections, which are sums and products in the same way disjunction and conjunction are.
i
@Jimmy Miller The physics thing comes from something Wadler said in the Q&A after the talk: that there are likely similar correspondences between Logic/Comp Sci and Physics (and likely other fields as well). My question has to do with… how people figure out whether there's a structure preserving mapping across disparate domains in a way that actually means something and serves as a novel contribution that others can build on. Like, if it was an easy and straightforward thing everyone would do it and there'd be a ton of them and it wouldn't be as big a deal. So clearly it's not so easy. Why? If one were hunting for new correspondences, what does that work entail? Etc.
j
@Ivan Reese That's what people like Frank Pfenning spend their time doing. I think the answer is to spend lots of time in lots of different formal systems looking for these connections. This kind of work is also what category theory is all about.
i
Yeah, but but but I don't liiiike category theory. It's haaaard and the people say mean things about CoffeeScript.
s
I listened to the episode today and wanted to share my thoughts: “Writing programs” is a narrow framing of what’s possible in a programmable dynamic medium, so on seeing this episode title my first thought was “has this podcast now dropped into #present-company?” However, I thoroughly enjoyed the episode, since I do enjoy correspondences in math/logic and also like listening to the decidability problem discussed from different angles. I found myself emphatically nodding and clapping along (mentally, I was on a plane at the time) with Ivan’s “critique” of the implied legitimacy in this paper. I don’t think programming is more fundamental than any other human activity; essentially we are just interacting with the universe and grappling with the outcomes. Any material we work with is just as fundamental. Why is rolling a ball down an incline less fundamental than arranging some electronic bits in a machine’s memory and letting them roll?

I was left thinking this: clearly I can’t create a program that does not comply with some fundamental laws of the universe. That would be, firstly, really cool (like inventing a machine that doesn’t follow the laws of gravity) and, secondly, more powerful. Jimmy alluded to this when saying these things still have a correspondence with the theory here, just a “complicated correspondence”. Which then raises the question of whether the types/propositions theory is just one perspective that by coincidence is being considered the primary perspective on the universe. It just happens to match certain kinds of programming models. Perhaps there are other perspectives, maybe even with some theory or formalism yet to be devised, in a medium yet to be invented, that will have a complicated correspondence to this types/propositions business, but a simpler correspondence to the future “programming medium” we are seeking. What if we’re on the wrong track here with (present day) types?
This episode also reminded me of this essay about two views on computation: the theory of computation (machine oriented) and the theory of programming (language/logic oriented). https://pron.github.io/posts/what-we-talk-about-when-we-talk-about-computation
> We can summarize the two positions in the following way: The TOP people say, “computations are programs, and programs are mathematical objects whose structure is captured by their types; the two models of computations are equivalent when asked to represent first-order functions, but diverge when asked to represent higher-order functions”. The TOC response is, put simply, “what the hell is a function?” Just as a falling apple doesn’t compute integrals — it just falls — so too Turing machines and any other machine model compute neither higher-order nor first-order functions; they compute bits. What imaginary mathematical concepts we then choose to describe those computations is a different matter altogether, one that does not affect the nature of the computation just as calculus has had no impact on the behavior of falling apples.
I’m definitely in team computation.
m
Yeah, also very much on team computation. Or actually, team running programs on a computer. I also find Wadler’s idea of universality fun to contemplate, but he actually misses a rather significant problem with that idea: it doesn’t even cover computing, never mind the multiverse.

“Reducing terms to produce an answer” describes only a small and ever-diminishing portion of what we program our computers to do. They store data, retrieve data previously stored, communicate, have effects in the world, interpret stuff that happens in the world, etc. And that is not the incidental stuff; that is the main task of programs. Algorithmic work does occur, but it is more and more incidental.

But that’s just the practical view. Apparently there’s theory to back it up as well: https://cs.brown.edu/people/pw/strong-cct.pdf I am not sure how relevant those theoretical objections are, but the practical ones seem pretty strong to me. (And yes, you can obviously implement many of these other concerns using term-reduction/substitution. But you can also do that using NAND gates…)