# thinking-together
p
I tried to write a narrative of how studies in universal algebra and category theory are providing a deep understanding of how programming languages are united/distinct in the way they treat computational structures: https://github.com/prathyvsh/morphisms-of-control-constructs I am pretty new to this field and thought the story of this evolution is seldom told outside academic circles, so I started piecing together parts of the story as I read twitter feeds / blogposts / papers on it. If anyone here is knowledgeable in this, can you please provide some feedback on what I have missed or what needs polishing? I am pretty sure there could be some significant details I have missed.
👍 3
s
I don’t know much about this and certainly a lot less than you do, but I am super interested in this and really appreciate you writing about it and putting together all these resources. So here’s my feedback: thank you and please keep going! :)
👍 2
p
Thanks a lot for the encouragement! I just bumped into this after exploring random stuff related to PLT. A certain recurrent idea here seems to me to be that there is something fundamental about computational structures (in a topology-like sense) that we are all groping at in our daily lives. Programming seems to me now to be an activity of creating some kind of <*hypothesis*>semiotic ground for manipulating Platonic forms</*hypothesis*>. I am trying to see if those things can be made first class in a visual fashion, and PLT semantics looked to me like the place where I can dig in as I have some idea of PLs and a bit of logic knowledge. Also, there is a great community of people studying this deeply and communicating about it in this field. But it is filled with waaaay too much jargon, which repels most programming practitioners, as the notations are dense and require an understanding of mathematics before approaching them.
This sort of aligns with a direction of using the computation graph as a first class entity, which I think some people here and on Twitter are about to embark on. This can be seen in Twitter convos in the form of RoamOS or as Codex lately proposed: https://twitter.com/codexeditor/status/1303985191912747009 This would result in a computational world of structure, I think, close to the attached image, where each context has a program inside it, like in a graph editing tool, which then becomes the ground for further computation. Think of modules referring to each other and creating intermediate representations of computations at this stage, just before getting visualized as UI elements in a pixel matrix.
But I think the links go deeper, and (un)fortunately semanticists, philosophers, and mathematicians working on specialized areas seem to be the ones with intimations of what these general structures could be like. Here is another image from Rocco Gangle’s work (he mixes Category Theory with philosophy to propose some kind of “diagrammatic immanence”). This, but for programming, is what I think is lurking inside the morphisms of control constructs.
s
Yes! I understand maybe half of what you’re saying, but that half feels true and important to me. I’ve been looking at linguistics, category theory, bidirectional transformations, parsers/grammars, complexity theory, and Christopher Alexander’s work on Wholeness/Life and his generative process of unfolding, which I just recently started to understand as having parallels to Chomsky and grammars (search for Greg Bryant, if that sounds interesting). All these things are very different disciplines, but somehow I started to see strong parallels and connections which I totally believe can be expressed mathematically. And very likely we have already identified all the structures we need, we just need to make the connections explicit. Unfortunately, I often find it hard to describe: it makes total sense in my head, but then I struggle to explain it to others. It looks like you’re onto something that ties into this and might help me find ways to explain some of these connections better. And that’s not even touching on all the exciting ways this might help us find better visual representations for programming systems!
💯 3
p
Exactly my thoughts! You put this better than I did. Christopher Alexander’s work was a big influence and he is a strong center of this kind of interdisciplinary thought. Much respect to him for his work. I need to look up Greg Bryant, thanks for the pointer! I am working on and off on my mathematical skills to see how I can represent these ideas well. This is why I started the research on notation, to see how historically notations have helped us express that other realm we have inside us: https://github.com/prathyvsh/notation I am pursuing it under the feeling that there is a good ROI from having a good fit between the content/context relationship for the constructs you devise to ground these forms. Will definitely share when I have some clarity on this. Thanks for the good words ✌️
s
Exciting. I’ll keep following you here and on Twitter. Re Greg Bryant start here:

https://www.youtube.com/watch?v=X-5KG73fzJ4

Then read this whole blog in chronological order (it’s not as much content as it seems): https://chomskyalexander.blogspot.com/?m=1 And feel free to ping me on anything related to this. I still have to dig through all what you posted but maybe there are chances to loosely collaborate.
❤️ 1
g
i’m going to have to give this sincere focused time, but i wanted to reply with my initial reaction before i lost the references—i’ll come by and fill these out with hyperlinks later. i too feel like all of this programming language research is swimming in a dark cave, and in our hunting around i’ve got this suspicion that there’s a light switch not too far away. and that it has a lot to do with reifying the process of evaluation and letting users interact with it directly. in particular,
...there is something fundamental about computational structures (in a topology like sense) that we are all groping at in our daily lives...
I am trying to see if those things can be made first class in a visual fashion and PLT semantics looked to me like the place where I can dig in as I have some idea of PLs and a bit of logic knowledge. Also, there are a great community of people studying this deeply and communicating about it in this field. But it is filled up with waaaay too much jargon to repel most programming practitioners as the notations are dense and require an understanding of mathematics before approaching it.
and
All these things are very different disciplines but somehow I started to see strong parallels and connections which I totally believe can be expressed mathematically. And very likely we have already identified all the structures we need, we just need to make the connections explicit. Unfortunately, I often find it hard to describe and it makes total sense in my head, but then I’m struggling explaining it to others.
resonate really strongly with me—better-worded versions of stuff i’d tried to explain to other people in the past. the references i’m working with, off the top of my head: • call by push value • ohm/ometa • kernel/vau/f-expressions • partial evaluation • programming should eat itself • the work going on at red planet labs, alluded to by a few talks on the specter library for clojure • meander, another clojure library • the stuff rich hickey has started saying about functions having some knowledge about what they require to work properly (instead of specifying requirements on data structures directly) • towers of interpreters • f-algebras, recursion schemes • scoping and binding in the rebol programming language, APL • defunctionalization and refunctionalization • concatenative languages, because: the current continuation for them is always the state of the stack plus the rest of the tokens in the source, so you can always split a program at any point, pause it and restart it, and composition instead of application is the default action of putting two words next to each other. another way of looking at it is that every concatenative program always carries with it the context it’s operating in/on. plus there’s this cool video i shared before:

https://youtu.be/R3MNcA2dpts

❤️ 2
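The pause/resume claim about concatenative languages can be sketched concretely. Below is a minimal toy interpreter (invented for illustration, not any real concatenative language): the current continuation at any point is literally the pair (stack, remaining tokens), so a program can be paused after any word and resumed later.

```python
# Minimal concatenative (stack-based) interpreter sketch.
# The "current continuation" at any point is just (stack, remaining tokens),
# so a program can be paused after any word and resumed later.

OPS = {
    "dup": lambda s: s + [s[-1]],
    "+":   lambda s: s[:-2] + [s[-2] + s[-1]],
    "*":   lambda s: s[:-2] + [s[-2] * s[-1]],
}

def step(stack, tokens):
    """Execute one word; return the new continuation (stack, tokens)."""
    word, rest = tokens[0], tokens[1:]
    if word in OPS:
        return OPS[word](stack), rest
    return stack + [int(word)], rest  # literals push themselves

def run(stack, tokens):
    while tokens:
        stack, tokens = step(stack, tokens)
    return stack

# "3 dup * 4 +" computes 3*3 + 4 = 13
program = "3 dup * 4 +".split()

# Run halfway, capture the continuation, then resume it later:
cont = ([], program)
for _ in range(3):                 # execute "3 dup *"
    cont = step(*cont)
paused_stack, remaining = cont     # ([9], ["4", "+"])
result = run(paused_stack, remaining)
print(result)                      # [13]
```

Note how `cont` is plain data: it could be serialized, inspected, or handed to another machine before resuming, which is the "program carries its own context" property.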
s
Here’s something I’ve been looking into for a while that I didn’t see you mention specifically: the connection between • (embedded) domain specific languages • parser combinators • abstract algebra • transducers (you kind of mention that with generators/iterators, I think) • transformation passes (towers of interpreters?) • …it appears I’m going to list all of computer science if I keep going, but there are certainly some more items that belong in that list. There are quite obvious connections between some of them; for instance, parser combinators are more or less directly applied abstract algebra. But there seems to be a more fundamental pattern that is reflected in all of them (and here we get to the part where I’m usually failing at describing it well enough — probably due to my lack of depth in mathematical understanding): • They all involve (or at least can facilitate) transformations from a sequential to a hierarchical structure. • They all represent a set of well-defined composable entities that together form something like a grammar (some more directly than others). • They all in a sense resemble words of a language, which can be combined to describe a lower level thing in more detail (-> Alexander’s Pattern Language). • They all enable and/or are based on a fundamentally recursive pattern which allows them to be used on various levels of abstraction at the same time; they’re something like an abstraction of an abstraction, if that makes sense? I mean, maybe I’m just looking at lambda calculus shining through in all of them (and in all of computing), and that’s that — and there’s nothing more to see here. But I don’t think that’s it. I know, this is kind of weird, but maybe to some of you what I just wrote makes somewhat sense and you will have some comments that help me on the path to computational enlightenment… ;-)
👾 2
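The "parser combinators are more or less directly applied abstract algebra" point can be made concrete: choice and sequencing behave like an algebra's sum and product, and parsing turns a flat character sequence into a nested (hierarchical) structure. A toy sketch with invented combinator names, not any real library:

```python
# Tiny parser-combinator sketch: combinators compose algebraically
# (sequencing ~ product, choice ~ sum), and parsing turns a flat
# character sequence into a nested, tree-shaped result.

def char(c):
    """Parser matching a single character c."""
    def p(s):
        return (c, s[1:]) if s[:1] == c else None
    return p

def seq(p, q):                       # sequencing: run p, then q
    def r(s):
        a = p(s)
        if a is None: return None
        x, rest = a
        b = q(rest)
        if b is None: return None
        y, rest2 = b
        return ((x, y), rest2)       # results nest into a tree
    return r

def alt(p, q):                       # choice: try p, else q
    def r(s):
        return p(s) or q(s)
    return r

# digit -> '0' | '1' ;  pair -> '(' digit digit ')'
digit = alt(char("0"), char("1"))
pair = seq(seq(seq(char("("), digit), digit), char(")"))

print(pair("(01)"))   # (((('(', '0'), '1'), ')'), '') — nested, not flat
print(pair("(2x)"))   # None — no parse
```

The flat input `"(01)"` comes back as nested pairs: the sequential-to-hierarchical transformation in miniature.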
c
@Garth Goldwater pointed this out to me in a recent conversation: continuations could be seen as the same as monads. I dug up this little Stack Overflow thread with more discussion around the idea: https://stackoverflow.com/questions/4525919/continuation-passing-style-vs-monads
👌 1
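One concrete way to see the continuation/monad connection from that thread: CPS computations (functions waiting for a continuation) support the monadic unit/bind interface. A minimal sketch; the helper names `unit`, `bind`, `add3`, `double` are invented for illustration:

```python
# Sketch of the continuation monad: a CPS computation is a function that
# takes a continuation k. Such computations support monadic unit/bind,
# which is one precise sense in which "continuations are monads".

def unit(x):
    """Wrap a plain value as a CPS computation."""
    return lambda k: k(x)

def bind(m, f):
    """Sequence CPS computations: feed m's result into f."""
    return lambda k: m(lambda x: f(x)(k))

# The same pipeline, written monadically in CPS:
def add3(x):   return unit(x + 3)
def double(x): return unit(x * 2)

prog = bind(bind(unit(5), add3), double)
print(prog(lambda x: x))   # 16, i.e. (5 + 3) * 2

# Monad laws hold up to observation, e.g. left identity:
assert bind(unit(5), add3)(lambda x: x) == add3(5)(lambda x: x)
```

Running `prog` means handing it a final continuation (here the identity), exactly like running a monadic computation with a final interpreter.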
p
@Stefan
Then read this whole blog in chronological order (it’s not as much content as it seems): https://chomskyalexander.blogspot.com/?m=1
I went through all of the blogposts and it was pretty intriguing to see him combine Chomsky’s and Christopher Alexander’s work. I have only read Notes on the Synthesis of Form by Alexander, and its bibliography has sent me inquiring in a lot of directions. I would love to read up APL and NoC sometime. I loved the idea that unfolding, as in the morphogenesis of a biological organism, can be brought into computer programming. This could be a really fertile area and I want to check out what he has done on https://www.corememory.org/ I already love the diagramming he has done for it. Pretty intriguing stuff. I will dig into these a bit more and report back. Thanks for the share!
@Garth Goldwater Out of the ones listed there I am unfamiliar with: • kernel/vau/f-expressions: No idea what these are. • programming should eat itself (Is this the work by Nada Amin?) I’ve been out of touch with the Clojure community for a while. Would love to know where these things are headed: • the work going on at red planet labs • meander, another clojure library • the stuff rich hickey has started saying about functions having some knowledge about what they require to work properly (instead of specifying requirements on data structures directly) • towers of interpreters: Is this related to Nada Amin’s work on the color languages? Black, Pink, Purple et al.? • f-algebras: Would love to know how F-algebras fit into this scheme. I have been looking at initial and final algebras, though they seem squarely situated in the algebraic tradition of semantics now. Is there more accessible work done for programmers here? • recursion schemes: Would love to know more about this one • defunctionalization and refunctionalization: I am familiar with the work done on this by James Koppel. Would you know of other good resources?
s
Tip: skip APL and read A Timeless Way of Building instead, before moving on to Nature of Order. APL is actually not that relevant. ATWoB explains the idea behind pattern languages much better and is more of a philosophy book. Some more thoughts here: https://twitter.com/stefanlesser/status/1298305359824789509?s=21 (I’ve been reading on, almost done with it, but stopped tweeting about it.)
❤️ 2
p
You are pretty much resonant with the intuitions I have had about it. May I invite your attention to 3 resources which I think are aligned with these descriptions? 1/ Universal Algebra is the study of these algebraic structures in a general setting. There’s a really rigorous way in which a unit becomes a list becomes a tree becomes a lattice becomes an (a)cyclic graph. These incidence structures I feel form a rigorous hierarchy by the nature of the properties they gain/lose (reflexivity, symmetry, transitivity). This alternating (ir)regularity I think continues in other structures like magmas, monoids, (quasi/semi)groups, rings, fields. All these take in operators as their signatures and spawn hierarchical (and heterarchical) structures which then get projected/lifted for various computational purposes. What I find severely missing in this field is a visualization of what these abstract algebra structures really look like, and this is one of the central things I’m working towards. 2/ The ideas I summed up above lead us to topology as a general setting in which these ideas can be given form. I came across this crazy good blogpost a while ago that extends Harper’s computational trinitarianism to quadritarianism and beyond: http://comonad.com/reader/2018/computational-quadrinitarianism-curious-correspondences-go-cubical/ In essence, it’s a summary of the mathematical correspondences that have been getting unearthed with regard to computational structures. I think the author should turn it into a book with some amazing diagrams; it would really change how the programming world sees its connection with algebra/higher mathematics. 3/ Your idea of abstractions of abstractions and recursion as the fundamental idea, tying it up with Chomsky, is pretty striking.
I’m out of my depth to point out exactly how, but each level in the Chomsky hierarchy I think is realized by reflexive recursion that gives it more degrees of freedom to explore distinct structures. This was my first stab at it: https://twitter.com/prathyvsh/status/1304667510101295105 But there’s also this aspect of mirroring / symmetry that comes out, which ties it up with graphs and abstract algebra. This is one place I want to develop my intuitions, and the Ershov Hierarchy / Turing jumps are a key part of the puzzle: https://en.wikipedia.org/wiki/Turing_jump This is one place where I think recursion on recursion is grounded in a mathematical setting, on the arithmetic hierarchy. It also ties in with complexity studies. I would also like to add something about Kant’s philosophy with respect to all of this, but suffice it to say that his philosophy helped influence the development of topology and category theory. Yeah, so the bottomline is, if we develop sufficient rune-reading powers in mathematics at the moment, I think there’s a tonne of knowledge hiding behind all of these. But there’s also another way to realize these: start with good fundamentals, explore, and reflect. Because all that is being studied there will show up as problems/leverage points in our own systems of making. Which is why I love and look forward to seeing how some of the people here solve problems.
👍 2
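As a small concrete companion to the universal-algebra point above: the same signature (a binary operator plus an identity element) is inhabited by very different carriers, and the laws that classify a structure in the hierarchy can be spot-checked mechanically on finite samples. A sketch with invented helper names:

```python
# Universal-algebra flavoured sketch: one signature (binary op, identity),
# many carriers. Spot-checking the monoid laws on finite samples shows
# lists-with-concatenation and ints-with-max both satisfy them, while
# subtraction fails associativity.

from itertools import product

def is_monoid(elems, op, e):
    """Check associativity and identity laws over a finite sample."""
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elems, repeat=3))
    ident = all(op(e, a) == a and op(a, e) == a for a in elems)
    return assoc and ident

# Lists under concatenation, identity []  (the free monoid)
lists = [[], [1], [1, 2], [2]]
print(is_monoid(lists, lambda a, b: a + b, []))     # True

# Non-negative ints under max, identity 0
print(is_monoid([0, 1, 5, 9], max, 0))              # True

# Ints under subtraction: not associative, so not a monoid
print(is_monoid([0, 1, 2], lambda a, b: a - b, 0))  # False
```

Dropping or adding laws like these (identity, inverses, commutativity) is exactly what moves a structure up and down the magma / semigroup / monoid / group ladder mentioned above.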
@Stefan
Here’s something I’ve been looking into for a while that I didn’t see you mention specifically: the connection between
(embedded) domain specific languages
parser combinators
abstract algebra
transducers (you kind of mention that with generators/iterators, I think)
transformation passes (towers of interpreters?)
I will definitely go through these and attempt to catalogue them on the page. I have renamed it to morphisms of computational structures to capture more of these morphisms. I think it would also help me bring in work like that of Hoare/Milner and talk about them if I go a bit more general than control constructs.
They all involve (or at least can facilitate) transformations from a sequential to hierarchical structure.
They all represent a set of well-defined composable entities that together form something like a grammar (some more directly than others).
They all in a sense resemble words of a language, which can be combined to describe a lower level thing in more detail (-> Alexander’s Pattern Language).
They all enable and/or are based on a fundamentally recursive pattern which allows them to be used on various levels of abstraction at the same time; they’re something like an abstraction of an abstraction, if that makes sense?
g
maybe we should see if we can schedule a zoom call or a collaboration on some kind of asynchronous document—i’m starting to lose track with the amount of scrolling i have to do here. @Prathyush: nada amin is a huge influence. towers of interpreters

https://youtu.be/Ywy_eSzCLi8

and programming should eat itself

https://youtu.be/SrKj4hYic5A

are both titles of talks she’s given that are available on youtube. programming should eat itself specifically deals with the color schemes, whereas towers of interpreters is more of an overview of the technique. kernel is a programming language designed as part of a phd thesis, based on reviving f-expressions in a lisp-like language. f-expressions are expressions where the lists aren’t evaluated by default, and they can be used to implement a lot of primitives in a simple lisp interpreter. i see it as manipulating code as data more directly than traditional macros do. “vau” is the author’s terminology for the f-expression version of lambda. https://web.cs.wpi.edu/~jshutt/kernel.html specter is a library for manipulating nested data structures that’s like a more restricted version of prolog, or kind of like lenses. talk here:

https://youtu.be/mXZxkpX5nt8

. there are really interesting compilation techniques hinted at that i suspect form the basis for red planet labs’ approach to their FoC startup. f-algebras and recursion schemes are closely related. there are a lot of talks on youtube directed at programmers, but i haven’t yet felt like i have a 100% grasp on them. here’s an example talk:

https://youtu.be/JZSoPZUoR58

. the essential concept is that you can use the structure of a fold and related transformations to kind of collapse recursive abstract data types into a compact representation. you can take the dual of this (unfold, or a recursive definition of a data type) to create some data, and then perform a fold to interpret that data (i think this is a hylomorphism but i’m completely unsure). the f-algebra is a way of interpreting down those unfolded structures. defunctionalization and refunctionalization: as i understand it this has to do with turning functions into data vs turning data into functions. the talk i’m working off of:

https://youtu.be/vNwukfhsOME
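The unfold-then-fold idea from the recursion-schemes discussion above can be sketched in a few lines: an unfold (anamorphism) grows a structure from a seed, a fold (catamorphism) interprets it away with an algebra, and their composition (commonly called a hylomorphism) computes e.g. factorial. Function names here are illustrative, not from any library:

```python
# Sketch of unfold-then-fold: the unfold grows a list from a seed, and
# the fold collapses it with an algebra. Composing them is the
# hylomorphism pattern.

def unfold(next_state, stop, seed):
    """Anamorphism: build a list by repeatedly expanding a seed."""
    out = []
    while not stop(seed):
        value, seed = next_state(seed)
        out.append(value)
    return out

def fold(algebra, init, xs):
    """Catamorphism: collapse the list right-to-left with an algebra."""
    acc = init
    for x in reversed(xs):
        acc = algebra(x, acc)
    return acc

def hylo(algebra, init, next_state, stop, seed):
    # A fused implementation would never build the list ("deforestation");
    # here it is built explicitly to make the two halves visible.
    return fold(algebra, init, unfold(next_state, stop, seed))

# factorial n: unfold n into [n, n-1, ..., 1], then fold with (*)
factorial = lambda n: hylo(lambda x, acc: x * acc, 1,
                           lambda k: (k, k - 1), lambda k: k == 0, n)
print(factorial(5))   # 120
```

The "algebra" argument is the f-algebra in miniature: it says how one layer of structure is interpreted, and the fold applies it layer by layer.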

@Stefan some terminology ambiguity here. i was talking about APL the programming language rather than A Pattern Language in that context. as in: https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.pdf
👍 1

https://youtu.be/v7Mt0GYHU9A

and

https://youtu.be/lc4IjR1iJTg

are a couple of talks on apl... somewhat impenetrable
❤️ 1
s
@Garth Goldwater Yep, I was aware and didn’t think it was ambiguous because in that paragraph @Prathyush was exclusively talking about Alexander, I think…? Not sure what NoC is though; I just read that as Nature of Order… APL the language is for sure interesting too. I read Aaron Hsu’s thesis and spent days working through some of the compiler code symbol by symbol. That’s when I first looked more closely into APL. There are some clever things going on, in particular the way scalars, vectors and matrices are handled transparently, and that you basically only have one or two arguments per “function call”. That is really what the terseness is about. It’s easy to just see single-symbol function names and dismiss that as a peculiar design choice and then miss all the interesting stuff (that’s what I did before). For me APL however is mostly interesting for defaulting to vector-like structures, which makes it especially suitable for algorithms optimized for parallel execution and GPU compute, and it seems APL practitioners have adapted to mostly take advantage of that style of programming. From a language design perspective or a mathematical perspective I don’t find it that exciting, but I may very well be overlooking something.
👍 1
I’m totally up for a video call with the two of you (and anybody else who feels crazy enough for this conversation; it seems we scared most of them away though… 🤣)
👍 2
@Prathyush Your last responses will take me a while, but the references are great, thanks!
@Prathyush regarding your 3 suggested resources:
1 Universal Algebra — this seems to be what I was hoping for when I started getting into category theory (we even had a study group here). For category theory I concluded that the basics are great to know, and they btw align with Christopher Alexander's intent with pattern languages, which also states that patterns are independent from the actual entities and are completely defined by the relations between them. But as I'm more interested in applying that knowledge to actual problems, category theory quickly seems to drift off into abstract math land, where it seems more like an entertaining thinking sport for a higher math priesthood. Anyway, Universal Algebra: I'm familiar with orders, groups, sets, monoids, etc. If I'm specifically interested in the mathematical definition of and transformations between lists and trees and lattices, do you have any great introductory pointers to read/watch?
2 Topology — I guess I need to brush up on topology. Will read that blog post.
3 Recursion + Chomsky — If I understand you correctly, I think you're interpreting what I said on a more abstract level than I meant it, which is interesting. That's the main problem discussing these things: we talk about abstraction all the time and we do it all the time, but we don't really have a sense for which level of abstraction we're at right now. We can only determine where we are in the hierarchy of abstractions in a relative way, by pointing out what's above or below. There is no stable absolute coordinate system that's granular enough. Feels a lot like category theory. The way I just recently started to understand Alexander's pattern language, it seems an even more down-to-earth, pragmatic application of a grammar with certain terminals and non-terminals, which form a hierarchy.
The point is that such a language can be described with a finite number of elements (or patterns, in Alexander's case), which can be combined to construct an infinite number of resulting artifacts (built structures, in Alexander's case). Through the language we get hierarchy, so there are patterns that apply to different levels of scale (from patterns for towns, through patterns for houses, to patterns for ornaments). These correspond to where in the syntax tree they are; higher-level (town) non-terminals often rely on several more layers of lower-level (house) non-terminals until you finally resolve to a terminal (~ ornaments?). So that's all pretty basic generative grammar stuff. Alexander doesn't like recursion, because there isn't really an infinite sequence anywhere in nature. It all begins and ends somewhere. So it doesn't map perfectly. But I'm willing to consider that an "implementation detail". 🙂 But I expect more clarity on this from his Nature of Order series, which I will finally attempt to read soon.
👍 1
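The finite-grammar/infinite-artifacts point can be sketched as a toy generative grammar. The rule names (TOWN, HOUSE, ROOM) are invented purely to illustrate the pattern-language analogy of patterns at different levels of scale:

```python
# Toy generative grammar: a handful of rules (patterns) with terminals
# and non-terminals generates an unbounded family of finite artifacts.

import random

GRAMMAR = {
    "TOWN":  [["HOUSE"], ["HOUSE", "TOWN"]],        # a town is one or more houses
    "HOUSE": [["wall", "ROOM", "wall"]],            # a house wraps rooms in walls
    "ROOM":  [["ornament"], ["ornament", "ROOM"]],  # a room holds ornaments
}

def generate(symbol, rng, depth=0):
    """Expand a symbol top-down until only terminals remain."""
    if symbol not in GRAMMAR:
        return [symbol]                 # terminal: emit as-is
    options = GRAMMAR[symbol]
    # Past a depth bound, force the first (non-recursive) production so
    # generation always terminates -- Alexander's "it all ends somewhere".
    production = options[0] if depth > 8 else rng.choice(options)
    out = []
    for s in production:
        out.extend(generate(s, rng, depth + 1))
    return out

artifact = generate("TOWN", random.Random(0))
print(artifact)   # a finite list of 'wall'/'ornament' terminals
```

Every generated artifact is finite, but the set of derivable artifacts is unbounded; the depth cutoff plays the role of the "implementation detail" that keeps nature's recursion from being infinite.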
p
@Garth Goldwater I have been meaning to look at Nada’s work for a while, but I lacked the math-fu to tackle it and sat through the first 20 minutes wondering what was even happening when I saw it a couple of years ago. But I think I’m reasonably ready now to slowly work towards absorbing the lessons there. That + MiniKanren by Friedman/Byrd has been on my list for a while. Thank you for the links on Kernel; I’m coming across it for the first time. Will take a closer look at this one. Yeah, folds and unfolds (Origami programming I think it’s called? https://archive.alvb.in/msc/11_infomtpt/papers/origami-programming_Gibbons.pdf) point at that deep correspondence between initial/final (co)algebras that I have been talking about. I think the idea here is that you can modularize everything down to a sequence of unary/binary functions and then talk about folding them to get data and unfolding the data to get the sequence back. This I think is a powerful idea, and continuations I think are like things that do this sporadically and jump to another part (like goto) to create a computational structure. Bird/Meertens/Gibbons seem to be the people who pioneered work in this domain. Their Algorithmics movement I think is a great advancement in this direction. I found a person on Twitter who has been reading papers on these and doing reviews of them: https://twitter.com/chansey97 Re: Specter/Lenses is another area, though I don’t know how they relate to these structures. But it is something I would love to read further. I see Profunctor Optics have slowly turned into a central focus point among theorists. One of the central ideas in computing and most of mathematics, I think, is duality. So defunctionalization/refunctionalization, as much as I understand it, is a way to reify a continuation so that it can be treated as a data structure. If my guess is not far off, what you get is the above-mentioned algebraic structure which the folds and unfolds work over.
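A minimal sketch of the "reify a continuation as a data structure" idea: a tree sum whose pending work lives in an explicit stack of frames rather than in nested function calls. The code is invented for illustration (it is not from any published defunctionalization paper), but it shows the continuation becoming plain, inspectable data:

```python
# Defunctionalization sketch: the implicit continuation of a recursive
# tree sum ("what's left to do after this subtree") is reified as a plain
# data structure -- a stack of pending subtrees -- instead of nested calls.

def tree_sum(tree):
    """Sum a nested-tuple tree, e.g. ((1, 2), (3, (4, 5)))."""
    stack = []            # the defunctionalized continuation: pending frames
    acc = 0
    node = tree
    while True:
        if isinstance(node, tuple):
            left, right = node
            stack.append(right)   # frame: "still need to sum this subtree"
            node = left
        else:
            acc += node           # leaf: accumulate
            if not stack:
                return acc        # continuation exhausted: done
            node = stack.pop()    # resume the next pending frame

print(tree_sum(((1, 2), (3, (4, 5)))))   # 15
```

Because `stack` is ordinary data, the computation could be paused, serialized, or resumed elsewhere mid-traversal, which is exactly what makes reified continuations interesting.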
The beauty of algebra is that you get a lot of nice properties, which lends itself to different ways of instrumenting the program. For example, associativity guarantees that the brackets around operations a(bc)(de) can be moved around, giving (ab)(cd)e, meaning you can execute each of those functions in any context (any thread, when mapped to a computational substrate) and then rejoin the results to get back the original one. Mathematicians go further and generalize that to ideas like the Tamari lattice: https://en.wikipedia.org/wiki/Tamari_lattice where all such orderings are studied and their properties are investigated. This bigger perspective gives an idea of where the dualities arise, so you can instrument your program accordingly. Writing all that makes me wish there was an interactive space to show how all these things fit together. Thanks a tonne for your input; I’ll definitely look through them and respond when I have better clarity on these.
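The associativity/regrouping point can be demonstrated in a few lines: an associative operation can be reduced in independent chunks (as if on separate threads) and recombined, while a non-associative one breaks under regrouping. `chunked_reduce` is a hypothetical helper invented for this illustration:

```python
# Because the operation is associative, a reduction can be split into
# chunks, each reduced independently (e.g. on separate threads), and the
# partial results recombined -- the bracketing doesn't matter.

from functools import reduce

def chunked_reduce(op, xs, chunk_size):
    chunks = [xs[i:i + chunk_size] for i in range(0, len(xs), chunk_size)]
    partials = [reduce(op, c) for c in chunks]   # independent sub-reductions
    return reduce(op, partials)                  # recombine

concat = lambda a, b: a + b                      # associative
xs = ["a", "b", "c", "d", "e"]

print(chunked_reduce(concat, xs, 2))             # 'abcde'
print(reduce(concat, xs))                        # 'abcde' -- same answer

# A non-associative op (subtraction) gives different answers under
# regrouping, so it cannot be parallelized this way:
nums = [10, 1, 2, 3]
print(chunked_reduce(lambda a, b: a - b, nums, 2))   # 10
print(reduce(lambda a, b: a - b, nums))              # 4
```

This is the same guarantee that map-reduce style systems lean on when they require combine functions to be associative.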
@Garth Goldwater Yep, I was aware and didn’t think it was ambiguous because in that paragraph @Prathyush was exclusively talking about Alexander, I think…? Not sure what NoC is though; I just read that as Nature of Order…
@Stefan I did indeed mean Nature of Order. Sorry for the typo. Re: APL I was pleasantly surprised to find APL, in the form of K, making headway into financial markets and high performance numeric computing with Shakti: https://shakti.com/
and anybody else who feels crazy enough for this conversation; it seems we scared most of them away though
Yeah, I really wish there were more programming practitioners who tried to bring these ideas back into a setting we can inspect, like that JavaScript talk on folds/unfolds. These ideas I think provide a great leverage point for thinking about what is happening when you compile your program or work towards implementing a certain UI interaction. There is very little awareness of the perspective that logical inquiry and mathematical language are a way to cast these ideas with the least amount of distracting detail and then inspect them for their properties and what they provide/take away from us.
❤️ 1
If you guys would like, I found two books that talk about the benefits of abstract algebra in a concrete setting. Sawyer’s work: https://archive.org/details/AConcreteApproachToAbstractAlgebra reads like a novel rather than a theorem (see exercise for proof) → lemma → corollary → exercise (prove theorem) textbook 😛 Abstract Algebra: A Computational Approach — this ties in with the APL mentioned previously, as the exercises ask you to work them out using APL: https://www.amazon.com/Abstract-algebra-computational-Charles-Sims/dp/0471098469
We can only determine where we are in the hierarchy of abstractions in a relative way, by pointing out what's above or below. There is no stable absolute coordinate system that's granular enough. Feels a lot like category theory.
If I may venture a guess, I think this is where metric spaces and gauge theories come in. Once you have vector spaces to locate a manifold, the idea of a measure gives you a way to locate certain points at a certain distance from one another. This is how topology differs from geometry: topology, or set elements for that matter, doesn’t have a notion of distance between points. So you can continuously transform them without losing the “resolution detail” between the points. Having access to a coordinate and then being able to ground these structures gives a way to talk about distance from your point of reference and which way the hierarchy is aligned. This is a stretch, but the idea of left and right adjoints I think can be made visual once we have coordinate access acting as an indexical. Also, as a related idea, if you take a different coordinate axis, you can see your perspective on these changing and the structures being cast in a different manner. Category Theory studies the perspectiveless space and hence opens up a very general ground for studying these structures. If you would like a general picture of how these things stand, https://www.mathphysicsbook.com/ is one that walks you through the ideas without going all textbook style. Mac Lane, (co)inventor of Category Theory, has written a book called Mathematics, Form and Function: https://www.amazon.com/Mathematics-Form-Function-Saunders-MacLane/dp/1461293405/ but I think it needs a decent grounding in at least some mathematical domains before you read it. It’s his whirlwind tour through mathematics, detailing how its areas relate to one another. His motivation for inventing Category Theory can be seen in this work. That is my understanding so far; open to corrections if I have made any wildly wrong speculations here.
Alexander doesn't like recursion, because there isn't really an infinite sequence anywhere in nature. It all begins and ends somewhere. So it doesn't map perfectly. But I'm willing to consider that an "implementation detail" 🙂
But I expect more clarity on this from his Nature of Order series, which I will finally attempt to read soon.
I look forward to your thoughts as you go through it. Christopher Alexander is a deep thinker who I think has stumbled onto something fundamental about us.
Realized I missed answering this question:
Anyway, Universal Algebra: I'm familiar with orders, groups, sets, monoids, etc. If I'm specifically interested in the mathematical definition and transformations between lists and trees and lattices, do you have any great introductory pointers to read/watch?
I recently came across Brent Yorgey’s study of Combinatorial Species and I think they are a good setting to study these: https://vimeo.com/16753644 There is a way in which trees become lattices. I need to examine the exact mathematical properties that create this transformation, but I think it happens when boundaries merge together around a center. For trees, you can think of nested circles, but as soon as they become a lattice, you have intersecting circles that share the same entities. I think you would have come across this, but ICYMI, Christopher Alexander has written a paper on lattices (Part I, Part II)