# thinking-together
g
Falsifiable theory…

Theory: Function-based thinking greatly restricts thinking about FoC.

Test: if this theory is true, then examination of the source code for The Mother Of All Demos (TMOAD) will reveal that not all parts of the demo system were tightly inter-connected into a single synchronous, functional blob of design.

How?: Where can we find the source code for TMOAD? If we obtain the source, how can we reverse-engineer the design out from the implementation details? If we can get at the design, we should look for how the sub-units of software are interconnected. We need to examine whether the code is designed as many islands of synchrony vs. being designed as just one big blob of synchrony.

Corollary: if TMOAD was designed as many islands of software and hardware, then it is unlikely that anything as interesting as TMOAD can come of building software on computers using only synchronous languages, like Python or Haskell, using concepts such as thread libraries, theorem provers, etc. [Thread libraries are but assembler-level work-arounds that enable the use of the function-based programming paradigm with languages like Python, Haskell, etc. Theorem provers need single islands of synchronous code to work.]
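To make “many islands of synchrony” a bit more concrete, here is a minimal Python sketch (the names and structure are invented for illustration; this is not TMOAD’s design): two components, each an ordinary synchronous loop on the inside, connected only by a message queue rather than by a shared call stack.

```python
# A minimal sketch (not TMOAD's actual design): two "islands of synchrony",
# each running its own sequential loop, connected only by a message queue.
# Inside an island everything is ordinary synchronous code; between islands
# there is no shared call stack, only message passing.
import threading
import queue

def keyboard_island(out_q):
    # Pretend this island polls a keyboard; here it just emits a few events.
    for ch in "abc":
        out_q.put(("key", ch))
    out_q.put(("quit", None))

def display_island(in_q):
    # This island consumes events at its own pace; it never calls the other island.
    while True:
        kind, payload = in_q.get()
        if kind == "quit":
            break
        print(f"display: drew {payload!r}")

if __name__ == "__main__":
    wire = queue.Queue()  # the only connection between the two islands
    t1 = threading.Thread(target=keyboard_island, args=(wire,))
    t2 = threading.Thread(target=display_island, args=(wire,))
    t1.start(); t2.start()
    t1.join(); t2.join()
```

Inside each island the code can be as sequential as you like; the design decision is that the only coupling between islands is the queue.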
g
I take the success of Excel as evidence that functional programming is the most natural way to express programming for non-programmers.
g
How is Excel equivalent to functional programming?
g
Excel is a functional programming language. The cells contain either values or functions of those values. It is a lazy, partially spatial (as opposed to purely textual) functional programming language. The latest version even has user-defined functions (a major update to its functionality that went largely unremarked).
Excel is perhaps the most successful example of the sort of thing that this group is about.
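One way to read the “Excel is a lazy functional language” claim, as a hypothetical sketch rather than anything about Excel’s actual implementation: a cell is either a constant or a pure formula over other cells, and values are computed on demand.

```python
# Hypothetical sketch of a spreadsheet as a lazy functional language:
# a cell is either a constant or a pure formula over other cells,
# and evaluation is demand-driven (memoized here for efficiency).
sheet = {
    "A1": 10,
    "A2": 32,
    "B1": lambda get: get("A1") + get("A2"),   # =A1+A2
    "B2": lambda get: get("B1") * 2,           # =B1*2
}

def evaluate(sheet, cell, cache=None):
    cache = {} if cache is None else cache
    if cell not in cache:
        entry = sheet[cell]
        cache[cell] = entry(lambda c: evaluate(sheet, c, cache)) if callable(entry) else entry
    return cache[cell]

print(evaluate(sheet, "B2"))  # 84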
s
I have a feeling that you, @guitarvydas, have a very specific definition of what you call “functional” in mind, one that may not fully overlap with what many of us here think it means. My interpretation is that you think of “functional” more in terms of structured programming, perhaps? I don’t think “functional programming” is particularly well defined either. I think Conal Elliott is known for criticizing this.
g
I wonder if the issue might be with the word “programming”. I try to be careful to use the phrase “function-based programming” instead of “functional programming”. Function-based programming covers many more programming languages than functional programming covers.

I think of “programming” to mean solderless - quick and easy - reconfiguration of reprogrammable machines. At that low level, functions are not inherently supported by hardware - you have to /add/ software and hardware to make “programs” work like “functions”, i.e. you have to add lots of inefficiency to allow manipulation of reprogrammable hardware to make the hardware expressible as mathematical equations written on paper.

I think that the paradigm of functions-grafted-onto-hardware is inappropriate for many modern problems, like internet, robotics, gaming, GUIs, etc., as witnessed by the invention of extreme gyrations and work-arounds such as thread libraries, promises, monads, etc. Early FORTRANs and BASICs did not express hardware manipulation as mathematical functions. Early Lisps showed that grafting functions onto hardware was viable and was a productive /paradigm/. Sector Lisp shows just how clean and small this can be when the paradigm is respected. Yet, early games showed that this kind of thinking was /not/ necessary.

I feel that so-called “computer science” ran with only the one paradigm - i.e. inefficient, function-based thinking mapped onto hardware manipulation - at the expense of cutting off many avenues of problem-solving. For the record, C and Pascal and Haskell and Python and JS and WASM and ..., are function-based, while PROLOG is not function-based, and StateCharts are not function-based.

I think that the function-based mentality deeply affects developers and, therefore, affects what developers can invent for non-programmers. I think that spreadsheets are just a stop-gap technology. Spreadsheets are “the best” that programmers can provide for non-programmers given developers’ function-based mentality. Mathematical 2D notation is OK for use with papyrus and clay-tablet media, but is not necessarily the most appropriate way to think about reprogrammable electronic machines in 2024.

So, in my mind, we need to change the culture of /developers/ before even trying to imagine FoP (Future of Programming). I think that TMOAD (The Mother Of All Demos) was not bound by the function-based paradigm and that function-based programmers hold TMOAD in awe because it looks non-understandable - like magic - from a mono-paradigmatic perspective. I think that it would behoove modern programming researchers to delve deeply into TMOAD and to see how it differs from function-based thinking.
(FTR, I fleshed this out some more into a longer essay and posted it to my substack https://programmingsimplicity.substack.com/p/2024-08-03-functional-vs-function?r=1egdky)
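As a rough illustration of why StateCharts sit outside the function-based mould (a toy flat state machine, not a full StateChart - no hierarchy, orthogonal regions, or history states): the program is a table of reactions to events arriving over time, not a nested tree of function applications.

```python
# Toy event-driven state machine (a flat approximation of a StateChart;
# real StateCharts add hierarchy, orthogonal regions, and history states).
# The program is a table of (state, event) -> next_state reactions over time.
TRANSITIONS = {
    ("idle",  "button_down"): "armed",
    ("armed", "button_up"):   "fired",
    ("armed", "timeout"):     "idle",
    ("fired", "reset"):       "idle",
}

def react(state, event):
    # Unknown events are simply ignored in the current state.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["button_down", "timeout", "button_down", "button_up", "reset"]:
    state = react(state, event)
    print(event, "->", state)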
s
@guitarvydas Thanks for sharing that essay. I have a weird cognitive dissonance when reading this. On the one hand you seem to criticize something about complexity that I deeply want to agree with, on the other hand you express it in ways that feel strangely foreign to me, as if you use words that mean completely different things to you than they mean to me. “Functions” being the obvious one here. You made it clear that you don’t mean “functional programming”. But I don’t really understand what it is exactly that you identify as the culprit? The closest thing to a definition of what you mean that I can find in the essay is:
Developers base most of their thinking on the idea that CPUs must be used as function-interpreters. This is what I call “function-based programming”.
It’s not going to be helpful, I suppose, but from a mathematical standpoint - which I would think most programmers actually ignore - isn’t that reasonable? Computation, or the theories we have of it from Church and Turing, is rooted in “computable functions”. And electronic computers are approximations of these theoretical models, adding their own complexities of having limitations in both time and space. The only way I can make sense of what you’re saying is paraphrasing it as, “Now that we figured out how to build actual CPUs as approximations of these models, let’s go and find better(?) models” - which to me sounds like a cart-before-the-horse situation. What should developers use CPUs as instead?
I feel that so-called “computer science” ran with only the one paradigm - inefficient, function-based thinking mapped onto hardware manipulation - at the expense of cutting off many avenues of problem-solving.
Any chance you have an example of one of these many alternative avenues of problem-solving? That would probably be the most effective way to see where I am misinterpreting you.
Functional programming notation denies the existence of time. “Functions” can be manipulated faster than the speed of light.
I wouldn’t say “denies”. More like “doesn’t need”. Computation can be modeled without factoring time (or space) into the model, which makes the model simpler and, some would even say, elegant. You can integrate time into the model if you want - see Functional Reactive Programming for instance, or monads (which provide a kind of ordering, like the one you would get from sequential execution). But the key is you don’t have to; you only do that when you need to. That most `map` functions run sequentially is not a fault of functional programming. If you come from the other end, which seems to be your starting point, I wonder why you think all the state and implicit ordering in CPUs or electronic circuits (through sequential wiring, synchronization, and clock signal feedback loops) is a good basis for generalizing a new computational model, when what we have already comes from a more general (and quite elegant) model in the first place?
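A small sketch of that point about `map` (illustrative only, with a made-up function name): because the mapped function is pure, sequential execution is just one possible schedule, and a concurrent schedule gives the same result.

```python
# Because the mapped function is pure, nothing in the *model* forces the
# elements to be processed in sequence; sequential `map` is just one schedule.
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x  # pure: no shared state, no ordering requirement

if __name__ == "__main__":
    data = list(range(10))
    sequential = list(map(square, data))            # one possible schedule
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(square, data))     # another schedule
    assert sequential == parallel                   # same meaning either way
    print(parallel)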
So-called “functional programming” is just a way of forcing CPUs to act like macro processing engines at “run time”.
Assuming you’re now talking about actual functional programming, I think you’re missing an important aspect. Yes, term rewriting is doing a lot of the heavy lifting (which I assume is what you’re going for with the macro analogy), but ultimately even pure functional programming languages grapple with side effects / IO and do manipulate state eventually. But there are clear benefits of modeling as much as you can in a pure functional fashion, which doesn’t make any assumptions on how your data types are actually modeled in memory and how your operations are sequenced, unless you need to. Which makes handling stuff like state and concurrency a lot easier to deal with. See Functional Core, Imperative Shell. And Out of the Tar Pit.
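A minimal sketch of the Functional Core, Imperative Shell idea (an illustration of the pattern being named, not code from the cited essays; the names are invented): the decision logic is a pure function, and all state and IO sit in a thin outer layer.

```python
# A minimal "Functional Core, Imperative Shell" sketch: the decision is a pure
# function; all state and IO live in the thin imperative layer around it.
def next_greeting(name, visits):
    # Pure core: no IO, no mutation, trivially testable.
    return f"Welcome back, {name}!" if visits > 0 else f"Hello, {name}!"

def main():
    # Imperative shell: the mutable visit counter and the printing live out here.
    visit_counts = {}
    for name in ["ada", "ada", "alan"]:
        print(next_greeting(name, visit_counts.get(name, 0)))
        visit_counts[name] = visit_counts.get(name, 0) + 1

if __name__ == "__main__":
    main()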
For the record, C and Pascal and Haskell and Python and JS and WASM and Scala and many other programming languages, are function-based, while PROLOG is not function-based, and, StateCharts are not function-based.
I can see how you consider PROLOG different, as logic programming feels very different to either imperative or functional programming, even though it can be seen as a more extreme form of functional programming, where the actual computation is completely wrapped in the query engine. No idea how StateCharts integrate into your mental model. I hope this doesn’t sound like an attack. I’m trying to poke at some of your statements, hoping to reveal insight in what it really is that you try to make us aware of. As I said in the beginning, I’m tempted to agree with you. For instance, this sounds a lot like something I would criticize too:
In 1972, the game of Pong could be built with about $100.00 worth of chips on one circuit board, whereas in 2024, Pong needs megabytes of memory and a full-blown computer. 2024-1972 = 52 years. Hmm. Sounds like a bad investment, when put in those terms.
Yeah, I’m totally with you here. For lots of reasons that seem completely different from yours.
g
@Stefan Just to let you know - I really appreciate your having taken the time to give me your detailed comments. I keep grappling with how to put this info into words and am spiralling inwards (I hope :-). There’s something deep down in my intuition that I’m managing not to say clearly. Seeing how my words are interpreted helps me dig deeper. One of the first things I built in my first real software job in 1981 got me wondering why I could build reliable hardware but have never been able to build software that meets the same level of reliability. It ain’t “essential complexity”, it’s something else. I don’t have the answer, but I’m pretty sure that it ain’t “more of the same” (to which one can ask: what is this The Same thing - is there some underlying commonality that is causing woe?). I need to mull over choosing new words. It may take me a while to respond in more depth.
@Stefan A tiny slice of my thoughts on your questions: (more to come…) (does this shed more light on what I mean to say?): The idea of forcing FP to handle ALL of programming is based on the belief that FP is “close” to the hardware. It’s not. Use FP for what it’s good for and develop other notation(s) for the stuff that lies outside of the boundaries of FP, e.g. mutation, control-flow, heaps, etc. It is thought that the C language is “close to the hardware”. It’s not. C supports “recursion”, which is not inherently supported by the hardware. You must add extra software and hardware and operating system magic to implement C. [Specifically, function-calling.]
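To make the recursion point concrete (a toy illustration, not how any particular C compiler works): the recursive version leans on a call stack that the language runtime maintains for you; the second version spells that added machinery out by hand.

```python
# The point about recursion needing added machinery, made concrete:
# the recursive version relies on an implicit call stack maintained by the
# runtime; the second version writes that stack out explicitly.
def factorial_recursive(n):
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_explicit_stack(n):
    stack, result = [], 1
    while n > 0:            # "push" the pending multiplications
        stack.append(n)
        n -= 1
    while stack:            # "pop" and perform them
        result *= stack.pop()
    return result

assert factorial_recursive(5) == factorial_explicit_stack(5) == 120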
g
I’m fairly strongly convinced that Functional Programming, and other higher-level abstractions we’ve created for programming computers, such as the Relational Model, serve two main purposes: 1. They are an easier abstraction for humans to reason about (author, read) than say the abstraction of pure machine language (noting that ML is also an abstraction — there is no “language” in the hardware; there is just state driving the Turing Machine); and 2. They support modularity. I can see no way that either of these is any sort of obstacle to the future of programming. And I believe that better, higher-level abstractions (eg linear types in Rust) are likely to be the future of programming.
g
Evidence suggests that “strong typing” and the rise of FP are correlated with bloatware. FoCers must take such observations into consideration and remain open-minded.
g
“Evidence”?
s
@guitarvydas I know exactly what it feels like to have a strong intuition for something, up to the point that it’s absolutely obvious to you, but everybody around you doesn’t “get it”, and somehow you can’t find the right words to explain it to them.

When you say “FP is close to the hardware”, you mean your “function-based programming”, not functional programming? I think functional programming in the colloquial sense (as opposed to imperative programming) is seen as not at all close to hardware. If closeness to hardware is a factor that’s important for your FBP, how do you feel about Forth? Is the ability to define a new word, compile it on the spot, and then later call that newly created subroutine from anywhere in your code FBP? Does it have all the issues that you criticize, or does this minimal implementation lack some of the issues more complex languages have?

And I can’t quite tell if you are suggesting that we should move closer to hardware in language designs? The way you talk about C as an example makes it sound like you want to see something that’s “closer to the hardware” than C is? (Arguably, C was close to one particular piece of hardware, the PDP-11, but shockingly even I am not old enough to really know about that.) If moving closer to hardware is directionally what you’re looking for, I’d love to hear more about this in contrast to the fairly consistent motivation to abstract over hardware that we have seen in programming since its inception. And I’d also like to hear more about “evidence” for a connection between strong typing and bloatware.

I’ve recently been rediscovering Conal Elliott’s work again, and it seems to me like he’d be pretty high on your main villain list, as he seems to represent almost the exact opposite of your values. If you can stand it, I’d love to hear what you think about what he says in this podcast episode. Alternatively, you could read this paper, but the podcast has so much more valuable context (and should be much easier to digest if you don’t like picking apart mathematical formulas). There's also a video for a seminar that might be more digestible than the paper. Either way, all options will take several hours of your time, so I understand if you don’t have that time.
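For readers who have not met Forth, a toy sketch of “define a new word and then call it from anywhere” (a drastic simplification, not a real Forth: no return stack, no compile-time semantics, integers only):

```python
# Toy flavor of Forth's "define a word, then call it from anywhere"
# (a drastic simplification of Forth, for illustration only).
stack = []
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
    ".":   lambda: print(stack.pop()),
}

def run(source):
    tokens = iter(source.split())
    for tok in tokens:
        if tok == ":":                      # start a new word definition
            name = next(tokens)
            body = []
            for t in tokens:
                if t == ";":
                    break
                body.append(t)
            words[name] = lambda b=" ".join(body): run(b)
        elif tok in words:
            words[tok]()                    # call a built-in or user-defined word
        else:
            stack.append(int(tok))          # anything else is a number literal

run(": double dup + ;  21 double .")        # defines "double", then prints 42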
g
... Working On It ... in the meantime, note that I used the word “correlation”. Correlation does not necessarily imply causation. Evidence: `ls`, `wc`, MacOS Finder, Windows Explorer, contrasted with Sector Lisp and BLC, etc., etc.
... Hmm, I’m eyeing the phrase “... abstract over hardware we have seen in programming since its inception. ...”. I wonder if this is the issue. I believe that most of our programming languages do NOT abstract over hardware. Function-based languages (i.e. most programming languages from C to Haskell) abstract over only a tiny sliver of hardware - i.e. the CPU. Hardware actually tends to be massively asynchronous (like 1972 Pong), yet, PLs tend to be restrictively synchronous. We can easily describe the innards of a VLSI chip with one of the popular programming languages, but, not so easily an asynchronous circuit composed of many chips. [It is a Design Decision to let the synchrony leak out and subsume more and more of the circuit, but, at some point this becomes a losing proposition. Say, for example, dealing with nodes on the internet -- we can express the innards of nodes, but we are reduced to caveman-like grunting at an assembler-like level when expressing the network of nodes. Hardware circuits are like the internet, whereas CPUs are only a small part of any actual circuit. Modern Computer Science is like modern Physics - we understand how everything works, well, except the 95% of the Universe called “dark matter”.]
s
I just watched the video linked in this thread. @guitarvydas This seems close to what you’re talking about.
g
@Stefan Thanks, I didn’t watch this video until I saw your recommendation. We seem to reach the same conclusion. His final motto is: “Unlock their potential by rethinking programming systems”. I think we get to the conclusion by following very different paths, though: By “their” he means hardware. By “their” I mean FoCers and programmers. In my mind, over-use of synchronization doesn’t just slow hardware down but, also, slows down programmers. My gut says that there is no “Moore’s Law” for software because of the over-use of synchronization.
s
@guitarvydas I think we’re getting somewhere. I was hoping this presentation would enable us to “spiral to the center” of what you mean. I still think your “function-based” criticism is a red herring. Functions (in the mathematical sense) have nothing to do with parallel or sequential execution. Just like you said they “deny”, or like I said they “don’t need”, either one for the model to be sound. That historically we chose to double down on sequential execution is a different story, and certainly that has impacted how we think about computation - which I believe is closer to the core of what you’re criticizing. Our mental model of execution is certainly biased towards unnecessary synchronization and sequentialism. Down with the sequentialists! ;) But I also want to appreciate that you seem to talk about more than just that. I don’t know what a “Moore’s Law for software” would measure and state exactly, but I share your frustration with software being needlessly limited and hopelessly complicated, partly because of sequential execution bias, but also because of other things. For me all these things can be summed up under “complexity”, but that’s not really helpful either, I guess.
g
@Stefan Hmm, there’s a definite distinction between “functions” in the mathematical sense and “functions” implemented in hardware. The things called “functions” in programming languages have different properties than “functions” in mathematics. Functions in programming languages are just mathematical-function wannabes, but, aren’t mathematical-functions because of issues of physical reality and sequencing. Maybe by using the term “function-based” I really mean that “using the name ‘function’ in programming is a bad idea and gives the wrong impression of what’s really going on”. We’ve spent 50+ years fumbling around bumping into gotchas caused by this naming. This needs more shower time...
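A small sketch of the “mathematical-function wannabe” distinction (names invented for illustration): two things a language would both call “functions”, only one of which behaves like a function in the mathematical sense.

```python
# Two things both called "functions" in a programming language; only the first
# behaves like a mathematical function (same input, same output, nothing else).
import time

def area(r):
    # Same input always gives the same output, with no other observable effect.
    return 3.141592653589793 * r * r

_log = []
def area_logged(r):
    # Also called a "function" by the language, but it mutates state and
    # observes wall-clock time - no longer a function in the math sense.
    _log.append((time.time(), r))
    return 3.141592653589793 * r * r

print(area(2.0), area_logged(2.0))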
s
@guitarvydas I didn’t mean that your distinction is wrong or doesn’t exist, I’m just trying to point out that it’s maybe not helpful for what you are trying to achieve, because I think it creates more confusion than it helps you make your point. Yes, the distinction exists, but in functional programming, for instance, there is at least an aspiration to the mathematical meaning, in full knowledge that this can only be an approximation that is ultimately constrained by its implementation. And then there is a long history of debate about what to call these things, and whether further distinctions between different kinds need to be made: functions, procedures, subroutines, methods, etc. Even though there are some patterns around use in the context of OOP or pure functions vs. side effects, it’s inconsistent across languages. And now you come along and try to tag something else on top of this already confusing and incoherent mess. I’d be in favor of throwing around the term “function” more carefully, but it is what it is, and the point you’re trying to make seems too important to risk having it disappear in the noise of historical inconsistencies.
There’s another aspect of your idea that I’m trying to wrap my head around: What about function/subroutine’s capability to contain arbitrary complexity and expose it as just a symbol? Is that a feature or a bug in your book? Of course, it’s probably not a simple either/or question, really. I wonder if my rant about additive design and cultivated ignorance resonates with what you’re saying, or if I’m just reading into it what I want to read into it…?
Ha, I just noticed I start that article with yet another meaning of the word _functional_… 🤪
g
@Stefan The word “function” means something in math, but is mis-applied to software, where it is more of a wish than a reality. The mis-application of the word “function” to software things causes researchers / programmers to mis-believe that they are dealing with math when, in reality, they are not dealing with math; they are dealing with something different and new.

Aside: to me, mathematics is 2 things: 1. Deep thought. 2. A notation for expressing the conclusions arising from deep thought. Computer programming is about issue #2. The medium is different. In traditional math, the medium is papyrus, graphite, clay tablets, cave walls. Computers provide a new “medium”. Using concepts and words derived from papyrus-based notations is not appropriate for this new medium. Language affects thought, hence, using familiar concepts to describe new territory is inappropriate for the new medium - and this practice is downright misleading and wasteful.

I agree that mathematical functions contain complexity and expose it as just a symbol. But I don’t agree that it is the only way to do this kind of thing. Imagine a non-programmer walking up to a white-board. The non-programmer draws a rectangle on the whiteboard. That rectangle contains arbitrary complexity, too. (Aside: in Javascript, we draw a rectangle using the two ASCII characters “{” and “}”.) The rules for containing arbitrary complexity using textual, mathematical, programming notation are onerous (“no side-effects”, “no mutation”, etc., in opposition to the reality of CPUs attached to RAM). The rules for using a rectangle on a whiteboard are less onerous. The difference is “isolation” - we don’t care if the innards of the stuff contained in the rectangle use side-effects or mutation, as long as those effects are not allowed to leak out beyond the borders of the rectangle.

Hardware ICs contain arbitrary complexity by encasing circuits in black epoxy. The rules for such hardware ICs are much less onerous than the rules for using FP. Business people already know how to contain arbitrary complexity - they call this technique “Org Charts”. Org Charts allow asynchrony, whereas mathematical notation does not allow asynchrony. Org Charts for successful, scalable businesses have rules that forbid “micro-management”. Function-based programming, though, is all about “micro-management”. The goal of computer science should be to find many ways to add rigour to notations that encourage isolation, and not to apply just one form of rigour. The implied goals of FP are good, but it is not the case that FP is the only way to achieve those goals.
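A toy sketch of the “rectangle on a whiteboard” idea (an illustration of isolation, not a proposal for a concrete system; the component and its messages are invented): the component mutates freely inside its walls, and callers only ever see the messages that cross the boundary.

```python
# The "rectangle on a whiteboard" idea as a sketch: a component may mutate
# freely *inside* its walls, as long as nothing leaks out except messages.
class WordCounter:
    def __init__(self):
        self._counts = {}   # private mutable state, never exposed directly

    def send(self, message):
        # Internals mutate happily; callers only ever see the reply.
        for word in message.split():
            self._counts[word] = self._counts.get(word, 0) + 1
        return sum(self._counts.values())

box = WordCounter()
print(box.send("to be or not to be"))   # 6
print(box.send("to be"))                # 8
```

From the outside, the rectangle is judged only by the messages that cross its border; whether the inside is pure or mutating is its own business.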
Hmm, something about this conflation of concepts "..function/subroutine’s..." indicates that I haven't said something clearly enough ...
status: thinking about this thread, I jotted down "a few points" that were on my mind. I'm up to 22 points and counting. https://open.substack.com/pub/programmingsimplicity/p/2024-08-09-swing-thoughts-about-programming?r=1egdky&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true @Stefan Every point, on its own, seems trivial, but, when combined, they lead to difficulty. I guess that I should elaborate on every point, but, I haven't got there yet - in words.