# thinking-together
s
@Edward de Jong / Beads Project That’s how I feel about FP. Though no one should take either of our statements of opinion as an argument.
🤔 1
🍰 4
k
@Steve Dekorte can you elaborate? We've seen defenses here of OOP (usually taking the form that Java and C++ are not OOP) but never an attack of FP.
d
Since you've started this, I'm going to jump in instead of getting 🍿 .. 😄 I think we can learn from both OO and FP, but being purist is always a mistake. There.
i
One minor point: FP isn't even remotely similar to what the low level execution model of the hardware is. (I'm thinking mostly of the CPU, since you could make an argument that GPU parallel cores are slightly more FP-ish.) This difference means we need to translate our FP code to the CPU execution model, and translate feedback back. The FP translation is generally more lossy and incidentally complex than other language models. For instance, how will you know whether you're optimally using the branch predictor? In C and assembly, you know because you wrote every branch. In higher level procedural / OO languages, it's more likely that your branching constructs map to predictable low-level code. In FP, it's that much harder to know. In something like Prolog, all bets are off.
👍 2
d
Yeah, I don't lie awake worrying about that.. 😄
g
at the very least i support the idea that monads (and other category theoretic concepts) are much easier to understand in a dynamic OOP model, where you add interfaces to a datatype that enforce constraints
🤔 1
so much so that you end up using them without knowing it
d
The term "FP" is only loosely defined, so it's better to be more precise when attacking/defending it (or exploring relative strengths/weaknesses, for the less combative).
@Ivan Reese "FP isn't even remotely similar to what the low level execution model of the hardware is." That's certainly true for lazy evaluation in Haskell. Most functional languages use strict evaluation, though. Curv is a referentially transparent, pure functional language, and one reason I chose that is to make it easier to generate code for the GPU. However, Curv also has mutable local variables, assignments, and a while statement. I include those things to simplify the programming model for imperative programmers, and so that I can transliterate GLSL into Curv without converting loops to tail recursion, etc. So Ivan's statement isn't really true of Curv.
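(Aside, for readers unfamiliar with how local mutation can coexist with purity: Haskell's `ST` monad is one well-known construction with the same flavor as what Doug describes for Curv. This is a Haskell sketch, not Curv syntax, and `sumSquares` is an invented example. The mutation is sealed inside the function body, so callers can never observe it, and referential transparency is preserved.)

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- A pure function that uses a mutable local variable internally.
-- `runST` guarantees the mutable reference cannot escape, so from
-- the outside this is indistinguishable from a pure computation.
sumSquares :: Int -> Int
sumSquares n = runST (do
  acc <- newSTRef 0                                -- mutable local variable
  mapM_ (\i -> modifySTRef' acc (+ i * i)) [1 .. n] -- imperative-style loop
  readSTRef acc)
```

Calling `sumSquares 4` always yields the same result, regardless of the assignments happening inside.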
k
How is Curv referentially transparent and pure in the presence of mutable local variables and assignment? The fact that you're finding it useful to include imperative constructs suggests exactly the opposite, that @Ivan Reese's statement is true of Curv.
i
How does Curv do with CPU cache lines? Are you able to control data usage patterns to make sure you're keeping stuff in L1 as much as possible? My point isn't that FP (as a philosophy) is "bad", or even that common functional language implementations have poor performance. Rather, it's that FP languages like Haskell, Clojure, and Scala, and other non-imperative / high-level / managed languages like Prolog, Smalltalk, Java, JavaScript, etc., tend to make it a primary design goal to abstract away low-level hardware details. They regard that as incidental complexity, or as a security risk, or as a source of error and confusion. I have trouble with that, because I write very (very) performance-sensitive code. So for me, one of the primary goals of FP / declarative / managed languages (broadly) is diametrically opposed to one of my primary concerns.
💯 1
I recognize that FP (as a philosophy) has a long list of concrete benefits compared to other paradigms. What's missing is the caveat — those benefits apply iff the things that FP language designers regard as incidental complexities are actually incidental. In games, they often aren't. That's why you see folks like John Carmack praising Racket and immutability (or as C++ people call it, "const correctness" ugh) and pure functions. They're useful ideas. But those same folks continue to write C++, or C. They borrow the good parts of FP philosophy, but don't bother with true FP languages because those languages by necessity put a big wedge between the programmer and the CPU.
d
@Ivan Reese My main focus is to abstract away from the hardware and make programming easier, so I agree with you. Some theoretician probably has a way to write efficient device drivers in an FP language, but it's not something I'm aware of.
@Kartik Agaram Curv has immutable values, and all functions are pure. It has a referentially transparent expression language, in which the order of evaluation can't be exposed by side effects, and it has a statement language that is imperative. I don't know what paradigm this language belongs to. I call it a pure functional language, but FP culture rejects imperative style programming, so people may argue that it can't be FP, even though it has all the properties I listed.
i
That sounds a bit like how people categorize Scala — multi-paradigm, where one of those paradigms is functional.
t
I don't think that FP tends to be less performant is a particularly strong argument against it. Razor's-edge performance isn't a requirement in order for a language to be useful to people— look at Python, which is ~200 times slower than C, but wildly popular, and IMO a joy to use. I don't think it's a positive thing if a system makes you think about its internal details in order to use it; that makes it less easy to wield, and just generally it's poor encapsulation. Yes, there are performance critical applications where you do need to micromanage every instruction, and those demand very low level languages/systems. But I'd argue those cases are quite rare, and lie in a problem space that is just about perfectly disjoint from the kind of system we could call a "bicycle for the mind".
I see the "FoP problem" as needing to push in the direction of "a computer is a magic box that automatically does what you ask it", and "but what are my cache lines doing?!" is very nearly the opposite direction 🙂
👍 2
s
@Kartik Agaram I have no formal arguments (and can’t recall ever hearing any for either side). My experience and intuition (for what they’re worth) suggest that immutable objects are useful in some places (like strings & number arguments, immutable proxies to protect ownership, pass by copy for distributed objects, etc), but counter productive in most cases (particularly when trying to enforce these policies on the whole system). By counter productive, I mean people will take longer to produce working code and will produce worse (less understandable, maintainable, scalable, flexible) code when they are required to update all references to an object to change it, at least outside of the realm of some narrow math focused components.
🍰 1
i
@tbabb I'm not making a Python. I make FoC tools for people who make detailed, highly-interactive games that run on the web. I also make FoC tools that, themselves, are more like games than text documents. So for me and the users of my tools performance is paramount, and every iota counts. I borrow the ideas of FP where they benefit my needs and don't impose a cost, but in general they're directly opposed to one of my primary concerns.
👍 1
Gang — the goal of this thread, as established by @Kartik Agaram, is to lay out an attack on FP. We're a community of people that, broadly speaking, are all so intimately familiar with FP that we all know the myriad benefits it offers compared to OO, procedural, etc. In present company, FP doesn't need defending.
❤️ 4
We should also be the best people to know which skeletons are in the FP closet. Let's have fun pulling them out and dusting them off! Let's not take such an attack personally, and defend the fact that we're (broadly speaking) making good use of FP in our own work. Nobody here needs to be "converted".
t
@Ivan Reese True. In the domain of high-perf applications, I completely agree with you. 🙂 To the point of "FP's biggest skeletons", I think @Steve Dekorte hits the nail on the head— some problems are best naturally expressed with mutability, and when you give that up, things get messy. That really does feel like a usability/ergonomics problem to me, and I think it's hard to fix.
👍 2
s
It seems like there’s a strong parallel between the immutability vs mutability debate and the static vs dynamic types debate. Most people seem to feel everything must be one or the other, instead of the ideal being to use each where useful. I think they’re all useful in different situations. My beef with “FP” is only with the focus on immutability absolutism.
d
General purpose programming in a pure FP language typically requires garbage collection. This has a performance cost that the C/C++/Rust communities consider unacceptable. (GC is not an absolute requirement. Curv uses reference counting, at a cost in expressiveness: there's no way in the language to create cyclic data structures.)
Haskell is the only well-known functional language that requires immutability. Older functional languages, and current members of the functional family like OCaml and F#, support mutable objects. So, in terms of "skeletons", I think it's better to ask what the limitations are of programming in a pure functional style, without using mutable objects. (As a language designer, I also want to ask if there are missing features that would make pure functional programming easier or more widely applicable.)
👍 1
t
@Steve Dekorte I share your distaste for absolutism. When it makes sense, I think languages should not impose a particular style of code-organization on the programmer; that's a design choice for them to make. Classic example: Java saying "OOP is hot, thou shalt make every single part of thine program into OOP"— yuck, even though OOP can be an excellent way to organize some (parts of) programs. My own project is a pure FP node-and-wire tool, and I wish "FP vs imperative" was a choice to be selected by the programmer, not imposed. But the moment nodes have side-effects, the entire system becomes impossible to reason about (data dependencies appear that are not visible as wires; execution order is determined only by wire-dependency, so the behavior would collapse to non-determinism), so the choice to allow side effects is disastrous for usability. I do offer some niceties like an "imperative loop" node, which contains a network which transforms the previous loop iteration state to the next one. It's still side-effect free, but is less mind-bendy than recursion. Overall I think it is a necessary evil to make a system with simple, limited parts that are easy to reason about. @Doug Moen Does "immutable" not imply "acyclic"?
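(Aside: the "imperative loop" node described above can be modeled as an ordinary pure function that transforms the previous iteration's state into the next. A minimal Haskell sketch of that idea, with invented names `loopNode` and `sumTo`, not code from tbabb's actual tool:)

```haskell
-- A pure "imperative loop": repeatedly apply `step` to the state while
-- `continue` holds. Side-effect free, but reads like a while loop.
loopNode :: (s -> Bool) -> (s -> s) -> s -> s
loopNode continue step = go
  where
    go s | continue s = go (step s)  -- next iteration with updated state
         | otherwise  = s            -- loop finished; return final state

-- Summing 1..n expressed as a loop over a (counter, accumulator) state,
-- rather than as explicit recursion at the call site.
sumTo :: Int -> Int
sumTo n = snd (loopNode (\(i, _) -> i <= n)
                        (\(i, acc) -> (i + 1, acc + i))
                        (1, 0))
```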
d
Shared mutable state imposes a global complexity tax on all of the code in a program. This "tax" makes programs harder to reason about, and inhibits compiler optimizations. Most language features don't impose a tax on code that doesn't use them; shared mutable state does. That's why some people are "absolutist" about this feature. At the PWL conference yesterday, there was a beautiful and insightful presentation about how to reason formally about these kinds of issues in programming language design, so that it's not just a matter of taste or opinion. I can't reproduce all the arguments here. https://pwlconf.org/2019/shriram-krishnamurthi/
🤦 1
@tbabb Haskell supports cyclic immutable data structures, which are constructed using recursive definitions.
👍 1
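(A minimal sketch of the "tying the knot" technique Doug is referring to — cyclic structures built from recursive definitions, with no mutation anywhere. `ones`, `Node`, `nodeA`, and `nodeB` are invented illustration names:)

```haskell
-- An infinite cyclic list: the tail is the list itself.
ones :: [Int]
ones = 1 : ones

-- Two immutable nodes that each point at the other, built purely
-- through mutually recursive definitions.
data Node = Node { label :: Char, next :: Node }

nodeA, nodeB :: Node
nodeA = Node 'a' nodeB
nodeB = Node 'b' nodeA
```

Laziness is what makes this work: each definition can refer to the other before it is "finished".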
s
@Doug Moen “makes programs harder to reason about” for whom? I can tell you that I don’t find it easier to program that way. Am I wrong about myself?
🍰 1
k
I like imperative programming too, so touché! But this thread actually started about OO vs FP rather than any absolutism. I tend to find FP ideas more useful than OO ones, and I find my programs are better when I try to limit mutation. So I don't think OO vs FP is like static vs dynamic typing at all. FP ideas seem strictly superior to OO.
The best parts of OO feel complementary to FP. In the large, OO provides some guidance on how to organize code, while FP provides constraints. The best object-based designs use lots of stateless objects, I think. So the two feel like positive vs negative space, not necessarily in opposition.
s
I think what is desired via mutability is the notion of identity. This is quite easily and directly represented in objects. It can be, and is, simulated in FP as well, but the primary ideas bring up a sense of disembodied values floating around and getting transformed. That is my main criticism of FP: it doesn't start with a stateful substrate or strong notion of identity.
✔️ 1
👍🏼 1
❤️ 1
i
Thus far, we've conjured the following interesting angles of attack:
• [the benefits of FP] apply iff the things that FP language designers regard as incidental complexities are actually incidental
• some problems are best naturally expressed with mutability
• For general purpose programming in a pure FP language, that typically requires garbage collection. This has a performance cost
• [FP] doesn't start with a stateful substrate or strong notion of identity.
What other things do you find lacking about FP? Where does FP have room to grow? At the risk of over-anthropomorphizing: what parts of OO should FP envy? We all know that shared mutable state is a double-edged sword — so what parts of FP are surprisingly dull? If we can't come up with some really good ones, on par with our total smackdown arguments against OO, what's the more likely reason — that FP is utterly without such shortcomings, or that we just aren't properly seeing them?
k
@shalabh Hmm, at the risk of getting into angels and pinheads, I actually think of FP as providing a stronger sense of identity. Mutation introduces ship-of-Theseus effects. Should it be considered the same thing once I've modified it? Concrete implications:
* We all know not to modify keys in a hash table.
* Interning string literals is an abstraction leak you only run into if you ever modify them.
i
I heard a good one the other day. Can't remember the source. Here's my bastardized paraphrase:
Google Search, from the perspective of the web searcher, isn't a pure function — every time you do a Google search for the same term, you get a different result. You could try to model it as a pure function, but that'd require some contortions that are easy to describe ("Just take the current index as an additional hidden / curried argument") but that fall apart upon further reflection. Google search is actually a process, not a function, and rather than trying to model it using the ill-fitting formalisms of math, we can more naturally model it using the formalisms of systems theory.
OO embodies some elements of process / systems theory very elegantly. Erlang is probably a great touchstone here. Compare with Milner's π-calculus, a mathematical formalism that encodes some parts of process / systems theory, which I find to be not nearly as simple or elegant as the very OO process model in Erlang. (Hedging against expected counterarguments: Of course, that doesn't mean there can't be a way to encode process / systems theory in mathematics that is simple and elegant. But I think it's fair to say that this is a case where currently OO does better than FP.)
❤️ 2
g
all of my feelings are about ergonomics. i don’t have very many feelings about mutability or immutability (i think it’s kind of pushing on the problem of seeing data move through your system without quite addressing it directly). but one example is the degree to which pipe operators end up being used in functional code: it’s really useful to start with data, and then apply operations to it.... almost like asking the data to transform itself

https://youtu.be/dkZFtimgAcM

the "monads are better presented as objects" argument is at timestamp 13:53
incidentally as i slowly heat up my Hot Take Machine i think javascript proxies are strictly more useful and understandable/direct than 90% of category theory
s
@Kartik Agaram We're using very different meanings of 'identity'. For me 'identity' is that very aspect which is preserved while the thing itself changes. The tree grows - is it the same tree tomorrow? There's definitely an aspect of identifying the tree that we wish to preserve even though there are new leaves present. When I drive the car around and the gasoline, temperature and location fluctuate, does it remain the same car every moment or are there monads involved? This notion of identity is so deeply ingrained in our interactions that it is almost invisible. Virtual things in computers have identity too. I may regenerate the file, move around the paddle to hit the moving ball. Even looking at a function pipeline that operates on data, I might track this data as it moves through the functions. I'm arguing that everything that is presented to us, and all our interaction with computers, involves identities of these virtual artifacts. Identities are tracked in FP by various patterns like attaching ids to data, logging, maybe STM and so on. In Excel, for example, I think of location as identity (this cell). By contrast, OO tends to lack the 'consistent snapshot' notions represented easily in FP. I don't think there's any dichotomy here btw. I just think it's better to have an OO substrate and then incorporate the notions of consistent snapshots and pure transforms.
👏 2
k
Can you give an example of a situation where the sameness obviously outweighs the differences? I tend to find that this is one area where bits are not like atoms, and our intuitions about the notion of identity don't really translate. You can't rely on analogies here. Just this past week I had an incident at work where a pipeline somewhere got wedged and refused to make forward progress. Turned out a user somewhere had changed the name of a record. Then 10 minutes later they changed their mind, reverted the old record, created a new record with the new name. But in the meantime the pipeline had run and the new name was saved downstream. Further updates failed; you couldn't create a new record with the new name because the name already existed. The pipeline assumed that only the id of the record decided identity, but the uniqueness constraint meant that the name was also effectively part of the identity of a record. These sorts of subtle bugs are very common, I'm sure you've encountered them. Asking when two values are snapshots of 'the same record' is often intractable. The insight of FP is to encourage architectures that sidestep the question altogether. And if we could resolve the performance problems we'd use this idea exclusively. Mutation should be merely a hack for performance.
s
It's not the bits themselves but what they mean to us. Identity is part of our conceptual system. If "a user had changed the name of a record" - what is the system implementing here if not the mutation of an entity with identity? If it wasn't for the 'sameness' across those different scattered bits, this wouldn't be a bug, would it? 😉 So I think identity should be core in our expressions. Anyway, I think we're talking past each other.
Mutation should be merely a hack for performance
If you say at the machine level bits shouldn't be mutated in place except as an optimization, I fully agree. I'm thinking more in terms of higher level processes that we describe - identity and mutation should be part of the language as well as snapshots and consistency..
🍰 1
❤️ 2
k
I'm thinking more in terms of higher level processes that we describe - identity and mutation should be part of the language as well as snapshots and consistency..
That's definitely a level-up on the rhetoric I've heard so far in favor of state! Great thread.
If it wasn't for the 'sameness' across those different scattered bits, this wouldn't be a bug, would it? 😉
The bug was just that the pipeline was wedged. It really didn't matter whether name X got assigned row m or n. I think that's extremely common. Deep in the guts of our computers, software is often just arcane book-keeping without any meaningful mapping to the real world. And this has been true of bureaucracies long before software existed. Was it a Kafka short story where the guy's record says he's dead and he can't convince the bureaucrat that he's alive? I think focusing on 'identity' risks confusing map for territory.
To connect up with an old thread (https://futureofcoding.slack.com/archives/C5T9GPWFL/p1557860308364300), I think emphasis on identity is pretty modernist (maybe even AHM: https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility). The post-modernist insight is that the concept is so fuzzy that it'll mislead you just when you need it most. Better to just not rely on it.
❤️ 2
s
Haha perhaps. Is static typing authoritarian high-modernism? I'm tweeting that it is 😜 Thanks for the links - great to re-read those. BTW I realize I'm quite an ad-hoc modernist - want order in some aspects, chaos for others..
🍰 1
Following up on @Ivan Reese’s question about OOP 'envy', I want to propose that FP really envies how compatible the computation model inside an OO program is with the model outside it. In OOP, how do you send a message to another process? Well, we've been sending messages to other objects all along, so 'very much the same way'. The models of pure functional code and 'sending a message' are incompatible enough to require a fair bit of algebraic gymnastics. What do you think? Legit criticism?
👍 1
🍰 1
s
On static vs dynamic types and immutable vs mutable data, I’m reminded of a recent podcast on tight and loose cultures: https://podcasts.apple.com/us/podcast/sean-carrolls-mindscape-science-society-philosophy/id1406534739?i=1000448378141 which makes connections to excessive and insufficient synchrony in mental disorders. The TLDR is that an excess of either produces failure states and some balance of the two is optimal. Could programming cultures unconsciously be dividing themselves on similar biases?
p
Let me add my intuitions here. I tried to create a coherent argument but I failed. Please help me figure these out :) These relate to the OO substates and mutability vs immutability mentioned above, but I'd just like to give some high level thoughts.
1. 0 or 1 references to a piece of data is no problem for sure; things go crazy at 2+ references to the same piece of data.
2. A. Multiple (sub)states enable different rates of (state) change in different parts of the system, meaning loose(r) coupling.
2. B. Multiple (sub)states introduce a synchronization problem/requirement on a higher level: between the subsystems using the (sub)states.
3. Using a mutable variable by reference feels like an implicit subscription / an implicit synchronization pattern.
4. There are too many ideas around time (synchronization and subscription): maybe the idea of state is just too low level to be useful, and a state should be viewed as a snapshot in a "stateStream" to even be able to reason about it?
5. Immutable data encourages creating new pieces of data (+ swapping references), so we end up managing (and syncing) multiple pieces of data, like OO (sub)states.
6. Virtually everything can be solved purely with mutable/immutable data plus some extra code, so it seems they just have different "defaults" in their meaning, requiring different extra pieces of code to mimic the other's behaviour.
7. Immutability and local state feel a bit like "locks". Wat.
+1: Wtf am I talking about? I mean, I can't express these better, but it feels really odd.
👍 1
💯 2
d
In OOP, how do you send a message to another process? Well we've been sending messages to other objects all along so 'very much the same way'. The models of pure functional code and 'sending a message' are incompatible enough to require a fair bit of algebraic gymnastics.
In modern languages, "OOP" is just classes and inheritance, and "sending a message" is just overwrought terminology for indirect function call. Even in Smalltalk, message sends have synchronous function call semantics. Functional languages also have function calls. O'Caml is a pedigreed functional language with OOP (as in: classes and inheritance), so there's no conflict between FP and class based OOP. When you talk about "sending a message to another process", it sounds like you are talking about the Actor model. The only Actor language I know is Erlang, which is considered a functional language.
👍 2
p
@Doug Moen Erlang is considered an FP lang, but there are opinions that it is the "most OO" lang, considering the "original" OO term as used by Alan Kay. I could not find any direct Alan Kay opinion on Erlang. But ofc "modern OO" is not even close to either of them.
s
@Doug Moen Doesn’t Erlang have mutable state within a process and doesn’t it send messages between processes? So at least on the process level, Erlang is about mutable message sending objects?
d
@Steve Dekorte
Doesn’t Erlang have mutable state within a process and doesn’t it send messages between processes? So at least on the process level, Erlang is about mutable message sending objects?
Haskell supports this as well. Every general purpose language has threads and concurrency, and threads are useless unless they can change state in response to messages (or whatever the IPC mechanism is). Haskell has several IPC mechanisms: if you use the Control.Concurrent.Actors library, you can even use the Actor model. What makes Erlang functional is that data structures are immutable values and variables are immutable: they cannot be reassigned once they are bound to a value. An Erlang process manages its mutable state using a tail recursive function F that is passed the current process state as an argument. Each time F is called, it receives a message, processes it, then calls itself with its updated state as an argument. This is a functional style, not an OOP style of programming.
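(Doug's description of the Erlang server-loop pattern, sketched here in Haskell rather than Erlang — an illustrative analogy using `Control.Concurrent`, with invented names `Msg` and `counter`. The state is never reassigned; each recursive call simply carries the updated value:)

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

data Msg = Incr | Get (MVar Int)

-- A "process" managing mutable state in functional style: a tail-recursive
-- loop whose state lives entirely in its argument.
counter :: Chan Msg -> Int -> IO ()
counter ch state = do
  msg <- readChan ch
  case msg of
    Incr      -> counter ch (state + 1)            -- "update" = recurse with new state
    Get reply -> putMVar reply state >> counter ch state

main :: IO ()
main = do
  ch <- newChan
  _ <- forkIO (counter ch 0)
  writeChan ch Incr
  writeChan ch Incr
  reply <- newEmptyMVar
  writeChan ch (Get reply)
  n <- takeMVar reply
  print n   -- prints 2
```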
s
@Pezo - Zoltan Peto believe it or not, what you're saying makes sense to me. What you call a stateStream, I call identity. In typical OO, an object is always pointing to the latest version of everything else, even if the latest version is only half done for some objects (this is the pitfall of OO). Ideally you want to move your references to the latest version in some organized way, and you never want to be pointing at a half-done state of anything. FP typically doesn't have widespread stateStreams (well, there's Clojure refs..) and you may have to build these streams yourself, if you need them, from the immutable substrate. I think a more approachable option may be to keep the OO model but introduce deep versioning of objects so they can manage their evolution through their stateStream.
❤️ 1
e
I find OOP, FP, and the Actor model all to be lousy paradigms. They each have their own problems, and the more purely a philosophy was pursued, the lousier the language. Smalltalk: absolute garbage. Try doing a bitmap rotation in it, crazy hard. Java: the COBOL of our time. And i don't use Haskell, but i find it abhorrent. We are about to add 100 million more programmers in the next 5 years, and surely they will not want to work with any of the prior languages.
Whatever the next popular language is going to be, it will have to be simpler, and in totality, require less learning, and be 10x faster and easier to debug, because that part of programming for a newbie is a real turn-off.
p
@shalabh I am glad it made sense to someone! :) So I think what you are talking about is basically the "synchronization problem". What is interesting about it to me is that we don't even have to bring in concurrency; we can still mess up the "synchronicity" and end up in an invalid/inconsistent state. Having multiple refs to the same mutable state might sometimes solve that out of the box (if they have the same identity, as you say?), which seems interesting, but other times it is just an "accidental" temporal similarity of two things which might refer to DIFFERENT aspects of something - and this similarity makes us call them by the same name at different parts of the system. Funny observation: maybe if that is the case, what we have (at least partially) is a kind of "naming problem" as well. :) I am just dumping my thoughts, but I am glad to hear any response, because this feels important.
I also found a related note I wrote a couple weeks ago:
- The Observer (pattern) feels like an abstraction which does the synchronization "on (virtually) every event" (that makes the value change).
- But because (as said above) we sometimes need to alter references to states in different & multiple steps (as a business logic / consistency requirement), the synchronization/consistency of our system is at risk. Note that 2 different references pointing to 2 different pieces of data might refer to the "same thing", but because of the different rate of change we often have to copy data to use it. But what "different rate of change" virtually means, from the Observer-pattern view, is: we need a mechanism which does NOT synchronize on virtually every event that makes the value change.
To sum up: we need something like a synchronization abstraction specifying the events/conditions on which we want to synchronize. Identity/Model = State + the Events it changes on. << Is there anything known about that? That feels similar to a "Lifecycle". But "Lifecycle" feels similar to a "Process". And I think this is what we as programmers have to fight a lot these days. It feels like partially accidental complexity, but if not, at least it seems we (or at least I) lack the correct viewpoint to think about these.
One more related note I found: the Observer pattern has, in some regard, the same problem as imperative programming: there is too much "temporal dependency" going on that is not "tamed". In both cases, "things change out there" is not just a valid assumption for the programmer to hold but a requirement, and we have no initial awareness of the synchronization/consistency problems this involves. To me, both approaches/schools say (if they were a person): "oh, for sure, the programmer has to take care of these, but I don't have anything to do with that. I am not even aware of that being such a hard and important aspect of the work, so I won't help the programmer at any level".
Let me also add: maybe, unfortunately, there is no accidental complexity there at all, but I still feel I lack a level of understanding.
s
@Edward de Jong / Beads Project “Smalltalk, absolute garbage. Try doing a bitmap rotation in it, crazy hard. ” Isn’t that a function of the libraries and not the paradigm? e.g. it’s easy to do in Objective-C using AppKit.
👍 2
@Doug Moen “What makes Erlang functional is that data structures are immutable values and variables are immutable: they cannot be reassigned once they are bound to a value.” I think you’ve missed my point, which was that while it looks like an FP if viewed from within a process, it is an OOP (messages and objects w mutable state) if viewed at a higher scale of what one sees happening between processes. Does that make sense?
👍 3
i
It'd be interesting to see a well-done stratification showing how the paradigms sit with respect to one another, in terms of how "in the small" / "in the large" they are. For instance, you'd probably have FP (pure functions, immutable values) and procedural (mutable, side effects, place-oriented) at the small end, OO (classes, methods/messages) and modules/mixins and Gang of Four-style design patterns in the middle, and then things like CSP/Actor/process calculi/dataflow/MapReduce and HPC stuff in the large. It'd probably be useful to reference this when having "my dad can beat up your dad" debates, to avoid saying sort of useless things like "my dad can beat up your great grandpa" because, like, duh.
For example: of course the feeling of complexity from OOP is different than the feeling of complexity from FP — they're for solving problems of different magnitude. Instead, you might be better off contrasting FP and procedural programming.
e
In the original Smalltalk-80 book by Goldberg, which I eventually threw out (now a collector's item) because it wasn't worth carrying from apartment to apartment (a true sign of a bad language when your $50 hardbound book is tossed), there was code to do a bitmap rotation that used recursion to subdivide the image into 4 sub-squares over and over, truly mind boggling, so tricky. Smalltalk may be bad, but the Pharo IDE for Smalltalk is arguably the most refined and clever IDE extant for any language. Just goes to show how lots of elapsed time can polish something very well.
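For readers who haven't seen it, the divide-and-conquer rotation Edward describes can be reconstructed roughly like this (a toy Python sketch, not the actual Smalltalk-80 code): split a 2^n x 2^n bitmap into four sub-squares, rotate each recursively, then permute the quadrants.

```python
# Recursive quadrant-based 90-degree clockwise rotation of a square bitmap
# whose side is a power of two (list-of-lists representation).

def rotate_cw(bitmap):
    n = len(bitmap)
    if n == 1:
        return [row[:] for row in bitmap]
    h = n // 2
    tl = [row[:h] for row in bitmap[:h]]   # top-left quadrant
    tr = [row[h:] for row in bitmap[:h]]   # top-right
    bl = [row[:h] for row in bitmap[h:]]   # bottom-left
    br = [row[h:] for row in bitmap[h:]]   # bottom-right
    tl, tr, bl, br = map(rotate_cw, (tl, tr, bl, br))
    # Clockwise permutation: bottom-left -> top-left, top-left -> top-right,
    # top-right -> bottom-right, bottom-right -> bottom-left.
    top = [bl[i] + tl[i] for i in range(h)]
    bottom = [br[i] + tr[i] for i in range(h)]
    return top + bottom

print(rotate_cw([[1, 2],
                 [3, 4]]))  # [[3, 1], [4, 2]]
```

Whether this counts as "mind boggling" or elegant is, of course, exactly the debate in this thread.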
s
@Edward de Jong / Beads Project I don’t see what that has to do with either the paradigm or language.
👍 2
e
There are some purists who claim that Smalltalk is the only true OOP language. A language like Java, which is considered OOP by the vast majority of people, is really a blended language, containing many ALGOL aspects that are procedural in style. Very few languages are pure; most throw in aspects and features from other paradigms and languages. Today it's more common to see kitchen-sink languages like Swift and Rust. This is why I discourage the use of categorizations, which are approximate at best. All languages eventually map to a registers+RAM+mutable-data underlying form during execution on the hardware. The only thing that notation affects is ease of initial coding, ease of debugging, and ease of transferring the code base to other people. The inventor of FP, John Backus, was striving to create a world of interchangeable parts. I would argue that current FP languages have produced near-zero interchangeable parts. I think the high-water mark for prior art in interchangeable parts was VB6, which was ages ago. A lot of the people pushing FP are members of the programming priesthood who have discovered how to lay a nice thick blanket of obfuscation over programming. Nothing like gnarly terms like functors and monoids to befuddle the newbies. We are going to see 100 million programmers added in the next 5 years, and the people in the field today are a bit insecure about the number of people coming onboard, and are erecting natural defenses. The demystification of programming is inevitable, however. This next batch is less inclined to put up with arbitrary complexity.
s
@Edward de Jong / Beads Project I agree with most of that.
d
We are going to see 100 million programmers added in the next 5 years
how do you know that?
s
@Ivan Reese I’d also like to see a chart of these features but I feel the conventional terms like FP and OOP combine too many ideas to be useful here. For example, one can have a well encapsulated message sending OO language with immutable objects, as the original actor languages did.
👍 4
k
In general, bundling is a huge problem in software. Our entire discourse suffers from speaking in terms of Erlang and Zookeeper rather than referential transparency and Paxos.
e
The 100 million number just repeats Bob Martin's estimate. He gave a recent interesting talk at Oxford where he estimates the current base of programmers (including VBA programmers in Excel) and extrapolates the growth rate. Pretty reasonable estimates. Martin does take a long time to get to talking about the future, but his honoring of Alan Turing is a worthy detour, as Turing was a super genius. So sad they didn't treat him better.
s
That link didn't work but I did find this:

https://www.youtube.com/watch?v=BHnMItX2hEQ

e
Yep, that's the talk. He has written a lot of books about Agile and C++, but like most programming book authors he has not built very many large projects. I don't expect him to, but Fred Brooks's The Mythical Man-Month is a bit more authoritative because Brooks managed 2000 programmers. Who gets to do that? To me a dozen is a big team.
p
@shalabh @Kartik Agaram I think I could connect some dots thanks to this thread.
1. When I started programming, I thought the “smart”/“hacky” solutions were nice.
2. Later, when I realized this might lead to dead-end code which I am unable to refactor, I dumped it. I started to over-complicate code as an antithesis.
3. Later, I recognised that features/ideas die, so I will have to get rid of some parts of my code no matter what.
The “synthesis” is trying to write as little code as possible and to focus, cutting and getting rid of (“overlapping”) features as much as possible. (I also realized that writing code which is easy to change is a feature itself!) However, focusing on that, I tend to write “hackier” code again; at least I am always questioning myself why I don’t just keep it simple instead of going too abstract: is it really worth it? But being “simple” also puts a quite hard limit on what I can or can’t do. This is what leads me again to be “(mutable-)hacky”, which extends the possibilities at that “hard limit”, so what I try to do is push “standalone” hacky solutions into units so the whole system might still be OK. I suspect this “valuable hackiness” has something to do with “identity” / “problems naturally expressed with mutability”. With preserved identity and mutability it is OFTEN MUCH EASIER to add some “clever” code for a limited set of features. Even the unit tests are simpler, and the code has a smaller “surface” to test. I am aware things can blow up any time that way, but the conclusion I have today is that it’s easier to rewrite “easy” parts than to go accidentally too abstract (which amounts to having the wrong abstractions for the domain, which is a kind of technical debt itself). I am curious what you think about the relation of simplicity, “clever/hacky” solutions, mutability, identity, FP. Is it something you also feel? Do you feel similar pressure?
s
I hadn't connected the clever/hackiness ideas with mutability, FP etc. Maybe there is something there. I definitely feel the pressure between 'simple but weak' and 'complex (uses abstractions) but powerful'. BTW, what does 'hacky' mean exactly? Sounds like something that is super good in one aspect (quick to implement) but worse in another (hard to maintain or understand / error-prone / inconsistent with the rest of the design). It's a specific trade-off that we make, but we make trade-offs all the time. So some trade-offs are classified as hacks?
❤️ 1
👍 1
d
@Pezo - Zoltan Peto an example would help.
p
For example: I know some part of the code can theoretically be reached multiple ways, which should be handled at the proper layer of abstraction. But I also know, via other constraints (e.g. knowing the navigation options in a menu), that some events can only happen in a given order. Theoretically I could write up beautiful invariants to express these constraints, but the menu itself can change, so in that case I would have to prove the same thing over and over again, and it is not obvious how to do that without using a dependently typed language. Even with that, often I "just know" which orderings of events are possible and which are not. Of course it is really easy to forget about a case, but via hand-crafted tests we all assume we enumerate "all the important" cases, and that might be a problem.
@shalabh That's a nice question: what is hacky? To me it seems to be a piece of code which does not respect "the layers" of the codebase and uses a shortcut. With that we can "skip" the "full, proper" implementation of the 2 layers we are connecting, or of all/any intermediate layers. Sometimes a hack means: the existing implementation and abstractions are out of date, but without a rewrite, without respecting the higher abstractness of the problem (and without the possibility of introducing a much broader set of problems that are harder to reason about and implement), we can cheat the existing rules and abstractions here and there. Maybe this is a little bit too small an example, but the "for" of "procedural" is hacky compared to "functional"'s "map", yet it has the 2 magic keywords, break and continue, which "map" itself lacks and which must be mimicked at a higher level.
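The for-vs-map point can be made concrete (a small sketch; the sample data is made up): `map` alone has no escape hatch, so `break`/`continue` must be mimicked at a higher level, e.g. by composing with `itertools.takewhile` and `filter`.

```python
# Procedural for (break/continue) vs. a functional composition that
# mimics them at a higher level.
import itertools

data = [2, 4, 6, 7, 8, 10]

# Procedural: double the evens, skip 4 (continue), stop at the first odd (break).
doubled = []
for x in data:
    if x == 4:
        continue
    if x % 2 != 0:
        break
    doubled.append(x * 2)

# Functional: map can't break, so takewhile plays "break" and filter plays
# "continue", composed around the map.
doubled_fp = list(map(lambda x: x * 2,
                      filter(lambda x: x != 4,
                             itertools.takewhile(lambda x: x % 2 == 0, data))))

print(doubled, doubled_fp)  # [4, 12] [4, 12]
```

Whether the composition is cleaner or merely a reshuffled version of the same "hack" is exactly the judgment call under discussion.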
g
you could always wild out in function-land and use callbags or transducers to get some of those keywords back 😉
👍 1
s
When people use the term functional programming, do they usually mean the heavy use of immutable data structures, or the organizational convention of separating data and functions?
k
For me it's the former. Particularly https://en.wikipedia.org/wiki/Referential_transparency Separating functions and data feels pretty foundational to pretty much all paradigms (except maybe Prolog and SQL?) There used to also be a connotation of using higher order functions with function values. But it too seems to be pervading all languages and being taken for granted.
s
@Kartik Agaram “Separating functions and data feels pretty foundational to pretty much all paradigms” Do you mean besides the paradigm of OOP?
t
To me, the defining feature of "functional programming" is "dynamic function composition"; that is to say that procedures are data, and they can be stored, moved, invoked, and composed based on runtime decisions. Referential transparency is another important concept which I see as closely tied to functional programming (though not strictly the same thing); specifically it's closely associated with the "pure" subset of functional programming. FWIW, it's the need for referential transparency that forced me to adopt pure functional programming in my own project's design. Consider closures in non-pure languages like python or JS, though— you can have a function which is not referentially transparent, by nature of referring to a mutable variable in an outer function scope (including the global one). I would still call the use of such closures "functional programming", though.
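The closure case mentioned above is easy to demonstrate in Python (a hedged sketch; the names are made up): a closure over a mutable outer variable is "functional programming" in the first-class-function sense, yet it is not referentially transparent, since the same call yields different results.

```python
# First-class functions without referential transparency: a closure
# capturing a mutable variable from its enclosing scope.

def make_counter():
    count = 0
    def next_value():
        nonlocal count   # captures and mutates an outer-scope variable
        count += 1
        return count
    return next_value

counter = make_counter()
print(counter(), counter())  # 1 2 -- same expression, different values

# A referentially transparent counterpart threads the state explicitly:
def next_value_pure(count):
    """Pure version: the same input always yields the same output."""
    return count + 1

state = 0
v1 = next_value_pure(state)  # 1
v2 = next_value_pure(v1)     # 2
```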
k
This is an interesting disconnect. We're all saying very different things about a subject I used to think a consensus existed on, if not on what the terms mean, at least on which 2 things a term could mean.
Me:
> Separating functions and data feels pretty foundational to pretty much all paradigms.
@Steve Dekorte:
Do you mean besides the paradigm of OOP?
I see, I misunderstood what you meant by "separating functions and data". The term 'functional programming' existed long before OOP: • The 1978 Turing Award lecture, "Can programming be liberated from the von Neumann style?" makes no mention of OOP or Simula (1962) or Smalltalk (1972). • The famous 1990 paper "Why Functional Programming matters" makes no mention of OOP or C++ (which first came out in 1985). I mostly get my sense of the term from those sources. While I usually don't care to argue semantics of terms, 'FP' seems still relatively crisp in meaning and so worth defending.
Ah, @tbabb, I had to read your comment a few times to realize it was the same as the third connotation I mentioned. Yes, that connotation has a long history. I tend to distinguish between referential transparency and first-class functions using the terms "pure FP" and "FP" when necessary.
👍 2