# thinking-together
i
Just a random thought: I was looking at Bret Victor’s old Learnable Programming, and it had a little note about how programming consists of decomposing problems, and I realized: that’s not what I do these days. I spend all my time thinking about messaging and communication between systems, and “decomposition” feels like a luxury. I don’t know if that’s a general change in programming, or just the nature of my work or professional stage… but it feels like a real change to me. That is, I spend a lot of time thinking about these things:
1. What are the entities in my system? These could be as simple as objects, but might be remote services, or different processes, browser tabs, etc.
2. Who knows what?
3. Who needs to keep track of what?
4. From any given context, how do I get access to the other entities?
5. How do I communicate with them? The push/pull of functions or RPC? Pub/sub? Some wonky event system?
6. Where does a particular change originate, and which entities are simply reactive?
This all is where a lot of modern language development leaves me cold. Types don’t offer much here. Going further, I think there really is a kind of modernist/postmodernist break here (à la http://homepages.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf): modernist approaches attempt to create self-consistent and robust systems, and postmodernist approaches accept that we operate in a diverse world of systems where a lot of important things happen at the intersection of incompatible modernist systems. I don’t have any conclusion in mind, but I’d be interested in people’s thoughts.
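To make question 5 concrete, here is a minimal sketch (in Python, with invented names — `Bus`, `Store`, `Cache` are illustrative, not from any real system) contrasting a direct call, where the caller must hold a reference to the callee, with pub/sub, where the two sides only share a topic name:

```python
# Direct call (push/pull): the caller must know exactly who it talks to.
class Cache:
    def invalidate(self, key):
        print(f"cache: dropped {key}")

class Store:
    def __init__(self, cache):
        self.cache = cache  # Store is coupled to a concrete Cache

    def write(self, key, value):
        # ...persist value somewhere, then notify the cache directly...
        self.cache.invalidate(key)

# Pub/sub: publisher and subscribers only share a topic name.
class Bus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)

bus = Bus()
bus.subscribe("writes", lambda key: print(f"cache: dropped {key}"))
bus.publish("writes", "user:42")  # the publisher no longer knows who listens
```

The trade-off in miniature: the direct call makes the data flow obvious but fixes who-knows-whom at construction time; the bus decouples the entities but makes "where does this change originate?" harder to answer by reading the code.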
❤️ 7
k
Modernist/postmodernist seems a really fertile distinction alongside pro-evolution/anti-evolution (my favored framing for what used to be west-coast vs east coast: http://yosefk.com/blog/what-worse-is-better-vs-the-right-thing-is-really-about.html) and liberal/conservative (https://gist.github.com/cornchz/3313150). (I’ve been thinking about these categorizations lately after reading https://josephg.com/blog/3-tribes, which feels less general but still wrong in a constructive way.)
❤️ 1
k
That reminds me of the explanation that Gerald Sussman has given for MIT's switch from Scheme to Python as their introductory language (it was in some video, I don't have the reference at hand unfortunately). He said that programming had changed: basic algorithms and data structures were less important now than gluing stuff together, and Python was a better fit for that kind of work.
@Konrad Hinsen I’ve seen that as well. It’s mentioned in http://wingolog.org/archives/2009/03/24/international-lisp-conference-day-two so perhaps there was never a permalink for it. I’ve always found this argument tendentious and a rationalization. Even if the world turns out to be post-modernist (a big if), there’s value in the first few stages of learning being modernist.
i
I don’t particularly like the term “glue”. It makes it seem like people are just slapping things together. IMHO the modernist work, encapsulated into a single process running a well-defined language, is the easy part. That’s why I spend all my time on the interprocess part… because it’s so much harder! And it tears down my attempts to create good programming language abstractions, because those abstractions don’t apply widely enough. Ideas and goals are spread out across many systems.
👍 2
e
Sussman didn't have the guts to admit that LISP is a tired dead horse that should have been abandoned a long time ago. It has one of the lowest readability-by-others scores of all known languages; I can think of only 2 languages worse than LISP for readability, and those are FORTH and APL. In Sussman's heyday there were only a few hundred computers in existence. Now that computers are everywhere and there are millions of programmers, it is extremely important to be able to read and use other people's code. Take some random chunk of LISP and good luck trying to read it. You can't determine the data structures being used in LISP unless you mentally execute the program in your head, and we all know that chess masters can play multiple games in their heads blindfolded, but that is not a widely distributed skill...
🎉 1
k
@Ian Bicking I definitely empathize, happy to stop using “glue”. It didn’t have negative connotations in my mind. The recent thread on distributed computing may be relevant if you hadn’t already seen it: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1557434052245400
I’m trying to be cognizant of my modernist bias, but there are levels of questions here:
• How much of the difficulty creating abstractions that work across systems is due to design failures in individual systems?
• Is it possible to cross the chasm to new, equally diverse systems that don’t suffer from those foundational design flaws?
Valid answers:
• No matter what you do, the complexity of integrating across system boundaries will dominate. (In all domains?)
• Maybe there’s a perfect world somewhere, but there’s no way for us to get there. We’re locked in by past decisions.
• Change all the things!
I think the answer depends on domain. Games, for example, often don’t have to deal with multiple platforms, and where they do they can polyfill the heck out of them to make integration plug and play. But that’s of course just one tiny end of the spectrum. One may also be able to get away with modernism at lower layers of the stack. My background is in systems programming, and that maybe explains something.
Anyways, just trying to throw out ideas that someone else can maybe run with to show me the limitations of my worldview:
• Yes, we can and should change all the things.
• Good fences make good neighbors. Good components simplify integration problems.
• It will take a long time. Probably longer than a lifetime.
• Nobody will ever have all the answers, so above all make foundational design decisions open to revision (with less effort than it will take with the current stack).
• Lots of respect for people who have to live with and deal with the current state of the world in the meantime. (Like me in my day job.)
Bonus link if you read this far: http://akkartik.name/post/deepness
👍 1
Paraphrasing from the above link http://homepages.mcs.vuw.ac.nz/~kjx/papers/nopp.pdf (section 5):
> It doesn’t make sense to say a Bovine object is an abstraction of a real cow.. or that the object in the program is “implemented by” a cow in reality.. or that the `Cow` class is a Platonic ideal of the immutable, eternal form of a cow.. Instead, the object in the program can be seen as a sign of the object in the world. Unlike abstractions, which can be reasoned about using deduction (from causes to effects), signs are effectively implications, and are modelled using abduction (reasoning from effects to causes).
❤️ 2
c
@Ian Bicking I don't want to self-promote too much but your pain points here are too in-line with what I'm trying to do to hold myself back. https://strat.world/ Strat's whole purpose is to provide language abstractions over what you've laid out up there. It's still pretty early in development but it can deploy arbitrarily complex systems.
s
> This all is where a lot of modern language development leaves me cold. Types don’t offer much here.
Absolutely. A lot of 'system design' happens outside the scope of a usual programming language. In the end, languages are employed in the service of the system, but there seems to be a sharp contrast between the models of composition 'in-language/in-process' vs 'cross-process'.
> I spend all my time thinking about messaging and communication between systems, and “decomposition” feels like a luxury.
Yes. There is an incredible amount of model duplication in the way we build systems today. Each part is like an island where you define, from the ground up, each of the entities you wish to deal with on that island. The description of the larger system-wide entities and processes is shredded into little pieces, glued together with implementation details such as messaging libraries, and separated into various such islands. From the linked paper:
> We consider that the term “program” is both too big and too little for post modern computer science. “Program” is too small because often we are working on multiple programs.
I suppose I heavily lean towards post-modern in this regard. "Programs" aren't interesting to me anymore, and any model where you first compose programs (~processes) and then compose them into systems doesn't scale up well. Instead what's interesting is the whole lifecycle of the system: inspecting and updating the higher-level processes that it embodies. Can we work with these directly? Update them, inspect the hypothetical effect on other processes, etc.? I wrote a bit here: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1557508482325700
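The "island" duplication is easy to see in code. Here is a hypothetical sketch (the `UserA`/`UserB` names and fields are invented for illustration): the same system-wide "user" concept is redefined from scratch on each side of a process boundary, with serialization glue in between:

```python
import json

# Island A (say, an API service) defines its own notion of a user...
class UserA:
    def __init__(self, name, email):
        self.name, self.email = name, email

    def to_json(self):
        return json.dumps({"name": self.name, "email": self.email})

# ...and Island B (say, a mailer service) redefines the same concept
# from the ground up, plus glue to decode what Island A sends.
class UserB:
    def __init__(self, name, email):
        self.name, self.email = name, email

    @classmethod
    def from_json(cls, blob):
        d = json.loads(blob)
        return cls(d["name"], d["email"])

# The system-wide "user" entity exists only implicitly, in the wire format
# that the two islands happen to agree on.
u = UserB.from_json(UserA("ada", "ada@example.com").to_json())
```

Neither language's type system sees the whole entity; a change to the shared shape has to be repeated by hand on every island, which is exactly the shredding described above.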
👍 1
w
@Ian Bicking not just you. To me it seems that the operating systems are what have mostly failed to rise to the occasion.
👍 1
k
After reading the paper, I find I actually have a reasonable ‘post-modern component’ to my belief system:
• Section 6: I think ‘requirements’ are not a good way to frame collaboration on software.
• Section 7: I have no bias between ‘high’ and ‘low’ computing culture. Mostly because I’m too dense for these ideas to even register. (However, being blasé about “faults in construction” is hard to stomach.)
• Section 7: I love the idea of “ancient programs living in connection with programs not yet written”. Here’s my original introduction post from last year: https://futureprogramming.slack.com/archives/CC2JRGVLK/p1536962970000100
• Section 8: I’ve written about our over-reliance on modules: http://akkartik.name/post/modularity. I care primarily about encouraging outsiders to read source code, and there’s a fundamental tension there: abstractions help authors manage a codebase but initially hinder outsiders in understanding the codebase.
Alright, I’m done navel-gazing.
❤️ 1
s
Yeah I can't fit myself in either category either. I definitely like grand narratives, though less detailed and more abstract than those described in the paper. Also yay for modularity and uniformity in some sense, and many metaphors. I think the paper fails to note that underlying the great heterogeneity is a deep homogeneity that makes the diversity possible: TCP/IP and DNS under the various higher protocols, the standard web browser under the diverse web sites. Perhaps the question is where we wish to draw the modern/post-modern boundary in our systems.
🍰 1
d
When you write a program, does it contain more than one function, or more than one class, or more than one object? If so, you are decomposing the problem. Perhaps in your case, the "decomposing" part of your work is sufficiently rote and automatic that you don't consciously focus on it anymore.
w
@Kartik Agaram only just now got to read the 3 links in your first comment, and they're all rather interesting. Looks like I am mostly a Camp 2 conservative who embraces evolution, though I have aspects of all sides I think 🙂 Unlike the 3 articles, I am afraid I do not follow how Post-Modernism relates to software, so no idea there.
If anything, seems like this Slack has a vast majority of "camp 1" people, which in turn seems to overlap with "The Right Thing" and being anti-evolutionary (instead of making existing programming tools better, they want the world to jump to their new vision of programming).. that would, at least from the perspective of those 3 links, not bode well for this Slack 😉
👍 1
s
@Wouter do you think a 'new vision' could be arrived at in an evolutionary way as well? One example is 'building airplanes' vs 'making locomotives faster'. Almost none of the infrastructure overlaps, and in the early days it's not clear air travel is even viable, but today both modes co-exist. This doesn't seem anti-evolution to me, but more of a fork at a lower level in our stack of concepts.
w
@shalabh I have nothing against attempting clean sheet revolutionary ideas, I was merely reflecting on the implications of those articles.. and yes, there is a lot of power in an evolutionary approach (assuming it means iterative, with lots of feedback from actual use). It certainly has the advantage that you can pull lots of people along, whereas revolutionary ideas initially meet mostly resistance 🙂
And nothing in this space even looks remotely like the revolution of airplanes to me. A lot of people here seem to look to Bret Victor for a revolution. You know what would be a revolution? If he demonstrated his ideas by implementing a complex piece of software, like a compiler, game engine, whatever, which would be remarkably simpler to create thanks to new techniques. Instead, we're not even remotely close to such a revolution.
s
Yeah I mostly agree with you. I imagine us as being in a pre-wright-flyer phase. We have some ideas of success criteria (e.g. 'would be remarkably simpler to create thanks to new techniques') and some guesses around what perspectives to pursue (this may vary quite a bit between us) but no real breakthrough.
k
I just found the best 1-sentence summary of "worse is better" ever. And it's by the author: "It is far better to have an under-featured product that is rock solid, fast, and small than one that covers what an expert would consider the complete requirements." (https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf, pg 219) Talk about burying the lede.
☝️ 3
w
One reason I've not seen mentioned why "worse is better" is better is compositionality: choosing a simple implementation with a possibly suboptimal API composes efficiently, whereas doing "the right thing" across multiple levels produces extreme inefficiency and impedance mismatch. Imagine a CPU-emulator written in Python running a Ruby interpreter 😛
w
Ruby APIs: always a semantic mess yet usually easy to get good stuff done: quicker, easier, more seductive.
g
can't handle a culture that thinks stacked layers of metaprogramming is the way to go at the framework level.
Some of us want to actually understand the code.
s
What's a concrete example of better compositionality via 'worse is better'? I've generally considered those two ideas unrelated.
g
there's one in the essay around interrupts: https://www.jwz.org/doc/worse-is-better.html
w
@shalabh I guess it is more that "the right thing" has worse compositionality, since such systems tend to have a lot of translation between what the implementation is doing and the idealized API. Layer multiple such systems and you end up with many unnecessary translations that affect performance and robustness. A "worse is better" system tends to be more barebones, such that if you compose them, whatever glue you need will be the minimum necessary.
k
@Wouter I absolutely agree with your sentiment, but your definition of compositionality is confusing. Usually I think of compositionality as a purely functional matter without taking performance into account. For example,

https://www.youtube.com/watch?v=5l2wMgm7ZOk

discusses how compositionality tends to break down once performance constraints enter the picture (rather than that compositionality causes bad performance). So I'm with Shalabh that the two ideas seem orthogonal. A bare-bones design can very easily be non-composable. A great design can seem very composable, particularly within its area of control (i.e. if people only do what the designer anticipated). In the end, the knife-edge the designer has to walk is between good design and totalizing design. You have to provide a good environment for use. But if you take control of the environment you'll eventually hamstring yourself.
I finally got around to watching this talk after citing it here multiple times, and.. Worse is better is indirectly referenced ~25 minutes in.
e
Composition in algebra, which is where you do f(g(h(j(k)))), is what LISP's power emanates from. However, it is hard to read, as one must execute it in your head from the inside outwards. It's just a hard-to-read syntax, regardless of how it inherits traditional algebraic notation.
g
That's the job of a formatter (hopefully not the programmer). When indentation is consistent, the parens fall away.
d
Composition is the general idea of building up a complex object from parts. It isn't a syntax. In Curv, f(g(h(j(k)))) can be written as `k >> j >> h >> g >> f`, which is just like a Unix pipeline, with data flowing from left to right. Most functional languages have an equivalent syntax, and of course the Unix shell has pipelines.
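For languages without a built-in pipeline operator, the same left-to-right reading can be sketched with a small helper. This is a generic illustration in Python (the `pipe` name is invented here, not from Curv or any particular library):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through funcs left to right: pipe(x, f, g) == g(f(x))."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# Nested style reads inside-out: str((5 + 1) * 2)
# Pipeline style reads left to right:
result = pipe(5, lambda x: x + 1, lambda x: x * 2, str)  # result == "12"
```

Both spellings denote the same composition; the pipeline form just matches the order in which the data actually flows.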
👍 1
w
@Kartik Agaram maybe I used the wrong word, I am talking about what happens when you layer several larger systems. I agree that compositionality is usually defined in terms of how easy it is to compose, but that is not even that desirable a quality if it doesn't also entail efficiency, robustness, non-leakiness and many others.
👍 2
e
The pipelining approach pioneered by APL proved very powerful, but in the end fairly obscure. Your pipeline example presumes a single input to each function, which is almost never the case. Functions have many options, and external conditions that they reference. There is a very good reason why languages that are based primarily on composition as the "power tool" of the language ultimately become obscure. It is commendable that Curv uses a much nicer syntax with the pipelining, but in zero of my sample programs in beads did pipelining occur. APL was very good at unraveling 2D matrices into 1D so that the pipelining would be more usable, and to this day APL is one of the most compact languages ever devised. Unfortunately a function name does not disclose the mapping of what the function does, and the sequence k >> j >> etc. would require deep study to understand, particularly if the data is structured and not merely an array. I can see why Haskell splits the mapping of a function's data types away from the field names, but I don't like it one bit; it creates a double declaration of the function, which I find wasteful. But I see why they did it.