# thinking-together
p
Reading what @Drewverlee and @Ian Bicking wrote above leaves me thinking that what I love most about writing software in Elm (and to a lesser extent React) or SQL is what I don't have to think about. Particularly these last couple of years writing decentralized systems, where writes can come at me from anywhere at any time -- if I had to translate those back into specific "display updates" the reconciliation would be abominable. Vice versa, by channeling writes through a single point I get tremendous leverage: I can interpose whatever storage engine I want. The flip side of all this, of course, is that it doesn't scale. By that I mean that the abstraction inevitably breaks down when speed becomes a first-order concern. That happens in SQL databases when you get a few gigabytes more data than your system has RAM (if not before), or in Elm / React when you are trying to respond on the next frame for things like text entry.
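A minimal sketch of that "channel all writes through a single point" idea, in the Elm/React spirit (all names here are hypothetical, not from any of those libraries): writes from anywhere become messages through one pure reducer, and the view is recomputed from state rather than patched by each writer.

```python
def update(state, msg):
    """Pure reducer: every write, wherever it comes from, goes through here."""
    kind, payload = msg
    if kind == "set":
        key, value = payload
        return {**state, key: value}
    if kind == "delete":
        return {k: v for k, v in state.items() if k != payload}
    return state

def view(state):
    """Pure view: derived from state, never updated in place."""
    return sorted(f"{k}={v}" for k, v in state.items())

# Writers never compute display updates; they just emit messages.
state = {}
for msg in [("set", ("a", 1)), ("set", ("b", 2)), ("delete", "a")]:
    state = update(state, msg)
print(view(state))  # ['b=2']
```

Because `update` is the single write path, swapping the storage behind it (a dict here, a database elsewhere) never touches the writers or the view.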
I don't have any real conclusions from all this -- maybe that Reactive Programming could be a good idea -- and I can't explain why I don't have a positive sentiment towards Haskell despite it demonstrating these properties.
❤️ 1
Broadly, I will note (as a recovering database person) that in the end you always have to look inside the box and inside the box is always a horrifying maze of abstractions and optimizations... or worse, it isn't.
d
It's true that performance often adds complexity; in part, that's why it's interesting to consider a more declarative model to start. Eventually it might only become documentation, but it gives us something invaluable throughout, which is clarity of intent. I think trying to get something perfect here is too much; if we can just get something slightly better, that's enough, both in terms of personal satisfaction and improving the ecosystem.
👍 1
❤️ 1
w
What with Turing tarpits, the value of a programming environment is not in what it can do, but in what it does not. It's not the features but the limitations.
d
Sure, I'm not aware yet of the constraints around Datalog as a query language. I'm sure they exist; I just haven't encountered them in my very short travels. Though using it only as a query language is one form of limitation. Datalog is based on Prolog, so that would be interesting to study as a look into how the model applies more generally. More directly, the biggest thing you can't do is fine-tune performance at this layer of abstraction.
w
Datalog is a good example since it's like Prolog: the good parts.
k
Eventually a declarative model might only become documentation, but it gives us something invaluable throughout, which is clarity of intent.
@Drewverlee This is a really great sentence, thank you. The term 'declarative' has historically tended to conflate two concerns:

* More concise phrasing. Syntax and so on.
* Error checking.

The conflation is unfortunate, because when you introduce a new syntax for the declarative model you also restrict what one can say in the lower-level imperative substrate. That then prevents the organic "retreat to documentation". Creating a language has prevented you from thinking certain thoughts (or at least blocked you on the authors to support your use case). Writing certain programs now requires leaving the declarative model entirely, with all its error-checking benefits. All I want is machine code with an extensible DSL for assertions 😄
d
Thanks Kartik, I wasn't aware declarative implied anything about error checking. I was thinking more of this definition:
denoting high-level programming languages which can be used to solve problems without requiring the programmer to specify an exact procedure to be followed.
For example, using "map, filter, reduce" is more declarative because they don't specify the flow control. This is useful because, simply put, it's one less thing to get wrong. Another example would be Datalog being more declarative than SQL because you don't have to specify the joins. Leading with these examples, I think a big next step could be made by reactive Datalog, which would mean the client/browser declares its data needs and the rest of the system (from the user's perspective) doesn't have to worry about the flow control of how it gets there. This, of course, will always break down at some scale, but so does everything, and as I was suggesting, the declarative functions can remain; only their interpretation has to change. This is common practice in our field: make a function, and now how it works can change without breaking callers.
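The map/filter/reduce point can be made concrete with a small comparison: the same computation written with explicit flow control, then written declaratively, where iteration order and the accumulator become the library's problem.

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Imperative: we manage the loop, the branch, and the accumulator ourselves.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n
imperative = total

# Declarative: we state what we want (squares of the evens, summed)
# without spelling out how the traversal happens.
declarative = reduce(
    lambda acc, sq: acc + sq,
    map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)),
    0,
)

assert imperative == declarative == 20
```

Neither version specifies less work; the second just specifies fewer things we can get wrong, which is the sense of "declarative" used above.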
All I want is machine code with an extensible DSL for assertions
Interesting, this would seem to be at the opposite end of the spectrum from what I'm describing. I don't have much machine code experience!
👍 2
k
I was at least half kidding with that last line. I’ve been digging into low-level guts lately, so it’s been on my mind. But most programming models become intractable in the presence of unsafe code, so the extreme end of my spectrum reduces to absurdity. We’ve all seen the surface definition of ‘declarative’, but that was exactly why your sentence seemed so insightful. It got me to return to first principles and ask where the benefits lie of communicating the ‘what’ rather than the ‘how’. None of the examples you cite — map/filter/reduce or Datalog — support the use case of treating them as executable documentation, of just specifying invariants that someone who wants more performance can implement for themselves. It seems worth designing declarative models with this escape hatch in mind.
s
"A programming language is low level when its programs require attention to the irrelevant." - Alan Perlis Of course even lower level languages hide some irrelevant details from us to provide something useful. The machine instruction set codes are irrelevant to the person writing assembly language.
what I love most about writing software in Elm (and to a lesser extent React) or SQL is what I don't have to think about
Yes. Until the irrelevant becomes relevant: mangling an SQL query to make it perform better, fine-tuning struct memory layout in C, intertwining layers of caching code to improve performance, etc.
in the end you always have to look inside the box
Yeah. I wonder if it's possible that inside the box is also a nice, clean model that looks just like outside the box. Instead of trying to affect performance via 'side effect' (i.e. code munging), what if you could specify the implementation details separately from the higher-level description? In most cases the layer of abstraction seems too hard a shell; rather, it should be permeable when necessary. I wonder if any systems do this.
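One sketch of such a permeable abstraction (all names here are hypothetical): keep the declarative definition as an executable reference, supply the tuned implementation separately, and check the latter against the former, instead of munging the original code.

```python
def spec_sum_of_squares(nums):
    """Declarative reference: states *what* is computed, not *how*."""
    return sum(n * n for n in nums)

def fast_sum_of_squares(nums):
    """Hypothetical hand-tuned version: same contract, different 'how'.
    Trivial here; imagine a cache-friendly or vectorized rewrite."""
    total = 0
    for n in nums:
        total += n * n
    return total

# The spec doubles as a checkable invariant for the tuned implementation,
# so the high-level description "retreats to documentation" without rotting.
for sample in ([], [3], [1, 2, 3, 4]):
    assert fast_sum_of_squares(sample) == spec_sum_of_squares(sample)

print(fast_sum_of_squares([1, 2, 3, 4]))  # 30
```

The point is the shape, not the arithmetic: the implementation lives beside the description rather than replacing it.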
d
Isn’t this what some compilers do? They analyze code and optimize whatever they can. I’m thinking primarily of AOT, but JIT as well.
They let us write code that’s readable while still being fast. Rust is excellent in its commitment to “zero-cost abstraction”, which I think is relevant to this discussion.
k
Yes, there are two broad fronts:

a) Start at the lowest level and gradually “pull ourselves up by our bootstraps”, writing code at a higher and higher abstraction without losing performance or flexibility. Rust definitely pioneers a lot here. But there are still access patterns where you’d be forced to use runtime reference counting, and may therefore decide to give up on Rust’s safety invariants. Even ignoring such cases, Rust programs are still pretty imperative. You get performance, but may not be very declarative.

b) Start at a high level of declarativity and gradually support better performance. Prolog is the classic example that achieved huge gains in declarative expression. But if it was slow you didn’t really have a way to keep both declarativity and performance. You’d end up with `cut` calls polluting your nice declarative program. Other high-level languages have the same problem. You get declarative but have to pay some performance cost at times. As far as I know there’s no high-level declarative model that also provides an “imperative side channel” independent of the declarative program.