# thinking-together
a
Some of the strengths and (common) weaknesses of declarative programming: https://www.toptal.com/software/declarative-programming
k
The author has an interesting manifesto: https://altocode.nl
e
I think that Federico only addressed one kind of declarative language: the domain-specific language. PROLOG is an example of a deductive language that is declarative in nature; you don't move an execution pointer down a sequential list of instructions. Instead you declare a series of facts, and the runtime magically does the logical-implication steps for you, so that you can then ask questions given your facts. The computer does a lot of tricky work for you in PROLOG.

It's an unfortunate fact of history that at the time the French labs invented PROLOG, there was a huge battle for funding with the AI people using LISP (primarily at Project MAC at MIT). The MIT team won, got the funding for "automatic programming", and then proceeded to fail miserably to deliver on any of the promises of computer-aided programming, which tarnished the term "AI" for over 20 years. Now that the old episode has been forgotten, AI is new and shiny again and getting massive funding all over the place (and for good reason: the latest AI techniques are fantastic and make the old stuff look extremely crude).

Anyway, PROLOG was taken up by the Japanese in their ill-fated "5th generation" project, which burned hundreds of millions. That buried PROLOG, since a failure of that magnitude doesn't go unnoticed. But PROLOG has some unique aspects that are worth considering. Using deduction is a very powerful technique.
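The "declare facts, let the runtime do the implication steps, then ask questions" style can be sketched with a toy forward-chainer. This is a hypothetical Python illustration, not real PROLOG; the facts and the grandparent rule are made up for the example:

```python
# Toy sketch of PROLOG-style deduction: declare facts, let the engine
# derive consequences to a fixed point, then query the derived database.

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def apply_rules(facts):
    # One hard-coded rule, standing in for PROLOG's
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    derived = set(facts)
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Iterate until no new facts appear, i.e. a fixed point.
db = facts
while True:
    new = apply_rules(db)
    if new == db:
        break
    db = new

# Now "ask a question given your facts":
print(("grandparent", "tom", "ann") in db)
```

Real PROLOG works backwards from the query (resolution) rather than forwards from the facts, but the division of labour is the same: you state what is true, not how to compute it.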
👍 4
w
Isn't he just describing leaky abstractions?
👍 2
d
The critique about DSLs and the complexity gap rings true. I'd much prefer good libraries to some half-baked yaml/xml/json DSL (I'm looking at you, Ansible, Gulp, Ant, etc.). However, the real essence of a declarative approach is defining an end state, not instructions for how to get there. IMHO the actual problem with declarativeness is that in the general case you just can't omit the "how to get there", because you can't eliminate pathological performance cases. I'll call the declarative run-time a *solver* from here on, because that is what we are really talking about. I don't believe you can have a solver that can analytically determine the best method to tackle a problem, because it suffers from multiple types of halting-problem-esque regresses:
- the "method to determine the method" can itself be pathological
- the solver would need to implement every possible execution strategy

The work-around tends to be ways to hint to your solver about its approach to avoid pathology, but I think it can only ever be a partial workaround. The general case requires a "complete hint", i.e. the explicit list of instructions to perform.
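The "same end state, very different how" gap can be sketched with a made-up example in Python: one declarative goal (pairs of matching elements) realised by two solver strategies with very different costs. The function names and data here are illustrative, not from any real solver:

```python
# Same declarative "end state": all (x, y) with x in xs, y in ys, x == y.
# Two strategies a solver might pick; the answer is identical, the cost isn't.

def nested_loop_join(xs, ys):
    # O(n*m): pathological when both inputs are large
    return [(x, y) for x in xs for y in ys if x == y]

def hash_join(xs, ys):
    # O(n+m): same result via a hash lookup
    ys_set = set(ys)
    return [(x, x) for x in xs if x in ys_set]

xs, ys = [1, 2, 3, 4], [3, 4, 5]
assert nested_loop_join(xs, ys) == hash_join(xs, ys)
```

Nothing in the declarative spec distinguishes the two; that choice is exactly what query planners agonise over, and what hints (indexes, `EXPLAIN`-driven tweaks) try to steer.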
👍 1
@interstar read your comment - Yes! Thinking along the same lines. IMHO Prolog is a textbook example of how declarativeness is impossible at the general level. The language writes a check that the solver can't pay.
m
I think Prolog didn't ever lose; it was subsumed by strongly-typed languages. Every time you use type inference you trigger a unification algorithm in the type checker. Program domain codified in the type system is a proof that some aspects of your runtime code are valid, and the type checker verifies that through unification. https://en.wikipedia.org/wiki/Curry–Howard_correspondence
My point is that when we write in strongly-typed languages (even those that are commonly perceived as "imperative"), we're using declarative logic programming on the level of types regardless of what we think of usefulness of declarative programming.
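The unification step that type inference triggers can be shown in miniature. A minimal sketch in Python, with the occurs check omitted for brevity; type variables are strings starting with `?`, and the type terms at the end are made up for illustration:

```python
# Minimal first-order unification, the core step a Hindley-Milner-style
# type checker runs during inference. No occurs check, for brevity.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    # Follow variable bindings until we hit a non-variable or an unbound var.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    if subst is None:
        return None if subst == {} else unify(a, b, {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # constructor clash: no unifier exists

# "Inferring" types: unify  list ?a -> ?a  against  list int -> ?b
s = unify((("list", "?a"), "->", "?a"), (("list", "int"), "->", "?b"))
# s binds ?a and ?b to int
```

This is the same machinery Prolog uses to match a goal against clause heads, which is the Curry-Howard connection in the message above made concrete.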
👍 2
d
> strongly-typed languages (even those that are commonly perceived as "imperative"), we're using declarative logic programming on the level of types
I like that. I don't think it redeems Prolog as a general-purpose programming language, though. It says something different: declarativeness works for constrained problems. Given that various type systems are Turing-complete, I guess the problems are only really constrained by practice, not power.
m
yep, it also helps to perceive most strongly-typed languages as two separate languages: one for "actual programming", be it imperative or declarative; and another, declarative logical language on top of that, in which you write additional "type specifications" for the language below. So when people say "these type systems are ridiculously complex", it's fair to answer that most type systems are just separate languages for declarative logic programming that help you write a "proof" for the lower-level language. Btw, this is what makes dependent types so fascinating to me: they make it possible to bundle a "formal verification" with your program, if you put enough effort into making the type signatures detailed and expressive enough.
as far as I understand, dependent types allow having proper declarative "languages" at the type-system level. Here's how vector concatenation looks in Idris; the type signature is the most interesting part to me. I don't have much experience with Prolog, but I wonder if it's as powerful as dependent types in Idris
```idris
(++) : Vect n a -> Vect m a -> Vect (n + m) a
(++) Nil       ys = ys
(++) (x :: xs) ys = x :: xs ++ ys
```
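For comparison, here is the property that `Vect (n + m) a` signature encodes, sketched as a runtime check in Python (a hypothetical analogue; Idris discharges this at compile time, so no check ever runs):

```python
# What the Idris type  Vect n a -> Vect m a -> Vect (n + m) a  guarantees,
# demoted to a runtime assertion in a dynamically-typed language.

def concat(xs, ys):
    out = xs + ys
    # In Idris this length equation is part of the type and is proved
    # statically; here we can only check it after the fact.
    assert len(out) == len(xs) + len(ys)
    return out
```

The gap between the two is the point of the message above: the dependent type moves the "proof" from runtime into the declarative type-level language.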
t
His main point seems to be that external DSLs are bad, but that writing declarative-style code can be good and composable. The embedded HTML example he gives reminds me a lot of how the Html module works in Elm. https://package.elm-lang.org/packages/elm-lang/html/latest/Html