# thinking-together
d
we keep telling computers how to work instead of what they should do
🦞 1
p
Prolog and Haskell users might disagree with you. Now if we could only figure out why neither one took over the world...
k
There are quite different expectations about telling computers "what they should do". One extreme is "let me give instructions in plain English, with the computer knowing the context as well as any person I might talk to". The other is "let me write a formal specification, and have the computer derive a provably correct implementation from it". Prolog and Haskell are in the second camp, but they haven't achieved the goal so far. Prolog accepts the formal specification but then solves it for each specific problem by trial and error, rather than deriving once and for all a suitable algorithm that works for many inputs. Haskell does nothing more than any other programming language: it cannot do anything with just a specification. But functional code looks more similar to specifications, so there is the illusion of progress.
As for the first goal, it has been the holy grail of AI for a few decades. Recently we have discovered that stating a goal informally entails the risk of the computer filling in the blanks in ways that we don't like.
👍 1
☝️ 3
a
I think using a computer should be more about asking 'what if' than telling it what or how
🤔 2
❤️ 3
d
Just to explicitly confirm, you know about "What not How", the declarative programming battle cry? Is that where the OP comes from? Or just amazing coincidence?
I call this The Inversion or the Imperative-Declarative dual, fwiw 😄
b
SQL is perhaps the biggest success of "what not how" so far. (EDIT: biggest in impact and audience reached, including people who don't consider themselves programmers; I've seen someone call it a success #CLYCGTCPL)
a
"Conventionally, programming techniques are divided into either an imperative or declarative approach, the first addressing the question of how the program should run and the latter addressing what the program should achieve. Live coders instead tend toward the question “What if?,” where the notation is not used to describe a desired procedure or outcome but instead to simply take the next step in an exploration. Each such step is guided more by the coder’s musical results of the previous step as they are perceived and less by an overall plan. The role of notation in live coding then is not to define, prescribe, record, or transcribe but to take a step into the darkness, into which the interpreter immediately throws light." https://livecodingbook.toplap.org/book/
s
On closer inspection, the boundary between “what” and “how” is often blurry at best; at least I have a hard time finding it. Consider SQL. There are many details about “how” that are not hidden. You have to make a choice about how the schema should be split across tables by deciding which normal form to use for representing the exact same information. For a given schema, there are multiple alternative queries that return the same results but have wildly different performance characteristics (a simple example is using a JOIN vs an IN clause; see the sketch at the end of this message). Explicitly having to join a primary key with a foreign key is also arguably about “how” and not “what” (why is this not implicit? it doesn’t make sense to join on another column). Prolog and Haskell programs end up in the same situation. I consider SQL successful and useful, but I hesitate to call it declarative. Many other declarative languages have similar issues. In “Hints and Principles...”, Butler Lampson says declarative just means having fewer steps:
> I agreed to write a piece for Alan Kay’s 70th birthday celebration [R60], and recklessly provided the title “Declarative Programming”; this seemed safe, since everyone knows that declarative is good. When it came time to write the paper I realized that I didn’t actually know what declarative programming is, and searching the literature didn’t help much. I finally concluded that a program is declarative if it has few steps; this makes it easier to understand (as long as each step is understandable), since people are bad at understanding long sequences of steps. Often it’s also easier to optimize, since it doesn’t commit to the sequence of steps the machine should take.
The idea of how vs what seems related to non-declarative vs declarative, and also to incidental vs essential complexity. For instance, you could say you only want to express the essential part of the solution in your program. In any case, I think we want both, what and how: we want to specify the “what” to clearly express the semantics of the system, and to express the “how” for pragmatic reasons.
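To make the JOIN vs IN point concrete, here is a minimal sketch; the schema and all names are invented for illustration:

```sql
-- Hypothetical schema: customers and their orders.
-- customer_id is the primary key; orders.customer_id is the foreign key.
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers (customer_id),
    total       NUMERIC NOT NULL
);

-- Both queries ask the same "what": customers with at least one order.

-- Version 1: JOIN. Note the join condition spelling out PK = FK by hand,
-- and the DISTINCT needed to undo the row multiplication the join causes.
SELECT DISTINCT c.customer_id, c.name
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.customer_id;

-- Version 2: IN with a subquery. Same result set, but the optimizer may
-- choose a different plan, and on some engines the two queries perform
-- very differently.
SELECT c.customer_id, c.name
FROM customers AS c
WHERE c.customer_id IN (SELECT o.customer_id FROM orders AS o);
```

Both queries express the same “what”, yet the choice between them, and spelling out the PK = FK condition by hand, is pure “how”.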
❤️ 1
k
It just sunk in for me that this top post might be a response to the previous question about problems. If so, @Don Abrams I'd say this is a solution masquerading as a problem. You're assuming a value without motivating it. What ills stem from the way we currently tell computers how to work?
d
mostly it increases the amount a person needs to know to efficiently solve a problem
k
I try to distinguish between forces and problems. Forces are one-sided. Buying something costs money. Moving a rocket costs propellant. A problem contains at least 2 opposing forces: you have to buy this thing because... The opposing force is usually rooted in a specific context. A rocket ship has a very tight weight budget. What's the context in which the amount a person needs to know is the critical bottleneck in computing? And why is this particular segment of stuff a person needs to know on the chopping block, and not, say, reading/writing, or math, or basic programming concepts, or the domain being operated on, or a bunch of stuff in between?