# thinking-together
j
What are people’s opinions on Behavioral Programming?

https://youtu.be/cXuvCMG21Ss

👍 3
e
There are several competing theories of how to look at programming. One is the actor model, a purely object-oriented conception from Hewitt, I believe, embodied in message-passing languages like Smalltalk. Other systems make a finite state machine explicit, with state transitions caused by various events. There are other approaches. But as Joe Armstrong pointed out in his talk with Hoare and Hewitt, at the reunion of the "3 old men", only his Erlang/Elixir system actually worked. The concept of having the agents track the state themselves leads to chaos: an almost-impossible-to-debug system. I think Luca's example of TicTacToe shows how it makes the game far harder to understand and prove correct than something much more conventional.
👍 1
e
"Who needs it today?" Paul Graham asks this question a lot. It helps you work out who the early adopters are. This rather looks like an idea looking for a problem, which sometimes works. But it can be hard to find early adopters when you start with an idea and then go searching for them. Have a read of http://www.paulgraham.com/bronze.html
p
I’ve checked out some Conal Elliott videos on YouTube and I’m a fan (for now, at least). 😄 I try to use technologies that are conceptually closer: I’m learning cycle.js for “FRP” (it can be married with react.js). This (stream-based programming / “FRP” / real FRP) and Dependent Types are the two things I’d like to focus on over the next few years. I really believe they will help a lot, and going mainstream is just a matter of time. (But sure, knowing a bit about this industry, it can take a long time for something to become mainstream.)
j
@Eddy Parkinson Thanks, that was a very good read.
i
I heard about behavioral programming while researching Harel's statecharts (which are themselves a nice idea). I'd like to try it out in a toy project for the time being, since the idea of moving the immutability to the code base itself is tempting. That's how I see those b-threads as of now: as a means of implementing an immutable code base. Although "append-only" better describes the situation.
g
This reminds me of a project I made where the program parsed many different versions of an XML file (it was a long time ago). I wrote version 1 as a recursive descent parser class. Version N+1 extended version N and overrode the parts of the AST handling that had changed. It meant that I didn't break parsing of older messages. It did mean that I couldn't deprecate the code for old versions that weren't used anymore. It made it easy to see what changed between versions.
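Roughly, the shape was something like this (hypothetical class and method names; a from-memory sketch, not the real code):
```js
// Each schema version extends the previous parser and overrides only the
// parts of the AST-building that changed, so older messages keep parsing.
class ParserV1 {
  parse(xml) {
    return { header: this.parseHeader(xml), body: this.parseBody(xml) };
  }
  parseHeader(xml) { /* v1 header rules */ return {}; }
  parseBody(xml)   { /* v1 body rules   */ return {}; }
}

class ParserV2 extends ParserV1 {
  // Only the body element changed in v2, so only that method is overridden.
  parseBody(xml)   { /* v2 body rules   */ return {}; }
}
```
The diff between ParserV2 and ParserV1 is exactly what changed between the two schema versions, which is why it was easy to see what changed.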
a
@Edward de Jong / Beads Project
… it makes the game far harder to understand and prove correct than something much more conventional.
This seems right at first glance, but I would want to learn more about the tooling before I agree. I still have only skimmed the linked papers, but the original authors already rely on model-checking for various aspects of BP, so I suspect proving correctness of tic-tac-toe is within reach. As for understanding, well, if the idea that the event log is all that matters to understanding legacy code is true, then a tool that lets you manually or programmatically explore potential event logs might be even better than trying to find all the move-validation code in a traditional code base. Certainly, if the way you understand a program is with event logs, that would make answering reachability questions like "what happens when 'X' tries to move" or "what can prevent an 'X' move" straightforward. The answer to the first is a printout, and the answer to the second is some representation of "if it's preceded by an 'X' move." I'm definitely imagining the best possible scenario, though. BP is on my reading list now, so perhaps I'll be brought back to earth soon. 🙂
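For concreteness, the kind of query I'm picturing would look something like this (the log format, the event names, and the ':rejected' marker are all invented for illustration):
```js
// Hypothetical explored event logs: each entry is one run of the game.
const logs = [
  ['X(0,0)', 'O(1,1)', 'X(0,1)'],
  ['X(0,0)', 'X(0,1):rejected'],
];

// "What can prevent an 'X' move?" -- collect whatever immediately preceded
// a rejected 'X' move in any explored log.
const blockers = new Set(
  logs.flatMap(log =>
    log.flatMap((event, i) =>
      event.startsWith('X') && event.endsWith(':rejected') ? [log[i - 1]] : []
    )
  )
);
console.log([...blockers]); // => [ 'X(0,0)' ]  i.e. "preceded by an 'X' move"
```
A real tool would enumerate the logs by model-checking or simulation rather than listing them by hand, but the query itself could stay this simple.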
k
@alltom It would be a great breakthrough if we discovered that decomposing problems into behaviors aided in creating verified models for them. I haven't seen this yet, but it seems worth exploring. Also, you're right that BP can seem quite useful if you imagine the best possible scenario. But that doesn't seem to be how it's being sold. It seems to be sold as a methodological silver bullet: just use it and all problems become easy. If they instead said, "if used well it will help," I'd be much more amenable.

Exhibit A: "What if we could make changes or understand how complex systems work without having to read and maintain an artifact." (OP, 3:34) This seems wildly overblown. You still have to maintain the artifact, especially if you want the desired model to be a linear/affine combination of the input behaviors, because there are certainly many more combinations of behaviors that lead to spaghetti than not. I had a conversation with Luca Matteis a month ago (https://twitter.com/lmatteis/status/1204862635537252352) where we chatted a bit about my layers as compared to BP. I'm careful not to claim my layers always help; it takes taste to decompose programs into layers the right way. Even so, BP seems to lack one thing layers provide: intermediate combinations are useful, and functionality grows in a monotonic way. What happens to a BP program if you take out one behavior? Is it still legal? Easy to reason about? Useful? I haven't seen anybody answer these questions.

Exhibit B: Here's a less technical description of BP by the creator: http://www.wisdom.weizmann.ac.il/~harel/papers/LiberatingProgramming.pdf. It seems incredibly overblown. We discussed it back in Dec 2018, but that's way beyond the visible window of this forum. Here's a comment I wrote there about it.

---
...current methods for dealing with programming the dynamics of reactivity, however powerful and convenient, suffer from the same woes: We sit in front of a screen and write (or draw) programs that prescribe the behavior for each of the relevant parts of the system over time. Then we must check/test/verify that the combined behavior of all the parts satisfies a separately specified set of requirements or constraints. ... There is no need for separate specifications for the operational tasks and the requirements thereof. Anything that falls inside the total sum of what has been played-in will be a legal behavior of the system.
I'm still wrapping my head around this vision, but I think it's ignoring the essential complexity of programming. On a fundamental level programmers deal with non-linear building blocks; interactions between constraints can be hard to imagine ahead of time. How would you gain confidence that you've "played-in" a project sufficiently to work out possible constraints? Admittedly we have trouble doing this with existing systems. But surely we need to pay more attention to constraints, not less. Representing actions physically makes it more difficult to survey all actions entered so far, the scenarios they apply in, etc. Check/test/verify is the fundamental, irreducible core of programming. Trying to eliminate it is a fool's errand.

LSC (the original Behavioral Programming system) introduces the notion of "play-in" to describe scenarios and how the system should react to them. So there'll be a natural tendency for the number of scenarios and handlers to grow. It's unclear to me how the opposite dynamic of generalizing scenarios happens. How does the system encourage noticing that two scenarios are special cases of a single one and may be coalesced? How does the programmer/user replace two played-in scenarios with a single new one? Without supporting this countervailing operation, the whole system will descend into monotonically complexifying spaghetti.

---

Based on this quote, I don't think BP started out envisioning model-checking. If they've since started to do so, I'd appreciate recent papers.
There's an interesting connection between BP and http://people.csail.mit.edu/brooks/papers/AIM-864.pdf
Imperative programming is concerned with the order in which things happen. Functional programming tries to make things as atemporal as possible, robust to multiple orderings of operations. OP shows how to write unordered behaviors -- but they rely on the operations happening in just the right order (at time 9:30). I don't understand why this is a good thing! It seems to be the worst of both worlds. I still have to think about the order in which I want things to happen, but now I can't just describe the order directly. I have to arrange behaviors to make that order emerge. And now readers can't just read behaviors and understand their purpose. They have to simulate them to understand their implications. Why is this an improvement on regular, much-maligned imperative programming? In fairness, OP is not by the creator of BP. But it doesn't seem like a strong case.
I'm at 12 minutes now, and OP is talking about modifying a program based on just reading an event trace. I love traces, and I want to nod along. But wait a minute, what if a program needed events to happen in different orders in two different scenarios? Most real-world programs have many many scenarios they need to work in. Never allowing ourselves to touch existing parts of the program seems like a bad way to reliably get the desired effect. It's not clear to me how this block means "block loadingAccount until adShown":
```js
yield {
  wait: 'adShown',
  block: 'loadingAccount'
}
```
Can somebody explain this? Is it assuming there's another b-thread somewhere pumping out 'loadingAccount' events ad infinitum?
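Here's my best guess at the semantics, as a minimal hand-rolled sketch (my own toy scheduler, not the real BP library; the loadAccount and showAd b-threads are things I added so the example actually runs, and each field is an array of event names rather than the bare strings in the talk's snippet):
```js
// Toy scheduler: each b-thread is a generator; at every yield it declares
// which events it requests, waits for, and blocks.
function run(bthreads) {
  let pending = bthreads.map(bt => ({ bt, stmt: bt.next().value }));
  while (pending.length > 0) {
    const blocked = new Set(pending.flatMap(p => p.stmt.block || []));
    // An event is selectable if someone requests it and nobody blocks it.
    const selectable = pending
      .flatMap(p => p.stmt.request || [])
      .filter(e => !blocked.has(e));
    if (selectable.length === 0) break; // nothing enabled: done or deadlocked
    const event = selectable[0];
    console.log('selected:', event);
    // Resume every b-thread that requested or waited for the selected event.
    pending = pending
      .map(p => {
        const { request = [], wait = [] } = p.stmt;
        if (!request.includes(event) && !wait.includes(event)) return p;
        const next = p.bt.next(event);
        return next.done ? null : { bt: p.bt, stmt: next.value };
      })
      .filter(Boolean);
  }
}

// Somebody has to request 'loadingAccount' for the block to mean anything.
function* loadAccount() {
  yield { request: ['loadingAccount'] };
}
// The b-thread from the video: hold back loadingAccount until adShown happens.
function* adFirst() {
  yield { wait: ['adShown'], block: ['loadingAccount'] };
}
// And somebody has to eventually request 'adShown'.
function* showAd() {
  yield { request: ['adShown'] };
}

run([loadAccount(), adFirst(), showAd()]);
// selected: adShown
// selected: loadingAccount
```
If that's right, the answer seems to be yes: the block only means something because some other b-thread requests 'loadingAccount', though it only has to request it once and stay parked there, not pump it out ad infinitum.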
The animations suggest that b-threads run in lock step, each `yield` in them taking equal time. Is this true? Certainly the examples here would have wildly different behavior for different relative timings. Being this sensitive to timing seems really bad. It gives me flashbacks to writing Verilog code and running into bugs from signals not getting to a latch in time for the next clock cycle.

Ok, I'll stop spamming this thread. Summary: I have been slowly thinking about BP over the past 1.5 years, and my opinion is slowly crystallizing into opposition. OP seems like a poor advertisement for BP. Either it's misunderstanding BP, or it's making certain drawbacks very obvious without being self-aware of doing so.
Ah, I see that http://www.wisdom.weizmann.ac.il/~amarron/BP%20-%20CACM%20-%20Author%20version.pdf (the original BP paper?) admits the possibility of conflicting b-threads in Section 5.1. They point out that:
* Conflicts can be resolved using priorities, which absolutely requires new b-threads to be aware of what older b-threads exist.
* A model-checker is required to warn programmers when conflicts may arise.

Using BP in React seems like a recipe for spaghetti until React gains a model-checker. Is there one in development somewhere?
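To make the kind of conflict Section 5.1 worries about concrete, here's what mutual blocking looks like with the toy scheduler from my sketch above (hypothetical event names):
```js
// Each b-thread requests its preferred event while blocking the other's.
function* preferCold() {
  yield { request: ['addCold'], block: ['addHot'] };
}
function* preferHot() {
  yield { request: ['addHot'], block: ['addCold'] };
}

// Every requested event is blocked by somebody, so nothing is ever selected:
// the program silently deadlocks at the first synchronization point.
run([preferCold(), preferHot()]);
```
Neither b-thread is wrong in isolation; the problem only shows up in the combination, which is exactly the sort of thing you'd want a model-checker to flag before runtime.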
I poked the author on Twitter with these concerns: https://twitter.com/akkartik/status/1227025035329703936