# thinking-together
I've been thinking about how the state of a running program could be modeled as the definition of a program changing over time. This seems closely related to "image" based programming systems, right? Does anyone have thoughts or reading related to this? (I got there thinking about the branchable/forkable database trend which tries to tie program version to state version.)
I'm not sure I follow. Does the definition of a program change at runtime? Do you mean to include user inputs as part of the definition?
Would you consider something like Redux or Recoil and a React SPA as an image?
I do think about this a lot, particularly in the context of partial evaluation, where (in short) you can supply part of the arguments to a program and get a result program that has all the consequences of those arguments baked in (as opposed to partial application, where supplying an argument probably just builds a closure).

A program changing over time is kind of how beta reduction is technically defined, right? A transformation from one lambda term to the next. The problem is that implementations using that strategy aren't (IMO) really feasible: they're not efficient, and there tends to be ambiguity about what the next step is, where IIRC different choices determine whether the thing even converges. One of my goals is a computational formalism that efficiently supports partial evaluation and this sort of program -> program mindset in general. (Key point: you want to be able to output "data", therefore data is a program.)

I'm not precisely sure what you mean by the forkable database trend. Can you give an example? I feel like I've heard of a couple of different things that answer that description in very different ways.
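To make the distinction concrete, here's a toy Python sketch (names like `specialize_power` are made up for illustration, not from any library): partial application just closes over the argument, while partial evaluation generates a new, specialized program with the argument's consequences baked in.

```python
from functools import partial

def power(base, exp):
    result = 1
    for _ in range(exp):
        result *= base
    return result

# Partial application: builds a closure; no work happens up front.
cube = partial(power, exp=3)

# Partial evaluation (hand-rolled): bake exp=3 in by generating a new
# program with the loop unrolled -- the argument's consequences are
# compiled into the result program.
def specialize_power(exp):
    body = " * ".join(["base"] * exp) or "1"
    src = f"def power_{exp}(base):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)
    return namespace[f"power_{exp}"]

cube_pe = specialize_power(3)  # body is literally: base * base * base

print(cube(2))     # 8
print(cube_pe(2))  # 8
```

Both compute the same function, but `cube_pe` is a genuinely different program text, which is the program -> program mindset in miniature.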
@Adriaan Leijnse I tweeted about this a while ago. At some point in the future, I'd like to develop a programming environment based on this principle.
Do you mean this sort of thing?


@greg kavanagh An image in this sense would contain both code and the state. It's like your editor is a whole OS that you can modify at runtime: https://en.wikipedia.org/wiki/Smalltalk#Image-based_persistence
I think the difficulty I have with this is that there's no real model of change; it happens in this uncontrolled, imperative kind of way that makes it hard to do version control and the things @Nick Smith is talking about.
Maybe you want event sourcing baked in to the environment? https://martinfowler.com/eaaDev/EventSourcing.html You could explicitly treat the input as an event stream, and treat the program as a reducer over that stream. It does get interesting when that input depends on program output that depends on previous input.
@Nick Smith love that thread!
@Andrew F yep! FRP models this dependency just fine, but to make history editing work nicely you need to know about causality. I.e. "what past event is responsible for allowing some future event to happen?"
In my structure editor tech it is possible to store both logic and data in the same interpretable data structure fairly efficiently (I've written a custom sequential tree). Because of this, creating clones/forks and storing snapshots/images of both state and logic is trivial. Diffing and merging should be possible too, but obviously come with their caveats.

Currently, one use case I'm thinking of for this would be stateful cloud functions. It should be easy and fast to write the snapshot to persistent storage, load it when the function needs to run, and then write the possibly changed version back to wait for the next run. Keeping the snapshots in storage would work as a free history of the program state (and logic) that you'd be able to open up in the structure editor. Of course, it won't just scale to infinite amounts of persistent data, and handling concurrent requests would be an extra hurdle. However, it should work fairly well for what is usually required of cloud functions. It is also able to keep much of the interpreted intermediate results and only recompute what's necessary.

Emitting some kind of "rewind" messages to external systems still wouldn't be easy, but could be possible by diffing the snapshots and then constructing the rewind message from the diff.
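This is not the sequential-tree structure described above, but the core property it relies on can be sketched in a few lines of Python: keep logic and state in one serializable structure, so a fork is a deep copy and persisting the whole "image" between cloud-function runs is a single serialization call. The `image` layout and `run` helper here are invented for illustration.

```python
import copy
import json

# One structure holding both logic (code as data) and state.
image = {
    "logic": {"counter": ["state['n'] += step"]},
    "state": {"n": 0},
    "step": 1,
}

def run(img):
    """Interpret the stored logic against the stored state."""
    for stmt in img["logic"]["counter"]:
        exec(stmt, {}, {"state": img["state"], "step": img["step"]})
    return img

snapshot = copy.deepcopy(image)       # a fork/image of code *and* state
run(image)

assert image["state"]["n"] == 1       # the live image advanced
assert snapshot["state"]["n"] == 0    # the fork is unaffected

blob = json.dumps(snapshot)           # persist between invocations;
                                      # kept blobs form a free history
```

Diffing two such blobs is what would let you reconstruct "rewind" messages after the fact.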
I hope to experiment with modeling user input in a beginner's purely functional env as changing the program text. 100% vaporware, but I'm excited about this idea for several reasons: (1) Smuggle in some powerful techniques such as "event sourcing", React/Redux, and time-travel debugging without users having to learn anything new beyond what they learn anyway about their dev environment while editing code (e.g. the env should be able to show the state at every point of the code). (2) Dissolve the difference between user actions & programming. To explain this one, I need a side rant. What is the one most important idea in all of programming? IMHO, composing bigger programs out of smaller programs. How exactly you do that depends on the language: in C it's functions, in Java classes & methods, etc. But how do we begin teaching mainstream languages, say C or Java?
```c
#include <stdio.h>                  // OS input/output facilities

int main(int argc, char **argv) {   // OS calling convention
  char *name = argv[1];             // OS calling convention (I skipped the error handling)
  printf("Hello, %s!\n", name);     // OS input/output facilities
  return 0;                         // OS calling convention
}
```
All the attention is on process<->OS interfaces 😦. Which would be appropriate if teaching a shell, where you compose processes, but is irrelevant noise for teaching modularity within (say) C. If you accept this argument, you must start teaching in a REPL (or better), where you don't build an OS executable; you define a function (or method etc.) and the user "interacts" with it by using the language's normal call syntax.
```python
>>> def greet(name):
...     return "Hello, " + name + "!"
...
>>> greet("Alice")
'Hello, Alice!'
```
[Corollary: the 1st language mustn't be a compiled language.] /rant

So, if you're writing a game where you move a piece, or attack, the MVP is appending

```
| move ...
| attack ...
```

to the program code! Each step would take the previous game state & compute the next state. Well, not necessarily append at the end. At the highest level a game is a

```
compute_inner_state | render
```

pipeline, and the place to append player actions is inside `compute_inner_state`, where you work on inner state. [Game-travel debugging requires an env where you can stop in the middle of `compute_inner_state`, as if the rest is commented out, and run `| render` from there.]

• If you have multiple moving units/pieces, it's tempting to append code into their individual "state functions", instead of descending a data structure to modify the right one. [200% vaporware]

• I'd hope collaborative editing can stretch this into an MVP for multi-player turn-based games. [300% vaporware, requires tricky modeling of time and concurrency. I have some ideas around "wait until _T_" meaning "this function does early return if time < _T_", but entirely unproven...]

Well, that's minimal but won't feel like a "normal game UI". Plus, you need some constrained mode temporarily restricting coding to "legal actions in the game". OK, but I want a "UI builder" which is explicitly about binding key presses or clickable screen regions to be shorthands for code modification. Like Emacs: one should not code explicit "I/O", just customize the environment's event loop.

(3) Even more ideological: I want people exposed to the subversive idea that any action they take interacting with a computer (key press, click etc.) is a form of programming. And that they can always script bigger programs out of those.
[Hmm, point 3 sounds like 2. I feel there is an ideological distinction but don't know how to articulate it at this time.]
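The "appending player actions to the program" idea above can be sketched in Python. Everything here (`move`, `attack`, the `OPS` table, `compute_inner_state`) is invented for illustration, under the assumption that the program is literally a list of pipeline steps that a key binding appends to:

```python
from functools import reduce

def move(state, dx):
    """One pipeline step: previous state in, next state out."""
    return {**state, "x": state["x"] + dx}

def attack(state, dmg):
    return {**state, "hp": state["hp"] - dmg}

OPS = {"move": move, "attack": attack}

# The "program" is a list of steps; pressing a key appends a step,
# i.e. user input edits the program text.
program = [("move", 2), ("attack", 5), ("move", -1)]

def compute_inner_state(program, initial):
    return reduce(lambda s, step: OPS[step[0]](s, step[1]),
                  program, initial)

state = compute_inner_state(program, {"x": 0, "hp": 10})
# state == {"x": 1, "hp": 5}

# Game-travel debugging: run a prefix of the program, as if the
# rest were commented out, then render from there.
earlier = compute_inner_state(program[:1], {"x": 0, "hp": 10})
```

Editing or deleting a step in `program` and replaying is exactly the history editing discussed earlier in the thread.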