# thinking-together
p
Finally got some time to round up the prior art on a largely forgotten idea in computing called managed time. I think this is in some ways what version control does to files, but applied at the more granular level of variables and functions, and the kinds of interactions that spawn from that. I got the hint initially from Alan Kay's writings, and I think there are a lot of fertile areas here to dig into: https://prabros.com/readings-on-time Would love to hear your feedback if I have missed any related material in this post.
❤️ 11
p
Thanks. This is nicely put together. I don't have much to add, but I wish I did; there's so much more thinking to be done in this space! I guess I would throw linear types (in the sense of linear logic) in there as a way to reason about time. I switch back and forth between Rust and Swift these days, and I am more and more uncomfortable writing in languages with no way of saying "no one else can change, or ever refer to, this value after this point." It's just so much harder to reason about the behavior of an application.

On an unrelated note, I've thrown my chips on the logic programming square here, following Bloom and others. I feel there's a great deal of exploration to be done even just with the theory and application of CRDTs. Logic programming gives me huge flexibility to express new ideas in terms of a single "happened before" relation. I know that building in theorem proving a la Alloy or TLA+ would help a lot too, as would interactive tools for visualizing and debugging. If anyone else is exploring anything like this, please let me know. I've only just begun.
❤️ 3
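A minimal sketch of the CRDT idea mentioned above, using a grow-only counter (G-Counter) as the simplest example. This is an illustrative toy in Python, not any particular library's API: each replica increments only its own slot, and merge takes the element-wise max, so merges are commutative, associative, and idempotent, and replicas converge regardless of message order.

```python
# Toy G-Counter CRDT: replicas converge no matter how merges interleave.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count contributed by that replica

    def increment(self, n=1):
        # A replica only ever bumps its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max: applying the same merge twice changes nothing.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```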
d
There was a project that did something like this (i.e. references to variables and functions are bound to whatever version of them existed at the time the programmer coded the reference). Am I thinking of Lamdu?
A related idea of mine:
* Small set of datatypes (think JSON); all code is made of those too (think Lisp). Lexical scope (the execution context) is also stored this way at runtime (think Scheme).
* All operations (with few exceptions) in "code" therefore amount to operating on that kind of data/structure.
* If that's the whole universe of a running program, then user interaction also boils down to (directly or indirectly) causing the same kind of operations on the same kind of structure.
* Coding involves editing a live structure, rather than writing text.
* "Code" is actually stored as a list (or DAG, really) of operations to apply, in some context.
* Coding is done by recording (rather than just doing-and-forgetting) actions manually taken by the coder on the live structure. The current state of "the code" is actually a pure function of the actions taken.
* Effectively, there's no difference between "code" and "programming", and both can be reviewed as a time- (or ordered-) sequence of actions, with "the past" edited directly to see what the result is in different cases, etc. It's effectively the same kind of stuff you could do with a Git history.

So the lines between code, data, programming, user interaction (with a full undo/redo history tree), and source control are all blurred.
👍 1
❤️ 1
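The "state as a pure function of recorded actions" idea above can be sketched in a few lines. This is a hypothetical toy in Python, with made-up operation names: state is a fold over the op log, so you can edit "the past" and replay the whole history to see a different result.

```python
# Toy "code as a recorded operation log": current state is a pure
# function of the ops, so editing an earlier op and replaying gives
# the alternate history's result.

def apply_op(state, op):
    kind, key, value = op
    new = dict(state)          # pure: never mutate the input state
    if kind == "set":
        new[key] = value
    elif kind == "delete":
        new.pop(key, None)
    return new

def replay(ops, initial=None):
    state = dict(initial or {})
    for op in ops:
        state = apply_op(state, op)
    return state

history = [("set", "x", 1), ("set", "y", 2), ("delete", "x", None)]
print(replay(history))        # {'y': 2}

# Edit "the past": change the second op, then replay everything.
alternate = list(history)
alternate[1] = ("set", "y", 99)
print(replay(alternate))      # {'y': 99}
```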
s
Lamdu does do something like this, IIRC. They 'copy' the type signature of the function into every call site. When you modify the function definition's signature, they know both the old signature a call site was bound to and the new one of the definition. Of course, because it is projectional editing, the 'copy' is usually hidden when you look at the call site.
p
You may be thinking of https://www.unisonweb.org
✔️ 2
Following on to my earlier reply, Nikolas Göbel wrote a nice, if terse, round-up of ideas relevant to time (and divergence) in databases here: https://www.nikolasgoebel.com/2019/12/30/perspectives-2019.html
❤️ 1
s
When I look at this space, I think of the discussions and approaches as falling within three vague categories.

First is the 'essence' of the ideas - e.g. McCarthy's association of facts with pseudo/logical time (and possibly place). This is really a variant of the meta idea "make the implicit explicit".

Second is 'data structure' oriented approaches. These are CRDTs, Datomic and such. We consider 'just data' flowing through and persisted within the system, and try to tag it with pseudo timestamps. We try to auto-merge and to query on a consistent 'view'. All data references carry the pseudo timestamp. Even MVCC (and git) have this flavor - the git hash is the 'pseudo timestamp' of git data.

Third is the 'system' oriented approaches - here it's not just data; entire systems and subsystems have pseudo time. Examples are Reed's NAMOS, Jefferson's Virtual Time, the Croquet project, etc. The main difference I see here is that all messages have the pseudo time attached. It's not just querying and modifying data: any message between two nodes will have the pseudo time attached, which specifies which 'version' of the world the message comes from and refers to.
👍 1
❤️ 2
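A minimal sketch of the "every message carries a pseudo time" idea, using Lamport clocks as the textbook example (this is an illustrative toy, not any of the named systems): each node stamps outgoing messages with its counter and advances its own clock past any timestamp it receives, so the timestamps respect the "happened before" relation.

```python
# Toy Lamport clock: pseudo time attached to every message.

class Node:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        # Sending is an event; the message carries the pseudo time.
        self.clock += 1
        return (self.clock, self.name)

    def receive(self, message):
        # Jump past the sender's timestamp, so receive > send.
        ts, sender = message
        self.clock = max(self.clock, ts) + 1
        return self.clock

a, b = Node("a"), Node("b")
a.local_event()          # a.clock is now 1
msg = a.send()           # msg is (2, "a")
b.receive(msg)           # b.clock becomes max(0, 2) + 1 == 3
assert b.clock > msg[0]  # the receive is ordered after the send
```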
p
Sure. You could say: the development of concepts, the development of tools, and the coherent deployment of tools in the real world.
I’m interested in informing our concepts as well as we can by studying the way time is encoded, implicitly and explicitly, in our existing tools and practices. What concepts can you not encode directly in a system built on an append-only event log?
💯 1
d
Hmm, I'd say that any "mutable" state can be recreated from an initial state and an append-only sequence of the mutations. So nothing, right? I think this is also called "event sourcing"?
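The event-sourcing claim above fits in a few lines. A toy sketch in Python (the event names are made up): "mutable" state is never stored directly; it is recomputed by folding the append-only log over an initial state, and any past state is recoverable by truncating the log.

```python
# Toy event sourcing: state = fold(apply_event, initial, log).

def apply_event(balance, event):
    kind, amount = event
    return balance + amount if kind == "deposit" else balance - amount

log = []  # append-only; never edited in place

def record(event):
    log.append(event)

def current_state(initial=0):
    state = initial
    for event in log:
        state = apply_event(state, event)
    return state

def state_as_of(n, initial=0):
    # Replay only the first n events to recover a past state.
    state = initial
    for event in log[:n]:
        state = apply_event(state, event)
    return state

record(("deposit", 100))
record(("withdraw", 30))
assert current_state() == 70
assert state_as_of(1) == 100
```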
s
I think 'append only' is a sound idea. You can encode everything in it. But then, you can encode everything in any database. The question is what exactly is the information recorded in the log, and where does it exist? Time is encoded implicitly in many places - any API call to a library, or message going to a service (assuming both are stateful), has implicit time. The state itself has implicit time. Any copy of information has implicit time (e.g. a cache in front of a database implicitly refers to a specific time in the database, per entity).
Another often overlooked place: the 'code version' itself refers to some time. When you roll out a new version to part of the system, the code there is newer (~time) than the other code. The data/information in DBs etc. usually does not make the code's time explicit - maybe it should.
👍 1
Any information displayed to the user on any screen has implicit time (it represents the info at a specific time) - sometimes this is explicit as a version number. Essentially any projection of information has associated time.
d
I suppose that's even true of the messages in this thread :)
s
Yeah. I think it's not just 'time' but also the information model. E.g. what is time associated with? Even pseudo time by itself is pointless. What is useful is a 'fact' in some information schema, e.g. (property a, value b, time t), which is true at some time - the stuff usually put in append-only event logs. So what does the info model look like? Is it key/value with global keys? How does adding a new key fit in? Where do agents/nodes fit in, and how is intention encoded? E.g. how do we represent that the user clicked a button? They were looking at a version of some information (a list of facts at time t), but there is lag and uncertainty until the fact is 'accepted' by the system. So I think we have to make even the nodes explicit in our model. Then all messages become idempotent since everything is explicit (x said y at time t) - and can be replicated and distributed without issue.
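The idempotency point above can be sketched directly. A toy in Python (the fact schema and names are made up for illustration): when each message is a fully explicit fact "node x asserted value y for property p at time t", delivering it twice or out of order cannot change the resulting fact set, so replication is safe.

```python
# Toy fact log: fully explicit facts make message delivery idempotent.

facts = set()  # each fact: (node, property, value, logical_time)

def assert_fact(node, prop, value, t):
    # Adding the same fact twice is a no-op: sets deduplicate.
    facts.add((node, prop, value, t))

def latest(prop):
    # Read the value with the highest logical time for a property.
    matching = [f for f in facts if f[1] == prop]
    return max(matching, key=lambda f: f[3])[2] if matching else None

assert_fact("alice", "color", "red", 1)
assert_fact("bob", "color", "blue", 2)
assert_fact("alice", "color", "red", 1)  # duplicate delivery: no effect
assert latest("color") == "blue"
assert len(facts) == 2
```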
p
@Dan Cook Could it be: https://www.expressionsofchange.org/videos/ by any chance?
@Dan Cook Also, your second comment reminded me of Operational Transformation: https://hackernoon.com/operational-transformation-the-real-time-collaborative-editing-algorithm-bf8756683f66
@shalabh That tripartite Concept / Data Structure / Whole System partition makes a lot of sense. This is really something I would like to explore, especially in the context of a design tool, which I think holds a lot of promise. And the idea of making the identity of nodes explicit is also very interesting. That would turn it into a conversation between agents, with a coherent story emerging from it. Reminds me of Conversation Theory from Cybernetics: http://worrydream.com/refs/Bolt%20-%20Graphical%20Conversation%20Theory.pdf
❤️ 2
s
Lamdu and Unison are interesting, but I don't see how they model time differently at runtime. They do model the 'code' differently, though.
✔️ 1
o
@shalabh wrote:
Then all messages become idempotent since everything is explicit (x said y at time t) - and can be replicated, distributed without issue.
It is exactly this property that I find interesting for building offline-first collaborative editing environments. More complicated abstractions can be built on top of it. The article "Data Laced with History: Causal Trees & Operational CRDTs" by Archagon made me understand all this.
❤️ 3