# thinking-together
i
I'm thinking more about the sensation of time, as it pertains to the execution of code. (Yeah, back on my bullshit.) I see a spectrum here — a spectrum of different sensations of time for different ways we interact with computation.

On one end of the spectrum, we have raw math. There's not supposed to be any sensation of time in the evaluation of math. A variable always has a specific value; relationships either exist or they don't. It might take you (or a computer) some time to crunch values and arrive at a result, but that's absolutely not supposed to be part of the aesthetic. Conal Elliott's Denotational Design is an application of this sort of thinking to software design. Lambda calculus, Curry-Howard, and some of the more hardcore FP languages all exist — infinitely, suspended frozen in midair — over here. Of course, no computer language is actually timeless (that violates physics, and this is addressed in Church-Turing and related work, and we all agree never to try that again), but the desired sensation — the aesthetic — is one in which time is not a constraint or concern.

On the other end of the spectrum, we have mechanical computers. There's no avoiding the sensation of time when operating these clockwork beasts. You're required to think about the passage of time as you plan an execution, or else the result will be nonsense, or malfunction. Nothing is instant; nothing exists until you build it. Here we find the CAP theorem, Turing machines, and Rich Hickey's The Language of the System, all of them toiling, sweating, grinding, churning.

[Aside: note that Functional Programming is orthogonal to this spectrum — it is not bound to the math side. On either extreme and anywhere in between, you can have programming that's about immutable values (or not), static vs dynamic binding, data & behaviour co-located (or not), for-each vs map, place-oriented vs value-oriented, and so forth.]

I've spent all my time over in the mechanical labour camp — this is where Hest lives. So I don't have much insight at all into the crystal tower of pure evaluation. So beyond just suggesting "Hey, talk about this spectrum" (which, granted, I am suggesting), I'd also like to know what examples you can point to that obviously violate this common alignment of aesthetics.

For example: what's the strongest "I feel the passage of time in execution" sensation you can get when working with something like Coq, or Haskell, or APL? Is there some step debugger for SML that really lets you feel the iterative progress through execution? Or on the other side, what's out there that takes a process rooted in time — like CAP — and makes it shake that feeling of temporality? Look at something like Erlang/OTP — they take Prolog (timeless) and reify the sensation of process ("let it crash"). Who else is doing that? Is anyone doing it in the other direction?
🤔 4
c
Potentially in this direction could be something like temporal.io, a workflow language that makes keeping values on the stack for weeks at a time a comfortable endeavor. (The contrast: usually we put values into a database so they can survive an upgrade over the course of those weeks.)
🤔 1
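A minimal sketch of what that feels like, assuming Temporal's Python SDK (the temporalio package); the workflow and all its names below are invented for illustration. Ordinary local variables survive week-long durable timers, because Temporal persists workflow history and deterministically replays it on recovery:

```python
# Sketch: plain local state that survives for weeks (hypothetical workflow).
import asyncio
from temporalio import workflow

@workflow.defn
class RenewalWorkflow:
    @workflow.run
    async def run(self, customer_id: str) -> str:
        reminders_sent = 0                        # ordinary local state...
        for _ in range(3):
            # ...that outlives restarts and deploys: inside a workflow,
            # asyncio.sleep is a durable timer, and on recovery Temporal
            # replays history to rebuild this exact stack.
            await asyncio.sleep(7 * 24 * 60 * 60)  # one week
            reminders_sent += 1
        return f"{customer_id}: sent {reminders_sent} reminders"
```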
Also, on another side of this that may or may not bear fruit: https://www.liquidsoap.info/, a language for programming media streams.
c
I don't really know enough about the Haskell end of this spectrum, but my sense is that the units for it are probably something like "percent of the code that has some kind of invariance guarantee". If you take something multi-paradigm like Java or C++, it is possible to write very mathsy code by leaning heavily on const, interfaces, classes, etc., and earn yourself very strong guarantees about invariance over time. Different languages force or encourage you to bake more guarantees into your code, but it's something that is possible in any language. Even something like Python, in the real world, will probably come with a stack of Organisational Practices and socially enforced norms that let you make similar assumptions in practice, even though you could in theory overwrite the + operator at runtime. Accepting these kinds of restrictions limits your expressivity but can superpower your understanding of a codebase, because it lets you make massively compressing abstractions and draw very clean lines ("OK... this can't possibly affect this... It must have come from A, B or C..."). Time invariance is an affordance that lets you better "play computer in your head", as Bret Victor would say. So basically I think the reason Haskell programmers play down the actual execution of their code is that they are backing themselves to have already run it accurately in their head beforehand.
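A small illustration of buying such a guarantee in Python (my example, not Chris's): a frozen dataclass turns "this value never changes" from a social norm into an enforced invariant, which is exactly what lets a reader draw those clean lines:

```python
# Sketch: baking a time-invariance guarantee into Python.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
try:
    p.x = 99.0                  # frozen instances reject assignment
except AttributeError as e:     # FrozenInstanceError subclasses AttributeError
    print("mutation rejected:", e)
# Every later read of p can now be assumed identical to the first one.
```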
I basically believe you should start from the opposite end. Computers are physical machines that literally, actually, move and change with the forward arrow of time as part of the physical universe. Because they change in very small ways and very quickly, they kind of fly below our intuition radar. They are also highly deterministic compared to naturally occurring phenomena. Still, they actually do perform an irreversible action just like a computer built out of metal ball bearings falling through chutes (https://www.turingtumble.com/). If you want to understand what happened, you should just record what happened and then interrogate that recording with a powerful suite of thinking tools.
💯 1
🤔 1
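As a toy version of "record what happened, then interrogate the recording" (my sketch, not an existing tool): Python's sys.settrace hook is enough to capture a crude execution trace you can query after the fact:

```python
# Sketch: record every executed line plus its locals, then query it.
import sys

trace = []  # (function name, line number, snapshot of locals)

def recorder(frame, event, arg):
    if event == "line":
        trace.append((frame.f_code.co_name, frame.f_lineno, dict(frame.f_locals)))
    return recorder  # keep tracing inside this frame

def mystery(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.settrace(recorder)
mystery(4)
sys.settrace(None)

# Interrogate the recording: how did `total` evolve over time?
for name, line, locs in trace:
    if "total" in locs:
        print(f"{name}:{line} total={locs['total']}")
```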
m
Temporal Logic of Actions sounds like an in-between point on your spectrum:
This paper introduces TLA, which I now believe is the best general formalism for describing and reasoning about concurrent systems. The new idea in TLA is that one can use actions–formulas with primed and unprimed variables–in temporal formulas. An action describes a state-transition relation. For example, the action x’=x+1 means approximately the same thing as the programming-language statement x := x+1.
https://www.microsoft.com/en-us/research/publication/the-temporal-logic-of-actions/
l
@Ivan Reese
Functional Programming is orthogonal to this spectrum
I don’t get this. While I would normally think of functional programming as being on the timeless end of the spectrum, you make a good point that it can also be on the other end. But isn’t it the case that programming styles that contrast with FP often do have to be structured in time? To pick from your examples, how would you have mutable values without modeling time? Or with
forEach
don’t you need time for side effects to work?
@Chris Knott Are you taking an anti-abstraction stance that programmers should be aware of time as it exists in the internals of the language/environment they’re using (because they’ll have to deal with it eventually anyway)? Or are you just saying it’s good for them to think about/deal with time in their code, but it’s OK if it’s abstracted into a different (likely more limited) form?
i
synchronous languages (Lucid, Esterel, Lustre) have a pretty qualitatively different experience of time
i
Great references and exploration of the topic — thanks everyone. @Cole
keeping value in the stack for weeks at a time
I love this idea. I'm so accustomed to non-live programming that I forget about the completely different sensation of time that you get from a live environment, where state can be thought of as non-volatile by default.

@Chris Knott — I like that your two comments are "here's why [existing thing] is what it is" and "here's what I want to exist". Both make sense to me!

@Mariano Guerra — Queued! Though unlike Lamport's earlier work (you know, the hits), this one looks like it might fly over my head a bit :$

@curious_reader — Croquet sure is interesting, hey? Though I'm not sure how it relates here — I can imagine ways, but if you had something specific about it you wanted to highlight, that'd be appreciated.

@Luke Persola — You're right, I under-considered what I was saying there. I was focused on pointing out that you can have all that conventionally "functional" stuff within a programming system that does feel very mechanical / process-oriented. I didn't consider enough whether it's possible to have the non-FP stuff within a system that feels timeless. Good point, will have to think about this some more.

@ibdknox — Nice pulls! I suppose the same applies to RTOSes, CAN bus (etc), and perhaps even programming within a high-end game engine (eg: frame cadence, Carmack's preference for algorithms that are slower average-case but with less variability, etc). Would love to be able to just... order a study on this.
s
ok, stream of consciousness incoming... On the concept of math vs mechanics, I perceive math as this vast graph that always exists, but in imagination. Mechanics is about materializing parts of this graph in some physical form. The machine manifests a subset of this graph, and traverses to other nodes, pulling in more and more of this graph, from the imaginary to the physical, as physical time progresses. If you type
2 + 3
in a system, you've got a graph with three nodes (2, 3, +, connected in a nice tree) and after some time, you've got a 4th node (5, connected to the [2, 3, +] bundle of nodes), but in mathland all 4 nodes (and the edges and more) pre-existed. On the intersection of these ideas, CRDTs come to mind. You have the semi-lattice, which is very mathematical and static. However, you have the actual values at different nodes, which correspond to one node in the lattice at any point in time, but they change over time and eventually walk up the math lattice to meet at the top. Croquet came to my mind as well when reading the prompt. On the surface it is full-mechanical: the machine moves, step by step, and is implemented in that style. However, look between steps - each step is functional, and it must be deterministic - that's what keeps all the different systems in sync. There is no logical time within a step (i.e. a step can be considered instantaneous; the next input cannot interrupt a step).
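To make the lattice-walking concrete, here is a minimal sketch (mine, not the poster's) of a grow-only counter CRDT: merge is the semilattice join, so replicas can only move up the lattice, and any two that exchange state meet at the same point:

```python
# Sketch: a G-Counter CRDT. merge() is a join (pointwise max),
# so state only ever climbs the lattice until replicas agree.
class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other: "GCounter") -> None:
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); b.increment(); b.increment()
a.merge(b); b.merge(a)                  # merge order doesn't matter
assert a.value() == b.value() == 3      # both replicas meet at the join
```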
c
@Luke Persola Basically the first. You should at least be aware of which end of this spectrum is the actual bedrock, and which is the potentially leaky abstraction. I've used this example before but CPython and Pypy implement
list.pop(0)
(popping the first item from a list) differently. Pypy can get away with just bumping a pointer, while CPython has to shift the rest of the list down. So it's O(1) vs O(n). There is literally no way to discover this "in system". This is the sort of thing that comes up in end-user programming like data science and will actually confuse users. It can make the difference between a visualisation being interactive or not. The weakness in my position is that in a lot of cases, you truly don't ever need to worry about how the abstraction is actually realised/implemented. Most actual use cases are just IFTTT-style plumbing or CRUD apps with N in the hundreds.
🍰 1
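The closest you can get to discovering this "in system" is measuring wall-clock time; a rough sketch on CPython, where list.pop(0) shifts the remaining elements but collections.deque.popleft() is documented constant-time:

```python
# Sketch: the asymptotics are invisible in the code, visible on the clock.
import timeit
from collections import deque

def drain_list(n):
    xs = list(range(n))
    while xs:
        xs.pop(0)       # CPython shifts every remaining item: O(n^2) total

def drain_deque(n):
    xs = deque(range(n))
    while xs:
        xs.popleft()    # O(1) per pop: O(n) total

for n in (1_000, 10_000, 100_000):
    t1 = timeit.timeit(lambda: drain_list(n), number=1)
    t2 = timeit.timeit(lambda: drain_deque(n), number=1)
    print(f"n={n:>7}: list {t1:.4f}s   deque {t2:.4f}s")
```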
c
Hi @Ivan Reese! I would point you to VPRI's concept of Virtual Time for this. I found this https://news.ycombinator.com/item?id=16655336 and, to be more specific, this: https://blog.acolyer.org/2015/08/20/virtual-time/ I hope this is relevant to you
a
Virtual Time is when I think about this spectrum the most. Most obvious in stream/tick-based languages like ChucK and frameworks like MediaPipe. They're basically for-loops over time increments, but having no control over their passage—only the choice of what to do every time your code is awoken—makes time visceral to me in a way that the for-loop wouldn't. They tend to de-emphasize anything that happens out of the steady march of time.
👍 1
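A sketch of that shape (hypothetical, just to show the inversion of control): the host owns the clock, and your code only gets to choose what happens each time it's awoken:

```python
# Sketch: tick-based execution. The host advances time; your program
# is just the callback invoked at each increment.
import time

def run_ticks(callback, dt=0.1, ticks=10):
    for tick in range(ticks):
        callback(tick, dt)   # the only control the program gets
        time.sleep(dt)       # time marches on regardless

def my_program(tick, dt):
    print(f"awake at tick {tick} (t = {tick * dt:.1f}s)")

run_ticks(my_program)
```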
Also, @Ivan Reese, is it a coincidence that you started this thread a day before Wolfram came at the same question from the physics side? https://writings.stephenwolfram.com/2021/09/even-beyond-physics-introducing-multicomputation-as-a-fourth-general-paradigm-for-theoretical-science/
It’s common in this paradigm to discuss time—but normally it’s just treated as a variable in the equations, and one hopes that to find out what will happen at some arbitrary time one can just substitute the appropriate value for that variable into some formula derived by solving the equations. … In other words, that the passage of time can be an irreducible process, and it can take an irreducible amount of computational work to predict what a system will do at some particular time in the future.
🍻 3
i
The table in this image here is really interesting.
In the ordinary computational paradigm, time in effect progresses in a linear way, corresponding to the successive computation of the next state of the system from the previous one. But in the multicomputational paradigm there is no longer just a single thread of time; instead one can think of every possible path through the multiway system as defining a different interwoven thread of time.
This "multicomputational" paradigm offers an interesting aesthetic of time to consider. What would a programming system look/feel like that not only gave you an explicit grasp on the passage of time in computation, but that allowed you to explore multiple/all paths that diverge based on (say) wiggle room in the determinism of the program?

Here's a really dumb way to fumble this idea into something I could play with: What if in Hest, instead of a data point having a concrete value (say: 5), a data point could contain some sort of "open" value. As the "open" data point travelled through the program graph and encountered some conditional node, it would satisfy all sides of the conditional, and produce more "open" output points to travel along each outgoing path. Maybe the breadth of "open"ness for those output points could then be narrowed, so the "open" value would only represent something that could satisfy the branch it took (eg: if the conditional was "< 10", then the point on one side now knows that it needs to be a number between -∞ and 9.9…, and the point on the other side knows it's either not a number or a number between 10 and ∞).

You could zoom out and see how far each of those "open" points could travel before they were narrowed into oblivion. Would couple nicely with the idea that the points leave a visible trail as they travel that fades out after a while.
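Here's a toy rendering of that "open value" idea (all names hypothetical, and only modelling the numeric case, not the "not a number" branch): the open value is an interval, and a conditional node emits a narrowed copy down each outgoing edge:

```python
# Sketch: an "open" data point as an interval that a conditional splits.
import math
from dataclasses import dataclass

@dataclass
class OpenValue:
    lo: float
    hi: float
    def __repr__(self):
        return f"Open[{self.lo}, {self.hi}]"

def branch_less_than(v: OpenValue, threshold: float):
    """A '< threshold' node: every outgoing path gets a narrowed value."""
    true_path  = OpenValue(v.lo, min(v.hi, threshold))   # now known < threshold
    false_path = OpenValue(max(v.lo, threshold), v.hi)   # now known >= threshold
    # An empty interval means that path was "narrowed into oblivion".
    return (true_path  if true_path.lo  < true_path.hi  else None,
            false_path if false_path.lo < false_path.hi else None)

point = OpenValue(-math.inf, math.inf)   # a fully open data point
t, f = branch_less_than(point, 10)
print("'< 10' path carries:", t)         # Open[-inf, 10]
print("'>= 10' path carries:", f)        # Open[10, inf]
```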
i
That’s a fully realized data flow graph
c
It does seem like it would just be a data flow graph. I'm having trouble understanding the importance of "time" in this focus. Is it just the aspect of observability?
👍 1
(layered over static control flow analysis)?
👍 1
a
@Ivan Reese You may have read about it already, but if you take that execution model and rotate it 90º, you'd get Propagators / Truth Maintenance Systems, attempts by Sussman's students to do massively parallel computation in a future where "we'll eventually be buying processors by the bushel and laying them down like concrete." In Propagators, the computation of any particular answer is implemented redundantly, by implementations of various speeds and qualities, and answers are represented as intervals that close as various branches finish. So computation doesn't proceed so much as settle, which seems like an interesting angle for this thread's time theme. I haven't found any practical uses for this model in today's world, but it sure is fun to play in! Anyway, I learned about it from

this lecture

which I summarized here. There's a paper on propagators here.
🍰 2
(Plenty of "But that's just…" on LTU already)
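A miniature sketch of that settling behaviour (my toy, not the paper's actual API): a cell holds an interval, each redundant computation merges its estimate in by intersection, and the answer closes in over time rather than arriving all at once:

```python
# Sketch: a propagator-style cell. Estimates merge by intersection,
# so the cell's content only ever narrows; computation "settles".
class Cell:
    def __init__(self):
        self.lo, self.hi = float("-inf"), float("inf")

    def add_content(self, lo: float, hi: float) -> None:
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if new_lo > new_hi:
            raise ValueError("contradictory estimates")
        if (new_lo, new_hi) != (self.lo, self.hi):
            self.lo, self.hi = new_lo, new_hi
            print(f"cell settled to [{self.lo}, {self.hi}]")

answer = Cell()
answer.add_content(0, 100)        # a fast, sloppy estimate lands first
answer.add_content(40, 60)        # a slower, better one narrows it
answer.add_content(41.5, 42.5)    # the most careful one arrives last
```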
i
@ibdknox @Cole Ah, yes. Right. * eyes dart back and forth * What's the state of the art when it comes to visualizing data flow analysis?
@Cole Maybe the role of time isn't important. I don't know either. It's just sort of.. me looking at the toys and buckets I've got in my sandbox and wondering what happens if I build the sand castle on top of the Tonka truck.
c
Fair enough! If we're just talking about anything and everything, then I'd suggest mashing up what you're thinking about with music and rhythm as means for recall and intuitive understanding. For example, when you study to music, both the material and the music being listened to enable recall in both directions. I have been really interested in generative music that follows me through my tasks in the day—switching energy levels between activities.
🍰 1
i
More on the feeling of time in execution: One thing that looping (especially while-loops) with side effects lets you do is repeatedly apply some process to the same piece of data, over and over. You can't do that with map. Typically can't do that with reduce (depending on implementation). Vaguely recall that this is why transducers are stateful. It's a kind of repetition that is inherently about transformation through time, since it's irreducibly successive. Also, makes me think of using the descriptors timeful/timeless (compare with stateful/stateless).
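For instance (my example, not from the thread): iterating a process to a fixed point is natural with a while-loop, whereas map by construction touches each element exactly once:

```python
# Sketch: a while-loop re-applies the same step to the same data until
# done; map applies the step once per element and can't say "again".
def step(x):
    return (x + 2 / x) / 2      # one Newton step toward sqrt(2)

x = 1.0
while abs(x * x - 2) > 1e-12:   # irreducibly successive: each pass
    x = step(x)                 # depends on the previous one
print(x)                        # ~1.414213562...

print(list(map(step, [1.0, 2.0, 3.0])))   # once per element, no repeats
```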