# share-your-work
g
FTR: Here is the demo video I presented earlier today, "Exploring Techniques and Notations for Augmenting DX". I've added links, in the form of a Kinopio page, to the other technologies that I didn't demo.

https://youtu.be/zXmC3BVIVuQ

🍰 5
j
This looks really cool, Paul. Is this synchronous dataflow? Meaning, nodes execute once when they have received data at all their input ports?
👍🏼 1
g
Thanks! This is the opposite of "synchronous dataflow": nodes execute once for every input. A node can implement "synchronous dataflow" if it wants to, but this is not a fundamental requirement. When I used to design hardware, I found that I could work much more reliably (e.g. 0 defects in the field, guarantees, deliver-once instead of continuous delivery, utterly asynchronous, etc.) than when I switched to designing software. I believe that overkill-synchronous-thinking is a major cause of bugs and I want to find ways to break out of that mindset.
k
Thanks for posting this video! Question: what's the level of granularity of your diagram notation? Put differently, how is the operation of each node defined? By another diagram, by traditional code, or yet something else?
g
There are 2 kinds of node. Containers are recursively defined - they can contain Containers and Leaves. Leaf nodes contain code and are not recursive. By analogy, this is much like Lisp lists: Lists can contain Lists or Atoms, and Atoms are the bottom.

Containers run/loop in multiple steps. A Container is "busy" if any of its children is "busy" (recursively). Leaves run in one gulp. A Container can inhale a single message from its input queue only when it is not busy.

Routing of messages between children is performed by the Container, not the children. Children cannot know where their inputs come from nor where their outputs are sent to. A Container cannot know what kind of component each child is, and may compose a mix of child components of various kinds.

By analogy, Containers are like "main loops" in windowing systems, except that it's turtles all the way down - a "main loop" might contain other "main loops", and so on. Another analogy: a Container is like a Unix command-line command, except that it has several stdins and several stdouts. You can't tell from the outside (nor do you care) whether the command is a bash script or a lump of C code. But it is done much more efficiently than using Unix processes (think: closures and OO queue objects).
In this way, you can structure a system in layers that elide details. The details are all still there, but the reader is not forced to understand every niggly detail unless the reader wants to dig deeply.
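A minimal sketch, in Python, of how such a Container/Leaf scheme might look. All names and the shape of the routing table are illustrative assumptions on my part, not the actual implementation:

```python
from collections import deque

class Leaf:
    """A Leaf wraps ordinary code; it handles one message in one gulp."""
    def __init__(self, handler):
        self.handler = handler   # (port, data) -> list of (out_port, data)
        self.inbox = deque()

    def busy(self):
        return len(self.inbox) > 0

    def step(self):
        if not self.inbox:
            return []
        port, data = self.inbox.popleft()
        return self.handler(port, data) or []

class Container:
    """A Container owns all routing; children never see each other."""
    def __init__(self, children, routes):
        self.children = children  # name -> Leaf or Container, freely mixed
        self.routes = routes      # (child, out_port) -> (child, in_port); "self" = this Container's own ports
        self.inbox = deque()

    def busy(self):
        # "busy" if any child is busy (recursively)
        return any(child.busy() for child in self.children.values())

    def step(self):
        # Inhale ONE external message, and only when not busy.
        if not self.busy() and self.inbox:
            port, data = self.inbox.popleft()
            name, child_port = self.routes[("self", port)]
            self.children[name].inbox.append((child_port, data))
        # Let each child take one step; the Container routes all outputs.
        outputs = []
        for name, child in self.children.items():
            for out_port, data in child.step():
                dest_name, dest_port = self.routes.get((name, out_port), ("self", out_port))
                if dest_name == "self":
                    outputs.append((dest_port, data))  # escapes to our parent
                else:
                    self.children[dest_name].inbox.append((dest_port, data))
        return outputs
```

Hypothetical usage, wiring two Leaves into a pipeline where only the Container knows the wiring:

```python
double = Leaf(lambda port, x: [("out", x * 2)])
plus1  = Leaf(lambda port, x: [("out", x + 1)])
pipe = Container(
    children={"double": double, "plus1": plus1},
    routes={("self", "in"):    ("double", "in"),
            ("double", "out"): ("plus1", "in"),
            ("plus1", "out"):  ("self", "out")})
pipe.inbox.append(("in", 3))
while pipe.busy() or pipe.inbox:
    for out in pipe.step():
        print(out)   # ('out', 7)
```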
k
Thanks @guitarvydas, that sounds like a very reasonable design!
🎸 1
m
I was wondering why asynchronous dataflow leads to fewer bugs than synchronous dataflow. Can you elaborate on this @guitarvydas? 😊
g
#1: Observation - I know how to build hardware and read schematics, and I know how to write code. The observation that has perplexed me for decades is that when I build hardware, it is much more reliable than when I build software. Hardware producers provide guarantees on their products, while software producers hide behind EULAs. (Why?)

#2: Observation - the hardware "programming language" (schematics) is much more concise than most software programming languages. For example, the game of Pong in 1972 fit on one piece of paper, long before Functional Programming and Type Checking hit the mainstream. The 1972 version of Pong doesn't even have a CPU in it.

#3: Observation - something in our software workflow is causing bloat. Apps are ridiculously huge today. Software has become ridiculously complicated. For example, I can build a new language much, much faster (10x? 100x? …) using t2t (OhmJS + my own nano-DSL "RWR") than if I use LLVM and friends. I can finish the new language in less time than it takes me to RTFM and learn LLVM.

#4: Observation - in hardware, every component is, by default, asynchronous. In software, though, every component is, by default, synchronous.

My guess, my gut feel: simplicity. Asynchrony allows me to use divide-and-conquer and to solve problems and implement components in small pieces, whereas building software is like crafting an intricate Swiss watch with 100s of tiny gears. If a tooth breaks on any of the synchronous gears, the whole thing doesn't work. If an async component breaks, I can isolate it, focus on it, and fix it. It ain't inherently more reliable, but I can fix things easier and better. The simplicity of asynchronous design is like using LEGO blocks - I can imagine and implement much more interesting (aka "complicated") apps using asynchronous software blocks. [Aside: today's "code libraries" are not LEGO blocks; they must be used in a synchronous manner - it's synchrony all the way down.] [Aside: knowing hardware, I see function-based programming as an inefficient use of CPU power, requiring extra software to support the function-based paradigm. Note the use of the term "function-based", which is a superset of what we call "functional programming" today.]
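A toy contrast of the Swiss-watch vs LEGO point (my own sketch, not Paul's system; all names are made up):

```python
from queue import Queue

# Synchronous composition: one broken "gear" stops the whole chain.
def parse(x):  return int(x)       # raises ValueError on bad input
def scale(x):  return x * 10

def sync_pipeline(raw):
    return scale(parse(raw))       # one exception and nothing downstream runs

# Asynchronous composition: each stage owns an input queue; a bad message
# is contained at the failing stage, and later messages still flow.
def async_stage(fn, inbox, outbox, errors):
    while not inbox.empty():
        msg = inbox.get()
        try:
            outbox.put(fn(msg))
        except Exception as exc:
            errors.put((msg, exc))  # isolate the broken piece, keep going

inbox, mid, out, errors = Queue(), Queue(), Queue(), Queue()
for raw in ["1", "oops", "3"]:
    inbox.put(raw)
async_stage(parse, inbox, mid, errors)  # "oops" lands in errors
async_stage(scale, mid, out, errors)    # 10 and 30 still come out
while not out.empty():
    print(out.get())                    # 10, then 30
```

The synchronous chain gives up at the first bad message; the asynchronous stages quarantine it and keep flowing.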
j
Great points @guitarvydas. Do you have any experience with LabVIEW? It's a very structured visual programming environment. Just in the last four or five years, they introduced asynchronous dataflow wires. They started working on an asynchronous diagram, but that was part of a new, next-generation platform that was mothballed.
g
I've looked at LabVIEW but haven't used it. Feel free to educate me. @Jim Kring
k
@guitarvydas While I agree with all your observations, I am not convinced by the explanation. My own speculative hypothesis for the relevant difference between hardware and software is Turing-completeness leading to chaotic dynamics and thus an infinity of failure modes (see here for a more detailed argument). But I am not that convinced of my own hypothesis either.
👀 1
j
@guitarvydas you can download the LabVIEW community edition for free. This is a really good book. The author is fantastic 😉 LabVIEW for Everyone: Graphical Programming Made Easy and Fun https://a.co/d/0fPloe9u
👀 1
g
@Konrad Hinsen Hmm, I sense that I should want to understand your argument, but, as it stands I don’t get it yet. I think that you’re saying that software can veer wildly off-course if nudged or perturbed, but, I don’t understand how this contrasts with hardware. These thoughts fire neurons regarding Dave Ackley’s “robust first computing” (a way of making software survive severe damage and perturbation). To prime the pump, since we’re in vastly different timezones, let me say that from my perspective, Smalltalk does not do “message passing” (!), it, instead, does synchronous method calling. If the Smalltalk class hierarchy were rewritten in an asynchronous manner all the way down, like I perceive hardware to be, then Smalltalk’s std hierarchy would change drastically. How does this perspective dovetail with your argument?
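A toy sketch of the distinction being drawn here (my own illustration; not real Smalltalk, just the shape of the difference):

```python
from collections import deque

# Synchronous method CALL (the claim about what Smalltalk actually does):
# the sender blocks until the receiver computes and returns an answer.
class Counter:
    def __init__(self):
        self.n = 0
    def increment(self):
        self.n += 1
        return self.n

c = Counter()
answer = c.increment()    # sender waits for the reply

# Asynchronous message PASSING (hardware-style): the sender enqueues a
# message and moves on; the receiver fires later, in its own time.
class AsyncCounter:
    def __init__(self):
        self.n = 0
        self.inbox = deque()
    def send(self, msg):
        self.inbox.append(msg)   # no reply, no waiting
    def step(self):
        if self.inbox and self.inbox.popleft() == "increment":
            self.n += 1

a = AsyncCounter()
a.send("increment")       # sender is already done here
a.step()                  # receiver runs whenever it is scheduled
```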
k
@guitarvydas Yes, Dave Ackley's "robust computing" is very much aligned with my arguments. For me, the difference between hardware engineering and software engineering is that the former avoids chaotic behavior, and thus unpredictable failure modes, at all levels of design, whereas the latter goes for the "full power" of Turing completeness and tacitly accepts the unreliability of large assemblies as a consequence. There is then, of course, the question of why the two disciplines took different paths. My (now highly speculative) ideas are: (1) Maturity. Hardware engineering grew slowly, software engineering grew explosively. (2) Cost constraints. Production costs make investment into good design more valuable. And from your arguments, I'd be happy to add (3) Synchronicity. Asynchronous execution makes chaotic behavior problematic even at small scales, and thus provides the incentive to avoid it much earlier in the discipline's evolution.
g
@Konrad Hinsen Interesting arguments. Still mulling…

Observation: I'm not sure that electronic hardware followed a more mature growth path than software. Electronics took off somewhere roughly around 1900 with Tesla, Steinmetz, Heaviside, et al. Electronics "culminated" around 1950/1960 with the invention of chips and CPUs (hardware progress continues). By this reckoning, electronics was about 1 century old when it created a spin-off. At the moment, software is about the same age.

Maybe the difference has something to do with grounding in reality? Electronics is constrained by physical principles (electrons obey the speed of light - the "9 inches in one nanosecond" thing), whereas software is based on written math, which is basically FTL (faster than light) - you can perform a referential-transparency replacement operation on paper instantaneously. In fact, I wonder if FTL-think isn't the cause of a great many workarounds/epicycles in software. On paper, operations are instantaneous, whereas in a physical CPU, operations take a finite amount of realtime - it's not a 1:1 mapping. Hmmm.
@Konrad Hinsen Another data-point: I do not believe that Functional Programming IS programming. It's only a KIND of programming. Programming is something larger than FP. Programming is about making electronic reprogrammable machines DO something. If all you want to DO is use the hardware as a calculator, where only the results of the computation matter, then FP is appropriate. If you want to use the machine as a sequencer, or to network machines together, or to run robots (i.e. to perform physical actions), then FP is not as appropriate and some other notation(s) should be used to program machines for those kinds of actions. [Unfortunately for me, FP is currently considered to be the only way to program machines, and I continue to flail at trying to express what I think the differences are.]
k
@guitarvydas Those are indeed arguments to add to my list. Physical constraints is a good one! And it makes me think of Chuck Moore, who made a career out of staying close to the metal with Forth, occasionally crossing over into the hardware domain. Is his software more robust? I can't say. His software is also small, in terms of functionality, which may be the more important factor in robustness. FTL-think is a nice term. It's definitely a problem in the branch of computer science that leans towards mathematics. There's also the issue of losing sight of the physical devices, as in data centers and cloud computing. I certainly agree that the distinction between abstract data transformation and actions in the physical world matters, but I doubt there's a consensus on "programming" referring only to the latter. Even FP people know about the distinction, which is why they invented monads and effects as FP concepts. Historical data point: Ken Iverson's 1962 book "A Programming Language" uses "programming" for describing the development of algorithms with pencil and paper.
j
@guitarvydas Thanks for sharing those thoughts... Having read your post, I wonder if you have any opinion on Forth and its concatenative / stack-based descendants...?
g
I'm a lisper and have less experience with Forth. Just this weekend, I was experimenting with languages for software architecture using OhmJS (and my RWR) and found myself re-inventing stack-based concatenation for bolting code snippets together after parsing phrases (blog post sitting in the wings). If I were to go deeply down that rabbit hole, I might be inclined to revisit Tunney's Binary Lambda Calculus (BLC - even smaller than Sector Lisp). Dunno...
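For flavor, a toy version of that stack-based snippet concatenation (purely illustrative; all names are my own, and this is not how OhmJS or RWR actually works):

```python
# Each parsed phrase pushes a code snippet; combinators pop snippets
# off the stack and splice them into a template.
stack = []

def push(snippet):
    stack.append(snippet)

def splice2(template):
    b, a = stack.pop(), stack.pop()   # note the reverse order
    stack.append(template.format(a=a, b=b))

# e.g. after parsing the phrase "x gets f of y":
push("f(y)")
push("x")
splice2("{b} = {a};")
print(stack.pop())   # -> x = f(y);
```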
@J. Ryan Stinnett have you explored concatenative languages? Do you have a recommendation on how to jump the queue and not read every paper on that path?
j
@guitarvydas Ah okay, it came to mind because of your discussion of subroutines, functions, etc. My concatenative language exploration is very early, so I'm not confident enough to give my own opinion yet. I'm currently reading Thinking Forth (https://thinking-forth.sourceforge.net/), and the philosophy chapter at the beginning of the book also touches on the same topics you've mentioned. Perhaps worth reading at least that chapter. 🙂
g
... aside - are you also aware of Chuck Moore's GreenArrays? ... @J. Ryan Stinnett
j
Ah hmm, I may have seen something about their chips a while ago... I'll go refresh my memory now. I assume their "many small processors" approach is well-aligned with your own thinking.
g
Maybe aligned. It's on my stack of wish-I-had-time-to-explore-this-more-deeply. I would prefer a chip with 1,000 8-bit CPUs on it rather than wasting chip real estate on MMUs, caches, shared-memory multi-core, and the like.
k
This blog post is also worth reading to understand the Forth philosophy.
👍 1
@guitarvydas Your subroutines-vs-functions discussion reminds me of Scheme. It has "procedures" rather than "functions", and the explanation I saw (don't remember where, but by Gerald Sussman) is exactly what you say.
g
ChatGPT says that early editions of SICP used the term "subroutine". Does this sound like the place? @Konrad Hinsen
k
The current edition of SICP uses "procedure", like everyone else talking about Scheme. But I cannot find an explanation for this choice of term in SICP.