guitarvydas
01/27/2025, 3:01 AM

Jon Secchis
01/27/2025, 11:30 PM

guitarvydas
01/28/2025, 3:00 AM
A
produces a text string that looks like some sort of "language" (machine-readable, not necessarily human-readable, kind of like assembler vs. "C"). A sends the string, as data, through the pipe to B. B "parses" the string and calls functions based on what it finds in the parse. This might be called "syntax-directed serialization". FP is hinting at this kind of thing, calling it "pattern matching". The key, again, is that A does not CALL B, it simply sends data to B. In UNIX, B already does something like this, but at a feeble level. Add PEG parsing to B's inhalation process to improve on its feeble parsing abilities. It's been done before to build non-trivial programs like compilers.

Jon Secchis
01/28/2025, 4:14 PM

guitarvydas
01/28/2025, 5:43 PM
> code lots of small procedures, which IMO imposes some heavy cognitive load

The cognitive load does not come from the smallness of the procedures, but from the flatness of the namespace, i.e. the "infinite canvas" mentality. The UNIX shell gets around this problem by allowing layering: a shell script can invoke commands or other shell scripts to an infinite depth (as opposed to infinite breadth). The shell, and functional programming, fail to restrict this concept. There should be one kind of part that choreographs parts and another kind of part that does the work. Functions allow you to do this, but functions don't restrict you from doing something broader, too. It's kind of like the GOTO problem in the early days: you could write structured programs using GOTOs, but the existence of GOTOs tempted you to break structuring. Here it's the same: you can write layered programs, but you tend not to. Just try to understand someone else's code; 90% of the time it's hard, and only a few programmers actually shine through as being capable of "writing good code".

Another thing to note: UNIX worker commands cannot directly invoke other commands [*]. The UNIX kernel provides a privileged routine called the Dispatcher, which decides which command gets to run and when. Again, it is easy to do this with closures choreographed by connecting layers, but one tends not to structure code this way due to the lack of enforced structuring constraints. A piece of worker code that CALLs another piece of code breaks the UNIX-y data-flow (message-sending) paradigm.

One must be careful with stack-based ideas. LIFO-based code works on single-computer systems, but breaks down in distributed systems, due to under-the-hood coupling caused by the global-ness of the stack. Code that works on single computers doesn't necessarily scale up when distributed across many computers. [Elsewhere, I argue that context-switching is a crutch that doesn't scale well. We'd be better off using closures and message-sending via FIFO queues. Using functions works on paper for mathematics, but isn't such a good paradigm for distributed computing.]

[*] Modulo tricky uses of system calls, etc.
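A tiny sketch of the "closures choreographed via FIFO queues" idea from the message above: worker closures never call each other, they only read from an input queue and write to an output queue, and a small dispatcher loop (the choreographing layer) decides who runs and when. All names and the two-stage wiring are invented for illustration.

```python
from collections import deque

def make_upcase(inq, outq):
    # Worker part: pure transformation, no knowledge of who is downstream.
    def step():
        if inq:
            outq.append(inq.popleft().upper())
            return True   # made progress
        return False
    return step

def make_exclaim(inq, outq):
    def step():
        if inq:
            outq.append(inq.popleft() + "!")
            return True
        return False
    return step

# The wiring lives entirely in this choreographing layer,
# never inside the workers themselves.
q_in, q_mid, q_out = deque(), deque(), deque()
parts = [make_upcase(q_in, q_mid), make_exclaim(q_mid, q_out)]

q_in.extend(["hello", "world"])

# Dispatcher: keep stepping parts until no part makes progress.
while any(step() for step in parts):
    pass

print(list(q_out))  # -> ['HELLO!', 'WORLD!']
```

Because the workers share no stack and communicate only through FIFOs, the same two closures could in principle be moved to separate processes (or machines) by replacing the deques with real queues, without touching the worker code.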
Jon Secchis
01/28/2025, 9:30 PM

guitarvydas
01/31/2025, 3:21 AM

guitarvydas
01/31/2025, 3:22 AM

Jon Secchis
02/01/2025, 4:05 AM
> "complexity explosion"

Surely you know that definitively assessing complexity is more or less impossible to do impartially, as it's always context-dependent. In the context of per-process queues, and particularly in the Erlang environment, I was using "complexity" as a proxy for the size of the domain ontology needed to establish the architecture. By embracing Erlang's model, you are bound to accept everything that comes with it; you have to eat it all. For something like tuple spaces, by contrast, the vocabulary is significantly simpler. It becomes a tool you can integrate however you like; or rather, it's more like a material, a foundational component, not the entire blueprint and framework. The complexity and solidity of what you build with it will depend on your ability to adapt it to your needs, on how well you employ it as an architect. The baseline complexity is low.

That's my perspective, and I tend to like this trade-off better. I think it's also the preference of most people; that's why we don't have FP and full-blown actor systems as mainstream technologies. These things are binding, unforgiving, all-encompassing, long-term commitments. It turns out soundness is not that big of a deal; reality and the markets are more amenable to heterogeneity, even if it translates to a mess most of the time. The mess allows for moving faster and achieving good-enough quality at a fair price. Worse is better, as they say.
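To make the "small vocabulary" point above concrete, here is a minimal, single-threaded sketch of a Linda-style tuple space: `out()` puts a tuple, `rd()` reads a matching tuple, `in_()` reads and removes one, and `None` acts as a wildcard. This is an illustration of the idea only, not any real tuple-space library, and it omits the blocking semantics a real implementation would have.

```python
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, t):
        # Deposit a tuple into the shared space.
        self.tuples.append(t)

    def _match(self, pattern, t):
        # A pattern matches if lengths agree and every non-None
        # field is equal; None is a wildcard.
        return len(pattern) == len(t) and all(
            p is None or p == v for p, v in zip(pattern, t))

    def rd(self, pattern):
        # Read (without removing) the first matching tuple, if any.
        for t in self.tuples:
            if self._match(pattern, t):
                return t
        return None

    def in_(self, pattern):
        # Read and remove the first matching tuple, if any.
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

space = TupleSpace()
space.out(("job", 1, "parse"))
space.out(("job", 2, "compile"))

print(space.rd(("job", None, None)))  # -> ('job', 1, 'parse')
print(space.in_(("job", 2, None)))    # -> ('job', 2, 'compile')
print(len(space.tuples))              # -> 1
```

The whole vocabulary is three operations plus a matching rule, which is the sense in which the baseline complexity is low: coordination structure is something you build on top, rather than something the model imposes.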
guitarvydas
02/01/2025, 4:54 AM

Jon Secchis
02/01/2025, 5:50 PM

Jon Secchis
02/01/2025, 5:56 PM

guitarvydas
02/01/2025, 9:01 PM

guitarvydas
02/01/2025, 9:03 PM