# thinking-together

guitarvydas

01/22/2023, 2:06 PM
# Summary 2022

For me, 2022 was: 1. 0D, 2. transpiler pipelines. Explanations below. There is nothing “new” here. I believe that our crop of programming languages subtly discourages certain kinds of thoughts. You can do these things with our programming languages, but you don’t bother.

[I wrote this on Jan 1/2023. Then I promptly got sick and found new ways to procrastinate. I will gladly remove this if it is inappropriate or too long...]

# TL;DR

## 0D
• 0D is part of traditional parallelism (zero-dependency, total decoupling)
• breaking 0D away from parallelism enables other uses
• 0D uses FIFOs, whereas functions use LIFOs (LIFOs are used by most modern programming languages - Python, Rust, etc. - and stifle possible solutions)

## Transpiler Pipelines
• “skip over” uninteresting bits of syntax, whereas a CFG requires a full language specification
• leads to a different class of tools: parsers used for “quickie” matches instead of for building compilers; a different way of using parser DSLs; like mathematical manipulation of notation
• “skipping over” bits of syntax allows syntactic composition; syntactic composition enables pipelines

# 0D

0D is short-hand for the phrase “zero dependency”. Total decoupling. Programmers already know how to write 0D code, but they tangle this simple concept up with other concepts and call the result “parallelism”.

At a very, very basic level, you can achieve 0D by using FIFOs instead of LIFOs (queues vs. stacks). LIFOs - callstacks - are good for expressing synchronous code. LIFOs are less good for expressing asynchronous code.

Programmers often conflate nested, recursive functions with the notion of pipelines. If a component sends itself a message, the message is queued up in FIFO order and there is a “delay” before the message is processed, whereas if a component recursively calls itself, the function parameters are pushed onto a stack and the processing happens immediately, in LIFO order. This subtle difference in processing sequence manifests itself in design differences. For example, in electronics - where all components are asynchronous by default - you often see the use of “negative feedback”, say in Op-Amp designs. You rarely see this technique used in software design. In electronics, negative feedback is used by components to self-regulate, whereas in software, recursion is used as a form of divide and conquer. Feedback loops make it possible to be explicit about software design, whereas recursion hides the key element of the design - the callstack.

EEs had this issue sussed out before the advent of the “everything must be synchronized” mentality. All components in an electronic circuit are asynchronous by default. Synchrony is judiciously, explicitly designed in through the use of protocols, on an as-needed basis - not everywhere by default. There is a reason - a subtle reason - why it is easy to draw diagrams of computer networks and not-so-easy to draw diagrams of synchronous code. In EE designs, concurrency is so cheap that you can’t help but use it. In software, concurrency implies difficulty, and designers end up avoiding concurrency in their designs.

This subtle difference has a trickle-down effect on end-user code. When it is difficult to draw diagrams of programs and to snap components together, programmers tend not to provide such features to end-users. Or, when they provide such features, they implement them under duress. If DaS and snappable components were abundantly available, such features would naturally leak through to end-user apps.

0D can be implemented a lot more efficiently than by using operating system processes and IPCs. Most modern programming languages support closures (anonymous functions) and make it easy to build queue data structures. Stick one queue at the front of a closure - the “input queue” - and one queue at the tail of a closure - the “output queue” - and you get 0D. Then, you need to write a wrapper component that routes “messages” from the output queue of one closure to the input queue of another closure.

Can this concept be generalized? This ain’t rocket science. When you build 0D software components, does the order-of-operation of components matter? Nope. Can a 0D component create more than one result during its operation? Yep. Can a 0D component directly refer to another 0D component? Nope. The best you can do is to compose networks of 0D components inside of routing wrappers.
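A minimal sketch of that idea in Python, assuming nothing beyond the standard library (the names `Component`, `step`, and `run` are mine, illustrative only, not from any real framework): each component owns an input FIFO and an output FIFO, and a routing wrapper shepherds messages between them. Neither component names the other.

```
from queue import Queue

class Component:
    """A 0D building block: an input FIFO, an output FIFO, and a handler.
    The handler never names another component; it only send()s outward."""
    def __init__(self, handler):
        self.inq = Queue()    # input queue (FIFO)
        self.outq = Queue()   # output queue (FIFO)
        self.handler = handler

    def step(self):
        """Process one queued message, pushing any results to outq."""
        self.handler(self.inq.get(), self.outq.put)

def run(components, connections):
    """Routing wrapper: step components and shepherd messages until quiet."""
    busy = True
    while busy:
        busy = False
        for c in components:
            if not c.inq.empty():
                c.step()
                busy = True
        for sender, receiver in connections:
            while not sender.outq.empty():
                receiver.inq.put(sender.outq.get())
                busy = True

# Two components, wired externally - neither refers to the other by name.
upper = Component(lambda msg, send: send(msg.upper()))
excl  = Component(lambda msg, send: send(msg + '!'))

upper.inq.put('hello')
run([upper, excl], [(upper, excl)])
print(excl.outq.get())   # -> HELLO!
```

Cutting and pasting `upper` or `excl` into another network requires no source changes; only the connection list - the routing wrapper's wiring - changes.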
# Transpiler Pipelines

It would be nice to build up solutions using pipelines of many little solutions and syntaxes made expressly for those solutions. What do you need to be able to do this?

1. You need to be able to write grammars that are very, very small and that allow you to “ignore” bits of syntax that don’t pertain to a problem, e.g. kind-of like REGEX, but better.
2. Total isolation of building blocks.

## Very Small Grammars That Ignore Uninteresting Items

Ohm-JS - a derivative of PEG (Parsing Expression Grammars) - makes it possible to write grammars that skip over uninteresting bits of text. For example, if you want to write a quickie parser for C code, you might want to say:
```
... function-name (...) {...}
```
In Ohm-JS, you can say this, whereas in a CFG-based parser generator you need to over-specify all of the niggly bits of C syntax. In Ohm-JS, this results in a few minutes of work and only a few lines of code. The Ohm-Editor assists in developing the micro-grammar. In YACC and CFG-based approaches, though, you’re looking at a gargantuan job (days, weeks, months, ...) and you simply don’t bother to write such a quickie parser. You either don’t bother with the whole idea, or you use something like REGEX, which fails on a number of edge-cases for this kind of thing. REGEX can’t search recursively for matching brackets; Ohm-JS can. Using REGEX, you might get away with a partial solution, or the project might grow larger as you hit unexpected speed bumps. You either persevere or you just give up. For the record, the grammar plus the accompanying code-fabricator specification for the above simple example are shown in the appendix.

### DaS Comes For Free

When you can build totally isolated building blocks, you can draw sensible diagrams of how the building blocks should be snapped together to solve a problem. Later, you can steal (cut/copy/paste) chunks of previous solutions and use them as building blocks for new problems. DaS: Diagrams as Syntax. DaS is not diagrams as an Art Form. DaS is diagrams as programming languages. For example, instead of writing `{...}`, you draw a rectangle. Programming languages were created by culling the English language and choosing only the words and phrases that could be compiled to executable code. Can we cull diagrams in the same way to invent new programming languages? EEs have done this, and they call the resulting diagrams “schematics”. Building construction engineers have done this and call the resulting diagrams “blueprints”.

## Don’t We Already Use Building Blocks?

“Code Libraries” look like building blocks, but contain subtle bits of coupling that discourage building-block-iness. For example, the very common idiom of a function call `f(x)` introduces at least 3 kinds of coupling:

1. The name `f` is hard-wired into the caller’s code. The calling code cannot be cut/copy/pasted into some other solution without also dragging in the called code, or by futzing with the source code.
2. The function call `f(x)` waits for the callee to return a value. This is also known as blocking. Function-call notation works fine on paper, where functions can be evaluated instantaneously. It’s different when you map function-call syntax onto hardware that has propagation delays, wherein functions take finite amounts of time to “run”. This subtle difference in behaviour leads to hidden gotchas. A glaring example of the impact of such a difference can be seen in the Mars Pathfinder disaster[^pathfinder].
3. The function return `v = f(x)` hard-wires a routing decision into the callee’s code. The callee must direct its response back to the caller. This is called “returning a value”. Again, this doesn’t look like a problem when you just want to build fancier calculators, but this hard-wired routing decision discourages simple solutions to non-calculator problems, like machine control.

[^pathfinder]: https://www.rapitasystems.com/blog/what-really-happened-software-mars-pathfinder-spacecraft

When you don’t have complete isolation, you don’t have building blocks. Imagine a LEGO® set where all the pieces are joined together with a single, long sewing thread glued to each LEGO® block. Or, you have two real-world objects, e.g. one apple and one orange. You cut the apple in half. What happens to the orange? As humans, we are used to the idea that objects are completely isolated. Programs don’t work that way. We have to stop and think hard when writing programs.
# Appendix

If you want to play along with this experiment, the code is in https://github.com/guitarvydas/cfunc.

## c.ohm

A quickie grammar that matches function declarations in a C file. Note that this grammar is longer than a REGEX, but is significantly shorter than a CFG specification (LR(k), YACC, etc.) for the C programming language.
```
Cfunctions {
  program = item+
  item =
    | comment
    | string
    | applySyntactic<FunctionDecl> -- decl
    | any -- other
  FunctionDecl = name "(" param+ ")" "{" block+ "}"

    param =
      | "(" param+ ")" -- nested
      | ~"(" ~")" any  -- flat

    block =
      | "{" block+ "}" -- nested
      | ~"{" ~"}" any  -- flat

      name = letter (alnum | "_")*
      comment =
        | "//" (~nl any)* nl
        | "/*" (~"*/" any)* "*/"
      string =
        | bqstring
        | dqstring
        | sqstring
      bqstring = "`" (qbq | (~"`" any))* "`"
      dqstring = "\"" (qdq | (~"\"" any))* "\""
      sqstring = "'" (qsq | (~"'" any))* "'"
      qbq = "\\" "`"
      qdq = "\\" "\""
      qsq = "\\" "'"
      nl = "\n"
      spaces += comment
}
```

Can this grammar be improved and optimized? Probably. But why would you care? You would care only if you used this code in an end-user product. If you use this code in something like a batch-editing environment, “efficiency” takes on a different meaning. End-users don’t care about the efficiency of your code editor and its Find-and-Replace function. End-users don’t care how efficient your command-line tools, like grep, are. When you treat Ohm-JS + Fab as batch editors for development, then only development efficiency matters.

I strongly believe that one shouldn’t write code. One should write code that writes code. From this perspective, “efficiency” breaks down into 2 camps: 1. developer efficiency, 2. end-user efficiency. Note that traditional compilers are simply apps that write code. Developers use compilers. End-users don’t care if a developer created end-user app code by hand or by using a compiler. The only things that end-users care about are whether the app is cheap and runs on cheap hardware. The final app is assembler, regardless of how it was created. Developers, on the other hand, do care about development time and effort. Hand-writing apps requires much more effort than using high-level language compilers to generate the final app code. Debugging apps is easier when using high-level languages with type-checkers. On the other hand, developers usually buy fancier hardware than that which is used by end-users. Developers can afford to burn CPU cycles on their fancy hardware to give themselves faster - and cheaper - development and debugging times. The final step in development is that of Production Engineering an app to make it cheap enough to sell. Up until that point, the development workflow should consist of anything that speeds up and cheapens development time, for example, dynamic language environments and REPLs. For example, Rust is a Production Engineering language and needn’t be used until the last moment.

## c.fab

A `.fab` file is a specification that creates strings based on the above grammar. Fab is an experimental transpiler tool that works with Ohm-JS. It generates the JavaScript code required by Ohm-JS. This could all be done with off-the-shelf Ohm-JS; Fab simply reduces the amount of keyboarding needed for creating the JavaScript “semantics” code required by Ohm-JS. Fab is written in Ohm-JS.
```
Cfunctions {
  program [item+] = ‛«item»'
  item_decl [x] =  ‛«x»'
  item_other [x] =  ‛'
  FunctionDecl [name lp param+ rp lb block+ rb] = ‛\n«name»'
    param_nested [lp param+ rp] = ‛'
    param_flat [c] = ‛'
    block_nested [lp block+ rp] = ‛'
    block_flat [c] = ‛'
      name [letter c*] = ‛«letter»«c»'
      comment [begin cs end] = ‛'
      nl [c] =  ‛«c»'
      spaces [cs] =  ‛«cs»'
      bqstring [begin cs* end] = ‛'
      dqstring [begin cs* end] = ‛'
      sqstring [begin cs* end] = ‛'
      qbq [bslash c] = ‛'
      qdq [bslash c] = ‛'
      qsq [bslash c] = ‛'
}
```

## grep.c

The above was tested against `grep.c` from the Gnu grep repo:

```
git clone https://git.savannah.gnu.org/git/grep.git
```
## Even Smaller

I’m playing with the design of a new tool that I call bred (bracket editor). It’s like a super-simple batch editor that walks through text that contains bracketed constructs. The full specification consists of 2 strings: 1. what to match, 2. how to rewrite it. The above specifications might be re-expressed as:
```
‛«name» («params») {«block»}'
‛«name»'
```

which reads as:

1. match, recursively, anything that looks like `«name» («params») {«block»}`
2. then, throw away everything except the name

Currently, my concepts have warts - what happens when a comment or a string or a character constant contains brackets, or, even worse, what happens if they contain unmatched brackets?
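For flavour, here is a rough Python sketch (mine, not bred itself) of the recursive bracket-skipping that REGEX can’t do: it finds `name (...) {...}` shapes by balancing nested brackets, and it deliberately ignores the comment/string warts mentioned above.

```
import re

def skip_balanced(text, i, open_ch, close_ch):
    """Given text[i] == open_ch, return the index just past the matching close_ch."""
    depth = 0
    while i < len(text):
        if text[i] == open_ch:
            depth += 1
        elif text[i] == close_ch:
            depth -= 1
            if depth == 0:
                return i + 1
        i += 1
    raise ValueError('unmatched ' + open_ch)

def function_names(text):
    """Yield the names of things shaped like `name (...) {...}`."""
    for m in re.finditer(r'\b([A-Za-z_]\w*)\s*\(', text):
        j = skip_balanced(text, m.end() - 1, '(', ')')   # skip over (...)
        k = j
        while k < len(text) and text[k].isspace():
            k += 1
        if k < len(text) and text[k] == '{':
            skip_balanced(text, k, '{', '}')             # ensure {...} balances
            yield m.group(1)

src = 'int add(int a, int b) { return a + b; }'
print(list(function_names(src)))   # -> ['add']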

Kartik Agaram

01/22/2023, 6:44 PM
Nice ideas. Re 0D, my next question is: how to decide at what granularity to stop using function calls? Or are you suggesting eliminating them entirely? Re transpiler pipelines: I tried this for a while a few years ago. The conclusion I reached was that they were great for adding capabilities, but they can't add restrictions. In first-class languages, a lot of value often comes from guarantees that certain events won't occur. An int won't be assigned to a string. There you need a single coherent grammar. Does this seem right?

Vijay Chakravarthy

01/22/2023, 10:39 PM
this talk is quite relevant —

https://youtu.be/JMZLBB_BFNg


guitarvydas

01/23/2023, 12:19 PM
... re: 0D ...

ideal: use both, without letting language influence your thinking
ideal: use both, but remain aware of what each choice accomplishes
ideal: 0D to be so cheap that it could be used on every line of code
reality: 0D is entangled with Multiprocessing, and the current grain size is “Process”
alternate reality: 0D can be couched in terms of closures and FIFOs, hence the grain size is “function” (where a closure is roughly equivalent to a function)
reality: CALL/RETURN and the callstack are hard-wired into CPUs (there used to be a time when CPUs didn’t have hard-wired callstacks)
reality: 1950s IDEs for programming were programming languages, but in 2022++ IDEs include other stuff, like powerful programming editors

CALL is used for 2 reasons: (1) compacting code size, (2) DRY (Don’t Repeat Yourself). There is no good reason to allow CALL/RETURN to leak into end-user code except for case (1), compacting code size [corollary: case (2) should be entirely optimized away at “compile time” and “edit time”].

x.f(x) is syntax with the meaning “mutate the global callstack and mutate the IP to point at the method function x.f” (and “return” means “put the return value in a special place, then mutate the global callstack, then mutate the IP to point at the caller’s continuation code”), but there is no popular builtin syntax for Send()ing to an output queue and passing the finalized output queue back up to the parent Container.

... re: transpiler pipelines question ... thinking ...
... re: transpiler pipelines question, progress towards answering the question, WIP ...

... this doesn’t necessarily answer the question, but might show where my thinking is going, while I try to figure out what is really being asked ...

... I think of a PL as 2 issues: (1) data, (2) control flow, i.e. (1) operands and (2) syntax ...

... I am playing with Orthogonal Programming Languages, where (1) is OO and (2) is syntax; based on Cordy’s Orthogonal Code Generator ideas, on RTL, and on dataless PLs (like Holt’s S/SL (used in PT Pascal, Concurrent Euclid, Turing, etc.)) ...

... I think that dataless languages boil down to 2 entities: (1) Things, (2) Lists of Things. Types are opaque and cannot be defined at the dataless-language level (Types are defined and manipulated in other layers, implemented in common PLs (e.g. Python, C, etc.))

# Src
String s
s <- ‘abc’
s <- 7

# Gather
$defsynonym (‘s’, ⟨od, kind=var, type=“String”, key=‘s’⟩)
s <- ‘abc’
s <- 7

# Normalize
$defsynonym (‘s’, ⟨od-var, “String”, ‘s’⟩)
$Assign s, ⟨od-lit, “String”, ‘abc’⟩
$Assign s, ⟨od-lit, “int”, 7⟩

... same as ...

$Assign ⟨od-var, “String”, ‘s’⟩, ⟨od-lit, “String”, ‘abc’⟩
$Assign ⟨od-var, “String”, ‘s’⟩, ⟨od-lit, “int”, 7⟩

# Semantic Check
“String” == “String” --> OK
“String” != “int” --> Error

This looks like simple name equivalence. Lower layers are free to use structural equivalence instead (using names as keys to a map containing more detail for each type). [The goal here is to think of a compiler as a string of pearls on a pipeline instead of as a honking big tree.] [od - oh-D, not zero-D - means “object descriptor”]
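A toy Python sketch of that “string of pearls” shape (my names - `gather`, `normalize`, `check` - are illustrative, not Cordy’s or Holt’s): each pass is string-to-string, and the semantic check only compares opaque type names, as in the example above.

```
import re

SYMBOLS = {}

def gather(src):
    """Record a synonym for each declaration, dropping the declaration line."""
    out = []
    for line in src.splitlines():
        m = re.match(r'(\w+)\s+(\w+)$', line)        # e.g. "String s"
        if m:
            SYMBOLS[m.group(2)] = m.group(1)         # s -> String
        else:
            out.append(line)
    return '\n'.join(out)

def normalize(src):
    """Rewrite `s <- lit` as `$Assign <type-of-s>, <type-of-lit>`."""
    out = []
    for line in src.splitlines():
        m = re.match(r"(\w+)\s*<-\s*(.+)$", line)
        if m:
            var, lit = m.groups()
            littype = 'int' if lit.strip().isdigit() else 'String'
            out.append(f'$Assign {SYMBOLS[var]}, {littype}')
    return '\n'.join(out)

def check(src):
    """Name-equivalence check on each $Assign pearl."""
    out = []
    for line in src.splitlines():
        lhs, rhs = line.removeprefix('$Assign ').split(', ')
        out.append(f'{line}  --> ' + ('OK' if lhs == rhs else 'Error'))
    return '\n'.join(out)

def pipeline(src, passes):
    for p in passes:
        src = p(src)
    return src

print(pipeline("String s\ns <- 'abc'\ns <- 7", [gather, normalize, check]))
# $Assign String, String  --> OK
# $Assign String, int  --> Error
```

Each pearl can be developed, tested, and replaced independently; no pass ever sees a whole-program tree.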
Hmmm, is it valid to say that “0D is Combinators for impure languages”? Is that the appeal of /bin/sh pipelines? Combinators for C?

Marcel Weiher

01/30/2023, 8:46 AM
While I don’t believe 0D is possible, it certainly is true that our current dominant architectural style, call/return, couples way more than it should, and is largely mismatched with the majority of systems we build today. I talk about this in some detail in Can Programmers Escape the Gentle Tyranny of Call/Return.
As an example, it turns out that dataflow (in particular of the pipe/filter kind) is actually the more flexible / more basic style, because you can easily and generically implement call/return in terms of pipes/filters but not the other way around, at least not without sacrificing important performance properties of dataflow. Which was a bit of a surprising result to be honest.

wtaysom

01/30/2023, 4:09 PM
Curious to learn more, I listened to @Marcel Weiher over here: https://www.youtube.com/watch?v=Gel8ffr4pqw. The Q&A has a few familiar faces. The idea, as I understand it, is that we often want to connect bits of data, `y = f(x)`, but call/return unnecessarily couples how you enforce the relation, namely, by fixing `y` based on `f` of a precomputed `x`. You may do it eagerly, you may do it lazily, but you're still committed. With Prolog you can leave variables unbound. With bidirectional transformations / lenses you can update `x` from changing `y`. And there are more possibilities. I've long been curious about decoupling relations over state from evaluation/update mechanisms. By the way, Common Lisp's resumable exception-handling mechanism is called the "Condition System."

guitarvydas

01/30/2023, 4:43 PM
I apologize if I’ve made this sound too complicated... 0D has been around for a long time. I didn’t invent it, I just drew a sloppy red circle around it and gave it a name that I like. Every “concurrent” program needs, first, to be 0D. UNIX pipes were invented in 1973: https://en.wikipedia.org/wiki/Pipeline_(Unix). Morrison invented Flow-Based Programming even earlier. Processes and IPCs have been around for a long time. All are 0D at the core. Anonymous functions (the precursors to closures) were invented around 1956 (Lisp 1.5). If there hadn’t been such a deep allergy to Lisp, it might have become obvious that “operating system processes” were just closures.

One of the first CPUs that I programmed didn’t have a callstack. You had to choose to implement CALL manually, or choose to implement co-routines manually (BALR instruction, IIRC). Or, to do something more ad-hoc and less structured.

Basic 0D consists of putting a queue at the front of a closure and another queue at the back of the same closure, then writing a wrapper that shepherds messages between queues. Stating the obvious: lists and callstacks are not queues. Recursion is not 0D. I will try to whip up an example in some example language ...

Marcel Weiher

02/05/2023, 1:31 PM
I think you did a great job describing it, and yes, Unix P/F, Morrison’s FBP etc. are all under-appreciated. Not sure the name “0 Dependencies” does the trick, though I understand where you are coming from with it.

Part of the problem with dataflow systems being under-appreciated, I think, has to do with them usually suffering from packaging mismatch. So while in principle they are structurally simpler, their implementations tend to be difficult to integrate with. So-called “FRP” (or Rx) has done a somewhat better job of integrating with existing procedural/functional languages, and has thus seen fairly wide adoption, but at the cost of hiding the goodness of the dataflow underpinnings under some FP gobbledygook that makes it much more difficult to (re-)use and compose. Polymorphic Write Streams do a slightly better job, IMHO of course, but still suffer, because in the end it is tricky to provide non-procedural abstractions when the abstraction mechanism itself is the procedure call. You simply can’t talk about what you’re doing with the linguistic means available, which is a bit of a bummer. One of the reasons I decided to create a new language…

guitarvydas

02/05/2023, 8:51 PM
[image: feedback.png]
> I will try to whip up an example in some example language ...

simple example

```
from leaf import Leaf

class A (Leaf):
    def __handler__ (self, message):
        self.send (xfrom=self, portname='out', data='v', cause=message)
        self.send (xfrom=self, portname='out', data='w', cause=message)
```

```
from leaf import Leaf

class B (Leaf):
    def __handler__ (self, message):
        if (message.port == 'in'):
            self.send (xfrom=self, portname='out', data=message.data, cause=message)
            self.send (xfrom=self, portname='feedback', data='z', cause=message)
        elif (message.port == 'fb'):
            self.send (xfrom=self, portname='out', data=message.data, cause=message)
        else:
            raise Exception (f'internal error: unhandled message in B {message}')
```

```
from sender import Sender
from receiver import Receiver
from up import Up
from down import Down
from across import Across
from container import Container

from a import A
from b import B

class Top (Container): 
  def __init__ (self, parent, name):
      a = A (self, f'{name}/a')
      b = B (self, f'{name}/b')
      self._children = [a,b]
      self._connections = [
          Down (Sender (self,'in'), Receiver (a,'in')),
          Across (Sender (a,'out'), Receiver (b,'in')),
          Across (Sender (b,'feedback'), Receiver (b,'fb')),
          Up (Sender (b,'out'), Receiver (self,'out'))
      ]
      super ().__init__ (parent, name, self._children, self._connections)
```

This example shows a small, 2-component feedback network. The code does nothing useful, but it demonstrates message feedback.

The problem statement:
• When A gets a message on its pin ‘in’, it produces 2 messages, ‘v’ and ‘w’, in that order.
• When B gets a message on its pin ‘in’, it outputs the message on its pin ‘out’ AND it produces a ‘z’ message on its pin ‘feedback’.
• When B gets a message on its pin ‘fb’, it outputs the message on its pin ‘out’ (only).

The result of the system is 4 messages, ‘v’, ‘w’, ‘z’, ‘z’, in that order (left to right). For more details, see https://github.com/guitarvydas/py0d/blob/feedback/README.md (note that this is the “feedback” branch of that repo).

Feedback - why bother? In electronics, it is common to use feedback to self-regulate (“negative feedback”). In software, recursion (which only LOOKS like feedback) is used only as a form of divide-and-conquer. The difference between Recursion and Feedback is the delay imposed by queuing. Recursion is processed immediately, in a LIFO manner, whereas Feedback messages are put into a queue in FIFO order, to be processed when their time comes. It’s like someone waiting patiently in a lineup versus someone jumping the queue and going to the front of the line. Stuff like this matters when you are building sequencers instead of calculators. The Architect can be very explicit in the design instead of having a certain semantics built into the lower levels of the tool. Loops (not Recursion) become explicit messages-to-self. If the Architect really, really, really wants a Stack, the Architect builds it explicitly and gives it the desired semantics, instead of relying on the built-in call-stack to do the work implicitly.
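To make the FIFO-vs-LIFO point concrete, here is a plain-Python reduction of the same network (not py0d itself - just one `deque` of `(component, port, data)` messages and a routing table; all names are mine). FIFO delivery is exactly what makes the output v, w, z, z rather than v, z, w, z.

```
from collections import deque

output = []
pending = deque()

def a_handler(port, data, send):
    send('a', 'out', 'v')
    send('a', 'out', 'w')

def b_handler(port, data, send):
    if port == 'in':
        send('b', 'out', data)
        send('b', 'feedback', 'z')
    elif port == 'fb':
        send('b', 'out', data)

routes = {('a', 'out'): ('b', 'in'),        # Across: A.out -> B.in
          ('b', 'feedback'): ('b', 'fb')}   # Across: B.feedback -> B.fb
handlers = {'a': a_handler, 'b': b_handler}

def send(component, port, data):
    if (component, port) in routes:
        pending.append((*routes[(component, port)], data))
    else:                                    # Up: B.out -> system output
        output.append(data)

pending.append(('a', 'in', 'go'))
while pending:
    component, port, data = pending.popleft()   # FIFO, not a callstack
    handlers[component](port, data, send)

print(output)   # -> ['v', 'w', 'z', 'z']
```

If B instead called itself recursively on feedback, each ‘z’ would jump the queue and be processed before the pending ‘w’, changing the order - that is the whole difference between Recursion and Feedback.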
… watching …

https://www.youtube.com/watch?v=DG5MtsMojgI

various comments that came to mind ...

a) aside: are you aware of Paul Morrison’s Flow-Based Programming? (If not, I can supply more info, including a discord link)
b) There is ONLY ONE thing that matters to end-users in the end: how inexpensive is the machine? Do end-users care if you used Emacs, VIM, VSCode, etc.? Nope. Do end-users care if you used functional programming, OOP, C, or raw assembler? Nope. Can the end-user waltz into WalMart and buy your hand-held game machine in the home-furnishings department, off-the-shelf like a toaster, or can they run your product on an rPI, or do they need to buy a full-blown laptop, paying tax to Microsoft or Apple? Is the product guaranteed to work, or does it need frequent updates?
c) CALL/RETURN uses the call-stack - a LIFO. Queues use FIFOs.
d) To be able to Architect software, you need to get rid of the concept of Loops and Recursion (these concepts are valid only in call-stack-based code). I think in terms of messages being shepherded between Output and Input queues, and explicit feedback. I’m not sure how to think about this in terms of streams? For example, “loop 2 times {print “hello”}” becomes “when input > 0 {print hello ; send self (input - 1)}”.
e) The “trick” is to think in terms of 2 kinds of Components - recursive Container components and Leaf components. I think that this corresponds to Packages and Wares, resp., in the paper? (Containers compose Components by joining them up via streams and messages; Leaves are just “code” as we know it, with the ability to call functions AND the additional ability to send messages.)
f) Stepping stones ... Call/Return Spaghetti https://guitarvydas.github.io/2020/12/09/CALL-RETURN-Spaghetti.html, the ALGOL bottleneck https://guitarvydas.github.io/2020/12/25/The-ALGOL-Bottleneck.html.
g) Programming languages were invented in the 1950s; operating systems came soon after. It is now 2023 (approx. 70 years later), and I just had to preventative-reboot my MacBook because I was beginning to get random, unexplained errors in apps that worked OK yesterday. What is wrong with this picture? Functional Programming will surely make this all better, right?

Marcel Weiher

02/07/2023, 10:15 AM
Re: ALGOL. My quip (which I get a lot of flak for, so good…) is that today’s “general purpose” languages are nothing of the sort. They are domain-specific languages for the domain of algorithms. See ALGOL, the ALGOrithmic Language, which all of today’s mainstream languages are descendants of and largely indistinguishable from. That said, “algorithms” is a pretty good domain, and if I had to choose one and only one architectural style, call/return is the one I’d choose. And I think the idea that we have to get rid of the dominant style in order to overcome its limitations is one of the key obstacles to actually doing so. We need to generalize from it, not ditch it.

guitarvydas

02/07/2023, 10:46 AM
a) I consider IF-THEN-ELSE to be the root of many evils. It is “too general” and allows one to construct ad-hoc control flows. (McCarthy specified COND, which maps functions to values, but was not meant to be a control-flow concept.)
b) There used to be a time when CPUs gave equal weight to function calling and to co-routining. Function calling is an attempt to graft mathematics notation onto CPUs. One difference is that mathematics notation requires instantaneous textual replacement of functions (“referential transparency”). CPUs have propagation delays, which make it “impossible” to graft 0-time concepts onto electronics. A lot of epicycles have been invented to get around this issue by completely ignoring it (e.g. “operating systems”). [aside: “instantaneous textual replacement” without side-effects? Isn’t that what Microsoft Word “find-and-replace” does?]
c) I am enjoying your comments / this perspective. Mathematical “algorithms” are not the same as “electronic machine algorithms”. Electronic machines have “mutation”. Period. “Mutation” is also known as “RAM” (and heaps, and caches, and registers, and ...). Models of mathematical-only algorithms using electronic machines are not the same thing as models of electronic machines in general. Using only functional notation to express electronic-machine algorithms snips off a bunch of possible avenues of thought.
d) Abstraction. “Lambda” is a way to lasso and abstract code. But, a rectangle drawn on a whiteboard is also a way to lasso and abstract code.

Marcel Weiher

02/09/2023, 8:07 AM
“A computer does not primarily compute in the sense of doing arithmetic. Strange. Although they call them computers, that’s not what they primarily do. They primarily are filing systems. People in the computer business say they’re not really computers, they are “data handlers”. All right. That’s nice. Data handlers would have been a better name because it gives a better idea of the idea of a filing system.” — Richard Feynman

https://www.youtube.com/watch?v=EKWGGDXe5MA&t=278s


guitarvydas

02/13/2023, 8:48 AM
[image: 2023-02-11-Computers Dont Just Compute 2023-02-11 10.08.18.excalidraw.png]
I agree. Thanks for the link. I find it fruitful to think of a “computer” as but a little machine... I tend to think of “computers” as “machines”. Small electronic machines that take a bunch of low-power electrical inputs (3V-5V, milliamps) and produce low-power electrical outputs, and contain some “state” (aka RAM). The machines are controlled by scripts, instead of, say, mechanical gears, mechanical pulleys, mechanical ratchets, etc.
You can hook low-power generalized electronic machines to “amplifiers” to control higher-power dumb devices (110V, 220V, amps; motors, steppers, etc.). For now, I will call “generalized electronic machines” GEMs instead of using the word “computers”. I think of a “computer” as a raisin-sized rPI (Raspberry PI, Arduino, etc.). I call that a GEM to emphasize the fact that it can do more than just compute. Can GEMs compute? Yep. Is that all that GEMs can do? Nope.

You can write scripts by hand (“Assembler”). You can write scripts using other apps (“compilers”). “Scripts” are for GEMs. “Programming Languages” are for other humans. “Compilers” transpile “programming languages” into “scripts for GEMs”. There are ways to write specifications for scripts that allow you to reuse parts of scripts for new scripts and to have less trouble when building scripts (“DRY” (Don’t Repeat Yourself), type checking, so-called “computer science”, etc.). Currently, we are focused on writing specifications in call-stack-based, textual programming languages, but that’s not the only way to write script specs. Call-stack-based programming languages have the side-effect that they restrict you to thinking only about calculation. If you want to use a GEM to build a sequencer, you are out of luck, or worse, you have to invent epicycles that allow you to use calculator-only methods to specify sequencers (e.g. “control theory”, “operating systems”).

Aside: Control Theory was well documented in the 1950s, using things called “resistors” and “capacitors” and “inductors” and “Maxwell’s Equations”. Note that Maxwell’s Equations do not describe Electricity; they only describe a 2D subset of Electricity that we can use to calculate how to build a limited number of useful electronic things. Electricity is - at least - a 4D effect (x/y/z/t).

Aside: Concurrency can be faked out through the use of Operating Systems. To have True Concurrency, all you need to do is use multiple rPIs (GEMs), each running single-threaded apps, connected by packets (“messages”) sent to each other through wires. To keep out of trouble, you simply need to invent the equivalent of Structured Programming for messaging - I call that “Structured Message Passing” (it may surprise you that I have opinions about that, too :-).

Marcel Weiher

02/13/2023, 10:42 AM
“For the first time I thought of the whole as the entire computer and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers, as time sharing was starting to? But not in dozens. Why not thousands of them, each simulating a useful structure?” — Alan Kay, The Early History of Smalltalk
And yes, it is really interesting that you cannot actually have concurrency in the call/return architectural style, you have to go to some outside mechanism like the operating system to fake it for you. Hence we get the “blocking call”. What on earth is a “blocking call”? It’s the OS allowing you to pretend that call/return is workable, because it would be really convenient if it were workable. Async/await, “FRP” and the so-called “reactive” UI frameworks are similar: they try to map things that aren’t call/return into call/return, because that makes them convenient. But they fall down because the mismatch is just too great. And why is it convenient? Because our languages are call/return. If we can map the problem onto a call/return problem, we can write it down directly. That is convenient and powerful. But it breaks down horribly when the things we want to do aren’t really call/return. What’s the alternative? Make it possible to directly write down non-call/return things.

guitarvydas

02/14/2023, 11:47 AM
Ironically, all of the Smalltalks that I know about implement so-called “message sending” using synchronous CALL/RETURN. What is needed is a Smalltalk that implements Objects as concurrent entities with input and output queues. McCarthy provided the basics for faking this kind of thing out in 1956 (anonymous functions and cons cells).

Aside: I have a passing interest in Dave Ackley’s MFM. What I see there is “relativity”. Machines cannot address one another in absolute terms, but only in relative terms. Kinda like The Game of Life on steroids and hardware.

The only valid use of CALL is to save space in a delivered executable. Using CALL - in end-user apps - for any other purpose is inefficient and abusive. Mapping “functional notation” onto CALLs in end-user code is expensive. If you wish to fake out “functions” during development, let your IDE do it for you, but make sure that none of that fakery reaches the end-user. (Corollary: back-pedaling by inventing epicycles such as “inline code” is the opposite of how scripts for generalized electronic machines (aka “computers”) should be created. Operating systems fake out closures. Closures fake out a notation that is - sometimes - useful for developing scripts of Assembler.)
(I feel that I’m using up a lot of bandwidth here. Ah, but this is Slack Free which Garbage Collects by hiding anything that is over 3 months old).
> Make it possible to directly write down non-call/return things.
number of inputs = m, number of outputs = n, where 0 <= m <= infinity and 0 <= n <= infinity

where one input is a block of data that arrives simultaneously (“at the same time”, regardless of how you wish to destructure it (e.g. in f(a,b,c), “a,b,c” is but one block of data which is destructured into 3 elements a, b, c (this is how parameters are implemented in Assembler))), and one output is a block of data that is sent simultaneously

(note that “functions” imply m = 1, n = 1, where the input is totally synchronous and the output is totally synchronous)
(note that daemons have m = 0 when in steady-state)
(note that buffered text filters have n = 0 most of the time, and n = 1 when they want to say something)

You can express these kinds of things in text, but the result is a mish-mosh, IMO. Diagrams express this kind of thing better. In my nomenclature: Thing = Component. FTR - parsing technical diagrams ain’t much harder than parsing technical text.
GLOBAL: the environment of the parameter values
SENDER: the sender of the message
RECEIVER: the receiver of the message
REPLY-STYLE: wait, fork, ...?
STATUS: progress of the message
REPLY: eventual result (if any)
OPERATION SELECTOR: relative to the receiver
\# OF PARAMETERS:
P1:
...:
Pn:

Skimming. I notice that about 40% of the way down, he discusses the “Messenger Object”. I think that there is but one reply-style, no status, and no reply. I guess that I adopt a very atomic perspective and try not to drape meaning onto lower-level constructs. If you consider electronic machines to be 1000s of small computers (possible now, using rPIs and Arduinos, but not thinkable in the 1950s), then there is only one communication mechanism - the wire. One-way (bi-directional wires are an optimization). Given that view, stuff like “wait/fork”, “eventual result”, and “progress of message” are molecules built out of atoms.

Marcel Weiher

02/15/2023, 1:45 PM
> Ironically, all of the Smalltalks that I know about implement so-called “message sending” using synchronous CALL/RETURN.
Yes. And in fact, Alan’s famous quip “I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind” was followed immediately with the far less quoted “The important thing here is I have many of the same feelings about Smalltalk”.

https://www.youtube.com/watch?v=oKg1hTOQXoY&t=633s
