Completely orthogonal to Konrad’s comments, I perceive a deeper technical issue. Typically, a new technology first gets used in old-fashioned ways. An early use for electric motors was to pump water up hills to create streams that could run existing water-wheel-based factories.
Our initial uses of “computers” were riddled with old-fashioned concepts - filing cabinets, desktops, equations written as text on 2D paper - and amounted to running fancy calculators for moon-shots and military targeting. Modern uses for computers look entirely different. IMO, they are all asynchronous, distributed and massively parallel: the internet, robotics, gaming, GUIs, blockchain, etc.
Our IDEs for “programming” are overly biased by 1950s beliefs, such as using linear, sequential, synchronous, function-based languages to control and to reprogram electronic machines.
Ironically, the programmers in the parable do not realize that they are quibbling over minor issues while ignoring the fact that all of their languages are essentially the same. Syntax is cheap; paradigms are important. From a paradigm perspective, all of their languages force them to address problems in only one way - with the synchronous, function-based paradigm.
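To make the contrast concrete, here is a minimal Go sketch (the names readSensor, computeCorrection and fireThruster are hypothetical, used purely for illustration) of the shape that every one of those languages imposes, whatever its surface syntax: a single thread of control that calls, waits, and returns.

```go
package main

import "fmt"

// Hypothetical, illustration-only functions. The point is the shape,
// not the names: each call blocks the caller until a value comes back.
func readSensor() float64                 { return 42.0 }
func computeCorrection(v float64) float64 { return v * 0.5 }
func fireThruster(c float64)              { fmt.Println("burn:", c) }

func main() {
	v := readSensor()         // call, block, return
	c := computeCorrection(v) // call, block, return
	fireThruster(c)           // call, block, return
}
```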
“Coding” is usually built upon scaffolding for the synchronous, function-based paradigm. CPU subroutines are not functions. To erect the edifice of function-based programming - recursion, thread-safety - on top of electronic, CPU-based machines, one must begin by adding extra software, often known as an operating system. This edifice requires millions of lines of “code” to exist before getting out of the starting gate. “Coding” is not equivalent to “programming” or “reprogramming”, because only one paradigm is encouraged. Again, the interesting problems of the modern day mostly involve asynchrony.
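A deliberately contrived Go sketch of the subroutine/function distinction: the “subroutine” below keeps its working state in a fixed location, roughly the way bare hardware does, while the “function” relies on the stack discipline that the surrounding software edifice supplies.

```go
package main

import "fmt"

// Contrived model of a bare subroutine: no private stack frame, so its
// working storage is a fixed memory location shared by every invocation.
var scratch int

// Subroutine-style: not reentrant, not thread-safe. Recursive or
// concurrent calls would trample the shared scratch slot.
func subroutineSum(n int) int {
	scratch = 0
	for i := 1; i <= n; i++ {
		scratch += i
	}
	return scratch
}

// Function-style: state lives in stack-allocated locals, the scaffolding
// that languages and operating systems add on top of the hardware, which
// is what makes recursion and thread-safety possible.
func functionSum(n int) int {
	if n == 0 {
		return 0
	}
	return n + functionSum(n-1)
}

func main() {
	fmt.Println(subroutineSum(10)) // 55, but only while nothing else runs
	fmt.Println(functionSum(10))   // 55, even under recursion or concurrency
}
```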
Programmers need several new notations that do not force them to begin solving problems by using the synchronous, function-based paradigm.
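As one hedged sketch of what an asynchrony-first starting point could feel like (not a proposal for the notation itself), here is a small Go program in which independent components are wired together by message queues and never call one another for a result:

```go
package main

import "fmt"

// Three independent components connected by message queues (channels).
// Each reacts to whatever arrives, whenever it arrives; no component
// calls another and waits for a return value.
func producer(out chan<- int) {
	for i := 1; i <= 3; i++ {
		out <- i // hand off a message; there is no result to wait for
	}
	close(out)
}

func transformer(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 10
	}
	close(out)
}

func consumer(in <-chan int, done chan<- struct{}) {
	for v := range in {
		fmt.Println("received:", v)
	}
	done <- struct{}{}
}

func main() {
	a, b := make(chan int), make(chan int)
	done := make(chan struct{})
	go producer(a)
	go transformer(a, b)
	go consumer(b, done)
	<-done // keep main alive until the last message lands
}
```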
Modern programmers only know how to code, not how to program.