I'm going to start an opinionated overflow thread for the previous discussion (https://futureofcoding.slack.com/archives/C5T9GPWFL/p1599588394135900)
Why programmers shouldn't program for themselves (my editorializing)
Focusing on "quantity of programming" feels like the wrong frame. My ideal society of people educated in programming may not involve most people actually doing much programming most days. What matters is the potential: compounding advantages can accrue even from programming just one day per year.
* Impulse to generalize is self-limiting (some maintenance burden may be irreducible). A good end-user computer needs to be extremely parsimonious in out-of-the-box capabilities, and leave lots of space for users to "pollute" it with what they care about. Give people space to make mistakes, raze things and start over. If it's too built-up, it discourages experimentation and customization.
* Baiting big to catch small. (https://xkcd.com/1319) The long tail of manual tasks is not really economical to automate just for oneself.
* First-world problems. Until we get good sensors/actuators, programming is kiddie-pool stuff for the most part. "I wrote a script that lets me open projects easily so that I can write more scripts." There's more to life. (Not for me, but for most people)
Why programmers don't program for themselves (snapshot summary of the previous thread)
* Interoperability limitations. Between any putative new script and other devices, platforms, programs.
* GUI limitations.
* Operational/maintenance burden (Ivan Illich). Keeping up with security advisories, for example (https://mastodon.social/@akkartik/104790515855023278)
* Programming for employers sucking up all the oxygen. Building for oneself is economically invisible in the current paradigm. (Thanks @Konrad Hinsen.)
* Long-term trend towards locked-down, consumption-oriented devices. Morlocks turning Eloi.
* Lack of DIY culture. Programming for others may be poor preparation for listening to one's own needs (e.g. https://mastodon.social/@akkartik/103994830568601931). Perhaps the original sin was framing programming as driven by external "requirements"? But computers always had to start out expensive; hard to imagine how we could dodge that bullet...
* Fragmentation in incumbent programming models. High barrier to entry for exploring new programming models.
* Poor discoverability/unmemorability/anti-memorability.
(Bullets are not disjoint, just interlocking/overlapping frames I've been finding useful.)