#share-your-work

Ivan Reese

10/30/2022, 2:02 AM
Future of Coding • Episode 59
Richard P. Gabriel • Worse is Better
🔗 https://futureofcoding.org/episodes/059

Following our previous episode on Richard P. Gabriel's Incommensurability paper, we're back for round two with an analysis of what we've dubbed the Worse is Better family of thought products:
1. The Rise of Worse Is Better by Richard P. Gabriel
2. Worse is Better is Worse by Nickieben Bourbaki
3. Is Worse Really Better? by Richard P. Gabriel

Next episode, we've got a recent work by a real up-and-comer in the field. While you may not have heard of him yet, he's a promising young lad who's sure to become a household name.
• Magic Ink by Bret Victor

I am usually really thorough in my editing of the show, but this one I sort of had to rush out the door because the month is rapidly drawing to a close. If anyone spots any weird edits, or anything that sounds out of place, let me know. In particular, the sponsors (which now come at the end of the episode) might be a little rough. Oh well — pays the bills, amirite?
Jim Meyer

10/30/2022, 4:44 AM
iLand 😂

Personal Dynamic Media

10/30/2022, 5:09 AM
Thank you, this was a fun listen. I also really appreciate that you have dropped the disrespectful nicknames. Thank you.

When comparing the priorities of the authors of Unix versus the authors of ITS, I think it's worth remembering some of the technical differences between the hardware. Unix grew up on a minicomputer, the PDP-11 (yes, it started life on a PDP-7, but the real growth into the operating system we would recognize today occurred on the PDP-11), where the kernel had to fit in 64K, and each program had to fit in its own 64K (later models let you use 64K for code and a separate 64K for data). This environment will naturally encourage one to prioritize performance and simplicity of implementation. On the other hand, ITS grew up on a mainframe, the PDP-6 (later PDP-10), which had a 36-bit word and 18-bit addressing, making it possible for a single address space to contain substantially more memory. It's much easier to put more complexity into your kernel in this environment. As a result, I'm not convinced that the differences in prioritization were fundamentally the results of the two cultures in question. I suspect the different priorities may have arisen partly from the technologies in question.

With respect to the question of how well the Unix interface hides complexity, I would argue that many Unix tools provide a good, simple interface for utilizing some pretty deep complexity. In fact, I think the relevant comparison is not Lisp versus C, but Lisp versus sh. The ubiquitous data type in Lisp is the list, while the ubiquitous data type in sh is the file full of variable-length one-line records. Pipelines in sh fill the role of function composition in Lisp. The equivalent of C in ITS was the MIDAS assembler, which WAS really nice for an assembler. See https://wiki.c2.com/?SymbioticLanguages for some elaboration on where I'm coming from with this comparison.
For example, make provides a relatively easy way to make use of a topological sort, without having to understand the implementation details or even what a topological sort is. The sort command allows you to sort files substantially larger than RAM, handling all the issues of breaking files into chunks that can be sorted within RAM, merging those chunks into larger chunks, storing intermediate results on disk, etc. The diff command provides a simple interface for finding the longest common subsequence between two sets of lines. lex and yacc pack a lot of powerful computer science into a relatively straightforward interface for specifying tokens and grammars. join, comm, awk, dc, bc, and many other tools that were already available in v7 Unix also present simple interfaces for making use of powerful code.

Speaking as a huge Smalltalk and Lisp fan, as well as a huge Unix fan, I think the question of how Unix won extends far beyond Gabriel's analysis, though I appreciate the factors that he identified.
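The make example above can be made concrete. As a sketch of my own (not from the message — it uses Ruby's standard-library TSort module rather than make itself), here is the same "topological sort behind a small interface" idea: you describe dependencies, and the ordering algorithm stays hidden.

```ruby
require 'tsort'

# A minimal dependency graph, analogous to targets and prerequisites
# in a Makefile. The hash maps each node to the nodes it depends on.
class DepGraph
  include TSort

  def initialize(deps)
    @deps = deps
  end

  def tsort_each_node(&block)
    @deps.each_key(&block)
  end

  def tsort_each_child(node, &block)
    @deps.fetch(node, []).each(&block)
  end
end

graph = DepGraph.new(
  "app.out" => ["lib.o", "main.o"],
  "lib.o"   => ["lib.c"],
  "main.o"  => ["main.c"],
  "lib.c"   => [],
  "main.c"  => []
)

# Prerequisites come out before the things that depend on them,
# just as make builds prerequisites before targets.
order = graph.tsort
```

The caller never sees the depth-first search or cycle detection inside TSort, which is exactly the "simple interface over deep complexity" point.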
Konrad Hinsen

10/31/2022, 6:22 AM
Interesting episode, once more! I had read these papers many years ago, with mixed feelings about the relevance of the "worse is better" idea. Your discussion framing it as "where does the complexity go?" is illuminating here. But I agree with @Personal Dynamic Media that it's not so much "developer vs. user" as "where, among the many layers of a real-life software system, does the complexity go?" Unix at the shell-programming level is indeed free from the low-level considerations that RPG mentions. Which explains why I found the topic only moderately relevant, since my own focus as a power user (rather than software developer) is on levels clearly above the Linux kernel APIs. For me, Lisp machine vs. Unix is about Lisp vs. shell as the layer that defines the coherence and accessibility of system features. With Lisp clearly "winning" here, but at the cost of much higher resource usage.

wtaysom

10/31/2022, 7:00 AM
On Ruby, Matz put it this way: "Actually, I'm trying to make Ruby natural, not simple." Concrete example: the keyword `alias` is for giving multiple names to the same method, so that you can call the synonym that feels the most natural. Examples from the often-used `Enumerable` module:
• `include?` and `member?`
• `to_a` and `entries`
• `detect` and `find`
• `select`, `filter`, and `find_all`
• `map` and `collect`
• `flat_map` and `collect_concat`
• `reduce` and `inject`
The -ect names (`select`, `reject`, `detect`, `inject`) come from Smalltalk.
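A quick sketch of a few of these pairs in action (my own throwaway example, not from the thread): since each pair is just two names for the same method, the results always agree.

```ruby
nums = [3, 1, 4, 1, 5]

# Each comparison below pits a method against its alias; both names
# dispatch to the same underlying implementation.
nums.include?(4) == nums.member?(4)                    # => true
nums.select(&:odd?) == nums.filter(&:odd?)             # => true (find_all too)
nums.map { |n| n * 2 } == nums.collect { |n| n * 2 }   # => true
nums.reduce(:+) == nums.inject(:+)                     # => true
nums.detect(&:even?) == nums.find(&:even?)             # => true
```

Which name you reach for is purely a matter of what reads most naturally at the call site.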

Ivan Reese

10/31/2022, 2:06 PM
@wtaysom I'm not sure what you're responding to. Something from a previous comment? Something from the episode? (Perhaps Jimmy's mention that Ruby is difficult to parse?)

wtaysom

11/01/2022, 2:15 AM
Yes, Jimmy's comment from the episode that Ruby is complex in implementation and complex in API. Yet somehow using it often feels good. How? It comes from the complexity being in service of a kind of naturalness. The syntax, for example: if enough interested people think something should work, eventually it does. Now, does anyone actually know the syntax of the language? Not me! And I've been writing this language for twenty years. I was today days old when I learned you can use `::` for method calls, as in `Object.new::is_a?(Object)`.
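For anyone else who was today days old: this really does parse. A trivial check of my own (not from the episode), assuming a reasonably recent Ruby:

```ruby
# `::` can stand in for `.` in a method call whenever the right-hand
# side looks like a message send (a lowercase name, or a call with
# arguments) rather than a constant reference.
s = "worse is better"
upper = s::upcase                  # same as s.upcase
check = Object.new::is_a?(Object)  # the example from the message above
```

The ambiguity with constant lookup (`Foo::Bar`) is resolved by capitalization, which is exactly the kind of "natural but who actually knows the grammar" corner being described.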
Kartik Agaram

11/19/2022, 6:28 AM
For the search engine: Who should own the complexity?

---

Important context: the first "paper" was just a few pages in the middle of a much larger paper. (Then RPG went all MCU with the sequels.)

---

Portability and performance are merely examples. I'd say the philosophy still holds in a general form:
To be adaptive, you can be worse along any axis where the competing approach will take 20 years to build.
On the other hand, most things don't get adoption. There are lots of bottlenecks to get through before you ever face a life-or-death Worse-vs-Right trade-off. So it's not very actionable.

---

If you think, "oh, I failed at adoption because Worse is Better," that might be a symptom that you misunderstood the target audience for your product. It's the developers (who care about ease of porting), not the users. In the '60s you don't get users directly. You only get users at the end of a trade network mediated by developers/porters. It's a failure of positioning.