# thinking-together
e
With 1000 programming languages to choose from, how do we rank them? One important quantity that can be measured is MTTR BSOTTA: Mean Time To Repair By Someone Other Than The Author. This is what killed APL, LISP, and FORTH: high MTTR scores. https://www.e-dejong.com/blog/2019/8/4/a-very-important-quality-in-a-programming-language-mttr-bsotta
šŸ‘ 3
k
It's a useful metric, but it's misapplied to languages. I predict experiments will show that the variation within modern languages exceeds the variation between them for any codebase beyond the scale of a term project. Your language will be no exception. Why would this be?
• Selection bias. If a language eliminates some defects, they disappear entirely from this metric. As a C programmer I can find low-hanging memory leaks in many codebases that are fairly easy to repair. In a GC language they don't exist anymore. The leaks that remain are much harder to track down (see the sketch after this message).
• Social factors. Bad/inexperienced programmers can convolute any codebase. Moderately fixable with some sort of apprenticeship process. Boss level: bad incentives cause even good programmers to behave like bad ones. Not fixable with apprenticeship, because even mentors aren't immune to incentives.
šŸ‘ 2
d
What I like about MTTR is that it is measurable. There is plenty of room for more quantitative, empirical studies to guide programming language design. However, like @Kartik Agaram, I perceive that there isn't a lot of difference between modern industrial languages, which are all text-based languages with Algol-like syntax. For these languages, I suspect productivity and MTTR will be influenced more by extrinsic factors, such as availability of libraries, tooling, and popularity. A large community means there are more resources for learning how to solve specific problems -- the Stack Exchange effect -- and it increases the pool of people you can hire to maintain your software.
e
I have done measurements of MTTR and maintenance costs with the same team, switching languages. That yields a very pure measurement of language effectiveness because you are only changing one factor at a time. I saw a 2:1 improvement from switching from C to Modula-2, and that was well worth the effort to change, which is considerable, because you have to retrain personnel and disconnect from the most popular language in favor of one that had virtually no support materials or community.

The cost of MTTR is not really about tooling; when you are doing maintenance you are making small changes to a program, and what trips you up are unseen dependencies and ripple effects. Some languages are very localized and have few dependencies, while others are like tangled spaghetti, where the tiniest change breaks things in baffling ways. LISP is one of the worst offenders, because of the nefarious S-expression data structures, which are typically addressed by position; adding new data to a tree changes the position of other items, thereby breaking existing code inadvertently (see the sketch below). APL has different problems, but was actually more maintainable than LISP.

I am not making any statement about the variation between programmers; some of the literature estimates a 20:1 productivity range among programmers. Textual languages have a wide range of MTTR, yet that factor is hardly ever mentioned in language discussions. You see lots of charts showing performance on various benchmarks, which is an obsession with run speed. Given how cheap and omnipresent computers are, the cost of software development typically dwarfs any cost of execution. I estimate some languages are dozens of times worse in terms of readability, which will impact MTTR.

Programmers as a group are always thinking about how easy it was to build the project in the beginning; few are concerned with what management is going to spend to carry that program forward through several different people. Maintenance costs dominate the world of commercial programming, and it is high time that languages and toolchains which lower MTTR are given recognition. The large pool of people using a language is frankly of very little value much of the time for repair and improvement of a codebase. What help are outside people in updating some complex thing?
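A minimal sketch of the positional-addressing hazard described above, using Python lists as a stand-in for S-expressions; the record layout and names are hypothetical:

```python
# A record kept as a position-addressed list, roughly how an S-expression is
# often consumed with car/cadr-style accessors.
employee = ["Ada Lovelace", "Engineering", 1815]   # [name, department, birth_year]

def birth_year(record):
    return record[2]            # the field is identified purely by its position

print(birth_year(employee))     # 1815

# Later, someone inserts a new field at position 1 ...
employee = ["Ada Lovelace", "London", "Engineering", 1815]

# ... and every positional reader silently returns the wrong value.
print(birth_year(employee))     # prints "Engineering", not 1815
```

Nothing fails loudly; the breakage only shows up wherever the stale index is read, which is exactly the kind of ripple effect described above.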
d
You see lots of charts showing performance on various benchmarks, which is an obsession with run speed. Given how cheap and omnipresent computers are, the cost of software development typically dwarfs any cost of execution.
The suggestion here seems to be that developers are optimizing without reason. That might be the case or it might not, but in cases where you do need to optimize, it's often a source of bugs, not just because of the programming language, but because optimizations depend on a deeper knowledge of the platform. I imagine we have more papers on speed benchmarks because they approach being verifiable, whereas studies on human productivity are probably a lot harder to compare. On studies, though, the ones I have read ranked Clojure very high, higher than Haskell in terms of having fewer bugs. As it's a Lisp, I'm curious whether you have any insight into how it's doing so well, given what I interpret as a somewhat bleak outlook on the LISP family.
The large pool of people using a language is frankly of very little value much of the time for repair and improvement of a codebase. What help are outside people in updating some complex thing?
The community produces libraries that generally have better-thought-out and more understandable abstractions. Being able to leverage those libraries and community experience is very valuable to me.
e
I am not saying that a large development community isn't helpful. It very much is: it is well known that the more users a tool has, the fewer bugs you will encounter. A tiny community like Modula-2's, which had only a few hundred people in the USA (it was a Swiss language), meant you had a single compiler vendor per platform, and if you hit a snag you had to plead for mercy to get the runtime fixed. Using Microsoft toolchains means hordes of people are beating on them, so the tools have a higher degree of polish. But when the product is already built and you have some minor changes to make, the hordes of people who have never seen your codebase are of little help.

As for Clojure, it is an interesting language. The recently defunct Eve project started in ClojureScript, which is a very productive language, and then switched to Rust, if I am not mistaken. It would be interesting to hear from the team about their feelings on the switch. Rust is a much lower-level language IMHO, more of a systems programming language compared to ClojureScript. I don't have any experience with it, so I can't comment. However, there are plenty of blogs across the web expressing disillusionment with Clojure, so I would say that the problems of LISP remain. https://medium.com/@boxed/my-disillusionment-with-clojure-and-lisps-9eca38ab7f0c
d
* Spec enables better tooling than Python has, at probably less effort. Given the difference in community size, it might take time to get there, though.
* Positional arguments are a special case of destructuring.
* Having macros enables compile-time solutions that would otherwise require arbitrary and needless core-language changes.
* The positional-forms part is confusing to me. I'm assuming he is referring to something like (case :else "hi"), in which case this works because :else is simply a truthy value; the contract is that the test can be any truthy form, so it's not a special case, and what he is asking for would create his dreaded "2 ways to do a thing".
That makes sense, assuming you read the article you linked. I'm always happy to poke at the rough edges of a tool I use because it helps me understand it better.