# linking-together
j
I really appreciated the latest FoC ep about "The Structure of a Programming Language Revolution". I'm in The Academy, and I'm perpetually fascinated by what's happened to academic PL. This essay provided some missing links. Thanks @Ivan Reese & @Jimmy Miller!

One fun aha I had afterwards: the systems vs. languages distinction helps clear up something I've always wondered about prototypal object-oriented programming. Namely, why use prototypes at all? Rather than writing:
```js
// Version A

function Dog () { }

Dog.prototype = {
  bark: function () { /* ... */ },
  fetch: function () { /* ... */ },
};
```
you could just write:
```js
// Version B

function bark () { /* ... */ }
function fetch () { /* ... */ }

function Dog () {
  this.bark = bark;
  this.fetch = fetch;
}
```
Sure, maybe that's a hair less ergonomic, and there's an (extremely tiny) bit more memory used to store the slots. But that hardly seems worth inventing a whole new language feature and making such a big deal about it.

I think part of the answer is that prototypal inheritance was born in Self, a programming system. If you want to modify a "prototype" in a programming language, the two approaches above are equivalent: you modify the (literal) prototype in Version A, or you modify the constructor in Version B, and then you re-run your code. But if you want to modify a "prototype" in a programming system, you'll discover you already have a bunch of instances of the class running around. Version B doesn't work, because it only changes how new instances are constructed, not how existing instances behave. A live connection between the instance and its prototype is required to enable live development in a living system.

I feel like Dan Brown over here, finding secret traces of the past in the present. (Ominous hole in my theory: this doesn't explain why prototypes were brought into JavaScript, where they ostensibly no longer had much use.)
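To make that concrete, here's a minimal sketch (assuming Version A's `Dog` above; the `"woof, v2"` body is a stand-in for the elided one):

```js
// Assuming Version A's Dog, with bark/fetch living on Dog.prototype.
const rex = new Dog();

// Live development: redefine a slot on the prototype, in place.
Dog.prototype.bark = function () { return "woof, v2"; };

// The pre-existing instance picks up the change immediately, because
// property lookup walks the live prototype chain at call time.
console.log(rex.bark()); // "woof, v2"

// Under Version B, bark was copied into each instance at construction
// time, so redefining the standalone bark() afterwards would leave
// already-built instances on the old behavior.
```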
g
Allow me to try to add some more context...

[tl;dr]
• Inheritance is about data structuring. The only difference between prototype-based inheritance and class-based inheritance consists of RULES about WHEN it is legal to structure and re-structure data.
• Self corralled prototypal inheritance. Smalltalk corralled class-based inheritance. Both borrowed from previous ideas.
• All big inventions in programming stem from the use of dynamic languages.
• Class-based inheritance is Waterfall Design. Prototype-based inheritance allows iterative design.
• JavaScript was based on Lisp. JavaScript designs-in prototype-based inheritance better than Common Lisp, Scheme, Clojure, Racket, etc.

[background]
In the beginning there was assembler. There are 2 types of assembler:
1. line-oriented
2. tree-oriented
Assembler is characterized by:
• ultra-simple syntax (usually prefix notation)
• commands with operands.
Line-oriented assembler is what we call "assembler". Tree-oriented assembler is what we call "lisp". Assembler gives you a toolbox of functionality, but if you want to structure your data you have to do it manually. In assembler, nothing stops you from re-structuring your data on the fly. The result, though, can be hard to understand. This is part of what we call "readability". The term "readability" is usually used to mean human readability (aside: there's also machine readability; note that textual "readability" does not mean the use of general English prose, but only a well-defined, restricted subset of English).

One of the tenets of Computer Science is Don't Repeat Yourself (DRY). Inheritance is simply a way of structuring data in a DRY way. If you've structured your data, don't do it again: copy the template. So-called "prototypal inheritance" is a way of structuring data that can allow changes to the structure at runtime. "Class-based inheritance" is an optimization of prototypal inheritance. In class-based inheritance, you separate your red Smarties from the rest (a Smartie is like an M&M; a "red Smartie" tastes "better"). If you apply the class-based optimization, then you can compile out data structuring. The compiler can help make your resulting code tighter and more checkable, but you are allowed to structure your data ONLY at compile time. And the result is less confusing. Nothing should change at runtime. Dynamic-anything is "bad" (aside: this is a fundamental problem with pub/sub). Optimization culls creativity, but results in a notation that has certain "human readability" properties. Optimization is "bad" during Design ("premature optimization"), but "good" during Production Engineering (what we mistakenly call "programming").

Self and Smalltalk are syntaxes draped over assembler. Self does not insist that you pre-define all data structures; Smalltalk does insist on pre-definition. The tools for inheritance always existed, but Self coined the term "prototype" and Smalltalk coined the term "class" (actually, Smalltalk borrowed the concept from a previous language, but we won't go there now). JavaScript was originally based on Lisp. In trying to keep the flavour of a "dynamic language", JS does not insist on pre-defining all data structures before runtime. IMO, JS does this better than the Common Lisp, Scheme, Racket, and Clojure variants of Lisp.
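As a concrete illustration of that last point, a minimal sketch (mine, assuming only standard JS) of structuring and re-structuring data at runtime:

```js
// Nothing is pre-declared: the "template" is grown on the fly.
const dog = {};                  // start with an empty prototype object
dog.bark = () => "woof";         // add a slot at runtime

const pup = Object.create(dog);  // pup delegates to dog (prototypal)
console.log(pup.bark());         // "woof", found via the prototype chain

dog.fetch = () => "ball";        // re-structure the template later...
console.log(pup.fetch());        // ...and existing delegators see it
```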
In Common Lisp, the designers chose to jump directly to premature optimization using classes, and built that concept in as syntactic baubles (defclass) that cause programmers to think in a Certain Way, even though the tools for less-calcified data structuring are still present (but generally ignored by class-indoctrinated programmers). Ideas calcified by compile-time optimizations cause programmers to think in Certain Ways and disallow other interesting forms of Creativity.

There IS another way to optimize, as seen in JIT, 1970s compiler technologies (e.g. gcc, OCG), and linking loaders: optimize at runtime. Treat optimizers as barnacles attached to already-working programs. Premature optimization has led us to building and using compiler-appeasement languages and has snipped off other creative avenues of thought.

Or, we could more simply divide programming into 2 camps (aka "divide and conquer"): (1) Design, (2) Optimize. This division happens all of the time in more mature industries: products are released, and only later are the products cost-optimized. In fact, this division is so severe that it is given the name "Production Engineering". In contrast, Software uses the single term "Engineering" to mean "Architecture" and "Engineering" and "Production Engineering" and "Test Engineering" and "Deployment Engineering", and so on. If you Design and Optimize all in one go, you are involved in a Cottage Industry. In a Cottage Industry, the same person, or group, wears all of the hats.

Class-based inheritance is Waterfall Design. You must know everything about the data before you can write correct classes. Prototype inheritance can be iterative: you can change your mind later, and you can incrementally alter the templates as you learn more about the problem space. Compilers can be built to compile programs that obey the rules of class-based inheritance. Compilers cannot, in general, compile programs that do not follow those rules; e.g. compilers cannot compile prototype-based inheritance, but can compile class-based inheritance.

[conclusions]
I would argue that Self did not invent prototypal inheritance, but corralled the ideas and created the name. Self's contribution is the exploration of the space of data structuring and of making optimization a continuum that can be applied at different times, not just at compile time. This exploration ultimately led to the concept of JIT.

JavaScript was designed to allow "dynamic" programming (whether it succeeded can be argued) and was originally based on Lisp. Prototypal inheritance is less constraining than class-based inheritance, and therefore was made a part of the design of JavaScript.

To me, "system" means "dynamic language", and "general-purpose programming language" means "static language". Each kind is "good" and each kind is "bad". IMO, all big inventions in programming stem from the use of dynamic languages, and ultimately from the use of assembler: GC, first-class functions, Haskell, REPLs, etc.

The only difference between class-based inheritance and prototype-based inheritance is when it happens. Both class-based and prototype-based inheritance can structure data at compile time. In prototype inheritance, though, data can also be structured and re-structured at runtime. Runtime restructuring is not allowed in class-based inheritance. Class-based inheritance is Waterfall Design. Prototype inheritance allows iterative design.

miscellaneous: 3 OO-ish data structuring techniques:
1. classes
2. prototypes
3. mixins

2 is like 1 with some of the compile-time-only restrictions removed. The mixin idea, 3, looks like OOP but is very different. In class-oriented OOP, you have a vector of operations associated with each class. In mixins, the reverse is true: you have a class without operations, and operations "decide" on-the-fly whether they can be applied to a cross-product of parameters. All three methods get rid of explicit "if-then-else" based on type (which is the really big win: type-based "if-then-else" is just bad). Mixins go beyond simple class-based inheritance: you can specialize operations on value instead of type, and you can create :before and :after methods. Capability depends on the method ("operation") and not on the class. For a simple, nonsensical example, you can write a "plus" method that works on {int x int}, another "plus" method that works on {int x string}, and another "plus" method that works only on {nil x int}.
• question to self: can you remove defclass from Lisp and leave only mixins? Lisp already has deftype; why do you need defclass?
• lisp isn't a programming language, it's a soup
• mixins ≣ assembler for constructing various kinds of class- / prototype- / whatever-based languages. I like to use the term "atoms" when discussing fundamental building blocks. In my view, "mixins" are "atoms", whereas classes and prototypes are "molecules" constructed out of "atoms".
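For the "plus" example, a hypothetical JS encoding (my sketch of operations that "decide on-the-fly" whether they apply; the `defPlus`/`plus` helpers are invented for illustration):

```js
// A toy multimethod: each "method" decides at call time whether it
// applies to the given cross-product of arguments.
const plusMethods = [];
function defPlus(applies, body) { plusMethods.push({ applies, body }); }
function plus(a, b) {
  const m = plusMethods.find(({ applies }) => applies(a, b));
  if (!m) throw new TypeError("no applicable plus method");
  return m.body(a, b);
}

defPlus((a, b) => typeof a === "number" && typeof b === "number",
        (a, b) => a + b);             // {int x int}
defPlus((a, b) => typeof a === "number" && typeof b === "string",
        (a, b) => `${a}${b}`);        // {int x string}
defPlus((a, b) => a === null && typeof b === "number",
        (_, b) => b);                 // {nil x int}

plus(1, 2);     // 3
plus(1, "s");   // "1s"
plus(null, 4);  // 4
```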
r
I think of prototypes as a more fundamental mechanism of inheritance in OOP than classes, given that class systems can be simulated with prototypes, whether Smalltalk and Java in Self (https://bluishcoder.co.nz/self/substrate.pdf), NewtonScript (https://beepdf.com/wp-content/uploads/newton/Class-based%20NewtonScript%20Programming.pdf), or many different cases in JavaScript (Crockford - https://web.archive.org/web/20110805161336/http://crockford.com/javascript/inheritance.html). I've made close to 10 in the past year in JS in an exploration of its potential.

Prototypes are a lot easier to optimize than classes: the Smalltalk-in-Self project noted that their implementation was faster than contemporary commercial products, and V8, a descendant of the Self VM, is a marvel of dynamic languages. Yet this requires a complex VM that wasn't really possible at PARC. On the other hand, prototypes are awkward to program with, which is why prototypical systems tend to develop class systems, whether formalized or not. Even Self made use of traits that started to smell like classes. For the most part, the individual behavior of objects doesn't need to be specialized. JavaScript prototypes are especially awkward to use and hostile to beginners; the language's saving grace is its first-class functions, with which the whole prototypal system can be ignored (besides in base types).
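For a flavor of such simulations, a minimal sketch in the spirit of those patterns (the `declare` helper is hypothetical, not from any of the linked papers):

```js
// A tiny "class system" built from plain prototypes.
function declare(superProto, methods) {
  return Object.assign(Object.create(superProto), methods);
}

const Animal = declare(Object.prototype, {
  describe() { return `I am ${this.name}`; },
});
const Dog = declare(Animal, {
  speak() { return `${this.describe()} and I say woof`; },
});

const rex = Object.create(Dog);
rex.name = "Rex";
rex.speak(); // "I am Rex and I say woof"
```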
w
When I did serious JS 10+ years ago, I always found myself adding shims because JS's prototype/constructor constructs were remarkably awkward. To me, JS feels best when I make liberal use of closures. But back then there wasn't a good way to get a reference to the closure scope. Has that changed? I mean, given something like `f = (() => { var x = 4; return () => ++x; })()`, can you say something like `f.closureScope.x`?
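As far as I know, nothing like `f.closureScope` exists in standard JS even today; closure scope is visible only to debuggers. The usual workaround is to export accessors deliberately, as in this sketch:

```js
// No standard API exposes a function's captured scope, so the common
// shim is to export the variables you want to inspect explicitly.
const f = (() => {
  let x = 4;
  const inc = () => ++x;
  inc.getX = () => x;   // a deliberate, explicit "closureScope.x"
  return inc;
})();

f();        // 5
f.getX();   // 5, reads the captured x by design rather than reflection
```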