The topic of the interplay between the "expected" and "emergent" behavior of the system reminds me a lot of Dijkstra's famous GOTO essay. The point he made in that essay was that GOTO was harmful because it prevented you from making inferences about the dynamic behavior of the program from its static syntax, whereas structured programming allowed you to relate the dynamic state of the system to a "coordinate system" derived from the control structures.

I think that appeals to human intuition are often misplaced when it comes to this aspect of code specifically, because code complexity experiences exponential blowup. No matter how well you represent a boolean expression in terms of naming, code formatting, or organization, for instance, there is the fundamental fact that boolean satisfiability is NP-complete; at some point your brain will be unable to scale that cliff. This remark doesn't apply to code that isn't of an NP-complete flavor, like, say, a flat list of drawing instructions.

I think "composability" is what is important for code, but "composability" is a somewhat vague word. More precisely, there should be a way to determine specific properties of the system by systematic logical deduction from the components of the system, and without simulating the whole dynamic trace of the system's execution. Composability can only be defined relative to the properties you are interested in: SQL is "composable" if you want to know whether an element is in the relation defined by the SQL statement, but is not so "composable" if you want to know how long the statement will take to run.
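To make that last point concrete, here is a minimal sketch (a hypothetical encoding of relations as membership predicates, not real SQL): membership in a composed relation can be deduced purely from membership in its parts, without running or materializing anything.

```rust
// Toy "relations" as membership predicates over i32.
// Membership in a union or intersection is deducible compositionally:
// ask the parts, combine the answers with || or &&.
fn union(
    p: impl Fn(i32) -> bool,
    q: impl Fn(i32) -> bool,
) -> impl Fn(i32) -> bool {
    move |x| p(x) || q(x)
}

fn intersect(
    p: impl Fn(i32) -> bool,
    q: impl Fn(i32) -> bool,
) -> impl Fn(i32) -> bool {
    move |x| p(x) && q(x)
}

fn main() {
    let evens = |x: i32| x % 2 == 0;
    let small = |x: i32| x < 10;

    // (evens ∩ small) ∪ {99}
    let r = union(intersect(evens, small), |x: i32| x == 99);

    assert!(r(4));   // even and small
    assert!(!r(11)); // neither
    assert!(r(99));  // the singleton branch
    println!("ok");
}
```

Note that the same encoding gives no compositional handle on the *cost* of evaluating `r`: the running time of the composite depends on execution details the membership property abstracts away, which is exactly the asymmetry described above.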
A question for anyone who thinks the future of coding involves advances in cognitive science (at least @jonathoda, but I’m sure there are others here): I’m curious to hear your perspective on which aspect of programming you think cognitive science research has the most promise to shape, and why you think that hasn’t happened already.
We've published another research report that might be of interest to this audience. It's an exploration of what "local-first" software might look like, why it's important, and some of our explorations therein. https://www.inkandswitch.com/local-first.html
@Dan Cook There are so many possible usages of the term "abstraction" that it is probably best to specify what one means before making a point. I accept all of the following definitions, for example:

1. Abstraction as naming (what you call indirection). This is a tool to manage complexity. This is how Sussman and Abelson use the term in SICP. This is an abstraction because it hides the implementation details.
2. Abstraction as "abstract" (or not realized). This is the definition used for ADTs and abstract classes for example.
3. Abstraction as uniform interface. Some people are uncomfortable using "abstraction" for everything in (1) and (2) and prefer to reserve the term for "good abstractions": abstractions obtained by applying the process of abstraction that are generic enough to be useful in multiple places.
4. Abstraction as model. This is your definition I believe. It is about removing the details of reality you don't care about.
Have there been any discussions on newer memory models, like Rust's borrowing, as opposed to RC/GC?
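For anyone unfamiliar with the distinction, a minimal sketch of the two models in Rust itself: borrowing lets the compiler verify aliasing statically with zero runtime cost, while `Rc`/`RefCell` move the ownership and aliasing bookkeeping to runtime (the closest analogue in Rust to an RC/GC-style model).

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Borrowing: the compiler proves statically that this shared reference
// is valid for the call; no runtime counters or checks are involved.
fn sum_borrowed(items: &[i32]) -> i32 {
    items.iter().sum()
}

// Reference counting: ownership is shared, each Rc::clone bumps a
// runtime counter, and RefCell performs the aliasing check (the job
// the borrow checker does statically) at runtime instead.
fn sum_shared(items: &Rc<RefCell<Vec<i32>>>) -> i32 {
    items.borrow().iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(sum_borrowed(&v), 6);

    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared); // runtime count goes from 1 to 2
    assert_eq!(Rc::strong_count(&shared), 2);
    assert_eq!(sum_shared(&alias), 6);
    println!("ok");
}
```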
Can somebody remind me why software must be maintained forever and is never finished? Why can't users compose different existing solutions, instead of every application having to solve every distantly related problem (poorly)?
Are there principles such that, if they were followed, they would yield software that is guaranteed to be easy to understand and maintain, at least to some minimal degree? What if each module were limited to a small number of inter-related conceptual elements, and written in carefully crafted English (or whatever human language it uses) in such a way that someone could come to understand and grasp it within a few minutes?