# thinking-together
k
@Bosmon's https://github.com/amb26/papers/raw/master/onward-2016/onward-2016.pdf [1] really buries the lede. Here's what I think the primary proposal is, from deep inside section 6, 80% of the way down. The basic recommendation:
...all design elements be exposed in a structured space of publicly addressible names...
In more detail:
We propose an aggressive program to assimilate the functions of traditional programming languages and their component systems, by stratifying them vertically into two parts:
* On top, an integration domain which encodes not only relations between runtime values, but also the structure of any adaptations expressed in the “virtual class” idiom seen in section 4.3, using the selector structure described in section 6.1.1 to predicate the addresses of these relations and the targets of these adaptations. In order to retain the symmetry implied by our “algebra of differences”, all component structure is expressed within the integration domain, that is, it encodes all classes as well as virtual classes.
* Below the integration domain, then, remains a highly impoverished language dialect that just consists of free functions which express any remnant computation that could not be effectively expressed in the integration domain. The free functions in the impoverished language are addressible through stable names in their own global namespace, and each of them obeys the Law of Demeter strongly, in that they are pure functions of their immediate arguments.
...assimilating the power of reference to the addresses of state into an integration domain which is incapable of computation...
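As one way to make the proposed stratification concrete, here is a minimal hypothetical sketch (in Python, with invented names; the paper is not tied to any such encoding): the lower layer is a flat global namespace of pure free functions, each depending only on its immediate arguments, and the integration domain above it is pure data that relates publicly addressable values by name, performing no computation of its own.

```python
# Layer 2: the "impoverished" computation layer -- pure free functions with
# stable global names, each a pure function of its immediate arguments
# (the strong Law of Demeter described in the quote above).
FUNCTIONS = {
    "tax.flat": lambda price, rate: price * rate,
    "net.total": lambda price, tax: price + tax,
}

# Layer 1: the integration domain -- pure data, incapable of computation.
# It encodes relations between addressable values by referring to functions
# and argument addresses purely by name.
INTEGRATION = {
    "order.tax":   {"func": "tax.flat",  "args": ["order.price", "order.rate"]},
    "order.total": {"func": "net.total", "args": ["order.price", "order.tax"]},
}

def evaluate(address, values):
    """Resolve an address: either a supplied value or a declared relation."""
    if address in values:
        return values[address]
    relation = INTEGRATION[address]
    args = [evaluate(a, values) for a in relation["args"]]
    return FUNCTIONS[relation["func"]](*args)

print(evaluate("order.total", {"order.price": 100.0, "order.rate": 0.2}))  # 120.0
```

Because every relation and every function is addressed by a stable public name, an adaptation can target any of these addresses from outside without touching the definitions themselves.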
[1] Promoted from https://futureprogramming.slack.com/archives/C5T9GPWFL/p1543845454059000?thread_ts=1543729043.042700&cid=C5T9GPWFL
b
Thanks for the digging, @Kartik Agaram. Sadly I am not a particularly skilled writer and this paper has already been through 11 reviewers and 3 rounds of rejection even before it got where it did : P I await the results of better writers and thinkers than me in getting the message out in a way it can be digested.
k
(After a couple of days reading through a few citations..) A problem shared by myself as well. I can totally relate, particularly since one of the big reasons I left academia was that I found rejection hard to take 🙂 You're a better sophont than me for continuing to plug away.

Upon further reflection, I think it depends on the audience this is intended for. Perhaps you're writing for people who think reuse is a) great, b) attainable, and c) just a matter of sufficient foresight. In which case this is a pretty nice way to take them carefully through their own belief system and point out the holes in it. Whereas when I read it, I started out seeing words like 'reuse' and rolling my eyes, then saw reuse defined as "the capacity of a design to empower others to continue the design process via extension or adaption" and did a double-take, and gradually (and with lots of breaks) grokked where you were going.

It's possible I'm still misunderstanding part of the paper, but I found sections 2, 3 and 4 to be an extended sidebar before you returned to your original definition of OAP. Perhaps there are two separate papers here? Then again, I may not have read the whole thing if you hadn't already built up currency with me with your previous paper 🙂 and splitting the solution off into a separate paper would risk the audience never getting to it. It's a hard problem, introducing strange ideas to people. That's why most research stays comfortably in the shallows of ideas its audience is used to.
A couple more detailed comments/questions:

1. While the way you cast seemingly comforting terms like 'reuse' in new settings is a nice linguistic hack, it is a bit double-edged. For example, I gradually grew aware that I think of 'reuse' as the assumption that people can't see inside encapsulated layers like classes. That would be like a C++ `.h` file that doesn't include implementations, or a Java `.class` file distributed in binary form with just APIdoc-generated comments. Whereas you're assuming that you can see inside classes; you just can't (or choose not to) modify them directly. Which leads naturally to the DOM model of hanging listeners and overrides on the skeleton of a given set of classes. I wonder if this distinction is worth explicitly drawing out. Otherwise you're bringing in distracting associations for your reader. (I've had this issue with past papers as well, now that I think about it.) This sort of reasoning is why I lately try to avoid the terms 'reuse' and 'abstraction' (and a few others).

   The statement of the principle at the start didn't immediately paint a crisp picture in my mind. It wasn't obvious why "fire up your text editor and modify a few lines of code" was not following the principle. (https://xkcd.com/722 came to mind.) At the same time, I could imagine other readers with a different background seeing it and thinking, "this is why we have late binding/dynamic dispatch/polymorphism/inheritance/..." So depending on the reader, the principle as stated will seem either trivially satisfied or a nice ivory-tower goal that isn't very actionable. By the time you return to it in section 5, my sense of your meaning is much richer. But perhaps you should start with a problem rather than the principle? Hmm, which arguably is what you're doing in sections 2-4, but I didn't realize that at the time. Maybe you need to condense the problem down a bit? Speaking of that, I found Table 2 quite confusing. Coming from a Lisp background, I found myself wondering if you were just looking for some maximally flexible way to specify lexical and dynamic scope. It was only when I got to section 3.3.2 that I realized that `β` may be deep inside the implementation of `α`. The colons obfuscate that because they make the pattern seem like a superficial use of an existing type.

2. It's hard to discuss a general term like 'reuse', particularly after bringing up economic considerations, without causing me to wonder precisely what is difficult about modifying some code directly rather than adapting or extending it. Might the paper benefit from a focus on a specific language's operational ecosystem? For example, if you were speaking in terms of Python you'd be able to say something concrete like, "it's useful not to modify alpha directly because it comes from a package downloaded from PyPI, and you don't want to fork the upstream package and be responsible for merging in all future changes." Newspeak or Beta may have different reasons, and I'm not familiar with their ecosystems. Why is it not an option to modify these classes directly? What exactly causes the network horizon in these situations? (I have my own answers to these questions, but I'm not sure if my answers match yours.)

3. A slightly less subtle way to beat my own drum:
...“available for use” meant that a module’s content should not be modifiable by its consumers, promoting uses such as caching, verification, etc.
Can we verify use even in the presence of modification, based on type checks, contracts, ahem tests, etc.?
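The "hanging listeners and overrides on the skeleton of a given set of classes" idea in point 1 can be sketched hypothetically (Python, all names invented): the base component is a skeleton of addressable names whose insides are visible, and a consumer hangs an override at one of those addresses rather than editing or forking the source.

```python
# A hypothetical base component: pure data keyed by addressable names.
# Consumers can see inside it, but adapt it rather than modify it.
BASE_COMPONENT = {
    "greeting.template": "Hello, {name}!",
    "greeting.render": lambda tmpl, name: tmpl.format(name=name),
}

# The consumer's adaptation targets an address inside the component,
# shadowing it without touching BASE_COMPONENT itself.
ADAPTATION = {
    "greeting.template": "Bonjour, {name}!",
}

def resolve(component, adaptations, address):
    """Look up an address, letting adaptations shadow the base definition."""
    return adaptations.get(address, component[address])

render = resolve(BASE_COMPONENT, ADAPTATION, "greeting.render")
template = resolve(BASE_COMPONENT, ADAPTATION, "greeting.template")
print(render(template, "Ada"))  # Bonjour, Ada!
```

The upstream component is never forked: the consumer's changes live entirely in their own overlay, keyed to the component's public addresses.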
b
Thanks so much for these insights and comments, @Kartik Agaram - I'm always looking for ways to improve this framing and get the message across more clearly, and this narrative of how the paper's language threw you off repeatedly is gold dust.

The focus on the misleading use of "reuse" is really interesting, and I had a similar kind of trajectory when trying to explain the points to Luke Church - he immediately outraged me by saying that "In industry, the problem of reuse is overblown, in my opinion". In the end we realised that we did understand each other after all, since he was referring to "the phenomenon that currently goes by the name of reuse", which we might describe, following your analysis, as "a concept of reuse that imagines we can get away with respecting encapsulation" - rather than what we could call "unbounded reuse", which is simply reuse that meets whatever the economics of society require.

Unfortunately we have a big problem with terminology. A big "fork in the road" is always whether to neologise and create some deliberately unrecognisable term to avoid misunderstanding - and Infusion certainly does a lot of this - or whether to try to consciously overload and extend some widely recognised term in the hope of establishing a shared landmark. I felt that there was enough unrecognisable terminology in play and that the reader would prefer the landmark - even to the extent that the first submitted version of this paper was called "A New Open-Closed Principle", hoping that presenting the new principle as directly in the lineage of Meyer et al. would make it feel more acceptable. The result was that the reviewers were horribly confused and unable to take on any of the message at all. It seems that my riffing on "reuse" has been similarly problematic, but it seems like such a widely accepted term, especially in wider society (everyone understands what it means to "reuse" a bottle or a chair), that it feels like a big loss to give up on it.

Your point 2 is pretty interesting - in that we can't understand these issues without bringing in the ecosystem. I think you've stated the reasons, operationally, in the way I see them, but I think the reasons we need to be able to adapt code without modifying it are essentially the same in every ecosystem - it is that we can't claim ownership over another community's expressions without paying all the costs to enter and maintain a relationship with that community. This implies that, economically, there has to be another option on the table, to get our work done without having to pay these costs, and we should try to make this other option as cheap as possible, in order to support communities as fine-grained as society needs them to be. So I wasn't sure that exploring the incarnation of this reason wrt a specific language would add much - but perhaps it might. I'd be interested in your own answers and any daylight between the ones we have.

In the light of point 3, I guess there are some interesting semantics in terms of what counts as "modification" and what simply counts as "variation". One would expect to say that any techniques we had for verifying use should have comparable economics in the two cases - in that the techniques for verifying use should themselves have all the beneficial scalings under the OAP as do the base artefacts themselves.
The question of "what audience the paper was intended for" is also a fascinating open question. In fact, you're the first person I'm aware of who has come to this material "in the wild" and tried to make sense of it without any prior connection to the community behind it, so anything you say about problems in its presentation carries a huge weight.

There's a huge "chicken and egg" problem with the ideas, which I think you're drawing out - you say "Perhaps you're writing for people who think reuse is a) great, b) attainable, and c) just a matter of sufficient foresight". But as you and I well know, no such person could ever possibly be convinced by material like this - and in fact would never read to the end of it. It's only because you already had a substantial resonance with the ideas, and had independently constructed your own incarnations of several of them, that you had the patience to wade through it. I think we have to say that in these cases we are writing for a form of "nonexistent ideal reader" - that is, what Lakatos would call the inhabitant of a "rational reconstruction of history". From the vantage point of the next century, when these ideas are universally accepted (!), some historian of science would try to reconstruct the intellectual trajectory that some wholly imaginary "central" member of the field took in order to navigate from the old ideas to the new. And this paper, I guess, is addressed to that imaginary member.

And perhaps whilst there's no single person to whom the entire connected narrative is useful, the paper could be useful as a "grab bag" of arguments - that is, if one finds people in the wild who believe X and Y but not Z, one could then direct them to the bit of the paper where Z is established. And the paper has to be useful to ourselves in terms of even articulating to ourselves what our arguments are. It's fantastically easy to get lost and forget how on earth one's means connect to one's ends, if the chain of links is as long and as diffuse as it is.

And finally, the paper is useful as a "venue for conversations". One particularly marvellous result was that when I presented the paper, a couple of the (roughly three or four) people in the audience who seemed to get the material started a conversation and came to realise that there were connections between their own research programmes that were indirectly suggested by it.
k
Thanks for those responses. Super empathetic to all of them. Just to quickly address one point: I totally agree that your point is valid everywhere. But focusing on a single stack may help get your message across.