# thinking-together
k
Trying to name a concept: The cognitive surface of software https://science-in-the-digital-era.khinsen.net/#The%20cognitive%20surface%20of%20software Has something similar already been discussed, or at least introduced?
❤️ 5
s
I quite like your distinction between cognitive surface/bulk. It reminds me of affordance after Gibson:

> An affordance cuts across the dichotomy of subjective-objective and helps us to understand its inadequacy. It is equally a fact of the environment and a fact of behavior. It is both physical and psychical, yet neither. An affordance points both ways, to the environment and to the observer.

But I wouldn't necessarily say affordance is a better word for what you're after; it just seems to be a useful concept to be familiar with to better understand what you describe. You also point towards the relational character of it, which I find the most important and interesting aspect, and which is a big step outside of what we like to deal with as programmers. Because it introduces context-dependence and requires specific adaptation — which I'm sure you are very open towards, given your involvement with moldable development.

I wrote a lot lately about intelligibility (your comprehensibility) of systems in a series about simplicity, which seems extremely related to what you think about. I tried to break out of that one-dimensional simple-complex spectrum, which leads to minimalism or reduction of complexity as the only ways to avoid it, by pulling it apart into two separate dimensions of mechanical and experiential complexity/simplicity, with the former related to the structure of the thing you're looking at and the latter related to your cognitive processing of that structure, which is for instance highly influenced by your familiarity with it. That's actually what my essay for Onward! is about. Anyway, I don't have a better word for you, but if you want to discuss this further, I'm available.
d
In "The design of Everyday Things" by Don Norman, he refers to a similar concept called "Discoverability", which refers to whether it possible to determine what actions are possible and the current state of the device." This is is in order to build a "System Image", or a conceptual model of the system you are using, i.e. the the parts and their state, how they relate, how it might change, etc.
I do like the extension of thinking about the degree to which a system communicates its state and how it works, though it is dependent on the user's knowledge and ability to pick up on affordances. But perhaps you can refer to what a system is attempting to communicate independently of an observer's ability to read it. And you can specify where that communication occurs, especially whether it's apparent through the course of usage or whether it requires digging deeper into things like documentation for purely informative purposes.
k
@Stefan @Dennis Hansen Thanks a lot for your feedback! Gibson's affordances are definitely a concept I should refer to. There's a difference, however, which matters to me: my notion of comprehensibility includes not only "what can this tool do for me" but also "does this tool do anything behind my back that I am not aware of". Spyware would be the obvious case, but I have also seen many well-meant small automations in computational science that turned out to contradict the expectations of some users. The relationality of the concept is very much intended. It may also be interesting to judge the amount of information that a system makes accessible independently of what the observer can make of it, but for now I prefer to concentrate on the human-computer interaction aspect.

@Stefan I just re-read the essays you referred to. Good observations; there's definitely a "sweet spot" in complexity, rather than "the less the better". On the other hand, for the specific case of symbolic reasoning (mathematics, software), I can't come up with an example of "too boring because not complex enough". Elegance in mathematics has always been about minimalism, even though different people value different aspects of it differently (e.g. size of a symbolic expression vs. number/complexity of the definitions and theorems required to support it).
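To illustrate the "behind my back" point, here is a minimal hypothetical sketch in Python (the function, the file format, and the silent behavior are all invented for illustration, not taken from any real system):

```python
# Hypothetical sketch of a well-meant automation acting behind the user's back.
# All names and behavior here are invented for illustration.

def load_measurements(path):
    """Read one numeric value per line, silently skipping unparsable lines."""
    values = []
    with open(path) as f:
        for line in f:
            try:
                values.append(float(line))
            except ValueError:
                # The silent skip is the problem: it never shows up on the
                # tool's cognitive surface, so a user computing a mean will
                # never learn that records were dropped.
                continue
    return values
```

The skipping logic is invisible to anyone who only looks at return values, so it lives in the bulk rather than on the surface.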
s
@Konrad Hinsen This is super interesting to me, sorry if I take this somewhere weird or not that interesting from your perspective: When I think of intuition in mathematics, I imagine things like understanding the term rewriting steps of the proof of, say, Pythagoras' Theorem through symbol manipulation as an analytical/mechanical understanding. Whereas I'd say understanding a geometric proof of the same theorem is at least utilizing some of our intuitive/experiential understanding. While I'm at it, let me add another example from another domain I know little about. 🙂 The way I understand moldable development, I would assume that you relatively quickly notice that the same data structure with the same data in it varies vastly, across different visualizations, in how easily the data and its structure can be interpreted and understood. So in your words, if I understand correctly, some of those representations have a larger cognitive surface than others. I just wonder, and would love to hear your thoughts on this: In mathematics, is it really the nature of mathematics, or more specifically the nature of formal systems and reasoning, that it is just about mechanical/analytical complexity, or have we just settled on a default representation that primarily affords mechanical/analytical reasoning and understanding? And as a consequence we (are led to) ignore (overlook?) intuitive aspects of it? Asking for a tech philosopher who is worried that in software development we are doing exactly the same…
k
@Stefan Lots of good questions... Easy one first: yes, I do believe that different visualizations of the same data structure have different cognitive surfaces, but this depends on the application context and not just on the data structure. If you talk to professional mathematicians, they regularly point out the importance of intuition in their work. In particular, the discovery of interesting relations is mostly a matter of intuition. Formal systems have two roles: (1) supporting intuition by filling in the details, and (2) constructing proofs. Proofs are extremely important because they convert the intuition of individuals into collectively accepted knowledge. That's also why the mathematical literature heavily emphasizes formal approaches. Computing has inherited this tradition, via Church, Turing, and others. It's not just a matter of symbolic representation affording this type of reasoning; I'd say it's baked even into our hardware. There are lots of different ways to process information. Biological organisms process information contextually, meaning informally. With AI we are taking first steps into technological information processing that is contextual rather than formal. I guess we will see more of that. But I also believe that symbolic reasoning in the tradition of mathematics is here to stay. It's just too useful.
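To make that first point concrete, a minimal Python sketch (the data, field names, and formatting are invented for illustration): the same data structure rendered two ways, with quite different cognitive surfaces for the task of comparing entries.

```python
# One data structure, two renderings. Data and field names are hypothetical.

runs = {
    "run-1": {"steps": 1000, "loss": 0.42},
    "run-2": {"steps": 5000, "loss": 0.17},
}

# Rendering 1: the default repr -- complete, but hard to scan.
print(runs)

# Rendering 2: a small aligned table -- the same information, but a larger
# cognitive surface for the task of comparing runs.
print(f"{'run':<8}{'steps':>8}{'loss':>8}")
for name, r in runs.items():
    print(f"{name:<8}{r['steps']:>8}{r['loss']:>8.2f}")
```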
s
Oh, yes, it's not either-or. Symbolic reasoning isn't bad or wrong; it is obviously useful. It's more of a "too much of a good thing…" situation, as we neglect other important things. Polarization also comes primarily from analytic thinking: if there are two polar opposites, it must mean either-or, as if we can pick only one. But (1) in complex systems polar opposites are just a model, an oversimplified approximation of what's really going on, and (2) it's about… well… some people would use the word balance, you used the term "sweet spot", but that's still too static. It's a dynamic balance that differs based on context, so we need not just the intelligence to know what we're talking about, but also the wisdom to decide how much of each is needed in which situation. Context, however, is what we usually try to avoid; in science in favor of discovering context-independent universal rules, and in programming in favor of reuse and scale. I'm optimistic though, as it seems that we are slowly waking up to the limitations of trying to brute-force everything with pure logical reasoning and as little context as we can get away with. Maybe that's just because I'm reading this at the moment…
Oh, and if that wasn’t clear, I’m also optimistic because I read your post pointing out the importance of context, the relational nature of comprehensibility, and potential issues with epistemic opacity. Did you just add that? Miraculously, I hadn’t heard this term before, otherwise I probably would’ve used it in half of my writing.
Fascinating! The segregation between scientifically so closely related domains is mind-boggling. Turns out there is a subset of computer science that reinvents a subset of cognitive science. And judging from the list of references they have never heard of each other, which explains the different terms used. Also explains why I never came across any of these interesting sounding papers. This is wild! I now believe in parallel universes.
k
By now I could compile a book listing parallel universes in research. Everyone talks about "interdisciplinary", and yet the dividing lines between disciplines are stronger than ever before. It looks like we pretty much agree on the importance of context and the limitations of formal reasoning. And my impression is that that's where many disciplines are heading, each at its own pace. It goes well with another slow movement I am seeing towards perspectival realism as a philosophical basis for science.
@Stefan Just read your "Meaning-ful Design". The final three points/questions sound very much like Illich's "convivial tools", and that was the starting point for my journey that led me to my tentative "cognitive surface" concept. Unintelligible automation is a major obstacle to conviviality. BTW, the first of your three questions is roughly the topic of my submission to Onward! Not exactly the same perspective, but the same direction of inquiry.
s
@Konrad Hinsen Thanks for reading and taking the time to comment! I'm aware of Illich, of course, but haven't (yet) read Tools for Conviviality. What I wrote about in that post is my synthesis of cognitive science research, and it's not the first time that common themes surface across relatively separate disciplines. It does look a lot like clever people in different domains keep discovering similar patterns. I ended up here ultimately through George Lakoff (linguistics) and Christopher Alexander (architecture/design theory). What I began looking into ~5 years ago seemed to me like an earnest interdisciplinary effort between at least psychology, linguistics, anthropology, and philosophy. That's how I found my way, jumping between different disciplines. There were connections to computer science and AI research too, but I've been suspecting those disciplines are much more fragmented since you introduced me to the "_epistemic opacity_ bubble". There's some irony here: for a discipline that I like to accuse of having scaled complexity through hiding things, to the point that nobody has a complete overview anymore, it seems appropriate that its research is also fragmented… Anyway, I'm looking forward to reading your Onward! submission. I'd love to learn more about your different perspective, and I can only think it's wonderful that we come at it from such different origins.
👍 1
k
I see the link between these converging directions of thought in postmodernism. Modernism is the quest for dominance of humanity over nature through rationality. Postmodernism is the critique of dominance and control. The conflict between the two is the cause of the meaning crisis. What's missing so far is the way out: critique shows problems, but offers no solutions. In a way, this forum is about postmodernism applied to information technology.
💯 1
In my Onward! submission, I take another step back (as a side note in the conclusion), observing that Daoism pointed out millennia ago that the human taste for dominance is a source of problems, and proposed "wuwei", effortless or minimalist action, as the answer.
s
If this was a cocktail party, we'd be two guys in the corner, having a great conversation, and every now and then a third person comes up, listens for a few seconds, then rolls their eyes and walks away. 😄 At least that is how it feels to me most of the time. Maybe I haven't been to the right cocktail party yet? I have read Illich's Tools for Conviviality now and I can see the parallels. He was influenced by Erich Fromm, and my argument is based on Fromm's existential modes. Alexander must've also been influenced by Illich. I'm happy that my framework integrates well into the larger picture, but I do hope that my conclusions are already a little more specific today, and will be a lot more specific tomorrow. At least for people who wish to create better stuff in this world that is impacted by the issues Fromm, Illich, Alexander, and all the others describe. Now, how do you see this kind of philosophical discourse; why did you write a paper about this? Is this just because you're personally interested? Was it just a personal challenge? Do you think it will have an impact of some kind? I'm still struggling with closing the loop from having a robust framework that integrates well with the ideas of bigger-picture thinkers to getting software people excited about this. Ironically, I think that until we present the utility and convenience/efficiency gains of such a different world view or approach to design, few will spend any brain cycles on this. Do you see the same issue, or do you have reason to be more optimistic?
k
There's a place in the software universe for convenience and efficiency, but there is also a place for "power user" technology that takes time to master but empowers its users. Such ideas floated around in the 1960s to 1980s (Iverson, Engelbart, Kay, ...) but were abandoned when software became an industry, with professional developers making products for end users, with epistemic opacity becoming a desirable feature. But the convivial power user systems of the past (Lisp, APL, Smalltalk) are still alive, which is why I am cautiously optimistic. They represent small niches, which makes it hard to evolve them to better meet today's needs. The angle of attack I am pursuing personally is trying to convince my professional environment, computational science, that industrial software is by construction unsuitable for us. When talking to my colleagues about this, I see a lot of agreement in principle but hardly anyone who believes it can be done. I hope to be able to prove them wrong. Maybe this will not work out, but as long as I am having fun along the way, I don't care!
💯 3
❤️ 2
b
Love the "surface/bulk" distinction you are making. One unfortunate aspect of cognitive surface term: closest terms that come to mind are API surface, attack surface where smaller=better, and my initial assumption seeing the title was "how much you need to grok to use it" (again less=better). Whereas you mean "how much of it is easy to grok" (more=better!). But I don't have any better term. Hmm direction of "better" is a false dichotomy, one can desire both meanings. Ideal software has "low floor high ceiling", both functionally and cognitively. Sounds like "surface/bulk" is not enough to capture that, you'd need at least 3 terms? Or more generally draw a learning curve?
The old joke about learning curves (many variations exist):
More seriously, I tried to map these concepts to a curve, and it looks like "low floor" wants to be measured in learning effort, while "high ceiling" in reachable ability? At least that fits how I usually understood that phrase... Your division of cognitive surface/bulk then becomes a 2nd (possibly much further to the right) rise in ability, unlocked with "specialist knowledge, specialist tools, or with significant effort". (A framing I'm learning from reading Nardi: it's not that users can't learn it, it's more that it deals with CS stuff unrelated to the problems they want to solve and is less attractive to invest time in.) I'll try a toy model of such a curve after the list below.
Are there more things to measure on a typical curve? Must any software curve have these parts (in varying proportions)?
• "free software" ideals strive to pave over that 2nd plateau, giving you non-negligible gains that justify gradually growing into a developer.
• "end user programming" ideals recognize users may not care for that; instead they create many smaller, earlier plateaus to climb. Instead of "user/developer" there are more levels, like "user that knows VLOOKUP()", "user that learns pivot tables", "user that can record a macro", etc... [I drew this new curve here in the blank space above as if it's a strict improvement on non-EUP software, but that's not necessarily true.]
• To reuse the Emacs example, a central goal of embedding an interpreter was that almost no one should ever need to touch the compiled C parts. If you'd count ELisp as "surface" and C as "bulk", then it pulled off being 90% surface! Well, obviously that's too crude a division; the ELisp one puts in .emacs (in mode M, bind key K & configure var V) is nothing like, say, font-lock.el, which, while looking like well-commented ELisp, is deeply optimized & coupled to the C... It also, explicitly, used the strategy of adding intermediate levels of proficiency like "customize keys & variables".
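Here is that toy model as code (a minimal Python sketch; all thresholds and ability gains are numbers invented purely to illustrate the shape of the curves, not measurements of anything):

```python
# Toy model: ability as a step function of invested learning effort.
# Each plateau is an (effort_threshold, ability_gain) pair; you unlock
# a gain once your invested effort reaches its threshold.

def ability(effort, plateaus):
    return sum(gain for threshold, gain in plateaus if effort >= threshold)

# "Traditional" software: a low floor, then one huge user->developer cliff.
traditional = [(1, 10), (50, 90)]

# EUP-style software: many smaller, earlier plateaus
# (VLOOKUP -> pivot tables -> recorded macros -> ...).
eup = [(1, 10), (5, 15), (12, 20), (25, 25)]

for effort in (1, 5, 12, 25, 50):
    print(f"effort={effort:>2}  traditional={ability(effort, traditional):>3}"
          f"  eup={ability(effort, eup):>3}")
```

Whether the EUP curve really dominates everywhere is exactly the caveat in the bracketed note above.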
Obviously a curve can't fully capture "What is and what isn't comprehensible depends on the person". It's a 1D reduction of multi-dimensional skills. (For developers too! Given 2 software of similar functionality, I might prefer the one written in python in more functional style, you might prefer the smalltalk one in more OOP style etc.) But it's a useful simplification.
k
Thanks @Beni Cherniavsky-Paskin for turning my question into a dissertation! I hadn't thought of "surface" being seen as something one might intuitively want to minimize, but that's of course a reasonable point of view as well. And my cognitive surface is inevitably linked to attack surfaces and other undesirables: what's easier to grok is also easier to attack. That's in fact an aspect that I find missing from many discussions of EUP: the more you can adapt the code, the more responsibility you have to take for it. Learning curves are indeed relevant to my topic, but not quite the same. The learning curve shows user ability, which is more than understanding the tool. It's also learning how to profit from a tool's affordances in diverse contexts. A learning curve can grow to infinity (in theory). A cognitive surface is always finite: at some point you know all about the software, even if you can still develop your ability to use it well.
👍 2