# thinking-together
d
A thought that’s been crystallizing for me is that the essence of ‘coding’ is modeling & simulation (not e.g. data and functions). These themes show up all the time in FoC contexts, but as far as I can tell they’re rarely the ROOT metaphors of a system. Of course there are plenty of examples in “real engineering,” what Alan Kay refers to as CAD<->SIM->FAB system. Do you know of examples of ‘convivial computing’ projects where modeling and simulation are the main event, or readings on the topic? What do you think of this premise? Here’s a recent Quora answer for more context: Does Alan Kay see any new ideas in computing?
“New” is not what I look for. “Ideas that make a qualitative difference over past techniques” are what I’d like to see.
Years ago, I’m fairly sure I was aware of pretty much everything regarding computing that was going on in the world. Today, I’m definitely not aware of everything, so it’s reasonably likely that if there was something really great being done somewhere that I wouldn’t know about it.
I would be most interested in learning about “qualitatively more expressive” programming that is more in line with top-level engineering practices of the CAD<->SIM->FAB systems found in serious engineering of large complex systems in the physical worlds of civil, electrical, automotive, aeronautical, biological, etc. engineering.
In the CAD<->SIM part I’d like to see the designs understandable at the level of visualizable semantic requirements and specifications that can be automatically simulated (on supercomputers if necessary) in real-time, and then safely optimized in various ways for many targets.
Isolating semantics in the CAD<->SIM part implies that what is represented here is a felicitous combination of “compact and understandable”.
The FAB-part pragmatics are very interesting in their own right, and besides efficiencies, should be able to deal with enormous scaling and various kinds of latencies and errors, etc.
The above would be the minimal visions and goals that I think systems designers within computing and software engineering should be aiming for.
I’m not aware of something like this being worked on at present, but these days this could be just because I haven’t come across it.
👍 1
i
Simulink, yakindu, and the lifecycle modeling language (LML) folks all come to mind.
And of course SysML
Though I think LML is a better approach
d
Thanks Chris! I guess I still see those tools in the CAD/SIM/FAB camp, at least Simulink and yakindu, though I prob need to make another category for “business processes” (which I still don't think the UML family has done a great job solving). For me 'convivial computing’ intends something I might use to build/edit my own tools (notes, todos, etc) in a live environment.
Huh i hadn't taken a close look at LML though, that does look worth knowing
k
Also Netlogo which is getting discussed at https://futureofcoding.slack.com/archives/C5U3SEW6A/p1635772261029400?thread_ts=1635772261.029400&cid=C5U3SEW6A Dalton, are you suggesting using simulation metaphors for things like programming email filters? 🤔 Would HyperCard and HyperTalk fit the bill?
d
this will probably sound overly philosophical but i’m not trying to be.. just what seems to happen when i try to coherently define terms.

first off, science & engineering are almost always based on the assumption that we share a common ‘reality,’ which we come to understand through our band-limited, unreliable faculties of sensing and cognition. i will adopt that assumption throughout. studies and experience show our mental models to be wildly inconsistent, both internally (‘verification’), measured against reality (‘validation’), and compared to other people’s (‘coordination’), but they form the basis of pretty much every decision we make.

my best working def of a model is ‘something that represents a partial world state’:
- ‘something’ = has to exist in some form to be useful, whether encoded in a tangible object, a pencil & paper sketch, computer memory, neural circuitry, etc.
- ‘represents’ = ultimately in the eye of the beholder; requires some pre-shared bootstrapping model/implementation to be useful (e.g. among humans we have near-universal experiential primitives like ‘dark/light’, ‘hot/cold,’ etc)
- ‘world’ = all of reality
- ‘partial world’ = some subset of reality
- ‘state’ = some configuration of that subset of reality (with implicit or explicit precision/likelihood)

representation is kind of subtle even beyond coordination (alan kay’s ‘communicating with aliens’)... perhaps the most basic representation is a simple ‘reference’, which still encodes the assumption that there’s a ‘something’ that’s persistent and recognizable on the other end (‘object permanence’). i don’t see a flaw in saying ‘pointers’ are the most basic form of stateful cognition (and subject to the same foibles as c pointers... is the referent still there? can it still do the same things? does it still have the same properties? has it been replaced by an evil twin?)

models can serve a bunch of different roles. e.g.
a model can communicate an observation of what i claim the current world state to be, an instruction representing the world state i want a system to produce, or an imagined scenario to reason about.

‘simulation’ is a bit slippery; to me the useful primitive to start with is ‘an ordered sequence of models,’ in which case simulation is something like ‘an ordered sequence of models representing the time evolution of a particular model according to some update rule’

one could think of science as ‘a process for finding models that best represent the world and finding rules for updating them that predict future world states’, design as ‘a process for defining models of how we want the world to be,’ and engineering as ‘a process for implementing rules (science) in order to achieve desired world states (design) based on current world states (science).’ ‘programming’ tends to muddle all 3 together, whether explicitly or (usually) implicitly.

all that said, yeah, @Kartik Agaram programming email filters is squarely in the realm of what i’m talking about. you start with a model of reality that also includes your computing environment - your machine, its OS, your browser, the mail server, etc etc, which you can always drill down from whatever abstraction you’re dealing with if needed. you define a model for what unfiltered email is like (‘science’), a model of what you want your filtered email to be like (‘design’), and come up with (science) & implement (engineering) rules you think will achieve that. simulation is a powerful tool to aid in the ‘coming up with and implementing rules’ part (’what happens if i project this rule on this inbox model over time?’). to close the loop you also want some nice tools to see if it’s working how you want it to. netlogo enables some of this in a very ‘science-y’ not ‘user-y’ context. hypercard gives you some nice tools for very ad hoc experimentation. wavelength check?
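to make the email-filter framing concrete, here’s a rough python sketch (all the names - `Email`, `looks_like_spam`, `simulate_inbox` - are mine, purely illustrative): the inbox is a model (plain data), the filter is a rule (a function), and ‘simulation’ is just the ordered sequence of inbox states you get by projecting the rule over incoming mail.

```python
from dataclasses import dataclass

# A 'model' of an email: just enough state to reason about filtering.
@dataclass(frozen=True)
class Email:
    sender: str
    subject: str

# A 'rule' (engineering): our hypothesis about what should be filtered.
def looks_like_spam(email):
    return "winner" in email.subject.lower()

# 'Simulation': an ordered sequence of models -- here, a snapshot of the
# inbox state after each incoming message is filtered by the rule.
def simulate_inbox(incoming):
    inbox, states = [], []
    for email in incoming:
        if not looks_like_spam(email):
            inbox.append(email)
        states.append(list(inbox))  # snapshot each world state
    return states

states = simulate_inbox([
    Email("mom@example.com", "dinner sunday?"),
    Email("promo@example.com", "You are a WINNER"),
    Email("boss@example.com", "quarterly report"),
])
print([e.subject for e in states[-1]])  # the two non-spam subjects
```

the ‘close the loop’ part is then inspecting `states` (all the intermediate world states, not just the final one) to see whether the rule is doing what you designed.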
d
*written before I saw the above* Absolutely! For me personally, "the essence of ‘coding’ is modeling & simulation" is exactly where I've come from in everything I've done in this space. I wouldn't just cross out "e.g. data and functions" as a result though - you can't exactly throw those out! Even in the most modelly and simulatey world, you'll need to present data to the user and to hold current states. And the behaviours of those states are basically going to boil down to something like functions, even if presented more abstractly in pretty graphics. I mean, even Excel - the financial modeller/simulator - has those!

For me, programming is creating new realities or simulating existing ones. I often refer back to the early Macs, which introduced to the world a word processor that made the page actually look like the printed page you'd end up with, instead of glowing text floating on a black background that bore no relation to it. This is modelling or simulation of printed paper. In social media or chat you're modelling or simulating the relationships between people: their social graph. You're simulating them talking (or perhaps passing little paper notes to each other!) Of course, 3D virtual worlds and Augmented Reality are the extreme of this position, as is the programming of IoT devices.
👍 1
d
i’m tracking. to try out my terminology (i know you wrote this before reading), i’m def not throwing out data and functions but kind of putting them in their place. ‘data’ is the encoding of models in some reconstructable form. ‘functions’ are rules for transforming models (very useful in simulation or representation). representation is a separate issue.. can be whatever makes the most sense in context: text, diagrams, interactive widgets, spatial audio. a missing concept here is ‘linking models together.’ all good examples of modeling and simulation, though i wouldn’t say 3D VR is necessarily the extreme.. e.g. you can have rich representation capabilities but very poor modeling & simulation capabilities. this is partly why a lot of MMOs, despite flashy graphics, struggled to achieve the immersive quality of text-based MUDs. IMO it’s getting easier & easier to hop around these layers while programming and it can be hard to keep track of where the lines are drawn, to the detriment of comprehension.
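a tiny sketch of that three-way split, in python (the todo-list domain and all names are hypothetical): the same model gets *encoded* as data, *transformed* by functions, and *represented* separately - three distinct layers, easy to blur.

```python
import json

# The 'model': a partial world state (here, a todo list).
todos = [{"task": "write draft", "done": False}]

# 'Data' = encoding the model in some reconstructable form.
encoded = json.dumps(todos)
assert json.loads(encoded) == todos  # the encoding round-trips

# 'Functions' = rules for transforming models.
def complete(model, task):
    return [dict(t, done=True) if t["task"] == task else t for t in model]

# 'Representation' = a separate concern; here, plain text for a terminal,
# but it could just as well be a diagram or an interactive widget.
def render(model):
    return "\n".join(("[x] " if t["done"] else "[ ] ") + t["task"] for t in model)

print(render(complete(todos, "write draft")))  # -> "[x] write draft"
```

swapping `render` for a different representation touches nothing in the model or its transformation rules - which is roughly the ‘putting them in their place’ point.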
d
On the new post, I agree with everything up to the email filter bit, which I'm still digesting but it's not resonating as yet!
s
engineering as ‘a process for implementing rules (science) in order to achieve desired world states (design) based on current world states (science).’
You could have engineering without science, based purely on empirical data, no? Talking about models, you might find this write-up interesting: https://blog.khinsen.net/posts/2020/12/10/the-structure-and-interpretation-of-scientific-models/. It talks about empirical vs explanatory models.
design as ‘a process for defining models of how we want the world to be,’
Do you mean something like "designing a car" or "designing a language/medium". One slippery thing about models is that the model definition language (~medium) is also designed.
💯 1
d
@daltonb It’s unclear how useful it is to define ‘world’ to be “all of reality” - what is not “all of reality?” If you are talking about “perceived reality” perhaps… but one of the goals of science (and art for that matter) is to expand our perception. So you get into some possible contradictions there, depending on which way you go defining things. In the end you can define a model as something like a category of morphisms. And the reason you might want to have a model is to understand the structure of interaction of the underlying objects, i.e. the ones you are mapping between. (Why Smalltalk was really about messages not the objects, for example…)

On the engineering and science side… engineering can, and did, flourish without science. One of the great triumphs of the 20th century is the marriage of science and engineering, which is why modeling (not necessarily scientific) is so prevalent in ‘trying things out.’
d
thanks @shalabh, good read. he’s definitely grappling with the same things & i agree with a lot of the intuitions there. i think the decision to make ‘observation’ & ‘modeling’ the two pillars of science leads to a bit of a muddled discussion of ‘empirical’ vs ‘explanatory’ models (where the ‘prediction’ part gets lumped in). i think it’s more clear to talk about three pillars - ‘coming up with representations for what’s out there’ (modeling), ‘checking how well a given model matches up with what’s actually out there’ (observation), and ‘finding rules for keeping models in sync with reality’ (prediction).

to be a bit more explicit i was building up a sort of ‘DSL’ by making my own definitions of ‘science,’ ‘engineering,’ and ‘design’ in terms of modeling, since all three are pretty overloaded and lack canonical definitions. by that definition once you’ve decided what outcomes you want (‘design’), then ‘engineering’ is about picking the most relevant models & rules from ‘science’ to achieve that outcome with your available resources, and bridging the gap between those models/rules and the more complex reality they describe (which can of course feed back into science, leading to better models & rules). sometimes you have to make do without much help from science (if existing models/rules aren’t useful), and perhaps brute force your outcome through ‘guess & check’ (science is still responsible for the ‘empirical data’ in the ‘check’ part though, and hopefully gives some intuitions to constrain the state space on the ‘guess’).

as for model definition languages being designed - that’s indeed the crux of a lot of the confusion in our tools, and part of why this feels like an important thing to focus on. ultimately that’s true for all of them EXCEPT the one baked into our brains; everything else is a translation layer.

@Daniel Krasner good point i introduced some unnecessary indirection with ‘reality’/‘world’ vs ‘partial world’...
could have just started with ‘model = something that represents a partial state of reality,’ then use ‘world’ when modeling to mean some (inclusive) subset of reality covered by the model. btw i think the scare quotes on ‘reality’ may have been confusing too... overall i just wanted to acknowledge that there are other conceivable priors besides the ‘objective shared reality’ one, but it’s the one i’m adopting here along with the rest of ‘science’. the general extrapolation science makes from there is that if we’re all perceiving the same ‘reality’ through tiny/distorted keyholes (‘perceived reality’), we can come to a better consensus of what’s out there by testing hypotheses and sharing notes, which i think is best formalized as modeling, simulation (prediction), and observation.

as to whatever mathematical formalisms you pick to structure your models, in the end they also have to be ‘hooked on’ to reality somewhere (which is why a model’s structure can give insight about the structure of a ‘hooked on’ referent).

my current perspective is that ‘OO was really about messages’ was a kind of ‘v1’ explanation from Kay & a bit confusing in the end (for me at least.. e.g. does a hammer pass messages to a nail? it gets awkward for ‘direct’ sensing & manipulation), and he got better at communicating the deeper issues after that. i guess now i see Smalltalk’s ‘message passing’ as more of an implementation detail for ‘decoupling enough’ within a ‘unified enough’ medium. the real sauce is components that are themselves capable of ‘universal structure’ (the ‘art of the wrap’), while also able to ‘hook on’ to other components as desired (not just a matter of message passing but also ‘alien communication’). the ‘metamedium’
s
that’s indeed the crux of a lot of the confusion in our tools, and part of why this feels like an important thing to focus on
I agree. When programming, I'm not sure I can clearly distinguish between "using a modelling language to create a model" and "creating a new model definition language". If we consider a programming language or system as a model definition language, any library or framework written in that system now is a new model definition language. It shares some aspects with the underlying language, but introduces new abstract concepts that can be instantiated to express a new kind of model. You don't even need a rich library for this. A trivial example is if I implement a Graph class in any OO language, and express various concepts in my programs as graphs, I've now created an ad-hoc modeling language where "graphs" can be expressed and processed.

So, in any final system that exhibits behavior, where is the boundary between creation of models vs creation of new modelling languages? Hard to tell. Is it possible that in other fields of science and engineering, we don't usually create model definition languages so easily? Maybe.

I agree 💯 on your original point that coding is not about data and functions. It's really about crafting systems via representations (~models). I'm not a fan of even the usual connotations of a program where you write it then run it. Rather, there are other ways to think about the system-representation correspondence that we should explore.

On the topic of modelling, another paper I really like is Winograd's Beyond Programming Languages in which he talks about the "Three Domains of Description" - essentially three perspectives of such representations that are desirable. https://john.cs.olemiss.edu/~dwilkins/Seminar/S05/winogradPL.pdf
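The Graph example really is as small as it sounds. A toy Python sketch (class and domain names are illustrative, not from any particular library): once `Graph` exists, anything expressed through it is being modeled in the ad-hoc "graph language" rather than in bare Python.

```python
class Graph:
    """A tiny ad-hoc 'model definition language': just nodes and directed edges."""
    def __init__(self):
        self.edges = {}

    def add_edge(self, a, b):
        self.edges.setdefault(a, set()).add(b)

    def neighbors(self, node):
        return self.edges.get(node, set())

# Two unrelated domains, both now expressed in the same 'graph language':
deps = Graph()                    # a build-dependency model
deps.add_edge("app", "lib")

social = Graph()                  # a social-graph model
social.add_edge("alice", "bob")

print(deps.neighbors("app"))      # {'lib'}
print(social.neighbors("alice"))  # {'bob'}
```

A few dozen lines in, every new program is already partly written in this derived language, which is why the model/modelling-language boundary is so hard to pin down.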
💯 1
d
@shalabh good paper there - thanks!

@daltonb The Kay explanation was probably reflecting what he was really thinking at the time when working on Smalltalk. A biological metaphor of how cells communicate, how the system stays stable, etc. So it was more about messages than about objects; the objects part came from Simula, which was a direct influence on Smalltalk. And I would guess that he had a lot of Simula in his head and so named the thing OO, instead of Messages. I’ll try to remember to ask AK about that… The other part which I think is important is that all these objects were supposed to be fully fledged machines (virtual presumably). I’ve heard him say multiple times that “if you want to have something powerful you can’t have anything less than the machine itself” or something to that effect. Kind of like a cell is a self-contained and complete biological entity…

On the modeling side, I think we need to be able to model things like mathematical objects, or branches of math. In some sense category theory is about that. It’s pretty critical to be able to translate/model one domain in/on another - this elucidates structure. What much of math has to do with ‘reality’ depends on the definition of the latter; if your reality includes mathematical constructs then the answer is everything, otherwise not much… But maybe your definition of model needs to exclude these kinds of artificial abstractions, which is worth thinking about. I suppose I think of a simulation close to what you think of as a model.
❤️ 1
b
Interesting thread @daltonb. I am currently on a neuroscience dive that started with thinking about modeling & simulation. It's a bit tangential but sharing since my journey so far has led to more philosophical questions and perspectives on reality and the mind which some of your posts also bring up. I wrote my first multi-agent sim language/engine this year. Once I had that hammer, I started looking for nails and at some point wondered "can we model the brain with a multi-agent sim"? Googling this quickly turned up Minsky's "Society of Mind". Doubly interesting not just for the ideas but because Minsky was writing this after helping invent the category of sim languages (I had been referencing Logo/NetLogo for my own). So now I'm buried in reading SoM and A Thousand Brains and cognitive modules, et cetera.
d
@Breck Yunits I’d add The Emotion Machine by Minsky to the list; it’s a later, in some sense more rounded, discussion of the sorts of things you see in SoM
🙏 1
k
@daltonb Note that my second pillar is not "modeling" but "models". And "observations" refers to the collected information, not the act. In other words, my two pillars are not activities, but items in a (hypothetical) knowledge database. That said, the "pillar" stuff is not what I care most about in this story. The (for me) important part is the depth (in the sense of Bennett's "logical depth", https://en.wikipedia.org/wiki/Logical_depth) of models. My claim is that the most important progress in science comes from deeper models, not from the larger coverage of available observations, which seems to be the main goal of today's data science.
@shalabh There is no solid distinction between models and modelling languages, just like there is no solid distinction between code and data, or between languages, libraries, and programs. Nor between tools and "substance", or whatever you want to call the stuff that tools work on. There's just bits and interpretations of bits, with every collection of bits allowing multiple interpretations. In practice, we prefer interpretations/designs consisting of multiple layers, with "language" (programming or modelling) and "data formats" below "library", "program", "model", "dataset" etc. The biggest mess is the "library" layer. It's the thickest one, and it permits multiple decompositions into thinner layers depending on one's perspective. The layers really make sense only if they can structure dependency relations, but that works less and less well as we continue constructing balls of mud.
👍 1
👍🏽 1