Inspired by
https://futureofcoding.slack.com/archives/C5U3SEW6A/p1672756060085649 and
https://futureofcoding.slack.com/archives/C5U3SEW6A/p1672816690782479, plus my daily work with a Smalltalk system, I started thinking about high-level architectures of information processing systems.
Spreadsheets are two-layer systems, with a data grid on top and a code grid below it. That's a good architecture for dealing with heterogeneous grid-shaped data and shallow computation. For homogeneous grid-shaped data (arrays, data frames) you'd prefer to compute with the grid as a whole, and for complex/deep computation, you want a more code-centric architecture. You can of course prepare the complex code in some other architecture and just call it from a spreadsheet. High-level architectures can be composed.
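To make the two-layer idea concrete, here is a minimal sketch in Python (all names are illustrative, not any real spreadsheet's internals): a "code grid" of formulas below, and a "data grid" of computed values on top.

```python
# Code layer: each cell holds a formula, i.e. a function of the data grid.
code_grid = {
    "A1": lambda d: 10,
    "A2": lambda d: 32,
    "A3": lambda d: d["A1"] + d["A2"],  # depends on other cells
}

def evaluate(code_grid):
    """Naively evaluate formulas until the data grid stops changing."""
    data_grid = {}
    for _ in range(len(code_grid)):  # enough passes for any dependency chain
        for cell, formula in code_grid.items():
            try:
                data_grid[cell] = formula(data_grid)
            except KeyError:
                pass  # a dependency isn't computed yet; retry next pass
    return data_grid

print(evaluate(code_grid)["A3"])  # 42
```

The user edits the code layer and only ever looks at the data layer, which is what makes the architecture work so well for shallow computation over heterogeneous cells.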
Dataflow graphs, of which Data Rabbit is an amazing implementation, have nodes containing code and data flowing through the edges. They can deal with irregularly shaped data, even messy data, but, like spreadsheets, they are limited to shallow computation.
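A dataflow graph can be sketched just as minimally (hypothetical names, not Data Rabbit's actual API): nodes contain code, and data flows along the edges.

```python
# Nodes contain code; each receives the outputs of its upstream nodes.
nodes = {
    "source": lambda inputs: [3, 1, 2],
    "sort":   lambda inputs: sorted(inputs["source"]),
    "sum":    lambda inputs: sum(inputs["sort"]),
}
# Edges carry data from an upstream node to a downstream node.
edges = [("source", "sort"), ("sort", "sum")]

def run(nodes, edges):
    """Execute nodes once their upstream outputs are available."""
    outputs = {}
    remaining = dict(nodes)
    while remaining:
        for name, code in list(remaining.items()):
            upstream = [u for (u, d) in edges if d == name]
            if all(u in outputs for u in upstream):
                outputs[name] = code({u: outputs[u] for u in upstream})
                del remaining[name]
    return outputs

print(run(nodes, edges)["sum"])  # 6
```

The shallowness limit shows up immediately: each node is easy to inspect, but nothing in the architecture helps you once the code *inside* a node grows deep.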
A Smalltalk image is a code database built on top of an unstructured "object lake". It's great for dealing with complex code, but has no high-level structure for data. You can, and have to, roll your own. From this point of view, a Smalltalk image is the perfect complement to a spreadsheet or a dataflow graph, having opposite strengths and weaknesses.
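The same sketching exercise works for the Smalltalk image (illustrative Python, not real Smalltalk): a structured, browsable code database sitting on top of an unstructured pool of live objects.

```python
# Object lake: a flat, unstructured pool of live objects of mixed kinds.
object_lake = [{"x": 1}, [1, 2, 3], "hello", 42]

# Code database: methods indexed by (class, selector), organized and
# browsable as a whole -- the structured half of the system.
code_db = {
    ("dict", "size"): lambda obj: len(obj),
    ("list", "size"): lambda obj: len(obj),
    ("str",  "size"): lambda obj: len(obj),
    ("int",  "size"): lambda obj: 1,
}

def send(obj, selector):
    """Dispatch a message by looking the method up in the code database."""
    return code_db[(type(obj).__name__, selector)](obj)

print([send(obj, "size") for obj in object_lake])  # [1, 3, 5, 1]
```

Here the structure lives entirely on the code side: the objects themselves carry no high-level organization beyond what your own classes impose, which is exactly the opposite trade-off from the spreadsheet's grid.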
So... are there more such high-level structures that have proven useful in practice? Is there just a small set whose elements can be combined, or should we expect a large number of unrelated architectures, each good for specific purposes? Note that I am thinking about "good for", not "applicable to". All Turing-complete systems are equivalent in principle, but that doesn't make them good tools for all purposes. My question is ultimately one of human-computer interaction.