# thinking-together
c
Just found this gem while cleaning up my Zettelkasten a bit:

https://youtu.be/MlK0IQmqmCQ?t=2432

"40:41 "using objects at the lowest level in the system and combining the language with this idea of co-routines so I realized that if you had co-routines and persistence you didn't need files because you basically just paused every structure that you wanted to stay around and just let it stay active there's a way of getting away from data structures so a lot of interesting ideas in this thing and on each iteration of these languages.."
"I'll just say one word that the more you can think biology when you're trying to understand objects the better off you are the more you think C or E or indeed any programming language with objects grafted on the further away you are from these things..."
k
On the other hand, this is a great critique of this mindset:

https://www.youtube.com/watch?v=6YbK8o9rZfI

If you build your software out of little computers, every program becomes a distributed program. That seems like terrible complexity.
c
Haha, you found a nice advocate for the functional programming idea. The sad thing is that I think it is a misunderstanding, and maybe a lot of misframing. Have you seen the talk between Joe Armstrong and Alan Kay? 🙂 There is also an HN discussion between Alan Kay and Rich Hickey about how the idea of "data" could be a bad idea. It's sometimes strange, humbling and frightening, but Alan comes from a time and computing culture we now only tell stories about at our campfire here on Slack. Since most of us lack experience in those contexts, it's difficult to relate. But still interesting, I think.
That said, I talked with a friend today, who prompted me in the Alan Kay direction in the first place. As genius as the ideas of his computing culture (and all the people involved) are, they failed to reach broader society, they failed to create a real social impact, and THAT is something to consider, to think about, and to change IMHO.
Consider this talk for example:

https://www.youtube.com/watch?v=Vt8jyPqsmxE

in which Alan mentions: "I don't think our main goal is writing millions of lines of code; what we want is for it to do something..."
to which I noted:
This reads painfully like Dijkstra's effort to bring some methods to the software industry; for whatever reasons, people now DO actually write millions and millions of lines of code. It also reminds me of Joe Armstrong's "The Mess We're In" talk: there was a time when there was too little software, then just about enough, and now there's just too much.
Yes, the thing we now call or perceive as "functional programming" works in this context as an evolutionary thing; it evolved to be the best, or a very good, strategy for the computer industry... but is this what we wanted?
Try to imagine a computer industry, and a society with it, where computer software is 20,000 or 30,000 lines of code, not more. Maybe some individual user preferences expressed as recursive fractal patterns, but that's it. How much else could all these people do? 😉
p
I don't think Alan Kay's ideas "failed to reach broader society." I would argue that they were suppressed by Jobs and Gates, because Kay's ideas made users more self-sufficient and less dependent upon computer companies.
💯 3
Every time Apple started to create something that was creeping towards the power of Smalltalk, it got killed: NewtonScript, OpenDoc, and of course HyperCard. Visual Basic was originally intended as an end-user tool to be distributed with the OS, but it was retargeted for "developers."
👆 1
🤔 1
💡 1
k
If you build your software out of little computers, every program becomes a distributed program. That seems like terrible complexity.
It depends. If you take the idea of following biology really seriously, you end up with layers of organization in which each layer's complexity does not mess up the layers above it, because they work in terms of emergent phenomena. This approach has never been tried in computing, as far as I know. All our computing technology is based on the idea of assembling ever more complex formal systems from ever more rigid components. The biological approach would aim for informal systems, systems that adapt to context. If that approach works, which we don't know, it will probably solve different problems than today's formal-system computing. So I think there is a place for both in our future.
🙏 1
💡 1
👆🏽 1
👆 3
My crystal ball tells me that machine learning has the best chances of developing in this direction. It doesn't really require Turing-style computing machines at the lowest level, so I expect very different hardware in the future. But the result will be something different from today's computers. Not better, not worse, different.
c
Thank you @Konrad Hinsen for explaining the complexity issue and for the perspective of a different style of computing 👌 I agree with the idea of machine learning being a potential tool to reveal these patterns. I think it is because we have created so much software that it has become so difficult for individual humans to see the patterns by themselves.
I would even say that these two things, the formal and the informal approach to computing, exist in a kind of dialectical relationship. You need both! Thesis and antithesis form a synthesis, which then becomes the next thesis. I have a strong hunch that this spiral will lead to some very interesting insights and things. 🙂
k
💯 We need both, and in fact we have always had both, but the dialectic relationship has never been properly equilibrated. If you look at computers+users as socio-technical systems, it's humans that do most of the informal work. That's what I expect to change with machine learning. There are also interesting hybrids, such as differentiable programming.
n
There's also not much work on combining ML approaches with formal reasoning. Alan Kay has talked about Daniel Kahneman's two modalities, a fast System 1 and a slow System 2; that might serve as a good analogy for combining a large multi-billion-parameter ML model (System 1) with something like OpenCyc (System 2) for reasoning.
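To make that pairing concrete, here's a minimal toy sketch (all names and the toy task are invented for illustration, not drawn from any real System-1/System-2 implementation): a fast, fallible heuristic proposes answers, and a slow formal checker verifies them, falling back to exhaustive search only when the guess fails.

```python
# Toy System-1 / System-2 pairing (hypothetical illustration):
# System 1 makes fast, fallible guesses; System 2 slowly verifies
# them against formal rules. Task: largest prime factor of n.

def system1_guess(n: int) -> int:
    """Fast heuristic: guess that n itself is its largest prime factor."""
    return n  # instant, but wrong whenever n is composite

def system2_verify(n: int, candidate: int) -> bool:
    """Slow formal check: is candidate a prime factor of n?"""
    def is_prime(k: int) -> bool:
        return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))
    return is_prime(candidate) and n % candidate == 0

def hybrid(n: int) -> int:
    guess = system1_guess(n)
    if system2_verify(n, guess):
        return guess
    # Fall back to exhaustive formal search when the guess fails.
    return max(p for p in range(2, n + 1) if system2_verify(n, p))

print(hybrid(13), hybrid(12))
```

The point of the sketch is only the division of labor: the informal component is cheap and unreliable, the formal component is expensive but trustworthy, and the system accepts only what the formal side has verified.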
k
I am aware of two approaches for combining ML with formal reasoning: differential programming (quite active), and the construction of symbolic models by ML (seems rare). There is certainly room for more.
n
I think I've confused differential with differentiable (i.e. using formal techniques to optimize gradient finding in large ML models).
Do you have any good refs on the first approach?
k
A nice example is this: https://github.com/SciML/DiffEqFlux.jl It implements a combination of differential equations (formal) with neural networks (informal).
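DiffEqFlux itself is Julia, but the underlying pattern can be sketched in a few lines of plain Python (weights and function names here are invented placeholders, not the library's API): a formal fixed-step ODE integrator whose right-hand side is an informal learned function, a "neural ODE" in miniature.

```python
import math

# Minimal sketch of the neural-ODE pattern (not DiffEqFlux, which is
# Julia): the formal part is a deterministic Euler integrator; the
# informal part is a tiny one-hidden-layer network standing in for the
# unknown right-hand side dy/dt = f(y). The weights are arbitrary
# placeholders; in practice they would be trained by gradient descent.

def neural_rhs(y, w_in=(0.5, -0.3), w_out=(1.2, 0.8)):
    """A tiny 'network' approximating dy/dt as a function of y."""
    hidden = [math.tanh(w * y) for w in w_in]
    return sum(h * w for h, w in zip(hidden, w_out))

def euler_integrate(rhs, y0, t0, t1, steps):
    """Formal component: fixed-step Euler integration of dy/dt = rhs(y)."""
    dt = (t1 - t0) / steps
    y = y0
    for _ in range(steps):
        y += dt * rhs(y)
    return y

y_final = euler_integrate(neural_rhs, y0=1.0, t0=0.0, t1=1.0, steps=100)
print(y_final)
```

Because the whole pipeline is differentiable, a real implementation can backpropagate through the integrator to fit the network weights, which is exactly the formal+informal hybrid under discussion.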
r
If you build your software out of little computers, every program becomes a distributed program. That seems like terrible complexity.
This is what ends up happening anyways, and every language that is focused on the cute case of working on one machine becomes a complexity nightmare, communicating text over ossified protocols. Clearly, there needs to be some coordination mechanisms that make distributed programs a simple extension of single ones, like Linda or Jini, alongside some notion of pseudotime. As for the ML side, handing the keys to software to black boxes that supplant our agency seems like a bad idea in the long run, but also where things are headed if we don't get our acts together. I think a more congenial approach, along this dialectic, is Lanier's concept of Phenotropic Programming (https://www.cl.cam.ac.uk/~mcm79/ppig/files/2018-PPIG-29th-lewis.pdf), as developed in VR. But on our way to that, I'm pretty sure Kay's object idea will be the basis.
👆 1
👀 1
💡 2
c
@Riley Stewart thank you for mentioning Jaron Lanier's perspective, I will take a look
k
Thanks @Riley Stewart for the pointer to Lanier's ideas! One aspect he criticizes, brittleness, is indeed related to formal systems that have grown too large. Another aspect, catastrophic failure, is also due to computation (in the Turing sense) being chaotic (see https://hal.archives-ouvertes.fr/hal-02071770 for the details). Chaotic dynamics is something that engineers carefully stay away from in all designs other than computers. Formal systems do not have to be chaotic; finite state machines are not, for example.
❤️ 1
r
Nice paper! It seems chaos quickly permeates computer systems, as opposed to engineered ones, because there is no sense of tolerance: how could there be, in a world of discrete logic? On one computer our formal methods might keep it in check, but once a network is involved, the number of failure states blossoms. Maybe at that point chaos has to be differentiated from undefined behavior, "unwelcome chaos" that is not formalized into the system and handled appropriately. Recently I've taken to the idea that software "engineering" is more so tinkering, so no wonder chaos is rife in our systems.
k
No sense of tolerance is one important point, related to the more fundamental issue that there is no notion of similarity in bit patterns. Similarity belongs to the application domain, so it's application-specific and outside of software engineering. Avoiding chaos requires a principle like "small perturbations must have bounded effects", which doesn't have a useful analog for a Turing machine.
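A toy illustration of that point (a hypothetical example using only Python's standard library): two perturbations of the same "size" in bit terms, a single flipped bit, can have wildly different effects on the value, so Hamming distance between bit patterns says nothing about numerical similarity.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Reinterpret a float's 64 bits, flip one bit, reinterpret back."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

x = 1.0
low = flip_bit(x, 0)    # lowest mantissa bit: change of about 2e-16
high = flip_bit(x, 62)  # top exponent bit: the value overflows to inf
print(low - x, high)
```

Both are "small perturbations" at the bit level, yet one is invisible and the other is catastrophic, which is exactly the unbounded sensitivity that engineered systems with tolerances are designed to rule out.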
🤔 1
d
A lot of good stuff here. I'm starting to understand that the magic happens when this dialectic is hierarchically recursive, specifically in a multi-scale setting (3D metaphor, e.g. cells/organs/etc.) as opposed to flat layers (note that 3D concentric membranes are 2D layers 'locally'). I still find the ARC challenge to be the most charming illustration of the missing technical piece of the dialectic: https://github.com/fchollet/ARC. I think many promising approaches fall under "program synthesis/inductive logic programming"; see e.g. Josh Tenenbaum's group at MIT, Vicarious AI's visual cognitive computer architecture, and DeepMind's Apperception Engine.