# linking-together
s
Alan Kay’s talk at UCLA (2/21/2024): https://news.ycombinator.com/item?id=39612799
🙏 5
🍰 1
💯 1
❤️ 2
s
So much wisdom in there! At first sight it might seem like just another regurgitation of what he has been preaching for more than two decades already. But he’s always evolving how he makes his points. I hope he’ll be with us for a few more decades and keep reminding us of these things, although I can’t really understand how he is still willing to put up with us, as we have clearly decided to go in a very different direction.
❤️ 2
k
@Stefan In the words of Joseph Heller, "What else is there?" 🙂
s
Came across this transcript of another one of his talks. This one’s very different, but together they draw a pretty good picture of what he’s trying to do and — more importantly — why. https://tinlizzie.org/VPRIPapers/m2004002_center.pdf
The problem with AI (and technology in general) in one slide:
💯 3
“Wisdom requires a lot more context than building.” — Alan Kay
c
There are a couple of things going on with “context”. As a technologist it’s a bit too easy to get too excited about science and the abstract. But talking all the time about technology and potential can lead to misunderstandings.
s
@curious_reader Please elaborate!
c
It will be very difficult, but I can try. Maybe I should participate in one of those Christopher Alexander events you linked before; I need to find that link. I can only write from my experience. So first of all, a bit about my context regarding Alan Kay. Here is a link to my main Zettel file about Alan’s work: https://andreass.gitlab.io/2022/AlanKay.html . Although Alan has tried to create some form of context for computer technology, it did not go to the places where it maybe should have gone. When I first encountered the STEPS project I was deeply impressed with its technical ambition. But what happened with the final report? There was no coherent source code release of the work. Some researchers did put out some code in some form, but it was nothing like the system Alan showed. Why wasn’t that put out to the public? Was it because of funding issues that led to troubles with the research goals and with communicating the whole thing? That still feels very unsettling: on the one hand Alan participated in Xerox PARC, or he talks about Martin Luther, but then he and his research group fail to think about ways to actually communicate their technology, which could provide paradigm-shift potential. You can see some of the strangeness in this GitHub issue comment thread: https://github.com/damelang/nile/issues/3 . Over the course of many years (!) people asked about ways to understand even this graphics subsystem, let alone the bigger systems philosophy of the STEPS project. So at some point the idea of what research means, or that you can have an idea on the level of a paradigm shift and be able to share it, seems to have gotten lost. Some aspects of this you can see here, where Alan gives a talk and then there is some actual discussion afterwards:

https://www.youtube.com/watch?v=PFc379hu--8

The video is 1:20 long; the relevant discussion happens at about 1:17, where you can see how confused Alan is about the students’ perception of what he is doing at the university, or what he could or should be doing. I think in the years after the STEPS project Alan slowly realized that nowadays it is not only difficult to create something novel and to hope for, or work towards, the unfolding of that paradigm shift; even talking about it becomes difficult, to the point that it enters the territory of cargo cults (see Alan’s conversation with Gardner Campbell for context on that). I think this can be seen in this talk, for example:

https://www.youtube.com/watch?v=j9ZGFaIHegE

In this talk from 2019, Alan speaks at a conference on the circular economy for the Ellen MacArthur Foundation. He is basically asked to explain how he, as part of the Xerox research community, achieved that: how did they initiate a paradigm shift (from pre-personal computing to personal computing)? Alan starts off by presenting a list of questions and basically asks for participation on those questions. But after his talk the moderator seems confused that Alan did not “simply” deliver “the answer”, while at least part of what Alan was trying to do was point out that even the structures we use to talk about a problem (here: working with paradigm shifts) matter, and that giving a “talk” cannot do that; it misses the point. All of this was pre-COVID, and the culture wars have made cultural understanding even more complicated on so many levels. I think in some sense it is almost impossible to work on technology without also working on a better understanding of ideas, and I do not mean “oh, you just have to build a better interface that is more intuitive”. It’s more work in terms of relationships. See the video called “Portable Portrait: Alan Kay (1990)” in my file above to get a feeling for it. I’ll stop here for now.
❤️ 1
s
@curious_reader Thanks for sharing that! Looks like you’ve been following him for quite a while. Impressive. I thought I had watched pretty much every video with him, but you easily pulled several videos out of your hat I hadn’t seen before. I’d be delighted to have you join the discussion about simplicity next Thursday. It sounds like your criticism is primarily based on not making research publicly available, or at least not doing that properly, like in the case of STEPS. Is that a fair conclusion? I am as frustrated about that as you are. I don’t know the exact details of what happened to STEPS, I’m more familiar with the funding issues around Bret Victor’s Dynamicland, which also doesn’t exactly share much of what’s going on there in the form of public artifacts. However, if you were lucky enough to visit and talk to Bret and all the other researchers, they couldn’t be more open about what they do, how they do it, and what they are trying to achieve (at least in 2017, when I got to visit a few times). There seemed to be similar dynamics at play in both projects. I’m not sure I understand your later point. Is Alan not doing something that he should be doing? Or is he doing something he shouldn't be? Are there expectations that he doesn’t meet? Or is this more a big picture point about culture that you’re making?
c
Yes, funding is a major part of the problem. As described in the Nile issue https://github.com/damelang/nile/issues/3 it basically comes down to the point where Dan Amelang says he would need more than 4k per month to work on the project; whether that means only coding or also explaining things he leaves open. It was around that time that I became aware of the whole crypto bubble. Humongous amounts of money got “spent” there. Ethereum also carried for some time the label “programmable money”. It seems to me a real tragedy that a few thousand dollars could not be gathered. Is it a marketing problem? Is it a problem of norms and values? Is society unable to fund its own sustainable progress or computing culture? One of the projects that did things on a similar scope and actually succeeded is Urbit. 15 years of evolution:

https://www.youtube.com/watch?v=DFeN-TKZor0

or this resource: https://urbit.org/blog/hoon-4-lispers They apparently found funding, but also managed local user groups which maintained a kind of local “hacking” culture.
s
Interesting that crypto is your prime example for misspending of money. As I understand him, one of Alan’s main points is that governments/academia messed up funding after ARPA, and then companies messed it up after PARC, because they began demanding at least rough ideas of how such “investments” will be recouped. And now “humongous amounts” of money are “misspent” in the form of government grants, academic funding, and venture capital that makes all the money ever converted into crypto still look like pocket change. Isn’t Urbit also somehow connected to Peter Thiel? That would probably explain why it worked for them, but it probably also means there are some skeletons hiding in the closet… Clearly, money shouldn’t really be the problem when it comes to funding good research. The problem is that all the people who have that kind of money want even more of it, plus the best story you can come up with about how you will make them more money if they lend you theirs. And that is incompatible with what researchers need to do good work. And that is essentially the main story I get out of pretty much every Alan Kay presentation in the last two decades. So I think he would agree that, yes, as a society we are unable to fund this. For pretty stupid reasons. (Christopher Alexander tells an analogous story about contemporary architecture, which has also chosen to follow different norms and values and to no longer care about what Alexander thinks should be the priority — designing spaces that serve the communities that live in them. Instead, architects care about how their skyscraper helps define the skyline of a city and makes them and their architecture firm famous and prosperous.)
c
I just read through the HN thread; some people there claim to have a STEPS system running: https://news.ycombinator.com/item?id=39636074 It’s almost tempting to try to bring this back to a bigger audience.
s
Yes, that would be great! And it should be possible. I read a lot of the VPRI papers, and even if they are more abstract and there’s little source code available, the concepts are pretty solid and there are enough clever people out there who could certainly “fill in the blanks”. What I would like to see even more is the thinking that motivated STEPS (the economy of creating something incredibly useful with orders of magnitude less complexity, and not being afraid to reinvent some of the foundations) spread to more projects, instead of just taking what we are given, in the form of existing platforms, libraries, and frameworks, and exclusively trying to innovate on top of them.
o
I’ve watched a huge number of Alan Kay talks and this one is just a repetition of what he has said before…
👍 1
s
Hmm, is this new? The framing/phrasing maybe, but the points are the old ones? Otherwise, there are @curious_reader’s personal notes; great that he already linked these above. Then there’s the old abandoned/lost wiki for discussing Kay, where the main guy locked everybody else out, taking it into a silo (Notion, then that too behind a login wall). I started a little bit of wiki-ish commentary on Kay videos of my own, Yoshiki Ohshima being so kind as to contribute his list too under a libre-free license (but not taking comments/discussion, because obviously lots of people will have lots of opinions, including me and all of us). Also put on Weco, if I remember correctly. And there’s now this Reddit thread and this one in Slack. I’m wondering: no interest in systematic study? In maybe doing some practical projects? Like, “future of coding”: would you people say the computer revolution has at this point already happened, or not? If not, what is it and how do we get there? Everybody going in their own direction and having some answers, maybe getting there by random chance eventually, or simply by waiting? If people say they’re working on it, cool, then one can wait till the result/solution is made available.
👋 1
Hmm, Slack not supporting replies and going for flat lists? Fine, so be it. A few points re: https://futureofcoding.slack.com/archives/C5U3SEW6A/p1709909385363469?thread_ts=1709732691.993869&cid=C5U3SEW6A, don't worry, I'll be brief 🙂

I didn't look too much into STEPS (because... why? how?), but my impression is that they "attacked" the various parts separately, which makes sense (for size/scope, research results), but left out integrating the parts into a system. Too, various parts may have been only half-completed, maybe rushed to produce results or make progress with the burn rate eating up the funding. Could be that the results were not prepared or really made to be shared with the public, but to present or sell or continue with, not with/for the public.

The graphics renderer may not be that interesting - like, we have had lots of graphics for a long time, amazing graphics, per the mainstream model of pushing lots of polygons from scene composition, highly optimized floating-point calculations, shaders and what not into graphics card memory. That there's a DSL, OK, maybe more of an interface benefit. But using a different model of transformers or something, that may be nice, except it's for the goal of using less code (regular people may not need/care that much about it?) and it may not provide more/faster graphics. So I wonder, wouldn't some regular rendering, canvas/graphics do as well? Why/how is the STEPS one better, other than fewer LOC? Doing some image post-rendering/optimization from geometry, instead of redrawing everything for the frame rate, using system/context knowledge? Not sure if other graphics stacks do/try that too, or if that's even significant/noticeable/worth it in performance. Now people are doing "AI" post-render (specifically ignoring/predicting structure). (A toy sketch of the stream-transformer pipeline style follows below this message.)

There's a thing with some of the Kay people, subsequently Ingalls, etc.: there's a big focus on graphics, GUI, runtime images, these sorts of things. Personal desktop, and so on. Like Andreas Kling with SerenityOS: you can do a windowing framework, all in graphics programming, but that's primarily the desktop manager only (KDE, Gnome, GTK, X11, AWT/Swing, Win32/MFC/WinForms/WPF, Wayland, whichever). It does not need to boot/run natively, operating a CPU, memory management, device drivers, these being the main task/purpose of an OS. There can be potential in doing better GUI/controls, but what do these people want/need to use it for? In comparison to the GUIs/frameworks that get made for games (often each coming with their own, custom one)? With the challenge that it can't get system/OS support against Windows/Apple or get into a distro, and on the WWW it's either only the browser or per-server/server-side, and then people don't install a Smalltalk OS natively nor extra apps. OK, maybe one could make a window toolkit to put into various apps (to not reinvent, but then, every graphics guy wants to reinvent another different, personal one). At the same time, how much effort goes into, say, the OO or programming part?

The "technology" cargo cult - highly promoted by productization and fetishization (Apple) - is for the appearance of "technology" while only selling locked-down passive consumer goods, not educating or providing access to the actual tech. So people are clueless about the tech and have a hard time learning about it. Too, the Robert C. Martin thing of the number of developers doubling every 5 years since inception (till the peak) means less than 50% have more than 5 years of experience, and all the teachers are busy or retired.
So ofc it's not popular even among "programmers" (who only use a lang, as a mere tool) to know or learn much about Alan's points, systems, architecture, paradigms. And why would one? May not be in their job description nor help to profit (potentially to the contrary).
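To make the “pipeline of small stream transformers” point above a bit more concrete, here is a toy Python sketch. It is not actual Nile/Gezira code; the stage names and the scene are made up purely for illustration. It only shows the compositional style: each stage consumes a stream of geometry and yields a transformed stream, and the “renderer” is just the composition of such stages.

```python
# Toy sketch of the "pipeline of small stream transformers" idea
# (illustrative only; not the real Nile/Gezira kernels).
# Each stage consumes a stream of items and yields a transformed stream.

def transform(points, scale, dx, dy):
    """Transform stage: scale then translate each point."""
    for (x, y) in points:
        yield (x * scale + dx, y * scale + dy)

def clip(points, width, height):
    """Clipping stage: drop points outside the viewport."""
    for (x, y) in points:
        if 0 <= x < width and 0 <= y < height:
            yield (x, y)

def rasterize(points):
    """Rasterization stage: snap to integer pixel coordinates."""
    for (x, y) in points:
        yield (int(x), int(y))

def pipeline(stream, *stages):
    """Compose stages; the 'renderer' is just this composition."""
    for stage in stages:
        stream = stage(stream)
    return stream

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
pixels = pipeline(
    triangle,
    lambda s: transform(s, scale=100, dx=10, dy=10),
    lambda s: clip(s, width=120, height=120),
    rasterize,
)
print(list(pixels))   # [(10, 10), (110, 10), (60, 110)]
```

Again, this says nothing about performance, which was the open question above; it only illustrates why such a design can stay small in lines of code.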
https://futureofcoding.slack.com/archives/C5U3SEW6A/p1709993457030099?thread_ts=1709732691.993869&cid=C5U3SEW6A seems to be curious_reader's view: that some money needs to be spent. It goes towards the salaries/expenses of these developers, to buy/complete whatever they were doing (which may remain somewhat unknown, or of questionable use/benefit, or it's great). OK, there are of course options, like: saving the money to then buy this person's time, or crowdfunding to collect the sum together, or one could spend one's own time and make a similar new thing, or, as there are lots of programmers around, they could make such a thing (or study the existing code to complete it). I wonder what the rationale and benefit is of spending the money/time? To me it sounds like: not spending my own time/money, but insisting that someone else's money should be spent to get this thing completed (if ever, which is another question: what state it's left in, cost vs. benefit/gain). Same question for lots of other code and projects, of which there's a whole lot. Programmers seem to highly prefer writing new (their own new!) code to looking at or investing in anyone else's.
https://futureofcoding.slack.com/archives/C5U3SEW6A/p1710001649220709?thread_ts=1709732691.993869&cid=C5U3SEW6A Kay's message being: without money/funding, the research can't/won't be done. They stopped continuing when the money ran out. People don't start the work/research if not paid. That means there's nothing to do, and no chance or benefit, until some money drops; and when it's all spent/burnt and has run out, you drop the research again, scrap/abandon the results, and maybe look for the next "investor".
https://futureofcoding.slack.com/archives/C5U3SEW6A/p1710103708401079?thread_ts=1709732691.993869&cid=C5U3SEW6A Yes, this means: if you don't have or can't get a lot of money, what are you guys here even doing? "Future of coding"? Best of luck! See you in the better future, once there's some new big funding again?
s
Like, "future of coding", would you people say the computer revolution has at this point already happened, or not? If not, what is it and how to get there?
Not sure if that’s clear to everyone, but in the context of Alan Kay, “The [Real] Computer Revolution Hasn’t Happened Yet” refers to specific talks and papers of his. Reviewing those should make it obvious that Alan Kay surely still believes that it hasn’t happened, and he would probably also say that we’re not making any progress either. Here’s my attempt at “speedrunning” it, for those who don’t want to spend several hours watching and reading, to maybe get a sense of what he might mean:

1. I recommend the last five minutes of his 1997 OOPSLA talk, where he talks about how “we don’t know how to design systems yet” and how Smalltalk was never about its capabilities but about how it transformed from version to version and was able to get rid of itself to bootstrap the next system. He uses “point of view” here, and if you’ve watched any of his talks you must have heard him say “point of view is worth 80 IQ points”, which also hints at why he called it Viewpoints Research Institute. The artifact — the designed system that falls out of the process — surely is not what is important to him.

2. Understanding that part will help explain why he was so interested in education. But he also talks about that a lot. And he writes about it in this document. Again, the important bit is in the last two pages, the last paragraph being (highlights mine):

Though the world today is far from peaceful, there are now examples of much larger groups of people living peacefully and prospering for many more generations than ever before in history. The enlightenment of some has led to communities of outlook, knowledge, wealth, commerce, and energy that help the less enlightened behave better. It is not at all a coincidence that the first part of this real revolution in society was powered by the printing press. The next revolutions in thought – such as whole systems thinking and planning leading to major new changes in outlook – will be powered by the real computer revolution – and it could come just in time to win over catastrophe.

3. In the STEPS Proposal to NSF it says (again, highlights mine):

creating a practical working system that is also its own model – a whole system from the end-users to the metal that could be extremely compact (we think under 20,000 lines of code) yet practical enough to serve both as a highly useful end-user system and a “system to learn about systems”. I.e. the system could be compact, comprehensive, clear, high-level, and understandable enough to be an “Exploratorium of itself”.

The 20,000 LOC thing was only there as a crude measure to make it “fit inside the head of an individual”. It was all about understanding the whole system fully, so you can take what you have figured out designing it, throw the artifact away, and use your insights to design a better system. We have stopped designing most infrastructure parts and just take them as given. And we have painted ourselves into a corner where it’s difficult to go back and redesign everything from scratch, because we rely on so many things that we don’t want to give up. And so we are stuck with a paradigm, or a “point of view” as Alan would say, preventing us from leaving our pink plane and discovering the next true level of abstraction that enables us to get close to the biological ideal of scalability through an analogous step like the invention of the arch in architecture. (If that sounds like a lot of weird metaphors in one sentence, watch the whole 1997 talk I linked above.)

Most people looking through the scraps of STEPS seem fascinated by the artifacts: how the GUI was implemented, perhaps OMeta, or just what they managed to pack into 20K LOC. But the real gems require reading between the lines and looking at their design process. Personally, I’d recommend studying all of Ian Piumarta’s VPRI papers in order of publishing date, even (especially) the ones with extra weird titles. That will give you a deeper appreciation for why Alan loves page 13 in the LISP 1.5 Programmer’s Manual so much, and why he thinks of it as “the Maxwell equations for computing”.
They describe a whole universe. Even better, they give you the power to design a whole universe by yourself.
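For anyone who hasn’t looked at that page: it defines a Lisp evaluator (eval/apply) in Lisp itself, in about a page of code. Below is a minimal, heavily simplified Python sketch of the same idea — not the original M-expression code from the manual — just enough to show that the thing that defines a language can be small enough to hold in your head and to change.

```python
# Minimal sketch of the LISP 1.5 "page 13" idea: an evaluator for a tiny
# Lisp-like language in a few dozen lines. Heavily simplified; the real
# page defines eval/apply in Lisp itself.

def evaluate(expr, env):
    if isinstance(expr, str):            # a symbol: variable lookup
        return env[expr]
    if not isinstance(expr, list):       # numbers are self-evaluating
        return expr
    op, *args = expr
    if op == "quote":                    # (quote x) -> x, unevaluated
        return args[0]
    if op == "if":                       # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":                   # (lambda (params) body) -> closure
        params, body = args
        return ("closure", params, body, env)
    fn = evaluate(op, env)               # function application
    vals = [evaluate(a, env) for a in args]
    return apply_fn(fn, vals)

def apply_fn(fn, vals):
    if callable(fn):                     # primitive (built-in) function
        return fn(*vals)
    _tag, params, body, env = fn         # user-defined closure
    return evaluate(body, {**env, **dict(zip(params, vals))})

# ((lambda (x) (* x x)) 7)  =>  49
global_env = {"*": lambda a, b: a * b}
print(evaluate([["lambda", ["x"], ["*", "x", "x"]], 7], global_env))
```

The point is not this particular toy, but that the definition of the whole language fits on a page, so you can read it, understand it completely, and change it.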
❤️ 2
s
This is a good overview! A few brief replies/points:

1. One thing is, we got Smalltalk, both -80 and earlier. Nothing stops people from making new versions or other similar things (or what does?). What was wrong or missing with Smalltalk (both early and 80)? The point could be that we're not making these systems continuously, or not making a good one (while some here/elsewhere may say they do). OK, obviously, the general trend is to throw/add more code at it, making it more complex/convoluted, to add the desired feature, ending up with another mess. But anyone could go and work on small, meta-recursive systems. And try to somehow address the Smalltalk-80 problem of "succeeding" (Goldberg: failure is easy, success is hard), once there are investments in it and other stuff depending on the system. Like, idk, to me Kay is a weaker imitation of Engelbart's system work, just with the addition of computer graphics (which is what attracts the people, and they remain puzzled/unaware about the systems/programming things).

2. Point of view, Lisp eval: I mean, it's not that hard to get the idea (you just need to cut away the jargon and clutter) that you save LOC by not writing lots and lots of new, specific, custom code that explicitly expresses lots of possible paths and cases, and instead feed some configuration/input through the same core loop (a trivial sketch of this is a bit further below). Like, the lang should not be a generator of more complexity for the programmers to type out; instead you need one/a system which covers a lot of it combinatorially, just from a few well-chosen elements.

3. Agreed, it's not really literally about the 20K LOC; that's just a shorthand/indicator for a good design that does not need to explode in size. You may take it as a metric/limit, a forcing function, to not become sloppy/wasteful/excessive.

4. Yes, it is probably more about system design principles and great architecture implementation practice, as the universal meta-model across the many systems that are made like that (vs. the ones that are not). But OK, historically we see that there's not much interest in this, nor do programmers (new, and old too) know or care. It may also always be easier to just type out more "naive"/straightforward code than to spend all the time seriously thinking about it, and one may get paid more for fixing/debugging a highly convoluted mess someone produced than for making something small, neat, elegant, qualitatively better (because once this exists, it's too easy? or maybe too dense to maintain or adapt?).

5. Given that Kay's work produced Smalltalk (both early and 80; not sure if the later revivals and imitations/variants are seen as "progress" or improvements, or just as repeating/reviving the old state of the art, not getting much better), how can/would one improve on that and come up with a "smaller"/better design? In other words, what was/is missing, wrong or lacking in Smalltalk (old and 80) that today someone could do better? Straightforward questions like this.
Like, look: there's nothing complicated or highly sophisticated going on in this Lisp page 13 / Maxwell's equations thing. It's just a very confused, convoluted, cluttered, obscured, obfuscated statement of the basis of all computation (more or less; also known in other formulations by/to other people), and it does not help much to only promote it in old Lisp notation or in abstract mathematical descriptions, but you, me, and anyone can easily just use these ideas in designs. Same with the "Meta" language thing (not to be confused: there are probably 3 langs by the name of "Meta", if not more). It's just old stuff in old langs that no modern programmer has heard of or can read, so of course these concepts are largely lost or unknown. Granted, there are contemporary books, materials and what not, but these may not be that interested in system work either. Update: oh no, there's another "Meta language" on page 9; I don't mean that one but the other one Kay once referenced 😞 Like, I mean, whatever, who can possibly keep track; it's just a big mess of different words and notations for essentially the very same thing(s).
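To make point 2 above concrete with a deliberately trivial, made-up example: instead of typing out a new code path for every case, you keep one small core loop and feed it a table of elements; new behavior is then new data, not new code. The calculator domain here is purely illustrative.

```python
# Tiny illustration of "one core loop fed by a few well-chosen elements"
# versus spelling out every case by hand. The domain (a postfix
# calculator) is made up purely for illustration.

import operator

# The "few well-chosen elements": a table of operations (data, not code).
OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def run(tokens):
    """The single core loop: push numbers, apply table-driven operators."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# Adding a new operation is one table entry, not a new code path:
OPS["max"] = max

print(run("3 4 + 2 *".split()))    # (3 + 4) * 2 = 14.0
print(run("3 9 max".split()))      # 9.0
```

A caricature, of course, but it is the same move the page-13 evaluator makes at a much grander scale: one small loop plus a few combinable elements instead of ever more hand-written cases.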
d
I think the goal of a system that can fit inside the head of an individual is described by Dan Ingalls as:
If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual.
👆 3
🔥 1