<https://gbracha.blogspot.com/2020/01/the-build-is...
# thinking-together
k
It's not so much a rebuttal as an alternative worldview. I don't see a concrete advantage given for this approach that merits the word 'broken'. On the other hand, there is a thread of research on the advantages of creating software to constantly exercise disaster recovery: https://en.wikipedia.org/wiki/Crash-only_software It speaks particularly to me ever since I watched Jurassic Park at a formative age 😄
d
I value the benefits of live programming, so I mostly agree. However, Bracha takes the absolutist position that "the real problem is that the very concept of the build is broken" and "It's high time we build a new, brave, build-free world." This is mistaken because live programming doesn't always work. There are situations where the live programming environment's ability to update the running system state to match the new code breaks down, for example when you change an important data structure that a lot of currently running code depends on. That's when you need the ability to "tear down and reconstruct the skyscraper". So we need to preserve the ability to rebuild the world from scratch, and exercise it frequently enough that it doesn't become irretrievably broken.
❤️ 2
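A minimal sketch of the breakdown described above, in Python rather than a live image; the class and function names are invented purely for illustration:

```python
class Point:                      # version 1: two fields
    def __init__(self, x, y):
        self.x, self.y = x, y

live_objects = [Point(1, 2), Point(3, 4)]   # state created before the edit

# "Live edit": the data structure grows a third field, and new code is
# written against the new shape...
class Point:                      # version 2: three fields
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

def magnitude(p):                 # new code assumes every Point has a .z
    return (p.x ** 2 + p.y ** 2 + p.z ** 2) ** 0.5

# ...but the running system still holds version-1 instances, so the new code
# breaks on old state. This is the "tear down the skyscraper" moment.
for p in live_objects:
    try:
        print(magnitude(p))
    except AttributeError as e:
        print("live update broke on old state:", e)
```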
n
I don't think it's a given that it breaks down. You change an important data structure and the old code still uses the previous layout until it has been changed over to the new one. It breaks down as things currently stand, but you could completely re-engineer a new world where it doesn't. The better question is what the trade-offs would be for that to happen.
👍 1
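One possible shape for that re-engineered world (an illustrative assumption, not something spelled out above) is to tag every record with a schema version and migrate lazily on access, so old-layout data and new code can coexist. A rough Python sketch with invented names:

```python
# Records carry a version tag; readers upgrade them on demand.
MIGRATIONS = {
    1: lambda rec: {**rec, "z": 0, "version": 2},   # v1 -> v2: add a z field
}

def view(rec, target_version):
    """Return rec as the requested schema version, migrating if needed."""
    while rec["version"] < target_version:
        rec = MIGRATIONS[rec["version"]](rec)
    return rec

old = {"version": 1, "x": 1, "y": 2}   # created before the live edit
print(view(old, 2))                    # new code sees the new layout, with z added
print(old)                             # old code keeps reading the original layout
```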
d
I can prove that a completely build-free world is impossible, by case analysis.
• Case 1: the executable machine-code file that you must run to start the live programming IDE (the "kernel") is built from immutable source that cannot be live edited from inside the live programming environment. This decision simplifies the design of the IDE, but it means you need a traditional build system to build the kernel. All of the Smalltalks have this property, by the way: you can't live edit the Smalltalk VM from inside Smalltalk.
• Case 2: the source code for the kernel executable is live editable from inside the IDE. Nobody has ever done this. It might be impossible: there is probably always some irreducible kernel that cannot be live edited. Even if it is possible (not proven), it's probably very complicated, and there will be bugs. Nobody has demonstrated the ability to write complex software that is guaranteed bug free; bugs have been found even in software that has been "proven" correct. Bugs in the IDE can leave the kernel executable out of sync with its source code, so you need a traditional build system as a backup to recover from that situation. If you don't have it, you are screwed.
And if you have a traditional build system for the kernel but don't continually test it as the software changes, then when you finally hit an emergency the build system won't work, and you are screwed.
❤️ 1
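A minimal sketch of what "continually test the traditional build" could look like in practice: a script that a cron job or CI runner invokes to rebuild the kernel from scratch and smoke-test the result. The commands (`make kernel`, `./run_smoke_test.sh`) are placeholders, not any real project's build:

```python
import subprocess
import sys

def clean_rebuild_and_smoke_test():
    steps = [
        ["make", "clean"],           # throw away every build product
        ["make", "kernel"],          # rebuild from source, from scratch
        ["./run_smoke_test.sh"],     # boot the result and poke at it
    ]
    for cmd in steps:
        if subprocess.run(cmd).returncode != 0:
            print("scheduled rebuild broke at:", " ".join(cmd))
            return False
    return True

if __name__ == "__main__":
    # Run this on every change (or nightly) so the emergency path stays green.
    sys.exit(0 if clean_rebuild_and_smoke_test() else 1)
```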
s
There will always be rebuilds and restarts, whether you have 'build systems' or not.
The first question is: at what granularity do we rebuild and restart? What if every change meant you had to rebuild the entire system image (kernel + userspace) and reboot? Too coarse? With present-day systems we can rebuild a binary and restart the OS process; state external to the process is preserved (BTW, the backward-compatibility problem of data structures still exists). But what if you change one function? Can we build and re-integrate just that function into the running process?
The second question is: how manual is the rebuild-and-restart process? Most build systems need to be manually invoked. Sometimes we wrap them with file-watchers to rebuild whenever something changes. This is similar to re-running a script vs updating a cell in Excel. Why not have the continuous rebuild always available, with, say, controlled snapshots?
Emitting machine code for optimization or for the kernel etc. is completely separate. Any program can be written to emit machine code and store it.
👍 1
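A small sketch of the "re-integrate just the function into the running process" idea: poll a module's source file and reload it in place when it changes, leaving the rest of the process state alone. The module name `handlers` in the usage comment is invented for illustration, and a real system would also want the snapshots mentioned above:

```python
import importlib
import os
import time

def watch_and_reload(module_name, path, poll_seconds=0.5):
    """Reload the named module in place whenever its source file changes."""
    module = importlib.import_module(module_name)
    last_mtime = os.path.getmtime(path)
    while True:
        time.sleep(poll_seconds)
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            importlib.reload(module)   # swap the new definitions into the live process
            print("reloaded", module_name)

# e.g. run in a background thread while the rest of the process keeps serving:
# watch_and_reload("handlers", "handlers.py")
```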
Bracha doesn't talk about this, but I think the build system idea is connected to the idea of binaries/processes/apps: i.e. the output of the build. The pattern of using this is build -> output artifact -> run. If you look at systems where: a) the 'runnable unit' of the OS isn't Unix-process sized, but something smaller, and b) the running/not-running dichotomy isn't primary (e.g. you have an auto-persisted system image), then the build system model doesn't seem particularly interesting. There's the recent Twitter thread about Lisp machines, if you're looking for a specific example: https://twitter.com/RainerJoswig/status/1213484071952752640
Since one of the problems of 'editing live' is that you can crash running things pretty badly, I think snapshotting and rollback become critical in this mode of operation.
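A toy sketch of that snapshot-and-rollback discipline: copy the mutable state before applying a live edit, and restore it if the edit crashes the running thing. The state dictionary and the two "edits" are invented for illustration:

```python
import copy

system_state = {"windows": ["repl", "browser"], "theme": "light"}

def apply_live_edit(edit):
    snapshot = copy.deepcopy(system_state)   # controlled snapshot before the change
    try:
        edit(system_state)                   # mutate the live system in place
    except Exception as e:
        system_state.clear()
        system_state.update(snapshot)        # roll back to the snapshot
        print("edit crashed, rolled back:", e)

apply_live_edit(lambda s: s["windows"].append("inspector"))   # succeeds
apply_live_edit(lambda s: s["widows"].append("oops"))         # typo crashes; state is restored
print(system_state)   # the first edit survives, the second left no damage
```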
e
@Doug Moen One day I was visiting Project MAC, the operating system team at MIT. Bob Frankston (the future co-inventor of the spreadsheet) was one of the team members, and in his cubicle he was demonstrating how the MULTICS operating system (the competitor to UNIX, and superior in most technical aspects) had dynamic linking, which meant that at any moment you could swap out kernel modules. He was experimenting with a new memory manager, and turned it on while dozens of other people were using that machine (these were time-sharing days). It crashed immediately, and all the people in the other cubicles yelled out in anguish as the system was now down. Dynamic linking is a supremely powerful technique, but rather dangerous, as this example shows.
😂 2
w
Same deal with Smalltalk. "Guess the image is ruined, better revert to yesterday's." The real solution would be sand-boxed changes (only want to modify part of the system right now) and good revision control.
d
Just wait until you put a breakpoint into the method in the abstract window class that redraws windows…. Nobody would ever do that in Smalltalk… 😳
💡 1
I do think, though, that the image-based approach is a great way to learn programming - it's easier to think concretely and then level up gradually with abstractions. It's also good for modelling things - have your interactive model be directly manipulable - a la the naked objects approach.
s
There's a difference between the system-in-use (SIU) and the system-under-development (SUD). And there's a difference between the SUD being live and the SUD and SIU being the same system. When you have liveness you could just live edit your system in use, but you don't have to. E.g. you could spin up a 'nested Smalltalk' and put a breakpoint in that one.
☝️ 1
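A toy illustration of the 'nested environment' move, using Python's stdlib rather than Smalltalk: run the risky experiment in a child interpreter so a crash in there doesn't take down the outer session. The statements fed to the child are invented for illustration:

```python
import code

# Child interpreter with its own namespace: a stand-in for a "nested image".
experimental = code.InteractiveInterpreter(locals={})

experimental.runsource("x = 41 + 1")                                   # set up some state inside it
experimental.runsource("raise RuntimeError('broke the nested one')")   # the crash happens inside the child
experimental.runsource("print('child still has x =', x)")              # the child's state survives too

print("outer environment is still alive")
```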
k
Sure, but it's interesting to point out seams where composability breaks down. Smalltalk lets you open any class, and set a breakpoint anywhere in it. Why do some combinations of those actions not work? How many such combinations exist? The whole argument of Smalltalk is that you don't need to worry about the distinction between inside and outside, SIU and SUD. To me it seems analogous to your point about sort in shell scripts vs C.
s
Yes, I see the point in the first paragraph. Don't see the analogy (yet). Perhaps a slightly better distinction than SIU/SUD is calling these the kernel-like parts and the non-kernel parts of the system? I see this as similar to how you'd probably use a sandbox when developing a kernel module, even if your kernel allows dynamically reloading kernel modules. Seems like this space needs more exploration - like how do you identify the parts that you shouldn't live reload, and how do you pin these down? This reminds me of a nice essay on designing in Erlang (which supports live reloading), which talked about thinking of Erlang processes as different 'rings', e.g. ring0 is the kernel - the most critical processes - and so on. Unfortunately, I can't find it now.
d
If we want a system that is live editable "all the way down", then the situation is more complicated than just SUD vs SIU, or kernel vs user-space. A full system has many layers. If we are live editing the code at layer N, then the SUD and SIU can share layers 0..N-1, but we fork layers N and above. If we are live editing the GPU driver, we need 2 physical GPUs, one connected to the SUD and one connected to the SIU. We can crash the GPU connected to the driver we are live editing without losing our development environment. If we are live editing the window manager, we don't need two GPUs, the SUD and SIU can share the same GPU driver instance. If we are live editing a declarative description of a new window theme, then, as long as the window theme API is "safe", we don't need to fork the window system to apply the new theme.
👍 3
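A rough sketch of that layering rule (leaving aside the final refinement about 'safe' declarative layers): given the layer being edited, everything below it is shared between SIU and SUD, and everything from it upward is forked. The layer names here are invented for illustration:

```python
# Layers ordered bottom-up; purely illustrative names.
LAYERS = ["gpu driver", "window manager", "window theme", "application"]

def fork_plan(edited_layer):
    """Which layers the SUD shares with the SIU, and which it forks."""
    n = LAYERS.index(edited_layer)
    return {"shared": LAYERS[:n], "forked": LAYERS[n:]}

print(fork_plan("gpu driver"))       # nothing shared: hence the second physical GPU
print(fork_plan("window manager"))   # the GPU driver instance is shared, the rest forks
```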
k
@shalabh, the analogy:
* Unix is all about reuse, about doing one thing and doing it well. Except that doesn't always work.
* Smalltalk is all about being able to modify the environment from within the environment. Except that doesn't always work.
> Perhaps a slightly better distinction than SIU/SUD is calling these the kernel-like parts and the non-kernel parts of the system? I see this as similar to how you'd probably use a sandbox when developing a kernel module, even if your kernel allows dynamically reloading kernel modules. Seems like this space needs more exploration - like how do you identify the parts that you shouldn't live reload, and how do you pin these down?
Can I make a similar claim for Unix? 🙂 We know how to identify the parts that can't reuse things: processes that don't use the same shared libraries. I think both are reasonable points, but they're bolted onto the underlying uniformity. So they provide an apology for, but don't really address, the two criticisms above.
Here's a relevant paragraph from the paper I've been working on, that I wrote before this thread. I'm curious if anybody here would quibble with it.
Mu's strategies borrow much from past work. For example, Forth systems emphasize parsimonious dependencies but give up on safety in the process. Smalltalk systems emphasize safety while exposing a large fraction of their internals. However, there usually remains a kernel that requires exiting Smalltalk to modify. Lisp Machines built up all the way from custom hardware while remaining safe. Lisp, Forth and Smalltalk all emphasize uniform notation, though they also have strong and divergent opinions on what that notation should be. While they all expose their internals to modification in various structured ways, it seems easy for small modifications to their internals to cause regressions both subtle and catastrophic. Modification requires expertise in all the scenarios their environments are designed to handle, expertise that can only be obtained out of band from the tools themselves.
s
@Kartik Agaram re analogy - ah I see! Agree that 'more is needed'. Protections around Smalltalk's powerful meta-features have definitely been brought up but never really fixed, afaik.