# linking-together
k
> I posit that a truly comprehensible programming environment - one forever and by design devoid of dark corners and mysterious, voodoo-encouraging subtle malfunctions - must obey this rule: the programmer is expected to inhabit the bedrock abstraction level. And thus, the latter must be habitable.
http://www.loper-os.org/?p=55 (inline link mine)
❤️ 4
👍 1
a
I predict that if you actually try to build a machine with an instruction set isomorphic to a high level language, it will never reach the reliability we demand from hardware. Most likely, it will be implemented in something like microcode and we'll immediately land right back where we started. I'd rather see a focus on formal verification, probably of higher-level virtual machines built on simple and therefore easy-to-model hardware. I think the reference to "atomic operations" is quite deep. A layer of abstraction that provides truly atomic operations is indistinguishable from a bedrock layer to anything built on it. A lot of my thinking on how to layer things is built on this idea...
r
I'm curious what reliability you are referring to. A language isomorphic to the instruction set can be made reliable in the sense of 1) consistent execution and 2) predictable execution, every bit as much as we count on the hardware in those two ways. It probably won't look exactly like what we think of as a high level language today. In fact, it may be quite different. You can only allow certain "clearly understandable" abstractions and maintain the isomorphism. I think this is what @Kartik Agaram is attempting to explore with Mu, and why there is so much focus on the hardware, the general 1:1 relationship between language statements and their translation into hardware instructions, and other design constraints. My question is: can such an isomorphic language be made in such a way that it would be high level enough and useful enough to be commonly used within a well understood domain?
d
> A bedrock abstraction level is found in every man-made system. No recoverable failure, no matter how catastrophic, will ever demand intelligent intervention below it.
My experience with programming early 8-bit microprocessors is that, when programming in assembly language, you did indeed have access to a bedrock abstraction level, as defined above. There is no accessible bedrock abstraction level in modern computers. Machine code programming on a modern Intel based motherboard happens at an abstraction level far above the bedrock, and below you lie many dark corners and mysterious voodoo-encouraging subtle malfunctions. The UEFI is stealing cycles from the OS to do who knows what, the firmware for the microcode and the mysterious Intel Management Engine are encrypted, and security flaws like Spectre and Meltdown require intelligent intervention at a level that is inaccessible to the owner of the computer. I think the author agrees with this.

I don't agree that the invention of compilers in the 1950's was a mistake. Twenty years later, in the 1970's, CPU instruction set architectures were still being designed with the needs of assembly language programmers in mind. A key design goal was "orthogonality" <https://en.wikipedia.org/wiki/Orthogonal_instruction_set>. The existence of compilers didn't prevent architectures like the PDP-11 from being designed. I think the author agrees, since they mention RISC as the beginning of "braindead architectures". But RISC wasn't primarily about compilers: it was primarily about making CPUs faster and more efficient, and about prioritizing that goal above making the ISA comfortable for assembly programmers.

So here's my question. Suppose we start over, and build a new computer architecture from scratch. Is there not a fundamental tradeoff between making the new system as fast as an Apple M1, vs providing a bedrock abstraction level that is both accessible to the programmer, and habitable?
k
@Doug Moen:
> Suppose we start over, and build a new computer architecture from scratch. Is there not a fundamental tradeoff between making the new system as fast as an Apple M1, vs providing a bedrock abstraction level that is both accessible to the programmer, and habitable?
Probably. For me the inescapable implication is: think about habitability (and safety), and don't focus on performance to the exclusion of all else. I don't understand why people get so excited about performance and forget Wirth's Law:
> software is getting slower more rapidly than hardware is becoming faster.
You think the M1 is fast? Just wait a couple of years! A substrate that runs so fast that you don't have to think about what you run on it is the very definition of an externality. Exponential curves consume all slack. No matter how large the supply of buffalo is, it's finite. Thinking of a resource as infinite makes no sense; that way lies religion and the Singularity.

Has Apple said anything about how they've tried to mitigate side-channel attacks on hardware optimizations? If they've just focused on making everything faster like everyone else, they're likely open to similar attacks.

"Reality is that which, when you stop believing in it, doesn't go away." -- Philip K. Dick
k
Wondering if the notion of "bedrock" still makes sense in a world where most computers are virtual to some degree. From the exchanges above, I'd conclude that the bedrock level is the first programmable level of abstraction just above non-programmable hardware. In some contexts (e.g. cybersecurity), that's relevant. For many others, it isn't. I'd be perfectly happy to fully inhabit a higher level of abstraction, and leave the lower programmable levels to other species of inhabitants. I see the main problem with today's platforms in the unclear borderlines between levels and in the intentional obfuscation of lower levels.
💯 1
d
@Konrad Hinsen I'm thinking about a version of my language that runs in WASM, using (some subset of) WASI to interface to the hardware and OS. In that context, WASM and WASI are the "bedrock" abstraction level, since you can't go any lower.
k
@Doug Moen Exactly. WASM/WASI is your "virtual bedrock", like JVM, KVM, or many others are elsewhere. Given your experience, do you consider WASM/WASI habitable? Is it a reasonable level to do debugging, for example?
d
I'm only thinking about WASM; I haven't written the code yet. WASM still looks unfinished to me. I'm building a programming environment and a virtual machine using C++. The fastest known interpreter design is no longer a bytecode dispatch loop; it's tail-threaded code, which relies on tail-call elimination in gcc and clang. This won't work in WASM yet (maybe in a year or two?). JITing to machine code is also problematic: you can't JIT to WASM and then execute the WASM in the same module. The most impressive work on making a fast VM run under WASM is the CheerpX project, and the complexity of their workarounds for WASM's deficiencies is rather scary. I'll just have to accept that my VM will be slow.

In my language, the "bedrock" experienced by my users will be my VM, which will be simpler and higher level than the substrates I'm implementing it on top of. Loosely inspired by Lisp and Smalltalk. One big differentiator is that my VM allows the same code to run on the CPU or the GPU, and lets you mostly not care about the distinction. Low-level GPU programming, using existing APIs like Vulkan, is a kind of hell. It is frankly too complicated for me to deal with for any kind of application-level programming that I want to do, so I want a high-level, habitable VM that abstracts away much of the nonsense.

The kind of system I want to build can't be "truly comprehensible", since GPUs are far worse in the voodoo department than CPUs. But I don't see an alternative if I want to do interactive 3D graphics. More generally, it's hard to build simple, easy-to-use, high-level abstractions for 3D graphics and 3D modelling that aren't leaky abstractions, and that don't have performance cliffs you can fall off of.
Tail-threaded interpreters: "Parsing Protobuf at 2+GB/s: How I Learned To Love Tail Calls in C" blog.reverberate.org/2021/04/21/musttail-efficient-interpreters.html

CheerpX:
* https://medium.com/leaningtech/extreme-webassembly-1-pushing-browsers-to-their-absolute-limits-56a393435323
* https://medium.com/leaningtech/extreme-webassembly-2-the-sad-state-of-webassembly-tail-calls-f5d48ef82a87
k
@Konrad Hinsen:
> the bedrock level is the first programmable level of abstraction just above non-programmable hardware. In some contexts (e.g. cybersecurity), that's relevant. For many others, it isn't.
I've been thinking about this, and the more I think about it the less I understand it. In what context is security not relevant? Today we don't know how to write secure software. Therefore every layer under you adds to your risk and multiplies the frequency of required upgrades. That should fairly directly lead to a force pushing you to "go low". There is no dichotomy between contexts where the bedrock level is relevant and ones where it isn't. Staying close always reduces your risk; moving away always increases it. How much risk you want to tolerate is up to you.

I understand that we all have things we want to do, and it's hard to give up our desires. I suffer from that as much as anyone else; @Doug Moen's 3D graphics feel like a particularly hard thing to give up. I just try to stay cognizant of the risks I'm taking on a daily basis. They're ever-present.
k
@Kartik Agaram I don't want to suggest that security is not relevant, only that having it in one's habitability bubble may not be relevant to many of us. If you cannot evaluate security yourself, for lack of competence or lack of time, it becomes a matter of trusting others. Like for so many other technological artefacts. I can't evaluate the security of my car, for example. Once you accept that you can't check everything yourself, what you care about is that others can do it on your behalf. Independent others, not the producer. But they can safely belong to somebody else's habitat.
k
I agree with all of that -- except for the word "safely" in the final sentence. It seems pretty clear that in the world we live in, you cannot rely on others to check for you. You can rely on others to do their best, yes. In some situations, if you know who they are, they don't pass on maintainership willy-nilly, etc., etc., with a million details. But you're still hanging in the breeze to a great extent. The things we rely on often do, in practice, belong to somebody else's habitat. But this situation is in no way, shape or form "safe".
> having security in one's habitability bubble may not be relevant to many of us
Instead of "not relevant to many of us", I'd say the relationship between security and many of us is more akin to that between an ostrich with its head in the sand and an approaching lion.
r
The graphics environment question is particularly interesting. Modern graphics cards inherently violate the "inhabit the bedrock" principle because they are solely concerned with performance. Pushing as high a resolution at as many frames per second with as much realism as you can muster is basically the only goal, at least as far as how modern graphics cards work. Tractability has long been an afterthought, so it's not surprising that such a terrible and uninhabitable space developed. However, the graphics environments of the early 8-bit computers were fairly habitable as far as these systems go, and I wonder if we can take a different approach that focuses on habitability over performance and gets somewhere.
k
@Konrad Hinsen: Oh sorry, I missed your "independent others, not the producer" when I wrote my previous comment. Did you mean stuff like CVE infrastructure?
k
@Kartik Agaram Yes, CVE is a good example. It's certainly not safe to rely on any one person or institution to take care of an infrastructure, but there is no way around relying on some collective. At least for all those who are not software professionals.
👍 1
Relevant quote from a seminar this morning on reproducible computations in science: "For compiling software, I trust Debian developers more than I trust myself." Said by a CS researcher.
k
Yeah, I totally agree with 90% of what you say. How could I not? It's self-evident! All I'm adding to it is the absolute scale. Yes, trust them more than yourself. But still, remember that you can't trust them completely. To summarize my argument: in software today we have some security, but not perfect security. Minimize levels of abstraction, because they multiply your risks. Yes, it's fine to rely on others to check things for you. But they're necessarily imperfect in their oversight, so control the level of reliance. Choosing a bedrock level and staying close to it is always a useful thing to do. That awareness of how high up you are is something I believe everyone should keep in their habitat bubble.
k
We can definitely agree on that! I'd even say awareness is the key point. Which is why I see the main enemy in obfuscation.
👍 1