# thinking-together
s
Some pretty interesting speculative design here: https://gavinhoward.com/2020/02/computing-is-broken-and-how-to-fix-it/
👍 1
He doesn't seem to talk about the low-level latency issues that come with that kind of hardware design, NUMA, etc., which seems like an oversight. It's interesting to imagine what programming such a system would look like, though. Also, he says he wants C interop but that he will only use safe languages, which is kinda odd.
e
He has some mistaken ideas. Speculative execution is hardly a problem for anyone; it's invisible to programmers. The chips are working perfectly well.
i
Speculative execution is the root cause of Spectre... https://meltdownattack.com/
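For anyone curious what that looks like in code, here's a rough sketch of the Spectre v1 "bounds check bypass" gadget in C (the array names follow the original paper's illustration; this is the shape of the vulnerability, not a working exploit):

```c
#include <stddef.h>
#include <stdint.h>

size_t  array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 4096];

uint8_t victim_function(size_t x) {
    if (x < array1_size) {
        /* The branch predictor can guess "taken" even when x is out of
         * bounds; the CPU then speculatively loads array1[x] and uses that
         * secret byte to index array2, leaving a cache footprint the
         * attacker can later recover with a timing side channel. */
        return array2[array1[x] * 4096];
    }
    return 0;
}
```

So the machinery isn't invisible to programmers after all: the mis-speculated load never retires, but its cache effects do.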
e
And imagining that everything is a file is a major mistake; files are a pain in the ass. They are unstructured arrays of bytes, and one is constantly having to decode and encode them. Apple used to use a thing called a resource fork, a secondary structured section of a file in which to store indexed meta-information. It was wonderful, but because the Internet was based on stupid old Unix and couldn't transfer such files easily, they dropped it. And Microsoft took 10 years to copy that feature in a version of their file system called WinFS, was just about to release it, and then dropped it too, pushing us back to the 60s.
💯 2
c
NTFS actually supports arbitrary metadata (called "alternate data streams"), but they don't advertise it much because it confuses people that the info is lost when the file is emailed or put on a FAT32 USB stick
👍 1
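If anyone wants to play with it, here's a small C sketch; on Windows the `file:stream` syntax is passed straight through to NTFS, so plain stdio works (the file and stream names here are made up, and it only does anything useful on an NTFS volume):

```c
#include <stdio.h>

/* Sketch: write and read back an NTFS alternate data stream.
 * "notes.txt:comment" names a secondary stream attached to notes.txt. */
int main(void) {
    FILE *f = fopen("notes.txt:comment", "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("metadata that travels with the file\n", f);
    fclose(f);

    char buf[128];
    f = fopen("notes.txt:comment", "r");
    if (f && fgets(buf, sizeof buf, f))
        printf("stream says: %s", buf);
    if (f) fclose(f);
    return 0;
}
```

And sure enough, copy notes.txt to a FAT32 stick or attach it to an email and the comment stream silently disappears, which is exactly the confusion mentioned above.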
s
Yeah I'm not much of a file fan either
I find it interesting how some of those ideas have already made it to the mainstream though
For example, his stuff about managing message passing with shared-memory (ring) buffers will look familiar to anyone keeping up with Linux kernel development as the recent io_uring facility (rough sketch below)
👋 1
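Roughly what that looks like with liburing, as a minimal sketch that queues a single read and reaps the completion (the file path and queue depth are arbitrary). The submission and completion queues are ring buffers shared between the process and the kernel, which is the same shared-memory message-passing idea:

```c
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) return 1;    /* 8-entry rings */

    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[256] = {0};
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);    /* grab a submission slot */
    io_uring_prep_read(sqe, fd, buf, sizeof buf - 1, 0);   /* describe the read */
    io_uring_submit(&ring);                                 /* hand it to the kernel */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                         /* wait for the completion */
    printf("read %d bytes: %s", cqe->res, buf);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```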
w
Operating systems are not my thing, friends (and honestly, for my work, the innards of the CPU could be microscopic hamsters for all it matters), but it seems there has been relatively little progress since Oryx/Pecos was more or less shelved https://en.wikipedia.org/wiki/Oryx/Pecos. What have the OS people been up to? Virtualization? Containers? Web browsers? My primary pain point is the lack of compelling IPC. I poke my nose out every so often to get a whiff of what's out there, but it has been a while.
e
The Linux community whiles away its hours shuffling around different combinations of the 1000 components that form each distribution. And instead of fixing the shared-library version problem, the community coalesced around Docker, which lets you run against a perfectly compatible, frozen set of modules, with all the later bug fixes absent... absolute garbage technology under the hood there.
s
@Edward de Jong / Beads Project I'm curious as to what about the "shared library version" problem on Linux you think isn't fixed. Docker et al. are abused, true, but you are free not to use them, along with whatever other hairy mud-jenga towers of complexity you may want to avoid (cough systemd cough). I find that, generally, among people I meet, the ones most enthusiastic about Docker etc. are the ones least comfortable with their package manager. You can have much simpler systems by just using packages and services rather than containers. Of course, this is complicated by the fact that it's been a trend among programming languages to have utterly unholy build processes, whose brittleness makes it hard to package things; but if you can't build the software in question, you can't patch it, and you probably shouldn't be using it in a container either. This issue isn't really the fault of any system in particular but rather of programmer trends, with programs like web browsers setting trends (and records) with their compilation resource usage and dependency chains.
s
IME, the complexity and brittleness of the build process is largely a product of bad and inconsistent dynamic library loading designs. I would have preferred to only support static builds for Io, but so many useful open-source libraries only supported DLLs, and getting them to support (and maintain) cross-platform static builds was just too much work.
s
While static builds do simplify some things, they don't solve the versioning problem; they just let you keep using software that's out of date, which you don't want to be doing anyway. Package managers fix this issue, since good ones will track library dependencies and rebuild as necessary. I'm not sure what to blame for build-system bloat, really; it seems more of a cultural issue than anything. I understand that Google has build farms for stuff like Chromium so the core devs don't have to worry about it, and I generally have trouble reproducing builds for any sufficiently complicated C++ program, and I'm not too sure why. For an interesting approach to package management (actually just turning package management into an extension of the build step) see https://github.com/oasislinux/oasis by Michael Forney of cproc fame
👍 1
x
Re: Shared library version problem: Check out the Nix Package Manager and NixOS: https://nixos.org/ They solve it very nicely!