# present-company
d
I was curious this morning: my naïve view of compiler history is that compilers used to be very small (due to performance constraints) and have gotten very complicated over the years in order to support multiple platforms and to employ more and more optimizations. Is that generally a fair take? What are the big changes to compiler architectures since the early days? Does something like LLVM produce enough better code to justify its complexity? Are there any blog-post- or paper-length histories of compilers out there? (I'm not quite so curious as to be ready to read a whole book, but if you've got a good recommendation…)
m
One change is that companies don't need to build their own complete compiler anymore. Previously, a company might buy a front end (e.g. EDG) for parsing, but it had to build the rest of the compiler in-house (or go fully open and extend gcc). In addition to hardware performance constraints, compiler size (and complexity) was limited by the size of the compiler team a company was willing to fund. With LLVM, companies can focus on the pieces specific to their needs.
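To make that concrete, here's a rough sketch (the toy `add` function and names are purely illustrative, not any particular company's code): the piece you'd still write yourself is the front end that lowers your language to LLVM IR; LLVM's existing passes and backends handle the rest.

```cpp
// Sketch of the "front end only" piece: build LLVM IR for
//   int add(int a, int b) { return a + b; }
// and print it; LLVM's own optimizer and code generator (opt/llc)
// take over from there.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::LLVMContext ctx;
  llvm::Module mod("toy", ctx);
  llvm::IRBuilder<> builder(ctx);

  llvm::Type *i32 = builder.getInt32Ty();
  llvm::FunctionType *fnTy =
      llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
  llvm::Function *fn = llvm::Function::Create(
      fnTy, llvm::Function::ExternalLinkage, "add", &mod);

  llvm::BasicBlock *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
  builder.SetInsertPoint(entry);
  llvm::Value *sum = builder.CreateAdd(fn->getArg(0), fn->getArg(1), "sum");
  builder.CreateRet(sum);

  mod.print(llvm::outs(), nullptr);  // emit textual IR for opt/llc
  return 0;
}
```

Everything downstream of that IR (the -O2 pipeline, the x86/ARM/RISC-V backends) is shared infrastructure a company no longer has to fund a team to build.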
j
The question “is it worth it” is pretty hard to answer. There are different assessments of the cost of LLVM being such a big project, and there are debates about how much the optimizations in LLVM matter compared to a smaller compiler (see the discussion around Daniel J. Bernstein’s talk “The death of optimizing compilers”). Throat-clearing done: Bernstein is wrong, and the answer is “yes, it’s worth it.”