All Programming Languages are Wrong: <http://users...
# thinking-together
m
e
An excellent summary of some of the major incorrect decisions that led to the current OOP mess.
n
The problems mentioned in the post are real but they're not the biggest ones. Concurrency doesn't even get a mention, nor does program representation.
The theme/title would appeal to us here, but I don't think it would do a good job of convincing someone.
e
When the author says "all languages", he is referring to all the languages he knows about and has used, which would likely be the top 10 or 20 languages on the popularity list. Some of the FoC projects solve the listed problems, and I think his article can serve as a reference point for whether or not critical issues have been fixed.
d
I think the title reads like clickbait, and the premise is wrong. There is no single universal programming language that is best for all domains. Although I am sympathetic to the issues he raises, they don't apply equally in all domains.
m
at this stage, any new non-systems programming language that has size-limited integers, a decimal type that is actually a float, or strings that are arrays of bytes is, for me, wrong
using those should be like "dropping down to assembly" in C: something you do only in very specific cases
d
For example,

> certain data types such as numbers are a special case which does not fit well into the general type system of the language, and hardware details such as the number of bits supported by an integer add instruction show through in the language semantics
He's claiming that all programming languages should have numeric types that don't expose the details of hardware representations, when in fact, this is a requirement for certain domains of programming. It's a requirement for systems programming, obviously. It's also a requirement for graphics programming, where you can be manipulating huge arrays containing millions of pixels or triangles, or hundreds of millions of voxels. In order to fit these huge arrays into memory, every bit counts, so you must have precise control over the bit-level representation of the numbers stored in the array elements.
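A rough sketch of the memory argument, using Python and its stdlib `array` module purely for illustration (the numbers are CPython implementation details, not guarantees): a boxed number costs a pointer plus a full object per element, while a packed 32-bit float costs exactly 4 bytes.

```python
import array
import sys

n = 1_000_000  # a million "pixels"

# A plain Python list holds a pointer per element, each pointing at a
# full float object; count both to estimate the boxed footprint.
boxed = [i * 1.0 for i in range(n)]
boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)

# A packed array of 32-bit floats stores exactly 4 bytes per element.
packed = array.array('f', boxed)
packed_bytes = packed.itemsize * len(packed)

print(boxed_bytes / packed_bytes)  # the boxed form is several times larger
```

On a 64-bit CPython the boxed form is roughly 8x the size of the packed one, which is the difference between a voxel grid fitting in memory or not.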
This has become an issue for the Curv programming language. Curv was always meant to be a very high level language, simpler and higher level than Python or Javascript. There is only a single numeric type, so you don't have to worry about which number type to use, or whether to write '0' or '0.0' in a given context. But I'm adding support for huge arrays, and in that corner of the language, I need a DSL for describing sized numeric types.
m
it's ok to support sized numeric types, but default numbers should behave like math numbers, not like CPU-friendly numbers
the same way you should have list/vector types and also an efficient array type, but the default should be the list/vector: you only reach for arrays when you need them
d
In Curv, I want default numbers to behave like math numbers, but it's theoretically impossible. Math numbers require infinite storage (in the worst case) and infinite computation for arithmetic and relational operators (in the worst case). You have to compromise somewhere. My compromise is to use 64 bit IEEE floating point numbers (minus the NaN) as my number type.
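The compromise shows up concretely. Python's float is also a 64-bit IEEE double, so it can stand in for Curv's number type in a quick sketch:

```python
# 64-bit IEEE doubles represent integers exactly only up to 2**53;
# beyond that, adjacent integers collapse into the same double.
assert float(2 ** 53) == float(2 ** 53 + 1)

# Simple decimal fractions also have no exact binary representation.
print(0.1 + 0.2)        # 0.30000000000000004
assert 0.1 + 0.2 != 0.3
```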
m
Python and Erlang have math numbers (for ints) and I've never heard of someone running out of RAM
I fear the worst case of floating points more than the worst case of math numbers
and it happens much more often
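The integer half of this claim is easy to check in Python, whose ints are arbitrary precision:

```python
# Python ints are math integers: they grow as needed instead of wrapping.
big = 2 ** 1000
assert big + 1 > big            # no overflow, no wraparound
print(len(str(big)))            # 302 decimal digits

# A 64-bit machine int would have overflowed long before this.
assert big > 2 ** 64 - 1
```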
d
It's not hard to make integers that behave like math numbers. The problems start with rational numbers, and become intractable with transcendental numbers. That's why I said you have to compromise somewhere. The best choice of number representation depends on your domain. For graphics, which depend heavily on transcendental operations (like trigonometry and sqrt), binary floating point is a good choice. Curv programs run on the GPU, where binary floating point is the only choice. If your domain includes financial computations, like spreadsheets, then decimal floating point is better.
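A sketch of where exactness breaks down, using Python's stdlib `fractions` module: rationals stay exact under arithmetic, but the first transcendental operation forces an approximation.

```python
from fractions import Fraction
from math import sqrt

# Rationals can be kept exact with arbitrary-precision arithmetic.
third = Fraction(1, 3)
assert third + third + third == 1       # no rounding error

# Transcendental results leave the rationals: sqrt(2) is irrational, so
# any finite representation is an approximation. Fraction(float) converts
# the double approximation exactly, and squaring it exposes the error,
# since no rational squared can equal 2.
approx = Fraction(sqrt(2))
assert approx * approx != 2
```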
j
If Jon Blow saw the start of this article he'd have an aneurysm
b
I do agree with the author that programmers are too enamored of the “right” way of modularizing code. Like all of us, I have run into problems where code was so cleverly written that it felt impossible to change anything and understand its full implications (e.g., in large Rails apps). And I have run into problems where a change was made in one place but similar changes should have been made in related places, because there was code duplication (which might have been refactored, but wasn’t). In my experience the first kind of problem is much, much more often disruptive than the second kind of problem, in practice. This is one of my great reservations about FoC’s consensus about the direction languages should go. I’m an apologist for procedural programming!