# thinking-together
k
Fixed-point is often a good alternative, but rarely well supported in languages. But there are important cases where it fails, and it's those you need to look at. Example: generate random points in a sphere or a square, and sum 1/d^2 over all pairs of points, where d is the distance between the points in a pair. That's a cartoon version of computing gravitational or electrostatic interactions in physics. 1/d^2 is very large for short distances but very small for long ones, which, however, make up the majority of pairs. Doing this with rationals is prohibitively expensive.
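A rough Python sketch of that cartoon problem, just to make it concrete (plain floats, 1000 points in a unit square):
```python
import random

# 1000 random points in the unit square; sum 1/d^2 over all pairs.
points = [(random.random(), random.random()) for _ in range(1000)]

total = 0.0
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        dx = points[i][0] - points[j][0]
        dy = points[i][1] - points[j][1]
        total += 1.0 / (dx * dx + dy * dy)  # huge for close pairs, tiny for distant ones
```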
💡 1
n
This is one of those cases where my language should force a round-off to occur, but I would still want to preserve the associativity of addition. I think that's still possible: instead of computing the product (or least common multiple) of all denominators here (yielding an exact result), I would pick a "sufficiently large denominator" (2^63 would suffice), and normalize each fraction to have that denominator (rounding the numerator to the nearest integer). Would that suffice for a precise and performant fixed-point solution?
Such cases (iterative/recursive summation of fractions with different denominators) would just need to be statically detected and have such an algorithm applied. The user can pick the denominator or it can be auto-suggested.
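A minimal sketch of that normalization, assuming Python's Fraction for the exact inputs:
```python
from fractions import Fraction

D = 2**63  # the chosen "sufficiently large denominator"

def normalize(x: Fraction) -> int:
    """Round x to the nearest multiple of 1/D; return the integer numerator."""
    return round(x * D)

def fixed_sum(xs) -> Fraction:
    # Each input is rounded exactly once; after that the summation is pure
    # integer addition, which is exact and therefore fully associative.
    return Fraction(sum(normalize(x) for x in xs), D)
```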
c
Yes, I think that would work. The key design distinction is at what point you do the rounding step, as this is what breaks associativity. If you round all numbers to something computationally tractable before your SUM(), then this will work, I think. Another method would be to say that all arithmetic operations are performed in a particular order, e.g. sort the set before SUM(), sort a, b before a + b. I would probably go with this because I reckon it would simplify the implementation a lot.
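A tiny sketch of the ordering idea, using the classic float example:
```python
# Float addition is not associative:
#   (0.1 + 0.2) + 0.3  ->  0.6000000000000001
#    0.1 + (0.2 + 0.3) ->  0.6
# Fixing a canonical operand order makes the result independent of how
# the caller grouped or permuted the inputs, at the cost of a sort.
def canonical_sum(xs):
    return sum(sorted(xs))
```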
n
Yeah I've thought about the sorting approach before too! But I've decided I don't want to go down that route, because it affects the time complexity of basic operations (which can cause unacceptable slowdown, as an aside), and I'd have to explain how all that works to the user. I want them to know what the time complexity of their code is, and the rounding behaviour of the operations performed, and I want the explanation to be intuitive. If we're just normalizing the denominators, it's all intuitive: you can only sum a set of numbers if the numbers have the same denominator. If the denominators aren't the same you have to pick a denominator to normalize them to.
Thinking further: The technique I mentioned maintains the associativity of an approximated SUM(), but I'm not sure if there's any smart way to maintain the associativity of an approximated PRODUCT().
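To see why PRODUCT() is harder, a toy demonstration (hypothetical rnd/mul helpers, same shared denominator as before). With SUM(), one rounding per input suffices because integer addition is exact; with PRODUCT(), every step must round, and an early rounding error gets scaled by everything multiplied afterwards:
```python
from fractions import Fraction

D = 2**63

def rnd(x: Fraction) -> Fraction:
    """Round x to the nearest multiple of 1/D."""
    return Fraction(round(x * D), D)

def mul(a: Fraction, b: Fraction) -> Fraction:
    return rnd(a * b)

a, b, c = Fraction(1, 3), Fraction(1, 7), Fraction(10**12)
print(mul(mul(a, b), c) == mul(a, mul(b, c)))  # False: associativity is lost
```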
s
Has anyone here experimented with interval arithmetic and built-in error analysis in a language? It's something I've wanted to do for a while but haven't got around to yet.
n
I've looked into interval arithmetic before but concluded that it's pretty useless, since for most operations the error bounds grow quickly with each step. Also, most apps don't care about error bounds; they just want a single number. It would take a new spin on the topic to change my mind.
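For what it's worth, a toy illustration of the blow-up (not a proper implementation, which would also round the bounds outward):
```python
# An interval is a (lo, hi) pair; multiply 1.0 +/- 1% by itself 100 times.
def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

x = (0.99, 1.01)
acc = (1.0, 1.0)
for _ in range(100):
    acc = imul(acc, x)
print(acc)  # roughly (0.37, 2.70): a 1% input uncertainty becomes a ~7x spread
```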
s
Hmm, yeah I can see where you are coming from, it's not something I've investigated in depth
l
I'm planning to have precision/error info as a core part of the type info; I would be happy to hear any reason why that might not be amazing! Usually: input with precision(s) -> calculation -> output with different precision(s). The input is provided, the calculation/logic is defined, then an output is requested at a certain minimum precision, at which point the computation is optimized to reach it. Thus, the logic may be precision-agnostic/generic. Having the precision info + requirements would not only allow for many optimizations, but also allow for more robust software and inspection (e.g. everything from game collision errors caused by edge-case velocity/frame-rate combinations, to fatal compounding issues in money software).
❤️ 1
k
@Nick Smith What you propose is basically compulsory conversion to fixed point before summing more than two numbers. That would make algorithms like my example a bit more cumbersome to write, but since it also forces what would be a good habit anyway (arranging the sum by orders of magnitude), it looks acceptable.
👍 1
@S.M Mukarram Nainar Interval arithmetic is a tool for analyzing the robustness of numerical algorithms. It’s almost useless for production runs. The same holds for exact real arithmetic.
💡 1
l
What if the info is used to select the "best suited" representation during compilation? E.g. if you write an interest-rate calculation, you might want the output to be correct to within +/- 0.4 cents after 0-200 years (+/- 0.4). With e.g. JS I have to do a bit of digging to be sure, and the same for C (to figure out the correct data type etc.), but here I could just write the plain formula, specify the input/output error intervals (or just confirm the defaults), and be done and confident with it.
I feel like "precision" and "robustness" are quite lacking in the software fields I've been exposed to. Would be happy to be shown otherwise.
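One crude way to emulate that workflow today, sketched with Python's decimal module (my stand-in for what the compiler would do): keep raising the working precision until two successive runs agree within the requested tolerance.
```python
from decimal import Decimal, getcontext

def compound(principal, rate, years, prec):
    getcontext().prec = prec  # working precision, in significant digits
    return principal * (Decimal(1) + rate) ** years

P, r, years = Decimal("1000.00"), Decimal("0.05"), 200
tol = Decimal("0.004")  # the requested +/- 0.4 cents

prec = 10
while abs(compound(P, r, years, prec) - compound(P, r, years, 2 * prec)) > tol:
    prec *= 2
print(compound(P, r, years, 2 * prec).quantize(Decimal("0.01")))
```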
e
Mathematica is a very commonly used language that does not use floating-point. It is a symbolic language and can carry out computations to arbitrary precision. Other symbolic math languages like Maple can do the same thing. The old language SPITBOL, which was a continuation of Griswold's SNOBOL, also had arbitrary precision; I'm not sure if the final version, which is called Icon, offered that same feature. Myself, I am planning to use DEC64 arithmetic as outlined by the very smart Douglas Crockford. IEEE floating-point is indeed fraught with problems, and I believe one of the reasons COBOL still exists is that it offered BCD arithmetic with specified precision, so that you could control the rounding very deliberately. It's actually one of the only features in COBOL that Fortran didn't have. There was a long-ago battle between those two languages, and COBOL won (unfortunately).
❤️ 1
👍 1
k
@Leonard Pauli That's how exact real arithmetic works. Arithmetic operations return functions that you can call with the desired precision as an argument.
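Roughly, in Python (a toy version of the idea, with precision expressed as 2**-k):
```python
from fractions import Fraction

# A "number" is a function: given k, it returns a Fraction within 2**-k
# of the true value. Operations build new such functions lazily.
def const(x):
    return lambda k: Fraction(x)

def add(f, g):
    # Evaluate both operands a bit more precisely than the requested output.
    return lambda k: f(k + 1) + g(k + 1)

third = lambda k: Fraction(round(Fraction(2**k, 3)), 2**k)  # 1/3 to within 2**-k
x = add(const(1), third)
print(x(50))  # a rational within 2**-50 of 4/3
```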
k
@Konrad Hinsen regarding interval arithmetic in production, what do you think of Unums? https://en.wikipedia.org/wiki/Unum_(number_format)
n
I went down the unum rabbit hole once. They just end up being "slightly better floats", but require hardware support to be efficient. One proposed piece of hardware is an accumulator (called a "quire" there, probably to seem more novel), which would certainly be helpful for adding numbers of different magnitudes together efficiently.
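The same trick can be emulated in software: every finite double is an integer multiple of 2**-1074, so you can accumulate doubles exactly in a big integer. A toy quire in Python:
```python
from fractions import Fraction

SCALE = 2**1074  # 2**-1074 is the smallest positive double, so x * SCALE is an integer

def quire_sum(xs) -> float:
    # Python ints are arbitrary-precision, so the accumulation is exact;
    # a single rounding happens at the final conversion back to float.
    acc = sum(int(Fraction(x) * SCALE) for x in xs)
    return float(Fraction(acc, SCALE))

print(quire_sum([1e16, 1.0, -1e16]))  # 1.0, where plain sum() gives 0.0
```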
k
@Kartik Agaram Hard to say. It's the work of a rogue expert in the business: someone whose expertise is recognized, but who also has a reputation for overselling his work. He has been working on this alone, without soliciting feedback from anyone else. As far as I know, nobody has ever used unums for real-life large-scale applications, partly because there is an efficiency problem as long as unums are not implemented in hardware. So for now, this is an idea waiting for someone willing to make a big evaluation effort.