# thinking-together
n
Has anyone ever seen an implementation of arbitrary-precision floats (N.B: not rationals) where you don't actually have to set the precision BEFORE you perform an operation? I want to do a sequence of operations and then round the final result, not round constantly and incessantly. It seems like no human on earth has ever implemented this before, which baffles me. (Edit: division is allowed to be rounded)
w
If you don't mean rationals, what do you mean?
n
Just floats! But where the mantissa and exponent are BigInts, not fixed-size bit fields. I want to be able to add or multiply two floats and get an exact result, unless I explicitly ask for rounding.
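Such a type is straightforward to sketch: with arbitrary-precision integers for both fields, addition and multiplication need no rounding at all. A minimal, hypothetical Python sketch (the `BigFloat` name and API are invented for illustration, not an existing library):

```python
class BigFloat:
    """value = mantissa * 2**exponent; both fields are Python bigints."""

    def __init__(self, mantissa, exponent=0):
        self.mantissa = mantissa
        self.exponent = exponent

    def __add__(self, other):
        # Align exponents, then add mantissas -- no rounding needed.
        e = min(self.exponent, other.exponent)
        m = (self.mantissa << (self.exponent - e)) + \
            (other.mantissa << (other.exponent - e))
        return BigFloat(m, e)

    def __mul__(self, other):
        # Multiply mantissas, add exponents -- also exact.
        return BigFloat(self.mantissa * other.mantissa,
                        self.exponent + other.exponent)

    def __float__(self):
        # Convert to an ordinary double (rounding happens only here).
        return self.mantissa * 2.0 ** self.exponent

x = BigFloat(3, -1)   # 1.5
y = BigFloat(1, -2)   # 0.25
print(float(x + y))   # 1.75
print(float(x * y))   # 0.375
```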
w
Probably not what you’re looking for but: https://en.wikipedia.org/wiki/Unum_(number_format)
n
Unums don't provide exact results, despite the marketing hype. Interval arithmetic (which floats or unums can be used for) guarantees that the true result lies within an interval, but I'm not really interested in interval arithmetic.
m
probably you know about this and I'm not sure it solves your problem, but just in case: https://gmplib.org/
n
Unfortunately, you have to specify the precision of GMP floats before each operation.
I could "hack" these existing libraries by setting the target precision insanely high (kilobytes), but it's a dodgy solution.
w
The kind of hack I've done is to have arithmetic operations build an expression tree, which you then reduce in some special way later.
👀 1
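A hypothetical sketch of that hack in Python (all names invented for illustration; `Fraction` stands in for whatever exact reduction you choose at the end):

```python
from fractions import Fraction

class Expr:
    # Operations don't compute anything; they just build a tree.
    def __add__(self, other): return Node('+', self, other)
    def __mul__(self, other): return Node('*', self, other)

class Leaf(Expr):
    def __init__(self, value): self.value = Fraction(value)
    def eval(self): return self.value

class Node(Expr):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def eval(self):
        # Reduction happens here, once, at the very end.
        a, b = self.left.eval(), self.right.eval()
        return a + b if self.op == '+' else a * b

expr = (Leaf(1) + Leaf(2)) * Leaf('1/3')
print(expr.eval())   # 1
```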
g
^^that’s what i would do
n
How does that give me an associative SUM()?
i
This is a rabbit hole I went exploring about a year ago. Some of the libraries had a maximum precision threshold you could set, which sounds like the thing you want to avoid. Other libraries dodged that by doing all calculations symbolically (see also). There were a bunch of libraries I came across, but I didn't end up saving links to them because I never found one that fit my performance needs (i.e. perfect precision with relatively constant execution time that can handle the accumulation of millions of operations).
k
@Nick Smith If I understand correctly what you are looking for, it's impossible. The very principle of floating-point arithmetic is to make the precision part of the storage format for each number, in order to ensure a constant storage size, no matter which operations you do. If you are OK with indefinite storage size, as for big integers, then go for rational numbers. Something that many people don't realize is that floats are a subset of the rationals with fixed storage size. Perhaps it's because so many languages call them "reals" that so many people think floats are a superset of the rationals, whereas the reality is the exact opposite. If what you want is a superset of the rationals, go for computable numbers (the subject of Alan Turing's famous paper that introduced Turing machines). A number is then represented by a procedure that you call with the requested precision. But this is rarely what you really want, once you understand all the consequences, for example the impossibility of testing for equality.
👍 2
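The computable-number idea can be sketched as a procedure you call with a precision. A minimal, hypothetical example for √2 (the `sqrt2` name is invented; only the Python standard library is used):

```python
from math import isqrt

def sqrt2(bits):
    # Returns floor(sqrt(2) * 2**bits): a dyadic approximation of sqrt(2)
    # accurate to `bits` fractional bits.
    return isqrt(2 << (2 * bits))

print(sqrt2(4) / 2**4)   # 1.375 (i.e. 22/16)

# Note: you can compare approximations at any precision, but exact equality
# of two such procedures is undecidable in general -- Konrad's point above.
```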
c
I wonder if there's some solution that carries several representations, and is aware of their accuracy, so it can fall back to an expensive symbolic calculation if it knows the test is outside float accuracy.
💡 1
r
@Nick Smith the reason it doesn't exist is that not all numbers are representable as base-2 floats, even with arbitrary precision. For example: 0.1 is 0.000110011001100110011...₂ (with the 0011 part repeating forever). So you literally can't represent even the finite base-10 decimals exactly without infinitely many binary digits. See this for a more detailed explanation.
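The point is easy to verify: Python's `Fraction` converts a binary double to the exact rational value it actually stores, which for 0.1 is not 1/10:

```python
from fractions import Fraction

# The double nearest to 0.1 is exactly 3602879701896397 / 2**55.
print(Fraction(0.1))           # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))   # False

# By contrast, 0.5 has a finite binary expansion and is stored exactly.
print(Fraction(0.5))           # 1/2
```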
a
there are systems, like Common Lisp, that support arbitrary-sized rational numbers; any rational can be stored as a fraction of (bigint) / (bigint)
e
there is an excellent YouTube video and code sample by Crockford on the DEC64 floating-point representation, which is far superior to the stupid IEEE double-precision format. It was a tragedy that the committee sought to save a few transistors and used binary exponents. Countless programs wrestle with this, and I believe it is one of the reasons COBOL still runs in many industries: for all its verbosity and ugliness, COBOL has BCD arithmetic built into its syntax and computation model.
👍 2
And let's not forget mathematica, which as a symbolic language, will deliver whatever precision you wish to ask for. It is one of its "superpowers".
i
Yeah, DEC64 is nice. Wish we had that in JS.
d
Konrad is right that this is generally impossible. As soon as you compute 1/3, it's impossible to represent that exactly in either binary or decimal floating point. However, if you use decimal notation for your floating point inputs, use decimal floating point representation, and restrict your operations to add, subtract and multiply (no division, no exponentials or trigonometry), then I think it could be done. If you extend this system of exact arithmetic to support division, then you need to store a denominator, which means you are using rationals.
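That restricted system (a decimal coefficient plus a power-of-ten exponent, with only add, subtract, and multiply) can be sketched in a few lines. The `(coefficient, exponent)` tuple representation and function names here are invented for illustration:

```python
def dadd(a, b):
    # Exact decimal addition: align exponents, add coefficients.
    (ma, ea), (mb, eb) = a, b
    e = min(ea, eb)
    return (ma * 10**(ea - e) + mb * 10**(eb - e), e)

def dmul(a, b):
    # Exact decimal multiplication: multiply coefficients, add exponents.
    (ma, ea), (mb, eb) = a, b
    return (ma * mb, ea + eb)

# 0.1 + 0.2 == 0.3 exactly, unlike with binary doubles.
print(dadd((1, -1), (2, -1)))    # (3, -1)
# 1.5 * 1.5 == 2.25 exactly.
print(dmul((15, -1), (15, -1)))  # (225, -2)
```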
n
Just to address everyone at once: I'm sure that I do NOT want a rational number type. I don't really care about exact division, I just want exact addition and multiplication. For those talking about 0.1 (1/10) and 1/3, those are both division operations, and I do not want them to be exact. I probably wasn't clear enough about my stance on division.
😄 2
👍 2
And a decimal number type doesn't help me out at all, because all it gives you is exact division by 10.
@Konrad Hinsen I'm not sure what the agreed-upon definition of "floating-point" is, but constant storage size isn't one of my constraints. You can define a broader subset of the rationals than IEEE floats: any integer multiplied by any power of two. They're called the dyadic rationals, and I guess that's what I'm talking about. I want to see a library for dyadic rational arithmetic.
The reason I don't care about exact division is that division is relatively uncommon compared to addition and multiplication, and if you start trying to address the uncommon cases you end up going down a rabbit hole where you want exact roots, exponentials, logarithms, and trigonometry. In other words, you want Mathematica. I'm not going that far. I'm happy for division and transcendental functions to be rounded. The key thing I want is associativity of addition and multiplication, because my programming language avoids inessential ordering of values (it has no "default ordering"). Division isn't associative, so it's fine for it to be inexact.
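The associativity concern is concrete: IEEE doubles fail it, while exact dyadic-rational arithmetic satisfies it. A quick check in Python (using `Fraction`, which converts each double to the dyadic rational it exactly stores):

```python
from fractions import Fraction

# IEEE doubles: addition is NOT associative.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6

# The same values as exact dyadic rationals: addition IS associative,
# so SUM() gives one answer regardless of grouping or ordering.
fa, fb, fc = Fraction(0.1), Fraction(0.2), Fraction(0.3)
print((fa + fb) + fc == fa + (fb + fc))   # True
```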
d
The topic of symbolic computation makes me miss my TI-89
e
Mathematica lets you request a desired precision for the results. Also, the Intel 8087 subunit, which has been inside every Intel-compatible chip for over 20 years, has operations that do everything in 80-bit precision, and then down-converts to 64 bits after you run a long sequence of operations and store the result into a double-precision variable. In this way it avoids rounding errors creeping in. It is extremely clever, and although not amenable to parallel operation, it is still the preferred way to do floating point in the Intel world because it doesn't accumulate error like everyone else's 64-bit arithmetic operations, which accumulate error rather quickly. 80 bits is 10 bytes, and there are even BCD instructions in the 8087 unit, which I have occasionally used. They are nearly forgotten nowadays, as no higher-level language I know of lets you access them; you have to write in assembler. A tiny bit of assembler can ofttimes do some very clever things.
n
But that still doesn't enable associative addition and multiplication 😕 I'm looking for conceptual simplicity. Users of my programming environment should have addition and multiplication that "just work", no strings attached, no land mines, no cognitive overhead.
k
@Nick Smith It seems that what you want is scaled integers, right? Maybe this proposal would work for you. There even seems to be an implementation (in C++).
@Edward de Jong / Beads Project DEC64 follows different priorities than IEEE 754. If your needs are closer to DEC64, that’s fine, but not a reason to call IEEE754 stupid.
n
@Konrad Hinsen I want fractional numbers like 1.5, and dynamic range (compact representations of 2^100 and 2^-100) and I don’t think that proposal addresses that.
(Though I appreciate the suggestions)
d
I want to see a library for dyadic rational arithmetic.
I googled that, and found https://github.com/SRI-CSL/libpoly/tree/master/src/number. The code for exact dyadic rational addition and multiplication is actually quite simple.
w
@Nick Smith please remind us what operations you want to do. You've mentioned addition and multiplication, sometimes division. What else? Roots? Trig? What?
k
@Nick Smith Oops, right, I had filed that library under the wrong heading in my notes. Which leaves... no library I know of for scaled integers. Which have nevertheless been widely used, as they are very simple to implement in many situations. Addition, subtraction, and multiplication are nearly trivial.
n
@Doug Moen Ah, I couldn't find anything like this when I looked, thank you! Yes, it does seem that an implementation won't be too challenging, so I'm probably going to write my own from scratch. Truncated division (to custom precision) might be tricky. I guess I'll find out!
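A hedged sketch of what truncated dyadic division to a target precision might look like (the `dyadic_div` name and the `(mantissa, exponent)` tuple representation are invented for illustration; negative operands would need extra care over floor vs. truncation):

```python
def dyadic_div(num, den, bits):
    """Divide two positive dyadic rationals, each given as (m, e) meaning
    m * 2**e, truncating the quotient to about `bits` significant bits."""
    (mn, en), (md, ed) = num, den
    # Shift the numerator up so the integer quotient keeps ~`bits` bits.
    shift = max(bits + md.bit_length() - mn.bit_length(), 0)
    q = (mn << shift) // md          # truncated integer division
    return (q, en - ed - shift)

# 1/3 truncated to 8 significant bits: 170 * 2**-9 = 0.33203125
m, e = dyadic_div((1, 0), (3, 0), 8)
print(m, e)   # 170 -9
```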
@Konrad Hinsen "Scaled integers" is a legacy term, right? Most of the sources seem to refer to older systems and hardware design (or conflate them with fixed-point numbers). I can't seem to find anything interesting.
@wtaysom Basically all the operations you can do on floats today, except those that consume/produce non-arithmetic values like -0, NaN, and infinities. So yeah, I'd want rounded roots and transcendental functions. But I will probably have to implement those myself, which is acceptable, because I don't need them in my initial prototyping.
k
@Nick Smith Well possible that it’s an old-fashioned term. I learned about this 25 years ago.
👍 1