A sample Rational number type


(Hooman Mehr) #1

For anybody who is interested: This gist <https://gist.github.com/hooman/6e08c48e1e06ee19e06e5b09f664f9be> contains a Rational number implementation for Swift 3.0. It is a single file that you can paste into a playground and take a look at, or remove the last few lines and drop the file into your own module. The recommended way to create a rational number is to specify the Rational type, as in:

let r: Rational = 18/64
// or
let x = 5 + 2/3 as Rational

or use the tolerance operator `±` to convert from a floating-point value:

let r2 = 2.109±0.0005 // 2⁷⁄₆₄

Rational type conforms to AbsoluteValuable (hence Equatable, Comparable, ExpressibleByIntegerLiteral, SignedNumber), Strideable, and CustomStringConvertible.

It always uses a fully-reduced representation for simplicity and clarity of comparisons, and uses the LCM (least common multiple) for addition & subtraction to reduce the chance of overflow in the middle of computations. The disadvantages of these design choices are that it is not guaranteed to keep the numerator and denominator as specified during construction, and that GCD / LCM computations reduce its performance.

The performance trade-off is not huge and is usually acceptable for typical rational number use cases. Preserving the denominator can be addressed with a number formatter for rational numbers that I will add later. GCD is computed with Stein's algorithm (the binary GCD algorithm).
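
For illustration, here is a minimal sketch of the two ingredients just mentioned: Stein's binary GCD, and addition over the least common multiple to keep intermediate values small. This is not code from the gist; the helper names binaryGCD and addReduced are mine.

func binaryGCD(_ a: Int, _ b: Int) -> Int {
    var a = abs(a)
    var b = abs(b)
    if a == 0 { return b }
    if b == 0 { return a }
    var shift = 0
    while (a | b) & 1 == 0 { a >>= 1; b >>= 1; shift += 1 } // factor out common powers of two
    while a & 1 == 0 { a >>= 1 }                            // make a odd
    repeat {
        while b & 1 == 0 { b >>= 1 }                        // make b odd
        if a > b { swap(&a, &b) }                           // keep a <= b
        b -= a                                              // difference of two odd numbers is even
    } while b != 0
    return a << shift
}

// Addition over lcm(d1, d2) instead of d1 * d2 keeps intermediates small:
func addReduced(_ n1: Int, _ d1: Int, _ n2: Int, _ d2: Int) -> (num: Int, den: Int) {
    let g = binaryGCD(d1, d2)
    let lcm = d1 / g * d2                        // divide before multiplying to limit overflow
    let num = n1 * (lcm / d1) + n2 * (lcm / d2)
    let g2 = binaryGCD(num, lcm)
    return (num / g2, lcm / g2)                  // fully-reduced result
}

addReduced(1, 6, 1, 4)                           // (5, 12): 1/6 + 1/4 computed over lcm 12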

Hooman


(Dave Abrahams) #2

For anybody who is interested: This gist
<https://gist.github.com/hooman/6e08c48e1e06ee19e06e5b09f664f9be>
contains a Rational number implementation for Swift 3.0. It is a
single file that you can paste into a playground and take a look at,
or remove the last few lines and drop the file into your own module.
The recommended way to create a rational number is to specify the
Rational type, as in:

let r: Rational = 18/64
// or
let x = 5 + 2/3 as Rational

or use the tolerance operator `±` to convert from a floating-point value:

let r2 = 2.109±0.0005 // 2⁷⁄₆₄

Presumably you mean:

  let r2 = r±0.0005 // turn a Rational into a Double with no more than
                    // .0005 rounding error

? That is supercute!

Rational type conforms to AbsoluteValuable (hence Equatable,
Comparable, ExpressibleByIntegerLiteral, SignedNumber), Strideable,
and CustomStringConvertible.

Have you thought about what this would look like after we land
https://github.com/apple/swift/pull/3796 (see
https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md)
?

You can also find a working prototype of that code at
https://github.com/apple/swift/blob/b7622b41756e33bdf3f3415320ffec52aec73281/test/Prototypes/Integers.swift.gyb
It would be great to know that the protocols we've designed actually
work out for Rational numbers. With these protocols, can you make your
Rational generic on the underlying integer type?

There's a BigInt implementation based on these protocols here:
https://github.com/natecook1000/swift/commit/45c52a75dcc15024649a5e093a5da4ee031595c2

It would be really cool to see it work with a generic version of your
Rational type.

It always uses a fully-reduced representation for simplicity and
clarity of comparisons, and uses the LCM (least common multiple) for
addition & subtraction to reduce the chance of overflow in the middle
of computations. The disadvantages of these design choices are that it
is not guaranteed to keep the numerator and denominator as specified
during construction, and that GCD / LCM computations reduce its
performance.

The performance trade-off is not huge and is usually acceptable for
typical rational number use cases. Preserving the denominator can be
addressed with a number formatter for rational numbers that I will add
later. GCD is computed with Stein's algorithm (the binary GCD algorithm).

It's interesting to see that one wants a different algorithm for GCD
when using BigInts:
https://en.wikipedia.org/wiki/Binary_GCD_algorithm#cite_note-11

I wonder how that should be dispatched...

···

on Fri Sep 30 2016, Hooman Mehr <swift-users-AT-swift.org> wrote:

--
-Dave


(Hooman Mehr) #3

Presumably you mean:

let r2 = r±0.0005 // turn a Rational into a Double with no more than
                   // .0005 rounding error

? That is supercute!

It is actually the other way around: given a double, it returns a rational such that the difference between the created rational and the input double is at most the specified tolerance.
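
For a rough idea of how such a conversion can work, here is a sketch based on a continued-fraction expansion that stops at the first convergent within tolerance. The helper rationalApproximation is hypothetical, not the gist's actual ± implementation, and it may settle on a different in-tolerance fraction than the gist does (97/46 rather than 2⁷⁄₆₄ for the example above).

func rationalApproximation(of x: Double, tolerance: Double) -> (num: Int, den: Int) {
    var a = x.rounded(.down)              // integer part of the current remainder
    var frac = x - a                      // fractional remainder
    var (p0, q0) = (1, 0)                 // previous convergent p(k-1)/q(k-1)
    var (p1, q1) = (Int(a), 1)            // current convergent p(k)/q(k)
    while abs(Double(p1) / Double(q1) - x) > tolerance, frac > 1e-15 {
        let r = 1 / frac                  // next continued-fraction term
        a = r.rounded(.down)
        frac = r - a
        (p0, p1) = (p1, Int(a) * p1 + p0) // standard convergent recurrence
        (q0, q1) = (q1, Int(a) * q1 + q0)
    }
    return (p1, q1)
}

rationalApproximation(of: 2.109, tolerance: 0.0005)   // (97, 46), about 2.1087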

Have you thought about what this would look like after we land
https://github.com/apple/swift/pull/3796 (see
https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md)
?

That is the next thing I want to look at, the next chance I get. For now, it isn’t much more than a quick hack on a slow Friday…

You can also find a working prototype of that code at
https://github.com/apple/swift/blob/b7622b41756e33bdf3f3415320ffec52aec73281/test/Prototypes/Integers.swift.gyb
It would be great to know that the protocols we've designed actually
work out for Rational numbers. With these protocols, can you make your
Rational generic on the underlying integer type?

There's a BigInt implementation based on these protocols here:
https://github.com/natecook1000/swift/commit/45c52a75dcc15024649a5e093a5da4ee031595c2

It would be really cool to see it work with a generic version of your
Rational type.

Interesting idea. A BigInt type could really lift Rational to new levels, and it would be an interesting case study to see how well the new protocols work.
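
As a rough sketch of the shape such a generic type could take, here is a minimal version written against today's standard-library SignedInteger protocol as a stand-in for the proposed integer protocols; this is an assumption of mine, not code from the gist or the prototype.

struct Rational<T: SignedInteger> {
    private(set) var numerator: T
    private(set) var denominator: T           // kept positive and coprime with numerator

    init(_ numerator: T, _ denominator: T) {
        precondition(denominator != 0, "denominator must be nonzero")
        let sign: T = denominator < 0 ? -1 : 1
        let g = Rational.gcd(abs(numerator), abs(denominator))
        self.numerator = sign * numerator / g
        self.denominator = sign * denominator / g
    }

    private static func gcd(_ a: T, _ b: T) -> T {
        var a = a
        var b = b
        while b != 0 { (a, b) = (b, a % b) }
        return a
    }
}

// With a BigInt that adopts the same integer protocols, the element type
// could simply be swapped in, e.g. `Rational<BigInt>`.
let third = Rational(2, -6)                   // stored as -1/3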

It's interesting to see that one wants a different algorithm for GCD
when using BigInts:
https://en.wikipedia.org/wiki/Binary_GCD_algorithm#cite_note-11

I wonder how that should be dispatched…

Moreover, there is something I ran into after posting: on today’s CPU architectures with fast integer divide, the simple binary GCD is slower than the basic Euclidean algorithm using remainders. Binary GCD needs the equivalent of __builtin_ctz to run faster. It would be nice if the compiler could detect the shift loops and replace the whole loop with the x86 BSFL instruction.
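
To make the comparison concrete, here is a sketch of the two loops in question. The binary version removes whole runs of zero bits in one step via trailingZeroBitCount (today's standard-library spelling of a count-trailing-zeros operation) instead of a shift-by-one loop; the function names are illustrative, not from the gist.

// Remainder-based Euclidean loop: one hardware divide per iteration.
func euclideanGCD(_ a: UInt, _ b: UInt) -> UInt {
    var a = a
    var b = b
    while b != 0 { (a, b) = (b, a % b) }
    return a
}

// Binary GCD with ctz-style acceleration.
func binaryGCDWithCTZ(_ a: UInt, _ b: UInt) -> UInt {
    if a == 0 { return b }
    if b == 0 { return a }
    let shift = (a | b).trailingZeroBitCount   // common factors of two
    var a = a >> a.trailingZeroBitCount        // make a odd in one step
    var b = b
    repeat {
        b >>= b.trailingZeroBitCount           // make b odd in one step
        if a > b { swap(&a, &b) }
        b -= a
    } while b != 0
    return a << shift
}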

I need to put some thought into whether there is an efficient way of switching the algorithm based on the actual size (or nature) of the given Int type; otherwise we may have to specialize, which is not good from a generics standpoint.

···

On Oct 2, 2016, at 5:23 PM, Dave Abrahams via swift-users <swift-users@swift.org> wrote:


(Dave Abrahams) #4

Presumably you mean:

let r2 = r±0.0005 // turn a Rational into a Double with no more than
                   // .0005 rounding error

? That is supercute!

It is actually the other way around: given a double, it returns a
rational such that the difference between the created rational and the
input double is at most the specified tolerance.

Oh, nifty. How about the other direction?

Have you thought about what this would look like after we land
https://github.com/apple/swift/pull/3796 (see
https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md)
?

That is the next thing I want to look at, the next chance I get.

Thank you!

For now, it isn’t much more than a quick hack on a slow Friday…

You can also find a working prototype of that code at

https://github.com/apple/swift/blob/b7622b41756e33bdf3f3415320ffec52aec73281/test/Prototypes/Integers.swift.gyb

It would be great to know that the protocols we've designed actually
work out for Rational numbers. With these protocols, can you make your
Rational generic on the underlying integer type?

There's a BigInt implementation based on these protocols here:
https://github.com/natecook1000/swift/commit/45c52a75dcc15024649a5e093a5da4ee031595c2

It would be really cool to see it work with a generic version of your
Rational type.

Interesting idea. A BigInt type could really lift Rational to new
levels, and it would be an interesting case study to see how well the
new protocols work.

Quite so! [FWIW, I rather wish that BigInt was using two's complement
representation, as the protocols were designed to make that work
smoothly]

It's interesting to see that one wants a different algorithm for GCD
when using BigInts:
https://en.wikipedia.org/wiki/Binary_GCD_algorithm#cite_note-11

I wonder how that should be dispatched…

Moreover, there is something I ran into after posting: on today’s CPU
architectures with fast integer divide, the simple binary GCD is slower
than the basic Euclidean algorithm using remainders. Binary GCD needs
the equivalent of __builtin_ctz to run faster.

The new integer protocols expose that as a "leadingZeros" property.

It would be nice if the compiler could detect the shift loops and
replace the whole loop with the x86 BSFL instruction.

Talk to the llvm people :-). You'll want to use masking shifts under
the new integer protocols.
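
For reference, the masking shifts referred to here are spelled `&<<` and `&>>` in the standard library that came out of SE-0104; they reduce the shift amount modulo the bit width instead of giving the "smart" over-shift behavior. A tiny illustration:

let x: UInt8 = 0b1000_0000
print(x >> 9)     // 0:  the ordinary shift treats an oversized amount as shifting everything out
print(x &>> 9)    // 64: the masking shift reduces the amount modulo the bit width, so 9 becomes 1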

I need to put some thought into whether there is an efficient way of
switching the algorithm based on the actual size (or nature) of the
given Int type; otherwise we may have to specialize, which is not good
from a generics standpoint.

It depends on exactly what you mean by that term :slight_smile:

···

on Sun Oct 02 2016, Hooman Mehr <swift-users-AT-swift.org> wrote:

On Oct 2, 2016, at 5:23 PM, Dave Abrahams via swift-users <swift-users@swift.org> wrote:

--
-Dave


(Hooman Mehr) #5

I encountered a bug in the standard library while working on an exact conversion from floating point types to my rational type <https://gist.github.com/hooman/6e08c48e1e06ee19e06e5b09f664f9be> (the case where you pass a tolerance of zero).

SR-2868 <https://bugs.swift.org/browse/SR-2868>: If the value of the floating point type is a power of two, its `significandWidth` property returns a nonsense value, because the behavior of Builtin.int_cttz_IntNN (used in the countTrailingZeros property) is undefined for a zero argument.

The test suite should have included such special cases and caught this one.
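
In the meantime, a workaround sketch (safeSignificandWidth is a hypothetical helper of mine, not code from the gist or from the standard-library fix): compute the width from the raw significand bit pattern and treat the all-zeros pattern, which is exactly the power-of-two case, explicitly instead of feeding zero to a count-trailing-zeros operation.

func safeSignificandWidth(_ x: Double) -> Int {
    guard x.isFinite else { return -1 }     // NaN and infinities have no meaningful width
    let bits = x.significandBitPattern
    if bits == 0 { return 0 }               // zero or an exact power of two: no fraction bits needed
    return Double.significandBitCount - bits.trailingZeroBitCount
}

safeSignificandWidth(8.0)    // 0: 8 is 2^3
safeSignificandWidth(1.5)    // 1: 1.5 is 1.1 in binary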

···

On Oct 3, 2016, at 9:08 AM, Dave Abrahams via swift-users <swift-users@swift.org> wrote:

on Sun Oct 02 2016, Hooman Mehr <swift-users-AT-swift.org> wrote:

On Oct 2, 2016, at 5:23 PM, Dave Abrahams via swift-users <swift-users@swift.org> wrote:
Presumably you mean: