Notes on numerics in Swift

In the spirit of Ted's RFC on making swift.org a more valuable resource, I thought I'd share something I've been working on for a while:

A great number of questions are raised in these forums as to the functioning and design of numeric types and protocols in Swift. Sometimes, the answer can be found only by digging through the source code, and even then it's hard to place what you learn there within a larger context. Therefore, I've compiled a series of articles that delve into these topics in more detail.

Who's it for? Anyone who's working intensively with Swift but less familiar with its numeric types, or anyone who's working intensively with numbers in other programming languages but less familiar with Swift. Hopefully, you'll find tidbits that you didn't know and will find useful:

NOTES ON NUMERICS IN SWIFT

Concrete integer types, part 1
Introduction, integer literals, conversions among integer types

Concrete integer types, part 2
Operator precedence, overflow behavior, integer remainder, bitwise operations

Concrete binary floating-point types, part 1
Introduction (IEEE 754, C mathematical functions, finite constants)

Concrete binary floating-point types, part 2
Floating-point precision (striding, fused multiply-add, unit in the last place, approximating π, subnormal values on 32-bit ARM, string representation)

Concrete binary floating-point types, part 3
Float literals, conversions between floating-point types, other initializers

Concrete binary floating-point types, part 4
Signed zero, infinity, and NaN; floating-point remainder; significand representation

Numeric types in Foundation
Foundation.Decimal, Foundation.NSNumber

Numeric protocols
Introduction, design rationale, generic algorithms, conformance

The source repository for these documents is at https://github.com/xwu/xwu-swift-numerics. Corrections welcome. And, of course, if the community feels that this would be the sort of thing that should be available on swift.org, I'm happy to contribute.

Thanks Xiaodi, lots of hard work poured into it :)! This kind of content is exactly what Swift needs (and we need more teaching of Swift quirks and gems from a position of practical experience).

Great work - hopefully it will be put onto Swift.org. Java benefited from similar efforts, e.g. the books Effective Java and Java Puzzlers.

Just read about Decimal and NSNumber. This is so well written and contains so many interesting details. Looking forward to reading the rest! Thanks for doing this!

Any chance you could provide one single Markdown file somewhere? I'd convert it to PDF and read it in iBooks on the go. Never mind. ;)

One thing I have recently wondered is whether Numeric.magnitude should actually guarantee that it will be the absolute value of self.

All stdlib types use it that way, but for complex values it may be better for it to be the square of the absolute value (i.e. x.conjugate * x), and I don't think there are any syntactic requirements that would prohibit this usage.

Of course, BinaryInteger would still guarantee that self.magnitude is the absolute value of self.
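
For concreteness, here is a minimal sketch, assuming a hypothetical Gaussian-integer Complex type (nothing like it exists in the standard library), showing that Numeric's requirements can be satisfied with a square-norm magnitude:

```swift
// A hypothetical Complex type over Int whose `magnitude` is the *squared*
// absolute value, i.e. x.conjugate * x, which is always a nonnegative Int,
// so no square roots are involved.
struct Complex: Numeric {
    var real: Int
    var imaginary: Int

    // Square norm rather than absolute value.
    var magnitude: Int {
        return real * real + imaginary * imaginary
    }

    init(real: Int, imaginary: Int) {
        self.real = real
        self.imaginary = imaginary
    }

    init(integerLiteral value: Int) {
        self.init(real: value, imaginary: 0)
    }

    init?<T: BinaryInteger>(exactly source: T) {
        guard let value = Int(exactly: source) else { return nil }
        self.init(real: value, imaginary: 0)
    }

    static func == (lhs: Complex, rhs: Complex) -> Bool {
        return lhs.real == rhs.real && lhs.imaginary == rhs.imaginary
    }

    static func + (lhs: Complex, rhs: Complex) -> Complex {
        return Complex(real: lhs.real + rhs.real,
                       imaginary: lhs.imaginary + rhs.imaginary)
    }

    static func - (lhs: Complex, rhs: Complex) -> Complex {
        return Complex(real: lhs.real - rhs.real,
                       imaginary: lhs.imaginary - rhs.imaginary)
    }

    static func * (lhs: Complex, rhs: Complex) -> Complex {
        return Complex(
            real: lhs.real * rhs.real - lhs.imaginary * rhs.imaginary,
            imaginary: lhs.real * rhs.imaginary + lhs.imaginary * rhs.real)
    }

    static func += (lhs: inout Complex, rhs: Complex) { lhs = lhs + rhs }
    static func -= (lhs: inout Complex, rhs: Complex) { lhs = lhs - rhs }
    static func *= (lhs: inout Complex, rhs: Complex) { lhs = lhs * rhs }
}

let z = Complex(real: 1, imaginary: 1)
print(z.magnitude)   // 2, i.e. |1 + i|² rather than √2
```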

It is guaranteed: "For any numeric value x, x.magnitude is the absolute value of x."

Making magnitude be the squared absolute value is a mess. There's a bunch of undesirable fallout, but the worst is that it means that integer magnitude always requires a type with twice as many bits to represent.
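
A quick sketch of that fallout, using Int8 as an example: today Int8.Magnitude is UInt8, so even Int8.min's magnitude fits in 8 bits, but a squared magnitude of Int8.min would be 16384, which needs at least 16 bits.

```swift
// Today: Int8.Magnitude is UInt8, and (-128).magnitude is 128.
let x: Int8 = .min
print(type(of: x.magnitude), x.magnitude)   // UInt8 128

// If magnitude instead meant the squared absolute value, the result for
// Int8.min would be 16384, which is out of range for any 8-bit type.
let squared = Int(x) * Int(x)
print(squared)                              // 16384
```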

As I said above, BinaryInteger would still require that magnitude be the absolute value.


However, if Numeric in general permitted square-norm Magnitudes, it would allow non-linear types like complex numbers to have a Magnitude that does not have to represent arbitrary sums of square roots.

A hypothetical complex number's Magnitude currently has to represent not just √2 and √5 for (1 + .i).magnitude and (2 + .i).magnitude respectively, but also √2 + √5 for (1 + .i).magnitude + (2 + .i).magnitude.

I don't know how many unique values can be obtained by summing the magnitudes of (n + m)-bit complex numbers, but I suspect it would be more than for a square-norm Magnitude, and harder to implement exactly.
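
To illustrate the representability point with concrete numbers (using plain Int and Double here rather than any actual Complex type): the square norms of 1 + i and 2 + i are the exact integers 2 and 5, and their sum is the exact integer 7, whereas the corresponding absolute values sum to the irrational √2 + √5.

```swift
// Square norms stay exact integers under addition.
let squareNormSum = (1 * 1 + 1 * 1) + (2 * 2 + 1 * 1)
print(squareNormSum)   // 7

// The absolute values are √2 and √5; their sum is irrational, so a
// floating-point Magnitude can only approximate it.
let absoluteValueSum = 2.0.squareRoot() + 5.0.squareRoot()
print(absoluteValueSum)   // ≈ 3.6502815...
```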


But it would be a semantics-breaking change to move the guarantee from Numeric down to BinaryInteger, so it will probably not happen.

You’re basically talking about the field norm, I think, which does not really correspond to a useful notion of magnitude outside of that domain, and certainly causes a lot of bugs in C++ complex codes.

This is insanely useful, thank you very much.

In most engineering code, the magnitude/abs of a complex number x + iy is sqrt(x^2 + y^2). Similarly for vectors.
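
A small sketch of that convention in Swift, using the C hypot function (available via Foundation), which avoids spurious overflow or underflow in the intermediate squares:

```swift
import Foundation

let x = 3.0, y = 4.0                  // the complex number 3 + 4i
let magnitude = hypot(x, y)           // sqrt(x² + y²), computed carefully
print(magnitude)                      // 5.0

// Equivalent formula, but prone to overflow/underflow for extreme values:
let naive = (x * x + y * y).squareRoot()
print(naive)                          // 5.0
```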

It's the same in complex analysis really.