Hi, I was trying out the new Swift Numerics package. I wrote a generic function to add two numbers where lhs and rhs conform to Real. When I call it with integers, the compiler complains that Int32 and UInt8 do not conform to Real. Aren't integers real numbers?
If you look at the Real protocol defined in Real.swift, types conforming to Real must also conform to FloatingPoint (as well as the new RealFunctions protocol). The integer types (Int32 and UInt8, for example) do not conform to FloatingPoint.
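For illustration, a minimal sketch (assuming the swift-numerics package is imported via its Numerics module; the function names here are hypothetical):

```swift
import Numerics  // swift-numerics; provides the Real protocol

// Compiles: the constraint requires conformance to Real (and thus FloatingPoint).
func addReals<T: Real>(_ lhs: T, _ rhs: T) -> T {
    lhs + rhs
}

let a = addReals(1.5, 2.25)             // OK: Double conforms to Real
// let b = addReals(Int32(1), Int32(2)) // error: Int32 does not conform to Real

// If plain addition over both integer and floating-point types is all you need,
// the standard library's AdditiveArithmetic constraint is enough:
func addAny<T: AdditiveArithmetic>(_ lhs: T, _ rhs: T) -> T {
    lhs + rhs
}
let c = addAny(Int32(1), Int32(2))      // OK
```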
Integers are real numbers, but computational integer arithmetic is not (approximate) real arithmetic. In particular, the real numbers are a field, but integer division is not the division operation of a field (not even approximately). For these (and other) reasons, it doesn't make sense for integer types to conform to the Real protocol.
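A tiny sketch of that difference, using nothing but the standard operators:

```swift
// Integer division truncates; it is not (even approximately) the division of a field.
let a: Int32 = 7
let b: Int32 = 2
print(a / b)        // 3

// Floating-point division approximates the field operation on the reals.
let x = 7.0
let y = 2.0
print(x / y)        // 3.5
```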
Some similar examples are illustrative:
Every integer is also a complex number, but should integers conform to a hypothetical ComplexProtocol?
If I had an EvenInteger type, its values would all be integers, but that type should not conform to BinaryInteger (because it cannot satisfy the semantic requirements of Numeric, since the even integers do not form a ring [with identity]).
As a general principle, a type modeling a subset does not necessarily conform to the same protocols as a type modeling the superset. This is perfectly normal.
Swift (like many other programming languages) borrows some names from math, but it can't fulfill every expectation associated with the original terms: Int has a maximum, Sets have an order, and x + 1 == x can be true. UInt often behaves like ℕ, and Real tries hard to mimic ℝ, but algebra and analysis don't suffer from some of the technical limitations computers have ;-)
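For example (a small illustration of the points about Int and floating-point addition; the values are arbitrary):

```swift
// Int has a maximum…
print(Int.max)              // 9223372036854775807 on 64-bit platforms

// …and x + 1 == x can be true for floating-point values,
// because 1 falls below Float's resolution at this magnitude.
let x: Float = 100_000_000
print(x + 1 == x)           // true
```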
No, integers aren't real numbers. They approximate the mathematical integers (albeit being finite). For instance, integers are not closed under division, and not all integers have multiplicative inverses.
(This is all very academic, and in a purely practical sense, @scanon has already said everything there is to say.)
In standard set-theoretic mathematics this is all a bit fuzzy, because the notion of "types" doesn't really exist. In a certain sense integers can be considered "real numbers": you can take your favourite construction of the real numbers and then identify a subset of them with the integers (though this is also a bit awkward, because you need another notion of integers in the first place in order to construct the real numbers). Critically, though, mathematical theorems about the set of integers are not the same theorems as those about the set of real numbers, because the two are distinct sets, and when proving things about a set it of course matters which elements are not part of it.
This means you have to be a bit careful about which statements about the real numbers are also applicable to integers. For example, if you prove that 2^{x+y} = 2^x · 2^y for all real numbers x, y, the same statement holds for all integers x, y; but a theorem like "every non-zero real number has a multiplicative inverse" is of course not true for the integers. The first statement only asserts a predicate over every pair of elements of the set, so it remains valid when restricting to a subset, whereas the second theorem needs to "look at other elements of the set".
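In symbols, the difference is between a purely universal statement, which survives restriction to a subset, and one whose existential witness may escape the subset:

∀ x, y ∈ ℝ: 2^{x+y} = 2^x · 2^y   implies   ∀ x, y ∈ ℤ: 2^{x+y} = 2^x · 2^y   (since ℤ ⊂ ℝ)

∀ x ∈ ℝ \ {0} ∃ y ∈ ℝ: x·y = 1   does not restrict to ℤ, because the witness y = 1/x generally lies outside ℤ (e.g. x = 2, y = 1/2).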
You can also look at other mathematical foundations with a type-theoretic background, where integers and real numbers are more clearly distinct; if you want to extend a proof about the reals to a corresponding proof about the integers in something like Coq, you need to work with explicit homomorphisms.
Tangential to your point, but neither of these subset relations holds in Swift, nor does Float ⊇ Int32. Int32 can represent values that are not representable as Float (e.g. 0x7fff_ffff; the closest representable Float value is 0x8000_0000), and Int is equivalent to Int32 on 32-bit platforms, so it cannot necessarily represent all Int64 values.
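A quick check of that rounding behaviour (a sketch; the hex literals spell out Int32.max and 2^31):

```swift
let v: Int32 = 0x7fff_ffff          // Int32.max = 2_147_483_647
let f = Float(v)                    // rounds to 2_147_483_648 (0x8000_0000)
print(f == 2_147_483_648)           // true: the original value is not representable
print(Float(exactly: v) == nil)     // true: the exact conversion fails
```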
Why would data loss ideally be handled at runtime? Part of the benefit of Swift's strict no-implicit-conversion rules is that data loss is made explicit at the point of loss.
I can get behind auto-promotion from smaller integer widths to larger ones, as that can never lose data, but I can't agree with the reverse. For applications where narrowing doesn't matter, the explicit conversion is still available, and for those where it does, the developer can be sure they're not making any invisible mistakes.
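A sketch of what that looks like in practice (the variable names and values are made up):

```swift
let small: Int32 = 123
let widened = Int64(small)                    // widening: always succeeds, no data loss

let big: Int64 = 5_000_000_000
// let narrowed = Int32(big)                  // would trap at runtime: value out of range
let checked = Int32(exactly: big)             // nil: potential loss is made explicit
let wrapped = Int32(truncatingIfNeeded: big)  // 705_032_704: explicit opt-in to truncation
print(widened, checked as Any, wrapped)
```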
You are dismissing the value of explicitness. This is a design principle of Swift, and I feel you're banging your head against a wall by trying to change it.