Generic "math functions"

We call the integer type Int even though it isn't a correct model of ℤ (addition isn't even total!), because it's still the most widely-used computational type for working with the integers we typically encounter. Similarly, it's reasonable to call these types Real even though they aren't a correct model of ℝ, because:

  • While bignum integers are successfully deployed in some languages, IEEE floating point is basically the only widely-used approximation to ℝ. There are a few alternatives, but they are extremely niche.
  • Unlike integers, there can be no computational model of ℝ, so there's no "clearly better" model to reserve the name for.

I.e. you can make a coherent argument that Int should have been reserved for a "true integer" type. You can't coherently argue that Real should be reserved for a "true real number" type, because that type cannot exist.

IEEE floating point makes some design decisions that you personally do not agree with. It makes some decisions that I don't agree with, too. But it is, by multiple orders of magnitude in both support and performance, the most successful computational model of the real numbers, so it's very hard to justify holding out the name Real for some other model that we can't even point to a sketch of. That would cut very, very hard against Swift's goal of being a "pragmatic" language.

5 Likes

Again, try to make the distinction between the algebra of types and the storage of those types. This is a distinction that can reasonably be made. Both @scanon and @xwu are tearing down a straw-man argument that I am not making.

Other languages do have better algebraic models and try to respect the algebra...while still using floating-point numbers and integers for storage. I'm baffled by the resistance to acknowledging this.

What other languages have better algebraic models? In what way are they better?

To make this totally concrete: why do you think having sqrt(-1) produce a complex number is better? The usual mathematical interpretation is that √c is one of the solutions to x² = c in the field you're working in. As a mathematician, I would say "x² = -1 has no solution over the real numbers", not "the solution of x² = -1 over the real numbers is the complex unit." I would expect a language's algebraic model to follow suit:

let x: Double = -1
sqrt(x) // NaN, because there is no solution to x^2 = -1 over the reals.

let z: Complex = -1
sqrt(z) // i, the principal sqrt of -1 + 0i in the complex numbers.

This is precisely what it means to respect the algebra. Algebra is not just a bag of arithmetic operations. It is vital to be precise about what structure you are working in.

To simplify this even further, consider the monoid of unsigned integers under addition. Inverses do not exist for non-zero numbers in this structure. Should the following be valid, and what should the result be:

let x: UInt = 1
let y = -x

"Respecting the algebra" means respecting that we are working in the unsigned integers, and so -x does not exist, even though we could return -1 as Int and have that be "correct" in some sense.

6 Likes

I have no problem with there being a protocol (or algebra) that maps sqrt(-1) to NaN for floating point types—that is obviously a very useful feature for some people and has some practical consequences for programming in some cases. In fact, I think that all the basic storage types in Swift (e.g., Float32 and UInt8) should have elementary mathematical operations remain closed within those types (and your proposal does this!). The great thing about the naming scheme for Swift storage types is that I can go look up on Wikipedia how 32-bit floating point numbers behave—and what Swift does actually matches this expectation. This is good.

The problem with using the term Real for a protocol in this way is that most people think of the square root of a negative real number as mapping to a complex number (specifically an imaginary number). Again, this is about expectations. The idea that the square root of a negative real number has no solution in the real numbers is certainly correct in some contexts (which is your point), but it is indisputable that if you asked most college-educated people what the square root of minus one is, they wouldn't say "it's undefined".

At the end of the day there are enormous numbers of scientists and engineers that expect sqrt(-1) to give a complex number. Many of these scientists and engineers know that when they sit in front of a computer and do some computations that they need to be careful because some programming languages don't match this expectation. So let's make the expectations clear.

1 Like

Seeing this discussion, I believe ElementaryFunctions, as given, should refine FloatingPoint, because its documentation implies IEEE behavior, and even specifically references .nan in a few places, which I think should not be assumed to exist unless the type conforms to FloatingPoint.

No, because SIMD types with FloatingPoint elements and future Complex types will both conform. Quaternions or square matrices could also conform in the future. These types all expose aspects of IEEE floating-point semantics, and are composed of floating-point elements, but should not conform to the FloatingPoint protocol themselves.
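
To see why, here is a minimal sketch, using a stand-in protocol with a single requirement rather than the proposed ElementaryFunctions API, of a complex-like type that can provide these functions while having no sensible FloatingPoint conformance (all names here are hypothetical):

// Stand-in with a single requirement; the real proposal has many more.
protocol ToyMathFunctions {
    static func sqrt(_ x: Self) -> Self
}

extension Double: ToyMathFunctions {
    // Closed over the reals: sqrt of a negative value is NaN.
    static func sqrt(_ x: Double) -> Double { x.squareRoot() }
}

// Built from floating-point parts, but not itself a FloatingPoint:
// it has no total ordering, no ulp, no nextUp, and so on.
struct ToyComplex: ToyMathFunctions {
    var re, im: Double

    // Principal square root, computed without leaving the type.
    static func sqrt(_ z: ToyComplex) -> ToyComplex {
        let mag = (z.re * z.re + z.im * z.im).squareRoot()
        let p = ((mag + z.re) / 2).squareRoot()
        let q = ((mag - z.re) / 2).squareRoot()
        return ToyComplex(re: p, im: z.im < 0 ? -q : q)
    }
}

// ToyComplex.sqrt(ToyComplex(re: -1, im: 0)) is 0 + 1i.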

6 Likes

There’s a long history of using real for floating point numbers in programming languages. See Algol, Pascal, Modula-2, etc. The square root functions in those languages don’t return complex numbers.

4 Likes

If you ask me, Real is a terrible name for this protocol, for two groups of people.

  1. People who know a lot of advanced math and use the word Real to talk about numbers that everyone else calls “decimals”. These people are not going to be happy with a protocol called Real, because it will never line up with their idea of how ℝ should behave.

  2. People who don’t know a lot of advanced math, and use the word decimal (or fraction, or float, because, well, Decimal) to talk about these kinds of numbers. These people are not going to be happy with a protocol called Real, because that word sounds excessively academic and doesn’t have a whole lot of meaning to them.

I don’t think anyone is going to be happy with Real.

1 Like

I like the name Real for the protocol. I don't see that confusion:

  • Integers: Int, UInt8 and family (they do not map all ℤ, but it is clear)
  • Reals: Double, Float and family (they do not map all ℝ, but it is clear)
  • Complex: future Complex type?

3 Likes

If you ask me, Real is a great name for this protocol, for two groups of people.

  1. People who know a lot of advanced math and use the word Real to talk about numbers that everyone else calls “decimals”. These people are going to be happy with a protocol called Real, because it will line up with their idea of how ℝ should behave, and they understand it is only an approximate model, just like Int approximately models ℤ.

  2. People who don’t know a lot of advanced math, and use the word “decimal” to talk about these kinds of numbers. These people are not going to understand the intricacies of ℝ anyhow, so they will accept a protocol called Real as a black box just like they accept types called Float and Double, because those words don’t have a whole lot of meaning to them in this context but they can learn to use them just fine.

I don’t think anyone is going to be unhappy with Real.

5 Likes

Well, at least a few people will (@taylorswift is an existence proof). It's worth trying to understand more precisely what they don't like.

A side note: decimals (at least as most people understand them) and reals are two different things. They are similar, but the difference between them is just one of the ways in which the real numbers are very, very weird. Another way that the real numbers are weird is that "almost all" of the real numbers are uncomputable, which means that anyone who knows a lot of advanced math understands that any type called "real" in a computer system cannot actually be the real numbers, so this causes no confusion.

These people will generally have no trouble because Double and Float and an eventual stdlib Decimal type will all conform to Real, so they can use the names they are familiar with and everything Just Works.

1 Like

int, float, and double have such long traditions in computing that they’ve really acquired meanings of their own separate from the mathematical definition. real doesn’t have this tradition.

... wait, what? real has been used for precisely this purpose for at least 50 years (Fortran 66, maybe earlier). C-family languages are the odd ones out on this point.

6 Likes

I think the easiest way to explain this is that “0.999…” and “1.000…” are obviously different decimals—they are composed of distinct sequences of base-10 digits—but they represent the same real number.
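
A one-line justification, via the geometric series:

\[ 0.\overline{9} = \sum_{n=1}^{\infty} \frac{9}{10^{n}} = \frac{9/10}{1 - 1/10} = 1, \]

so the two expansions are distinct strings of digits that name the same point on the real line.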

Sure, the real numbers are weird, but are they weird enough? As a counter-proposal, I vote that we model the surreal numbers instead!

1 Like

ALGOL 60 (and probably ALGOL 58)

4 Likes

Also in SQL! [1] [2] [3]

2 Likes

As somebody who currently is not all that interested in numerical/applied maths and works mainly with symbolic maths, I could play devil's advocate: It seems to be possible to do exact real arithmetic, see for example RealLib. As far as I understand it (I haven't really looked into it deeply), it represents real numbers as Cauchy sequences of rationals, so while you can't associate a value with any such real number, you can associate a sequence (Int) -> Rational with it that gives successively better approximations, and it should be possible to do actual arithmetic on these sequences. Maybe it's even possible to define other implementations of real numbers using e.g. Dedekind cuts. It seems that similar approaches are also used in proof assistants like Coq.
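
For illustration only, a minimal sketch of that Cauchy-sequence idea in Swift. Every name here is made up for this post (this is not RealLib's API or anyone else's), and reduction/overflow of the rationals is ignored:

// A rational number; reduction and overflow handling omitted for brevity.
struct Rational {
    var num: Int
    var den: Int
}

// A "computable real": approximation(n) is guaranteed to be within 1/2^n of the true value.
struct ComputableReal {
    let approximation: (Int) -> Rational
}

// Arithmetic operates on the approximation sequences themselves.
func + (a: ComputableReal, b: ComputableReal) -> ComputableReal {
    ComputableReal(approximation: { n in
        // Ask each operand for one extra bit of precision so the sum
        // still meets the 1/2^n error bound.
        let x = a.approximation(n + 1)
        let y = b.approximation(n + 1)
        return Rational(num: x.num * y.den + y.num * x.den,
                        den: x.den * y.den)
    })
}

// Note what you cannot write: `==` for two ComputableReals,
// because equality of computable reals is undecidable in general.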

That said, I understand that this is a very rare use case, so while it might slightly irk me that Real should be restricted to floating point implementations only, I can get behind it as a pragmatic choice that works for 99.9% of people (and the occasional symbolic mathematician can always implement their own AbstractReal protocol).

You can define all sorts of approximations to the Reals, but those aren’t really more “the real numbers” than floating-point is. The library you mention models the computable reals, which is a neat thing, but:

  • almost all reals are uncomputable.
  • even if you restrict to the computable reals, equality is undecidable.

Note that these are not technical limitations. Real arithmetic cannot be modeled on a computer, unless “computer” means something radically more powerful than our conventional definitions allow. Any computational type called “real” isn’t.

6 Likes

off-topic-ish: For anyone with 7 minutes of free time, there’s a great Numberphile video, imho, that explains where the uncomputable numbers sit relative to the computable numbers we commonly use.

5 Likes

Let's flip this question on its head: let's say that Swift is able to accommodate some sort of structural subtyping in the future, so that one could define a set of types like Real, Imaginary and Complex that are closed within the three subtypes under the usual mathematical operations, by which I mean, e.g., that the square root of a negative Real would map to an Imaginary. I think this is a reasonable straw-man to consider because it would produce the behavior scientists and engineers expect (it is what Matlab does, for example).

As @scanon indicated, this triple of subtypes (Real, Imaginary and Complex) could sit on top of the Real protocol being proposed here. So the question is, how do we distinguish these concepts from each other? What are the appropriate names?
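
To make the straw-man concrete, here is a very rough sketch of what that closed behavior could look like with today's tools, with an enum standing in for the structural subtype; every name here is hypothetical:

struct RealValue { var value: Double }
struct ImaginaryValue { var value: Double }   // represents value * i
struct ComplexValue { var re, im: Double }

// Without structural subtyping, a sum type stands in for "Real or Imaginary".
enum RealOrImaginary {
    case real(RealValue)
    case imaginary(ImaginaryValue)
}

// The square root of a Real stays within the Real/Imaginary/Complex family:
// non-negative inputs give a Real, negative inputs give an Imaginary.
func sqrt(_ x: RealValue) -> RealOrImaginary {
    x.value >= 0
        ? .real(RealValue(value: x.value.squareRoot()))
        : .imaginary(ImaginaryValue(value: (-x.value).squareRoot()))
}

Whether such a family should be spelled this way, or layered as protocols on top of the proposed Real, is exactly the naming question being asked.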