Complex Numbers

I want to start a discussion about overall design of a complex number system in Swift.

There is a strict mathematical hierarchy for the following number types:
Natural < Integer < Rational < Real < Complex
where I'm using the < symbol to mean "is a subset of".

These number systems support the following operations that stay within their own system:
Natural: Addition, Multiplication
Integer: Addition, Multiplication, Subtraction
Rational: Addition, Multiplication, Subtraction, Division
Real: Addition, Multiplication, Subtraction, Division
Complex: Addition, Multiplication, Subtraction, Division, Exponentiation, Logarithm

It's worth noting the following as well,
Imaginary: Addition, Subtraction
Positive Real: Addition, Multiplication, Division, Exponentiation, Logarithm

In Swift, we have the following protocols, which roughly correspond to these types,
UnsignedInteger -> Natural
SignedInteger -> Integer
FloatingPoint -> Real
However, these protocols have three (?) inconsistencies with their mathematical equivalents as currently implemented in Swift:

  1. Subtraction is defined for UnsignedInteger.
  2. Division is defined for `SignedInteger` (as Joe Groff pointed out, this could have been called the quotient operator, as in Python, and spelled // instead of /, which would have avoided this inconsistency).
  3. Exponentiation is defined for FloatingPoint in the form of squareRoot (i.e., an exponent of 1/2).

Inconsistency #1 isn't a huge deal, because one has to explicitly request an unsigned integer, so you kind of know what you're getting into. According to the discussion here, #2 is broken forever, which, IMO, sucks because it's easy to end up with Integer types without knowing it. So the big question with regard to how Swift handles numbers is whether or not #3 is broken forever.

In other words, what's the consequence of not guaranteeing that .squareRoot() returns another Real? Is it okay to design a Complex number system in Swift where the squareRoot of a Real type returns a Complex type? You can see that this exactly parallels #2, which is why I brought #2 up yesterday.

The biggest consequence, I think, is that Complex numbers are not comparable. Real numbers and Imaginary numbers are (among themselves), but Complex numbers, in general, are not. So if we change sqrt() to return a Complex type of some sort, it will not always be true that sqrt(a) < sqrt(b) is a valid operation.

If it is true that we can't change #3, then we have to design a Complex number system that lives on its own, separate from the current numbers. It's hard to see how this won't start to feel "tacked on", because not even sqrt(-1) (ahem, sqrt(-1.0)) will map to something sensible.

Before digging deeper into specifics, I wanted to see if others have thought about this, and what other issues might arise.

Edit: Changing BinaryInteger to UnsignedInteger


IEEE 754 requires the square-root operation on floating-point types. We are not going to drop adherence to the standard, and we are not going to change the spelling of the existing method.

In my opinion, the existence of the square-root function on floating-point types is irrelevant to the design of a complex-number type. It is not an impediment, and it is not worth discussing. Just design the complex-number type on its own merits.

That said, I am not convinced that the standard library needs a complex-number type. It is useful for signal-processing, and for teaching purposes, but it would probably be fine to put it in a dedicated math library alongside matrices and suchlike.

I’m not saying that to be discouraging—in fact I strongly support the design and implementation of a complex-number type—I just want to be realistic about where it may ultimately reside.


Perhaps. But C has complex numbers as a language feature, and it could be useful to have C functions that use complex numbers mapped correctly in Swift. This could justify it being part of the standard library, as it'd have to be a type known to the compiler.

Each numeric type is already like that in Swift. All the operators are basically (Self, Self) -> Self, and you have to convert types explicitly everywhere if you try to mix them. Type inference with functions and literals usually makes that usable, and you can leverage that for a complex number type too. Here's a starting point:

```swift
struct Complex: ExpressibleByIntegerLiteral, Equatable {
	var real: Double
	var imaginary: Double
	init(real: Double, imaginary: Double) { self.real = real; self.imaginary = imaginary }
	init(integerLiteral value: Int) { self.init(real: Double(value), imaginary: 0) }
	static let i = Complex(real: 0, imaginary: 1)
}

func sqrt(_ c: Complex) -> Complex {
	// FIXME: only works for -1
	assert(c == Complex(real: -1, imaginary: 0))
	return .i
}

// and finally:
assert(sqrt(-1) == .i)
```

You're right that it raises the same issue with the literals previously brought up. The problem is that this still can't work,

```swift
let a = -1        // inferred as Int
let b = sqrt(a)   // error: sqrt is not defined for Int
let c = sqrt(-1)  // okay: the literal is inferred as Complex
assert(b == c)
```

which is really awkward and non-intuitive: the second line fails, but the third line is okay.
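One workaround, sticking with the toy Complex struct sketched earlier in the thread (repeated here so the snippet is self-contained), is to annotate the variable so the literal is inferred as Complex up front:

```swift
struct Complex: ExpressibleByIntegerLiteral, Equatable {
    var real: Double
    var imaginary: Double
    init(real: Double, imaginary: Double) { self.real = real; self.imaginary = imaginary }
    init(integerLiteral value: Int) { self.init(real: Double(value), imaginary: 0) }
    static let i = Complex(real: 0, imaginary: 1)
}

func sqrt(_ c: Complex) -> Complex {
    assert(c == Complex(real: -1, imaginary: 0)) // toy: only handles -1
    return .i
}

let a: Complex = -1   // the annotation makes the literal a Complex, not an Int
let b = sqrt(a)       // now this compiles
let c = sqrt(-1)
assert(b == c)
```

It works, but it only underlines the awkwardness: the programmer has to know to ask for Complex before taking the square root.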

I should add that a Complex number type must work in conjunction with a Real (and perhaps even Imaginary) type. The reason is that you do not want to assume that all numbers are Complex. That's an enormous waste of memory and computation time for HPC. If you have a Real variable that uses 1 GB of memory, you don't want to be hauling around the extra 1 GB of zeros just because you're going to be multiplying it against a Complex variable.

Thus, the most painless way forward is to fix the .squareRoot definition in FloatingPoint, otherwise you're just going to end up recreating an almost identical protocol for a Real number type.

And 99% of the time, if you take the sqrt of a negative number, you want NaN, not a complex number. That seems like the real waste here.

Either way, it requires a check to see if you have something sensible (i.e., checking whether it's NaN or whether it's complex).

And while NaN may be what you want 99% of the time, there's a whole other world out there that actually wants the correct mathematical answer.

I also think it's fair that you have to explicitly ask for a complex result when you're operating on a Double and want to get a complex square root. We also don't promote Int8 + Int8 to Int16 just in case the resulting value is not representable as Int8. But we have specialized functions to add while checking for overflow.
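The Int8 analogy can be made concrete with the standard library's overflow-checking APIs:

```swift
let x: Int8 = 100
let y: Int8 = 100

// Plain + would trap at runtime on overflow; the checked variant reports it instead.
let (partial, overflow) = x.addingReportingOverflow(y)
assert(overflow)   // 200 is not representable as Int8
_ = partial        // the truncated result, if you want it

// Analogously, you opt in explicitly to a wider result type when you need one:
let widened = Int16(x) + Int16(y)
assert(widened == 200)
```

The parallel to the square-root debate: keep Double.squareRoot() closed over Double, and make the complex-valued result an explicit opt-in (e.g., a hypothetical Complex(x).squareRoot()).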

I don't like these appeals to the “correct mathematical answer”, because there is no single correct answer for the square root of a negative real number in mathematics. It's perfectly reasonable in many contexts to say that the result is undefined; that's what the standard mathematical definition of the square root is and what a lot of square root functions in programming languages will tell you by returning NaN. In some other domains you might extend the definition to complex numbers instead, but note that there are multiple choices for this definition and you inevitably break a lot of reasonable or useful properties of the square root in the process.


.squareRoot() is an instance method on FloatingPoint. That means it is only available in contexts where:

  • You have a concrete FloatingPoint type or instance, like a Double, or
  • You have an object or generic type parameter whose generic constraints include conformance to FloatingPoint

I don't see how this overlaps with a complex number type.
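To illustrate those two contexts with a hypothetical helper (the function name is illustrative, not an API):

```swift
// Concrete type: Double conforms to FloatingPoint, so the method is available.
let d = 2.0.squareRoot()
assert(d > 1.41 && d < 1.42)

// Generic constraint: available because T is constrained to FloatingPoint.
func hypotenuse<T: FloatingPoint>(_ a: T, _ b: T) -> T {
    (a * a + b * b).squareRoot()
}

assert(hypotenuse(3.0, 4.0) == 5.0)   // 25 and 5 are exactly representable
```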

I think your idea that FloatingPoint is shorthand for any Real number is not entirely correct. This is what the documentation for the protocol actually says:

/// A floating-point numeric type.
/// Floating-point types are used to represent fractional numbers, like 5.5,
/// 100.0, or 3.14159274. Each floating-point type has its own possible range
/// and precision.


/// Types that conform to the `FloatingPoint` protocol provide most basic
/// (clause 5) operations of the [IEEE 754 specification][spec]. The base,
/// precision, and exponent range are not fixed in any way by this protocol,
/// but it enforces the basic requirements of any IEEE 754 floating-point
/// type.

It's well accepted that you take the principal root, with the argument taken between -π and π, but you could make some other choice as well. That's what the current implementation of the sqrt function does on real numbers, and I don't hear you complaining about that. Do you want sqrt(4) to start returning -2? Or maybe a tuple of (2, -2)?
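For concreteness, here is a sketch of the usual principal-branch choice for a hypothetical Complex type (the struct and function names are illustrative, not a proposed API):

```swift
import Foundation

struct Complex: Equatable { var real, imaginary: Double }

// Principal square root: halve the argument taken in (-π, π],
// so the result's argument lies in (-π/2, π/2].
func principalSqrt(_ z: Complex) -> Complex {
    let r = hypot(z.real, z.imaginary)       // modulus
    let theta = atan2(z.imaginary, z.real)   // argument in (-π, π]
    let s = r.squareRoot()
    return Complex(real: s * cos(theta / 2), imaginary: s * sin(theta / 2))
}

let root = principalSqrt(Complex(real: -1, imaginary: 0))
assert(abs(root.real) < 1e-12 && abs(root.imaginary - 1) < 1e-12)   // ≈ i
```

Choosing the branch cut elsewhere would change these results, which is exactly the "multiple choices" point: the principal branch is a convention, not the unique correct answer.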

Look, that link you provided was a really great read and shows that there are many wonderful mathematical abstractions that reveal other ways of thinking about seemingly simple concepts. I spent years of my life delving deep into an esoteric field of differential geometry and learned all kinds of cool things. That said, at the end of the day, these higher level abstractions are often not the most pragmatic approach. There are very reasonable choices about what the right abstractions are that provide the most utility.

Finally, I gotta say this is just INSANE that we're even having a discussion about the merits of complex numbers as a useful tool for programming. I realize many app developers don't see this stuff every day, but there are enormous numbers of people who use this stuff in all kinds of programming languages. Swift is meaningless to scientists and many engineering fields unless you fix these (pretty basic) problems.

If you build Swift so that it can be used by scientists and engineers, then the whole Swift ecosystem will reap enormous rewards in the future---otherwise it will survive as a niche language that caters to app developers. That's okay too, but I think it would benefit everyone in the long run if we just made these few small changes to Swift that significantly increase its diversity.

The idea that there is just one "correct mathematical answer" might be short-sighted.

Take for example the series 1 + 2 + 3 + ⋯. In Calculus, it's one of the first divergent series you'll meet (alongside the Harmonic Series). But in Complex Analysis, it's tied to the Riemann Zeta Function, whose analytic continuation assigns it the value -1/12.

I'd replace the "correct mathematical answer" idea for "the right tool for the job".

IMO, it'd be best to have a library for this separate from the Stdlib. Then you could leverage non-NaN results, arbitrary precision, symbolic algebra and sparse representations. All of them in their own world, with their own rules, advantages and drawbacks.


You already have complex types, e.g. DSPComplex, on Apple platforms via the Accelerate framework. Whatever is done should be compatible with this, since it's an existing, high-performance framework. I tend to think that math functions, in general, should be in an optional library. Hopefully SPM will get up to speed and an 'extras' library can be developed.

The same could be said about making the result of the square root operation on floating point numbers a complex number.

I don't recall anyone saying that complex numbers have no merit, so I'm not sure what discussion you're referring to. The questions here are actually about how complex number functionality should be integrated into Swift, e.g. what should the API be, where should it live (standard library? Foundation? an official math module?), does it require compiler support, how should it interoperate with the current numeric types, etc.


My two cents on a few comments:

  1. I'd love to see a complex number type standardized as part of Swift. I don't have a strong opinion if it's in the stdlib or a math library, but we should clearly have it along with the protocols to tie it into the larger numeric system.

  2. Some of the discussion above seems to conflate utility (of course we should have complex numbers!) with expected behavior in specific cases (e.g. whether Float.sqrt should return a complex number).

There is an important difference between the design of Swift numerics and standard math, imposed by many constraints, including the intended design style of Swift code. I think most people would agree with increasing the utility and generality of the numerics library, but that doesn't mean we're looking to directly emulate Python or mathematical notation in Swift syntax. We need to work within the constraints of the type system, aim for the principle of least surprise for typical programmers (who are not always math experts!), and work within the constraints of source compatibility.

One other note, comments like this:

I gotta say this is just INSANE that we’re even having a discussion

are not a very productive way to gain consensus for one's ideas.



Welcome to Swift evolution ;-)
I think nobody actually said that complex numbers are useless, but even I won't say they belong to the stdlib: Complex numbers are a somewhat advanced topic which many people will never need, and afaics, they don't depend on any special support from the language, so they can be defined in a regular library.
It's a pity that there's no "advanced math" lib bundled, but apparently, Swift has no ambitions to replace Mathematica, nor R or Octave (you might be interested in Julia, though ;-)

I hardly know a single language that is as ill-suited for calculations as Objective-C, so maybe Swift just has a bad heritage ;-); but if you want to see some real Swift backlash against math, you should look into the topics I recently spent the most time on ;-)

Besides that, floating point numbers already violate some fundamental principles, so you'd have a hard time making Swift "correct".

The good thing is that, thanks to operator overloading, it's easy to build custom numeric types -- complex numbers, quaternions, or even posits (Unum (number format) - Wikipedia).


I apologize for that.


A couple quick points:

  • Of course Swift should have a complex number type. I don't think that's at all controversial. There isn't one yet because the language is still young and there have been more pressing issues to address. It took C and C++ decades to get to the state of complex number support that they have today (and C actually made complex support optional in a later standard!)
  • There are a lot of hazards and limitations in how complex numbers are handled in every other language I'm aware of. It would be good to do better.
  • In the meantime, it's pretty simple to achieve C++-like support by defining a struct with operator overloads and a set of shims for the C stdlib functions for math operations (as I mentioned, these are optional, but provided by both Darwin and Glibc). I believe there are several implementations of this readily available, but I'll let the maintainers plug them.
  • FloatingPoint .squareRoot() should absolutely not return a complex number. In a language without implicit conversion, this would force 99.9999% of square-root call sites to write x.squareRoot().real or something equally ugly.
  • But Swift does support return-type overloading. It's not completely obvious that there cannot be a func squareRoot() -> Complex<Self> as well (it's also not obvious that this should exist--it complicates type inference somewhat, though probably not too badly, and not everyone is fond of return-type overloading from an API clarity perspective. This would need to be tested and debated, obviously).
  • I see no problem at all with defining a .complexSquareRoot() extension when a complex type exists. Alternatively, this could be spelled Complex(x).squareRoot(). There's lots of options here, and they will be explored in the fullness of time.
  • There are two distinct operations that you're conflating as "exponentiation". Many languages and many people conflate them, but that's a mistake, and we can do better. There's a function G × N → G defined on any multiplicative group G by repeated multiplication (and its obvious generalization to F × Z → F for any division ring F). This function is continuous in its first argument, except at zero when the second argument is negative, where it has a pole. The other function is a function of two variables defined on the complex numbers by pow(x, y) = exp(y * log x), with a branch cut on the negative real axis of x (we also often work with this function restricted to R). This function has a branch cut as well as an essential singularity at zero. These two operations happen to coincide for most values, which is why they share notation, but they are significantly different operations, and probably should be treated differently.
  • BinaryInteger is not at all analogous to the Natural numbers.
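The two-exponentiations point above can be sketched with toy Double versions (a real design would be generic; the function names here are illustrative):

```swift
import Foundation

// G × N → G: exponentiation by repeated multiplication, fine for negative bases.
func intPow(_ x: Double, _ n: UInt) -> Double {
    var result = 1.0
    for _ in 0..<n { result *= x }
    return result
}

// pow(x, y) = exp(y · log x): the analytic definition, with log undefined
// (NaN) on the negative reals.
func analyticPow(_ x: Double, _ y: Double) -> Double {
    exp(y * log(x))
}

assert(intPow(-2, 3) == -8)        // repeated multiplication: no problem
assert(analyticPow(-2, 3).isNaN)   // log(-2) is NaN, so the result is NaN
```

The two functions agree on positive bases, which is why one `pow` notation usually covers both, but they diverge exactly where the complex-number questions in this thread live.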

Again, you definitely aren't the only one who would like to have a built-in complex type. It's just a question of time and priorities. Swift is still extremely young as a language.


I apologize for the drive-by-ness of this response, but I've been lurking on this thread and I'm wondering (not having any stake in the matter or deep understanding of the implementations), if there's some compromise avenue somewhere along the lines of making Double.NaN (and friends) opaquely contain enough information to represent a Complex number while still evaluating as isNaN in a Double context.

Then everyone who expects isNaN from an operation on Double still gets it, but those wanting to treat it as a Complex can do so.

Double.NaN is a Double, so any extra information would have to be carried on Double itself, so this may be a non-starter, but maybe there's some way that I haven't thought of to make it work.
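For what it's worth, IEEE 754 NaNs do carry payload bits, and Swift exposes them; a rough sketch of the idea (the payload-to-complex mapping itself is left hypothetical):

```swift
// Stash a small payload in a quiet NaN, then recover it from the bit pattern.
let payload: UInt64 = 42
let x = Double(nan: payload, signaling: false)

assert(x.isNaN)   // still behaves as NaN in every Double context

// The low significand bits hold the payload (the high significand bit marks "quiet").
let recovered = x.bitPattern & 0x0007_FFFF_FFFF_FFFF
assert(recovered == payload)
```

The catch remains as stated above: the payload is only about 51 bits for Double, far too small to encode a full complex value, so at best it could tag where the NaN came from.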