Does Swift's type checker always run from left to right?

I'm writing a "Swift Symbolics" package for fun, similar to how sympy, sagemath, MATLAB (MuPAD) or Wolfram's Mathematica work. This is my first time experimenting with operator overloading in Swift, and I'm loving being able to write something along the lines of

// calculus
let x = Variable("x")
lim(of: 1/x, for: x, to: 0+)  // returns +infinity
lim(of: 1/x, for: x, to: 0-)  // returns -infinity


// algebra
let ZZ = IntegerRing()
let P  = ZZ["x"]        // returns the polynomial ring in x over ZZ
let Z2 = ZZ / Ideal(2)  // returns the ring of integers modulo 2

matching standard mathematical notation almost one-to-one.

To let the user type naturally, without having to specify and place types everywhere, I wrote an Expression type to be used in the following way:

let expression: Expression = 1 + 3 / 4 - 9 ** 0.5

It's just a simple enum conforming to ExpressibleBy(Integer|Float)Literal

indirect enum Expression: CustomStringConvertible {
    case integer(Int)
    case real(Double)
    case sum(Expression, Expression)
    case difference(Expression, Expression)
    case product(Expression, Expression)
    case division(Expression, Expression)

    var description: String {
        switch self {
        case let .integer(a): return "\(a)"
        case let .real(a): return "\(a)"
        case let .sum(a, b): return "(\(a) + \(b))"
        case let .difference(a, b): return "(\(a) - \(b))"
        case let .product(a, b): return "(\(a) * \(b))"
        case let .division(a, b): return "(\(a) / \(b))"
        }
    }
}

// MARK: Literal initialization

extension Expression: ExpressibleByIntegerLiteral, ExpressibleByFloatLiteral {
    init(integerLiteral value: IntegerLiteralType) {
        self = .integer(value)
    }
    init(floatLiteral value: FloatLiteralType) {
        self = .real(value)
    }
}

// MARK: Operations

extension Expression {
    static func + (lhs: Self, rhs: Self) -> Self { .sum(lhs, rhs) }
    static func - (lhs: Self, rhs: Self) -> Self { .difference(lhs, rhs) }
    static func * (lhs: Self, rhs: Self) -> Self { .product(lhs, rhs) }
    static func / (lhs: Self, rhs: Self) -> Self { .division(lhs, rhs) }
}

// MARK: - Testing

let e: Expression = 3 + 4.1 / 65 * 2 - 12


that allows me to easily get the operations tree. The user just needs to place ": Expression" after every expression declaration, and they'll get more or less the same behavior as sympy.
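For reference, here's the whole thing condensed into one self-contained declaration, plus the tree it prints:

```swift
indirect enum Expression: CustomStringConvertible,
                          ExpressibleByIntegerLiteral, ExpressibleByFloatLiteral {
    case integer(Int), real(Double)
    case sum(Expression, Expression), difference(Expression, Expression)
    case product(Expression, Expression), division(Expression, Expression)

    init(integerLiteral value: Int) { self = .integer(value) }
    init(floatLiteral value: Double) { self = .real(value) }

    // The static operators pick up the standard precedence groups,
    // so the tree mirrors Swift's usual parse: * and / bind tighter than + and -.
    static func + (lhs: Self, rhs: Self) -> Self { .sum(lhs, rhs) }
    static func - (lhs: Self, rhs: Self) -> Self { .difference(lhs, rhs) }
    static func * (lhs: Self, rhs: Self) -> Self { .product(lhs, rhs) }
    static func / (lhs: Self, rhs: Self) -> Self { .division(lhs, rhs) }

    var description: String {
        switch self {
        case let .integer(a): return "\(a)"
        case let .real(a): return "\(a)"
        case let .sum(a, b): return "(\(a) + \(b))"
        case let .difference(a, b): return "(\(a) - \(b))"
        case let .product(a, b): return "(\(a) * \(b))"
        case let .division(a, b): return "(\(a) / \(b))"
        }
    }
}

let e: Expression = 3 + 4.1 / 65 * 2 - 12
print(e)  // ((3 + ((4.1 / 65) * 2)) - 12)
```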

Unfortunately, with just a couple more operations

let expression: Expression = 3 + 4.1 / 65 * 2 - 12 + 4.1 / 65

the compiler already starts failing to infer the type of every operand

The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions

even though the only valid combination is the one in which every operand is of type Expression.

The only way to make the compiler's life easier is to explicitly write that the first integer literal is an Expression

let expression = 3 as Expression + 4.1 / 65 * 2 - 12 + 4.1 / 65

and it won't raise errors even with a thousand operations involved. Honestly, I don't like how it looks, so my question is: does Swift's type checker always run from left to right?
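Besides the leading `as Expression`, splitting the chain into intermediate constants (which is what the diagnostic itself suggests) also works, since each sub-expression stays small. A sketch, with a condensed copy of the type so the snippet stands alone:

```swift
indirect enum Expression: ExpressibleByIntegerLiteral, ExpressibleByFloatLiteral {
    case integer(Int), real(Double)
    case sum(Expression, Expression), difference(Expression, Expression)
    case product(Expression, Expression), division(Expression, Expression)
    init(integerLiteral value: Int) { self = .integer(value) }
    init(floatLiteral value: Double) { self = .real(value) }
    static func + (lhs: Self, rhs: Self) -> Self { .sum(lhs, rhs) }
    static func - (lhs: Self, rhs: Self) -> Self { .difference(lhs, rhs) }
    static func * (lhs: Self, rhs: Self) -> Self { .product(lhs, rhs) }
    static func / (lhs: Self, rhs: Self) -> Self { .division(lhs, rhs) }
}

// Each annotated sub-expression is short, so the solver's search space stays small.
let head: Expression = 3 + 4.1 / 65 * 2 - 12
let tail: Expression = 4.1 / 65
let expression = head + tail
```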

If a variable has an explicit "resulting" type, shouldn't this information be used to infer the types of the operands in a top-down way instead of a bottom-up one?

I'm currently reading through lib/Sema, and I'm using both

swiftc -dump-ast file.swift

and

swiftc -Xfrontend -debug-constraints file.swift

to understand what the type checker does under the hood. Unfortunately with the second command I can only get an output if the type checker completes its job. Is there a way to stop the execution and get the partial output?

I was able to get output from your second command just fine. It took a while, but eventually the compiler dumped everything.

FWIW, the interaction between custom ExpressibleBy*Literal conformances and overloaded operators has been a type checking issue for as long as I can remember. There have been considerable improvements over time, but there are still sharp corners. Your best bet is to file a bug with this example, if you haven't already.

In my opinion, having a defined "result" type is something the compiler could use as a starting point, instead of using it as a filter to apply at the end to every type combination found (if that's how the type checker works).

Ideally, in an expression like

1 + 2 + 3 + 4 + 5 as Expression

given the following variants for the + operator

Float + Float -> Float
Double + Double -> Double
Float80 + Float80 -> Float80
UInt8 + UInt8 -> UInt8
Int8 + Int8 -> Int8
UInt16 + UInt16 -> UInt16
Int16 + Int16 -> Int16
UInt32 + UInt32 -> UInt32
Int32 + Int32 -> Int32
UInt64 + UInt64 -> UInt64
Int64 + Int64 -> Int64
UInt + UInt -> UInt
Int + Int -> Int
FloatingPoint + FloatingPoint -> FloatingPoint
Numeric + Numeric -> Numeric
BinaryInteger + BinaryInteger -> BinaryInteger

Expression + Expression -> Expression

... and any other ExpressibleByIntegerLiteral conforming type

I would constrain as

(1 + 2 + 3 + 4) + 5  // result is Expression
// compatible overloads: Expression + Expression -> Expression
// 5 literal is Expression
// 1 + 2 + 3 + 4 is Expression
(1 + 2 + 3) + 4      // result is Expression
// compatible overloads: Expression + Expression -> Expression
// 4 literal is Expression
// 1 + 2 + 3 is Expression
(1 + 2) + 3          // result is Expression
// compatible overloads: Expression + Expression -> Expression
// 3 literal is Expression
// ...

obtaining only one possible valid type combination, instead of 18 different ones.

Expressions in which the resulting type is defined are not that rare: type casts using "as", computed properties and function/closure results all behave the same.

I personally don't see this behavior as a bug. It's intended to work like this for expensive type checks.

Is it time based? I hope it's not.

I don't dispute that such an approach may be sound, but it's certainly not simple to make wholesale changes to overload resolution—we need to ensure that existing valid code continues to resolve overloads identically.

Whether you call it a bug, an optimization opportunity, or an enhancement, filing an issue is appropriate.

Are you asking about the diagnostic, or the lack of output from -debug-constraints? I believe the "compiler is unable to type-check this expression in reasonable time" diagnostic is both time and memory based (see ConstraintSystem::getExpressionTooComplex). I can't comment on the lack of output. What do you see if you run swiftc -Xfrontend -debug-constraints file.swift and let it sit for a long period of time?

I see. Theoretically, the type checker should succeed only when a unique type/overload combination is valid, so checking bottom-up or top-down should give the same results. However, the errors for failed matches would probably be displayed differently:

let a: Double = Float(4) + 3

With a bottom-up approach you will get

error: cannot convert value of type 'Float' to specified type 'Double'
let a: Double = Float(4) + 3
                Double(     )

with a suggestion to convert the whole result into a Double, while with a top-down approach you would get something like

error: cannot convert value of type 'Float' to expected type 'Double'
let a: Double = Float(4) + 3
                Double(  )

suggesting the user to only convert the operand Float(4) into a Double, since the only infix + operator overload that results in a Double is

func + (lhs: Double, rhs: Double) -> Double

and the literal 3 can be treated as a Double, since Double is ExpressibleByIntegerLiteral.
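Both readings lead to valid fixes; spelled out, the two resulting programs would be:

```swift
// Fix implied by the bottom-up reading: convert the whole result.
let a: Double = Double(Float(4) + 3)

// Fix implied by the top-down reading: convert only the Float operand;
// the literal 3 is then type-checked directly as a Double.
let b: Double = Double(Float(4)) + 3
```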

Shouldn't suggestions like this one be discussed here before being submitted as an "Improvement" or "New Feature"? What's the current workflow?
I see that the evolution process requires a prior discussion, a pitch with an implementation in a linked pull request, and then a review process. Swift Evolution covers only changes to the language and to the public interface of the standard library, so I'm assuming that changes to overload resolution aren't related to SE.
I'm also not seeing proposed pull requests attached to the "Improvement" and "New Feature" submissions. Aren't they required, to show that there are actual improvements/benefits from the proposed changes?

I let it run for some minutes and I got the desired output. I thought -Xfrontend -debug-constraints was subject to the same time/memory restrictions as regular type checking, since the output is similar.

So you can write a library that compiles without issues on your machine but fails to compile on a user's machine. If the limit were memory-only, the outcome would be deterministic, but I guess it needs to be like this. Swift code should always be written with low-spec machines in mind.

This is not the case. The compiler uses ranking rules to decide which overload is the 'best match' for a given use. E.g., in the following example

struct S: ExpressibleByIntegerLiteral {
    init(integerLiteral value: Int) {}
}

func foo(x: Int) {}
func foo(x: S) {}
func foo(y: S) {}

foo(x: 0) // Int
foo(y: 0) // S

The application foo(x: 0) chooses the foo(x: Int) overload even though foo(x: S) would also be valid.
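To make the ranking observable, here's the same example with return values added (the String results are mine, not part of the original snippet):

```swift
struct S: ExpressibleByIntegerLiteral {
    init(integerLiteral value: Int) {}
}

func foo(x: Int) -> String { "Int" }
func foo(x: S) -> String { "S" }
func foo(y: S) -> String { "S" }

// The literal defaults to Int when an Int overload is available...
let first = foo(x: 0)   // "Int"
// ...but resolves to S when only the S overload matches the label.
let second = foo(y: 0)  // "S"
```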

It can never hurt to discuss things here, but as you note this isn't really a new feature, since it doesn't (or rather, shouldn't) change the semantics of the language at all. We're really just discussing a performance optimization for overload resolution.

The bug tracker is intended to be used by the broader Swift community (not just compiler engineers), so there is no expectation that submissions be accompanied by a proposed solution. If someone decided to attempt to implement your proposed optimization, then yes, it would be evaluated for actual performance impact before being merged.

Yeah. In general it's good to avoid long sequences of literal expressions glued together by operators. Even if you don't hit the time/memory limit on any one expression, it can have a major impact on your overall compile times.
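One way to follow that advice in the Expression case, sketched below: spelling out the cases explicitly gives every operand a concrete type up front, so there is no literal-overload search at all. A condensed copy of the type is included so the snippet stands alone:

```swift
indirect enum Expression: ExpressibleByIntegerLiteral, ExpressibleByFloatLiteral {
    case integer(Int), real(Double)
    case sum(Expression, Expression), division(Expression, Expression)
    init(integerLiteral value: Int) { self = .integer(value) }
    init(floatLiteral value: Double) { self = .real(value) }
    static func + (lhs: Self, rhs: Self) -> Self { .sum(lhs, rhs) }
    static func / (lhs: Self, rhs: Self) -> Self { .division(lhs, rhs) }
}

// No literals involved: every operand already has type Expression,
// so the solver never has to consider Int, Double, UInt8, ...
let three = Expression.integer(3)
let ratio = Expression.real(4.1) / Expression.integer(65)
let expr  = three + ratio
```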