The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions

I followed the instructions in this talk at 26:35 and tried to implement and run the following code:

Result { (42, [User?.some(user)]) }
  |> (fmap <<< first)(add(1))
  |> (fmap <<< second <<< fmap <<< fmap <<< prop(\.favoriteFood.name)) { $0.uppercased() }

It shows an error: "the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions."

Adding an explicit type annotation for the Result doesn't help:

let result: Result<(Int, [User?]), Error> = Result { (42, [User?.some(user)]) }

result
  |> (fmap <<< first)(add(1))
  |> (fmap <<< second <<< fmap <<< fmap <<< prop(\.favoriteFood.name)) { $0.uppercased() }
// still doesn't compile

The code compiles and runs if either one of the pipeline steps is deleted:

// It compiles if the first pipeline step is removed
Result { (42, [User?.some(user)]) }
  |> (fmap <<< second <<< fmap <<< fmap <<< prop(\.favoriteFood.name)) { $0.uppercased() }

// It compiles if the second pipeline step is removed
Result { (42, [User?.some(user)]) }
  |> (fmap <<< first)(add(1))

Questions:

  • Why can the speaker, Stephen Celis, compile and run his code in the talk, while I can't with almost identical code?
  • How could I make the above code compile without losing the fluent, functional style of piping and composing?

To make the above code run, you need the following implementation:

// Operators
precedencegroup Pipe {
  associativity: left
  higherThan: AssignmentPrecedence
}
infix operator |>: Pipe
@inlinable public func |> <A, B> (_ lhs: A, _ rhs: (A) -> B) -> B {
  rhs(lhs)
}

precedencegroup Compose {
  associativity: right
  higherThan: Pipe
}
infix operator <<<: Compose
@inlinable public func <<< <A, B, C> (_ lhs: @escaping (B) -> C, _ rhs: @escaping (A) -> B) -> (A) -> C {
  { x in lhs(rhs(x)) }
}

// Higher order functions for composing
func fmap<A, B, E: Error>(_ f: @escaping (A) -> B) -> (Result<A, E>) -> Result<B, E> {
  { r in r.map(f) }
}

func fmap<A, B>(_ f: @escaping (A) -> B) -> ([A]) -> [B] {
  { r in r.map(f) }
}

func fmap<A, B>(_ f: @escaping (A) -> B) -> (A?) -> B? {
  { r in r.map(f) }
}

func first<A, B, C>(_ f: @escaping (A) -> C) -> ((A, B)) -> (C, B) {
  { t2 in (f(t2.0), t2.1) }
}
func second<A, B, C>(_ f: @escaping (B) -> C) -> ((A, B)) -> (A, C) {
  { t2 in (t2.0, f(t2.1)) }
}

public func prop<Root, Value>(_ keyPath: WritableKeyPath<Root, Value>)
-> (@escaping (Value) -> Value)
-> (Root)
-> Root {
  { update in
    { root in
      var copy = root
      copy[keyPath: keyPath] = update(copy[keyPath: keyPath])
      return copy
    }
  }
}

// Operation
let add: (Int) -> (Int) -> Int = { x in { y in y + x } }

and the models are:

struct Food {
  var name: String
}

struct User {
  var name: String
  var favoriteFood: Food
}

let user = User(name: "Stephen", favoriteFood: Food(name: "Hamburgers"))
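As a quick sanity check, the individual pieces type-check instantly when used in isolation. This snippet repeats the relevant definitions from above so it compiles on its own:

```swift
// Sanity check: prop and |> applied in isolation.
// (Definitions repeated from above so this snippet is standalone.)
precedencegroup Pipe {
  associativity: left
  higherThan: AssignmentPrecedence
}
infix operator |>: Pipe
func |> <A, B>(_ lhs: A, _ rhs: (A) -> B) -> B { rhs(lhs) }

func prop<Root, Value>(_ keyPath: WritableKeyPath<Root, Value>)
-> (@escaping (Value) -> Value)
-> (Root)
-> Root {
  { update in
    { root in
      var copy = root
      copy[keyPath: keyPath] = update(copy[keyPath: keyPath])
      return copy
    }
  }
}

struct Food { var name: String }
struct User { var name: String; var favoriteFood: Food }

let user = User(name: "Stephen", favoriteFood: Food(name: "Hamburgers"))

// A single, small expression: the type checker handles this with no trouble.
let shouted = user |> prop(\User.favoriteFood.name) { $0.uppercased() }
print(shouted.favoriteFood.name) // HAMBURGERS
```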

Hey Xaree, I may be able to help answer your questions, but first could you please provide the code that defines User? Thanks!


Thanks hborla, I forgot to paste the code for the models. I've updated my post at the end.

When code is very generic and operator-heavy, even the slightest change can have a big compile-time performance impact. For example, adding just one additional overload, chaining an extra operator into the expression, or adding one more generic parameter can each be enough to push the compiler over the edge. Compile-time performance can also change between compiler versions.

It looks like the compiler finds 5,477 different solutions for the expression, and then it times out trying to compare them all while trying to find the best solution. I'm not sure what's different about these solutions, or if some of them are different paths to the same solution (the output is large so it's hard to tell). There are a few different ways you could eliminate some of the possibilities:

  • Name your overloads differently so the compiler doesn't have to attempt them all each time you use the name
  • Get concrete types into the expression (e.g. using as)
  • Chain fewer operators together
  • Break up the expression using intermediate results
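As a sketch of the first suggestion (the names fmapResult, fmapArray, and fmapOptional are just illustrative, not from the talk), giving each overload its own name means every use site resolves to exactly one function:

```swift
// Distinctly-named variants of the fmap overloads. With unique names,
// the compiler doesn't have to attempt every candidate at each use site.
func fmapResult<A, B, E: Error>(_ f: @escaping (A) -> B) -> (Result<A, E>) -> Result<B, E> {
  { $0.map(f) }
}
func fmapArray<A, B>(_ f: @escaping (A) -> B) -> ([A]) -> [B] {
  { $0.map(f) }
}
func fmapOptional<A, B>(_ f: @escaping (A) -> B) -> (A?) -> B? {
  { $0.map(f) }
}

// Each use now picks one function directly, with no overload set to search.
let doubleAll = fmapArray { (x: Int) in x * 2 }
let doubled = doubleAll([1, 2, 3])
print(doubled) // [2, 4, 6]
```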

If you're set on putting this all into one big expression, your best bet is to use coercions to concrete types at various points throughout the expression. However, the type checker also has a memory threshold, so eventually the expression may get too big and type annotations won't help anymore. Breaking up the expression into intermediate results will always help, because these will be type checked separately.
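Concretely, the last approach might look something like this (a sketch, repeating the operator and helper definitions from the original post so it compiles standalone): each step is bound to an intermediate constant with an explicit type, so the type checker only has to solve one small expression at a time.

```swift
// Operators and helpers repeated from the original post.
precedencegroup Pipe {
  associativity: left
  higherThan: AssignmentPrecedence
}
infix operator |>: Pipe
func |> <A, B>(_ lhs: A, _ rhs: (A) -> B) -> B { rhs(lhs) }

precedencegroup Compose {
  associativity: right
  higherThan: Pipe
}
infix operator <<<: Compose
func <<< <A, B, C>(_ lhs: @escaping (B) -> C, _ rhs: @escaping (A) -> B) -> (A) -> C {
  { lhs(rhs($0)) }
}

func fmap<A, B, E: Error>(_ f: @escaping (A) -> B) -> (Result<A, E>) -> Result<B, E> {
  { $0.map(f) }
}
func fmap<A, B>(_ f: @escaping (A) -> B) -> ([A]) -> [B] {
  { $0.map(f) }
}
func fmap<A, B>(_ f: @escaping (A) -> B) -> (A?) -> B? {
  { $0.map(f) }
}
func first<A, B, C>(_ f: @escaping (A) -> C) -> ((A, B)) -> (C, B) {
  { (f($0.0), $0.1) }
}
func second<A, B, C>(_ f: @escaping (B) -> C) -> ((A, B)) -> (A, C) {
  { ($0.0, f($0.1)) }
}
func prop<Root, Value>(_ keyPath: WritableKeyPath<Root, Value>)
-> (@escaping (Value) -> Value)
-> (Root)
-> Root {
  { update in
    { root in
      var copy = root
      copy[keyPath: keyPath] = update(copy[keyPath: keyPath])
      return copy
    }
  }
}
let add: (Int) -> (Int) -> Int = { x in { y in y + x } }

struct Food { var name: String }
struct User { var name: String; var favoriteFood: Food }
let user = User(name: "Stephen", favoriteFood: Food(name: "Hamburgers"))

// Intermediate results with explicit types: each line is a small,
// separately type-checked expression.
let start: Result<(Int, [User?]), Error> = .success((42, [user]))
let step1: Result<(Int, [User?]), Error> = start
  |> (fmap <<< first)(add(1))
let step2: Result<(Int, [User?]), Error> = step1
  |> (fmap <<< second <<< fmap <<< fmap <<< prop(\User.favoriteFood.name)) { $0.uppercased() }

if case let .success(value) = step2 {
  print(value.0, value.1[0]?.favoriteFood.name ?? "nil") // 43 HAMBURGERS
}
```

Each pipeline step still reads in the same fluent style, but the explicit annotations stop the solutions from one step from multiplying with the solutions from the next.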


Thanks for your reply, hborla.

How can I find out how many different solutions the compiler found for the expression?

I would try to reduce the complexity of the expression.

I'll try the first two suggestions.

I'd like to keep the point-free style for functional programming, so I think I have only a few options.

There isn't a proper tool for Swift users (though I think there should be!), but there's a debugging flag that folks who work on the type checker use to see this information. If you'd like to test it out, I'd recommend running swiftc directly at the command line like this:

swiftc -Xfrontend -debug-constraints <path-to-file>.swift

This will have a HUGE amount of output with information about what happened during the type inference algorithm, but you'll be able to see what it was doing right before it timed out at the end.


I'd like to note that breaking up an expression will not make your code less functional, as it will still behave the same and produce the same results. It might even be somewhat counterproductive to try to force single long expressions since:

  1. the compiler implementation might change, so code that compiled quickly enough earlier might suddenly exceed the threshold without you personally touching anything,
  2. you might not know if the compiler running on less powerful machines will be able to typecheck the same code,
  3. you restrict the number of overloads you can add in the future, and the overall flexibility of what you can write

— so given all these cases, there's no real downside to simplifying your expressions a bit, and you shouldn't be afraid of doing so.
