Inconsistent optional injection behavior

When I was working on reimplementing the way we deal with implicitly unwrapped optionals earlier this year, I came across some odd and inconsistent behavior, but had not yet had a chance to follow up on it. Someone else recently noticed the same behavior and opened a bug: https://bugs.swift.org/browse/SR-8326

The TL;DR version is that we sometimes flatten nils when injecting optionals, and sometimes don't.

class B {}
class D : B {}
let d: D? = nil
let dprime: D?? = d // .some(.none)
let bprime: B?? = d // .none
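The two results can be told apart directly by pattern matching, since == nil only inspects the outermost level of optionality. Here's a small self-contained sketch using explicitly constructed values in place of the converted ones:

```swift
class D {}

let preserved: D?? = .some(.none)  // extra level of optionality kept
let flattened: D?? = .none         // nil flattened to the outer level

// `== nil` only reports on the outermost optional:
print(preserved == nil)  // false
print(flattened == nil)  // true

// Pattern matching reveals the full structure:
switch preserved {
case .some(.some): print("some(some)")
case .some(.none): print("some(none)")  // matches here
case .none:        print("none")
}
```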

We do not flatten the nil when we are strictly injecting an optional without any conversion of the underlying type, but we do flatten it when a conversion (e.g. derived-to-base, or something-to-Any) is also involved.

This behavior can of course manifest as part of an if let or guard let with an explicit type specified, and in fact that's the context in which I first noticed it, as well as the one the bug report mentions.

if let b: B? = d {...} // convert from D? to B??, then unwrap one level

This raises the questions:

  • Should the condition tested in the if let and guard let be the result after converting to the user-specified type, or should it always be based on the RHS prior to conversion?
  • Should we always flatten the nil when injecting, or never flatten the nil when injecting? Always flattening of course means we don't need to answer the first question, but it also means we have a type system in which we can form multi-level optional types whose extra levels of optionality are redundant.

It seems reasonable to me that the answers here would be:

  • We should always use the RHS prior to conversion in if let/guard let
  • We should never flatten nils in this way.
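For concreteness, under the "never flatten" answer the original example would behave like this (hypothetical results shown in the comments; this is not what the compiler does today):

```swift
class B {}
class D : B {}
let d: D? = nil

let dprime: D?? = d  // .some(.none), as today
let bprime: B?? = d  // would also be .some(.none) rather than .none
```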

Thoughts?

P.S.: Here's an example demonstrating the behavior.

class B {}
class D : B {}

func takesD(d: D??) {
  if d == nil {
    print("nil")
  } else {
    print("non-nil")
  }
}

func takesB(b: B??) {
  if b == nil {
    print("nil")
  } else {
    print("non-nil")
  }
}

func generic<T>(lhs: T, rhs: T?) {
  if rhs == nil {
    print("nil")
  } else {
    print("non-nil")
  }
}

func test() {
  let d: D? = nil
  takesD(d: d) // passes .some(.none)
  takesB(b: d) // passes .none

  // warning: explicitly specified type 'D?' adds an additional level of optional to the initializer, making the optional check always succeed
  if let _: D? = d {  // forms .some(.none)
    print("non-nil")
  } else {
    print("nil")
  }

  // no warning
  if let _: B? = d {  // results in .none
    print("non-nil")  // skipped
  } else {
    print("nil")      // printed
  }

  generic(lhs: d, rhs: d)            // rhs is .some(.none)
  generic(lhs: d as B?, rhs: d) // rhs is .none
}

test()

Yes, this behavior is indeed odd; in fact, I'd argue that it's inexplicable enough that it could be considered not a deliberate part of the design but an implementation bug. I believe your proposed answers are the most reasonable ones possible.

I'm definitely on the "never flatten" side of the second question. I think I agree about the first question too, but that does seem very subtle. Sometimes the result type influences overload resolution of the RHS, so it seems simpler to me to say "convert to the LHS type plus one extra level of Optional, then test". I don't have a good way to explain the other behavior being proposed.


For the second question, I'm also on the never flatten side.

For the first question, my mental model would actually expect the opposite order: unwrap d one level, then convert D to B?. My expectation as a naive programmer is that let b: B? = d! and if let b: B? = d {} (note the force in the first case) are exactly the same operation, except that the second one won't execute if d is .none.
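Spelled out, that mental model treats the conditional binding roughly as sugar for an unwrap followed by a conversion. Here's a sketch of the expectation, not the compiler's actual behavior:

```swift
class B {}
class D : B {}
let d: D? = D()

// Hypothetical desugaring of `if let b: B? = d { ... }` under this model:
if d != nil {
    let b: B? = d!   // unwrap D? one level, then convert D to B?
    print(b != nil)  // true; the body runs only because d was non-nil
}
```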

On the unfortunate side, I see that that's not actually how the compiler currently works; on the positive side, before now I've never actually needed to explicitly declare the type in an if-let binding, so I'd never discovered this. I therefore see the order of operations as a mostly academic question - even though my expectation is currently wrong, my stakes are low enough that I'd be on the side of keeping the current behavior for backwards compatibility rather than having it do what I'd expect.
