So, I've been writing some Python code, and since I've grown to like strong typing (funny, that) I've been using type annotations, with Pylance to keep me honest and help out.
One of the features I particularly like is that if a variable is of type str | None, say, I can write something like this and nothing complains:
def bar(a: str):
    print("Hark, a string: ", a)

foo: str | None = "Hi Mom"
if foo != None:
    bar(foo)
else:
    print("No foo")
The linter recognizes that I have checked foo, and lets me use it for the non-optional parameter of bar without coercion or unwrapping.
This is what I always hoped Swift would do, but instead it went with the whole if let foo... thing.
Was there some reason why the python style was difficult or was it thought to be misleading?
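For reference, a minimal sketch of the if let idiom in question (placeholder names, mirroring the Python above):

func bar(_ a: String) {
    print("Hark, a string: ", a)
}

let foo: String? = "Hi Mom"
if let foo = foo {
    bar(foo) // `foo` is a non-optional String inside this block
} else {
    print("No foo")
}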
Note that "if foo != nil" is longer than "if let foo"
One of the advantages of if let/var foo is that you get a local copy, so even if the original "foo" changes (directly or indirectly, say it was a weak reference and the referenced object got deallocated), that won't affect the copy. Without a copy, that guarantee goes away:
if foo != nil {
    baz()
    // meantime foo became nil (in baz or indirectly as a weak reference)
    foo.bar() // equivalent to foo!.bar() -- crash
    // or equivalent to foo?.bar() -- unexpected no-op
}
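By contrast, a sketch of the if let form, which pins a strong local copy for the duration of the block (same hypothetical baz()/bar() as above):

if let foo = foo {
    baz()
    // even if the original `foo` becomes nil inside baz() (or the weak reference is released),
    // the local `foo` still holds the value it was bound to
    foo.bar()
}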
Another way "foo" might be changed is if it is a (possibly computed) property.
if let foo = someVal.foo {
    ... // 'foo' is definitely not nil
}

if someVal.foo != nil {
    ... // 'someVal.foo' might very well evaluate to nil on subsequent calls
}
Creating a new binding which pins the value is more scalable.
Since the binding is only over a limited scope, I think it would be fine to use an immutable borrow instead of a copy. But it would need new syntax.
Yes, this is an important point. A variable might have changed between the nil test and its use (think about parallel access). In Kotlin, where a null test suffices for constants, you get an appropriate warning (or error?) in the case of a variable.
So even if the compiler is "smart" about certain nil/null tests (as in Kotlin, or even smarter), in some cases you still have to assign a new constant (or variable), which means different formulations for different use cases. Swift demands the same formulation for all cases, so to speak, so e.g. changing an optional constant to an optional variable does not change the way you have to use it. I find this a very good feature of Swift.
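A minimal sketch of that uniformity (hypothetical values):

let a: String? = "constant"
var b: String? = "variable"
b = b?.uppercased() // b really is mutable

// The unwrapping formulation is identical whether the source is a let or a var:
if let a = a { print(a) }
if let b = b { print(b) }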
My understanding is that this kind of destructuring, at least in the general case, must consume its argument. It's possible that the standard Optional type will be special-cased in this regard, but I'm not sure whether that direction is planned at this time.
This kind of flow-sensitive analysis is a bit of a hack, or at least, it exists outside of the type system proper. Swift's formalization of Optionals is more in line with traditional algebraic data types in functional languages.
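For illustration, Optional in Swift is an ordinary generic enum (roughly enum Optional<Wrapped> { case none; case some(Wrapped) }), so it can be handled like any other algebraic data type, e.g. by pattern matching:

let foo: String? = "Hi Mom"
switch foo {
case .some(let value):
    print("Hark, a string: ", value)
case .none:
    print("No foo")
}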
Aye, and this isn't even theoretical, it's a very real issue that comes up often with Sorbet, a static type checker for Ruby.
Ruby doesn't have properties. An object's instance variables are only exposed through methods. To the outside world, there's no way to distinguish between a method that's just a dumb reader (that returns consistent values between calls) and a method that does more complicated things (and returns different values on each call).
Thus, Sorbet is forced to pessimistically assume all method calls can return a different value from one call to the next, making this code fail type-checking:
x = !maybe_int.nil? && (2 * maybe_int)
(Ruby lets you omit the parens, so maybe_int is a method call, as if you wrote maybe_int().nil?)
The fix is to introduce a new local variable:
tmp = maybe_int
x = !tmp.nil? && (2 * tmp)
Now Sorbet can be confident that the value of tmp stays consistent from one read to the next.