... Now you're just trolling at this point.
Anyway, the non-strict mode you proposed is very unlikely to happen. Debating (trolling?) on this won't make it happen. If you can't stand it, you probably need a different language, just like you can't use Rust without the ownership model.
Claiming that open source code doesn't need to be bug-free, and that code gets copied and pasted between languages, are pretty good indicators he's just trolling, but the rest of his mindset is incredibly common.
Things are moving quickly and in a good direction, but many devs are failing to keep up, either due to laziness, inertia, lack of ability or poor leadership. @aswft is nailing their rationalisations, so it's worthwhile debating his points if only to educate other devs who might be slipping into thinking like him.
No concrete stats, but for the last ~10 years of C# development on projects large and small, with exceptional devs, clean architecture and good test coverage, the #1 exception littering telemetry was NullReferenceException, by a huge margin. Number two was ArgumentNullException.
Absolutely yes. Those bugs either became impossible to write, or got caught on my machine before I committed the code, so I didn't have to waste QA's time 'running it and checking it (the horror)'.
At first, yes. Optionals and other idioms from functional programming mean you need to put in a little more work up front, for sure, but that time is dwarfed by the savings in wrestling with your own codebase once the project is more established. But you'll quickly get used to it and build an intuition for how things should be done, and soon even that upfront cost largely disappears.
It's the same as automated testing. Small cost upfront, massive saving down the line.
Code bloat is decreased, because I don't need to code defensively anymore, and if I have to deal with data I might not be able to control, like JSON or SQL rows, I can push all the logic to handle that impedance mismatch into one place, rather than letting it leak all over my code.
There's also less test code covering cases that should never happen. I've got a strong feeling you're not a fan of automated testing, though, so you might not care about that.
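To make that concrete, here's a minimal sketch of what 'one place' can look like in Swift (the type names are hypothetical):

import Foundation

// Hypothetical domain type: every field is non-optional, so nothing
// downstream ever has to nil-check.
struct User: Decodable {
    let id: Int
    let name: String
}

// All of the impedance-mismatch handling lives here, at the boundary.
// JSONDecoder throws if a field is missing or has the wrong type, so a
// malformed document is rejected in one place instead of leaking nils
// all over the codebase.
func decodeUser(from data: Data) throws -> User {
    try JSONDecoder().decode(User.self, from: data)
}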
Coding without optionals feels like driving without a seatbelt.
Type systems don't just exist to catch bugs, they exist to prevent them. The more tools your type system has, the more of your domain and its logic it can represent, and fewer and fewer bugs become possible to write.
In most cases, if you can't meet a deadline, you should cut scope, not ship broken code, and a team that is too badly run to understand this isn't going to give you time to refactor that code later, because, welp...another deadline!
Code written in a language that isn't nil-safe is not automatically incorrect. It just gets corrected in a faster way. The thread is about the complexity of the design of Swift's optional handling being counterproductive to the safety it provides.
I'm not advocating writing unsafe code, I want Swift's optional unwrapping to be more efficient when dealing with untyped data.
Type-checking is a great feature and an example of a feature that once it was available, everybody was eager to use. Optional unwrapping doesn't have the same appeal.
How can you disagree with that point (about good developers writing safe code by design)?? You think professional developers are all writing unsafe code not caring about nil types just because the language doesn't force them to structure their code a certain way? Just like handling array out of bounds, it's done as a matter of convention.
I never said open source code didn't need to be bug-free; that's a strawman argument. Of course code needs to be migrated between languages. Multiplatform companies like Facebook develop native apps on each platform and have to keep the code in sync in different languages. Swift is an iPhone language; it doesn't get deployed on Windows or Android.
Those are mostly irrelevant discussion points though. The point raised a few times is that Swift is not efficient at handling nested untyped data, which is commonly used in production.
Yes, I use automated testing and I agree that the investment is worthwhile. I just haven't seen the same return on dealing with optional unwrapping so far. For me nil errors make up a tiny fraction of my bug reports, as I mentioned earlier it's about 2% at most.
How do you handle nested data? Do you have any wrapper classes you can recommend?
Maybe, I feel like coding with them (at least Swift's implementation) is like driving with straps all over and I can't move to get anything done. No problems with type checking though.
This isn't a choice between shipping broken code faster and safe code slower. I'm saying it's faster to fix nil types in other languages; it takes about 2 minutes to fix those bugs. I've wasted days trying to design code around Swift's optional system. It takes time to adjust to new things, but unwrapping nested optionals is just not intuitive.
What is the point of a nested optional type?
So what really bothers you is nested optionals, not optionals themselves. That explains it. What is really counterproductive is putting question marks everywhere brainlessly, not optionals.
You shouldn't use it if you can't answer that. Nested optionals usually indicate a code smell.
How many times does it need to be repeated?
if let v = some?.really?.long?.optional?.chain {
}
The good part. Good developers are a subspecies so rare I'd think they were a myth had I not seen some myself. Though the ones I've seen write essentially bug-free code and follow conventions, so not sure if that counts.
Expecting each individual to keep track of nullability by themselves is just a waste of their concentration.
Nil safety is just another aspect of a well-rounded type system, and optionals are just a specific case of algebraic data types. Incorrect code is code that, for example, doesn't accurately represent the states of a system: it's technically possible to write correct code without adequate typing, but it requires a colossal effort, a lot of extra code, and a huge amount of tests (that, unfortunately, will never be as accurate as what a type system provides). If there's even a small chance of getting an unhandled nil exception, the code is incorrect (but might coincidentally work in some real-world use cases): with Swift, it's possible to write code that cannot have this kind of exception, with 100% accuracy.
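To sketch what "accurately representing the states of a system" means (the names here are hypothetical, purely for illustration):

// A struct with optional fields allows illegal combinations, e.g.
// data != nil AND error != nil at the same time. An enum makes those
// invalid states unrepresentable:
enum LoadState {
    case idle
    case loading
    case loaded(data: String)
    case failed(Error)
}

func render(_ state: LoadState) {
    // The compiler forces every case to be handled; there is no
    // "data is nil but we're in .loaded" bug left to write.
    switch state {
    case .idle: print("nothing yet")
    case .loading: print("spinner")
    case .loaded(let data): print(data)
    case .failed(let error): print("error: \(error)")
    }
}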
Optionals are just another type in the type system of a language with type checking. In Swift (in contrast to other languages, like Kotlin), Optional is an actual type, which comes with a series of advantages that might require a little more understanding from the developer, but pay back x1000.
It might actually be the case that you're not understanding something, or that you're modeling your states wrong. Care to show some actual production code example?
How? Optionals are just another type. Another clue that you're doing it wrong. Do you use Swift enums, with associated types? Optional is just one of them:
enum Optional<Wrapped> {
    case some(Wrapped)
    case none
}
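And since it's an ordinary enum, you can pattern-match it like any other (a quick sketch):

let maybeName: String? = "Ada" // sugar for Optional<String>

switch maybeName {
case .some(let name):
    print("hello, \(name)")
case .none:
    print("no name provided")
}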
The fact that you don't understand the point means to me that you don't understand in general what a type system exists for, and what algebraic data types are. Optional<Optional<A>> can be nil for 2 reasons:
- the "outer" optional is .none, which might mean that, for example, a value was retrieved from some storage, maybe at some index or key, and the value was not found;
- the "inner" optional is .none, which might mean that some value was retrieved, but its type is incidentally Optional<A>, and in this case it's .none, a very different case from the first.
Swift syntactic sugar automatically merges the 2 because you usually don't care (in code, ?. is both .map and .flatMap), but from the perspective of the type system, Optional<Optional<A>> is a different thing from Optional<A>, and the type system, in trying to be correct and not sacrifice correctness in the name of some arbitrary "convenience" (that could apply to you, but not to everyone, certainly not to me), treats them very differently.
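A concrete illustration of the two cases (a small sketch with made-up values): looking up a key in a dictionary whose values are themselves optional produces exactly this nested type.

let values: [String: Int?] = ["a": 1, "b": nil]

// Dictionary lookup returns Optional<Value>, and Value here is already
// Int?, so lookedUp is Int??, a genuine nested optional.
let lookedUp: Int?? = values["b"]

switch lookedUp {
case .none:
    print("key not present")           // the "outer" .none
case .some(.none):
    print("key present, value is nil") // the "inner" .none
case .some(.some(let n)):
    print("key present, value \(n)")
}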
You simply cannot write nil-safe code 'by design' without optional types, because short of the most trivial code you cannot prove, even informally to yourself, that code is nil-safe without following the reference all the way up every possible call stack to its instantiation. And if that point is from a database, or an API call, or anything whatsoever outside your code, then you're totally out of luck - and yet SQL/JSON etc are the exact case you're using to show that optionals aren't necessary! And that's ignoring thread safety, parallelism, laziness, async and callbacks etc.
If you use optionals correctly, this constrains the problem to a single place, or gets rid of it entirely. In your case it's like an impenetrable force field protecting your domain from the wild-west of JSON documents.
I can't speak for nested optionals, as in Optional<Optional<String>>; I can't recall ever having to deal with that. If I did, I imagine the correct way to deal with them would be pretty obvious. If you have a concrete example from your own code where it gave you trouble, please provide it.
If you're talking about nested JSON documents or whatever, that's a higher level architectural issue.
I felt the same way about moving to strongly typed languages, learning map/filter/reduce, foreign keys/not null/unique etc constraints in SQL, TDD, SOLID principles and lord knows how many other things. Now I love them all, and because I understand them, know when it is safe to not use them.
One slightly convoluted example would be a JSON document describing an update to a field of, e.g., a User. Quite often, HTTP PUT requests to update resources only contain the fields that have changed. So let's say you have such a request, and it deserialises to the following...
struct UserUpdate {
    let userId: Int
    ...
    let favouriteWord: String?
}
Your code then receives an instance of the above, and favouriteWord is nil. Does that mean the user didn't want to update their favourite word? Or that they did, but they explicitly wanted to delete it? There's no way to know.
On the other hand...
typealias OptionallyUpdatable<T> = Optional<Optional<T>>
struct UserUpdate {
    let userId: Int
    let favouriteWord: OptionallyUpdatable<String>
}
With nested optionals, you could tell the difference between 'the user didn't provide a value' and 'the user explicitly provided a value of nil' without having to resort to horrible sentinel values (probably don't do this, there are probably better ways).
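Consuming it would then look something like this (a sketch, assuming a hypothetical User type with an optional favouriteWord property):

struct User {
    var favouriteWord: String? // hypothetical stored property
}

func apply(_ update: UserUpdate, to user: inout User) {
    switch update.favouriteWord {
    case .none:
        break                     // key absent: the user didn't touch it
    case .some(.none):
        user.favouriteWord = nil  // key present, explicitly null: delete it
    case .some(.some(let word)):
        user.favouriteWord = word // key present with a value: update it
    }
}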
Would you mind explaining a bit more about this, I'd be really interested to know!
It really comes down to being able to represent Optional as an actual type, with methods and extensions. For example, if I wanted to write map for Optional I can simply write:
extension Optional {
    func map<B>(_ transform: (Wrapped) -> B) -> B? {
        switch self {
        case .some(let wrapped):
            return .some(transform(wrapped))
        case .none:
            return .none
        }
    }
}
In the example, B could be anything, including another optional (which would result in a nested optional, which is fine from a type system perspective, but semantically it would be better to use flatMap).
Then:
let x1: Int? = 42
let y1 = x1.map { $0 + 1 }
let t1 = y1 != nil
print(t1) /// prints true
let x2: Int? = 42
let y2 = x2.map { _ in Optional<Int>.none }
let t2 = y2 != nil
print(t2) /// ALSO prints true
The second prints true because map is designed not to change the base type: if it wraps a value before, it will also always wrap a value after map, even if we return nil inside the closure passed to the method.
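(And if you do want a nil returned from the closure to collapse the whole result, that's exactly what flatMap is for; a quick sketch:)

let x3: Int? = 42
// flatMap flattens one level instead of nesting, so returning nil from
// the closure really does make the whole result nil.
let y3 = x3.flatMap { _ in Optional<Int>.none }
let t3 = y3 != nil
print(t3) /// prints false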
If you try to write this in Kotlin, it doesn't work:
inline fun <A, B> A?.map(f: (A) -> B): B? =
    if (this != null) f(this) else null
val x1: Int? = 42
val y1 = x1.map { it + 1 }
val t1 = y1 != null
println(t1) /// prints true
val x2: Int? = 42
val y2 = x2.map { null } /// arbitrarily decides the final type to be `Int?`
val t2 = y2 != null
println(t2) /// prints false !!!
A "real" map
in Kotlin looks like the following:
inline fun <A : Any, B : Any> A?.map(f: (A) -> B): B? =
    if (this != null) f(this) else null
in which we tell Kotlin to enforce that A and B by themselves will not be null. This is because technically everything in Kotlin can be null (actually, even non-nullables, if some instance comes from Java): nullability is not an actual type, it's a "state" that's possible for every instance, and it's treated differently from types.
- JSX wasn't familiar, but then React took off.
- Generics weren't familiar, Java didn't even have them until 1.5. Now it's hard to find a mainstream statically typed language without at least some level of support for generics (cough Go, cough)
- Async/await wasn't familiar, but it was so popular in C# that Python, JS and other languages adopted it within years.
- Functional reactive programming wasn't familiar, and now it's hard to find a mainstream programming language that doesn't have its own port of Rx.
- The actor model of Erlang wasn't familiar, yet Akka is bringing it closer into the mainstream Java world, and now the async message passing idea is taking over (first with Go, and now with multiple languages' interest in developing native actor models).
Programming technologies and libraries evolve, and they have absolutely no obligation to be familiar. They find a niche, and they innovate on ideas that help towards their goals.
If your complaint against a tech is that it's unfamiliar, you'll be left behind very rapidly.
There's certainly a trade off. I work at a large company that was founded on a dynamic language. I can tell you that there are most certainly huge pains when it comes to scaling. You might not have felt those pains because you're working on smaller projects, which means that perhaps dynamic languages are a good trade off for you. But in the general case, that doesn't hold.
Doing some clever meta-programming to define some object fields or methods is fast and fun when it's a little side project. "I don't have a good object for this, ah fuck it, I'll just define a new field on this singleton over here ..." works really great, until it doesn't. When it's a shared code-base with thousands of developers, it becomes overwhelmingly complicated and difficult to navigate.
It's interesting that you mention TypeScript, whose entire premise was to add a type system to JavaScript. It's "getting in your way" precisely because quick hack/slash stuff works great in the first few days of a project, becomes hard after a month, and impossible after a year.
Similar efforts exist to add type systems to Python (PEP 484) and Ruby (Sorbet). Companies are investing millions of dollars to retroactively bolt type systems onto code bases written in dynamic languages, because they already have millions of lines of JS/Ruby/Python (which cost 10s or 100s of millions of dollars to develop).
I only use Swift for writing silly iOS games and the odd one-off tool, but even with these, the benefits of writing the code well, instead of just saying "sod it, doesn't matter, this will do", pay off surprisingly quickly. I no longer dread fixing "that bug" or implementing that feature. And even if they don't pay off immediately, or even at all on any one project, it's good practice.
Just write good code, all the time.
Even PHP 8 is getting optional chaining.
The just-hack-it-together language is getting optional chaining.
Good to know, I actually didn't use them on purpose. I'm trying to avoid them. They were foisted on me without my consent.
That makes sense, I need to use them correctly to see the benefit. This is the kind of thing I want to be doing:
Maybe I should build a different kind of app before doing that stuff, because while most of the rest of Swift code looks familiar, that stuff is quite hard to keep tidy.
Yeah, I always saw the benefit of type-checking. Mainly I just haven't seen the same need regarding nil-checking.
For example, Swift doesn't protect against array out of bounds exceptions but nobody really minds. Developers just know to take care of it.
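For what it's worth, that convention is easy to encode as an optional too; a sketch (this safe subscript is a common hand-rolled helper, not a standard library API):

extension Array {
    // Bounds-checked subscript: returns nil instead of trapping when
    // the index is out of range.
    subscript(safe index: Int) -> Element? {
        indices.contains(index) ? self[index] : nil
    }
}

let xs = [1, 2, 3]
print(xs[safe: 1] as Any) /// prints Optional(2)
print(xs[safe: 9] as Any) /// prints nil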
Right, but it's not enforced in TypeScript; that's really the difference. It's nice when I need to use it, but having to do the housekeeping on everything, I feel, makes code less readable, less maintainable, and slower to write. It's safer by design, I just haven't seen the value from it. Maybe the more I use it, that'll change.
Thanks for all the replies, including people I haven't quoted. Very interesting reading and good to get people's perspectives.
What do you mean? That it doesn't silently fail?
The design of Swift makes a number of language decisions about optionals. It is reasonable to disagree with those decisions, but the basic decisions are not going to change, and Swift is not going to introduce a language dialect which somehow doesn't enforce optionality. There are productive directions this thread could have taken, but it doesn't seem to be taking any of them, and instead it's just turning into an argument for the sake of itself. I'm closing the thread.