(Much-delayed response, since I've only now gotten back to addressing feedback from this thread in the vision document.)
Unlike with concurrency, there is no expectation here that strict safety will be enabled in a new language mode. The updated vision document lists some reasons, but for me the salient point is that Swift's current (non-strict) memory safety is the right default for the vast majority of Swift users, and that's not likely to change.
Strict memory safety is also much simpler than the concurrency model, because the use of unsafe constructs can always be dealt with locally: `@unsafe` doesn't make its way into type signatures anywhere, so it's not viral in the way that (say) `async` or `@Sendable` have to be. And at any point, you can take the use of an unsafe construct and "encapsulate" it so that the unsafety doesn't propagate further. The vision talks about a couple of different syntaxes for this (`unsafe { ... }` blocks, `@safe(unchecked)`), but this is something we didn't have in the concurrency world: you can't just "wrap up" an async operation and forget that it was async without it having systemic effects.
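To make the encapsulation point concrete, here is a minimal sketch using the candidate `unsafe { ... }` block syntax from the vision document. The syntax is not final and may change; the point is only that the unsafety is acknowledged inside the function body and never appears in the signature.

```swift
// Hypothetical sketch: uses the candidate `unsafe { ... }` block syntax
// from the vision document, which is not final.

// The signature mentions no unsafe types, so callers building with
// strict memory safety checking see an ordinary safe API.
func sum(of numbers: [Int]) -> Int {
    // The unsafe construct is acknowledged and confined right here;
    // nothing about it propagates to callers.
    unsafe {
        numbers.withUnsafeBufferPointer { buffer in
            buffer.reduce(0, +)
        }
    }
}

let total = sum(of: [1, 2, 3])
```

Contrast this with `async`, where the effect must appear in the signature and every caller has to either `await` it or become `async` itself.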
Checking for unsafe constructs is a simpler problem: if your code isn't building with strict safety checking, we ignore the `@unsafe` annotations and don't produce any diagnostics, so it doesn't matter whether the modules you depend on have enabled strict safety checking. With data-race safety, we could only approximate this through things like `@preconcurrency` and minimal checking, but the implementation of those is very difficult because they are effectively trying to work around fundamental changes in the type system. `@unsafe` doesn't impact the type system, so the problem is far easier.
One data point to support my claim above: development toolchains have had `@unsafe` on the standard library APIs mentioned in the vision document for a few months now, and it has had zero impact on anyone, because nobody has enabled the strict memory safety mode outside of compiler tests.
To be clear, we are working on improving the concurrency story as well. You wrote this before the vision on improving the approachability of data-race safety was posted, and I hope that helps. We can handle more than one thing at a time, and the strict memory safety vision here is significantly smaller in scope and impact.
It does not require annotations, because those libraries can continue to be compiled without strict memory safety checking. Clients of those libraries that choose to enable strict memory safety checking will still be able to use them, and the compiler will assume that every API is safe unless its declaration involves some `@unsafe` type. For example, `ContiguousBytes.withUnsafeBytes` traffics in `UnsafeRawBufferPointer`, so it is considered `@unsafe`.
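As a small illustration of that inference (the declarations are real Foundation/standard-library APIs; the diagnostic behavior is as described in the vision, not something you can observe on a shipping toolchain today):

```swift
import Foundation

let data = Data([0x01, 0x02, 0x03])

// `Data.count` involves no unsafe types in its declaration, so under
// strict checking it is considered safe with no annotations needed
// anywhere in Foundation:
let count = data.count

// `withUnsafeBytes` takes a closure over `UnsafeRawBufferPointer`, an
// unsafe type, so under strict checking this call would be flagged until
// the use is explicitly acknowledged:
let first = data.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) in
    buffer.first ?? 0
}
```

In other words, the unsafe types in a declaration are what drive the checking; the library itself needs no changes for its clients to opt in.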
There will be an audit trail stating that Foundation has not enabled strict memory safety checking, and that may create social pressure on Foundation to do so. That exercise will make the implicit `@unsafe` based on type information explicit, and require annotation for all other unsafe uses. It's likely that they'll want to add safe counterparts to some of their APIs, such as a `Span`-based version of the `withUnsafeBytes` mentioned above. However, this is additive and will not break existing clients.
Nothing is unusable in this subset. One can locally acknowledge and encapsulate any use of unsafe constructs.
I said this above as well, but to restate it here: if you don't have strict memory safety checking enabled, you will not get any warnings from it.
Some of the types we depend on to provide safe counterparts to unsafe APIs, for example the newly-introduced `Span` type, may have back-deployment issues. The changes described in this vision itself have zero impact on the ABI and no back-deployment concerns.
I'd expect that most app-level code won't care to enable this checking, unless the app is in a domain where the security requirements are such that strict memory safety is required. Low-level modules are more likely to enable this checking both because they are more likely to be doing the kind of work that benefits from stricter memory safety checking, and because they're more likely to have clients asking for it.
Sure, here are some examples:
- Libraries that are parsing untrusted data (HTML, XML, images, fonts, messages, cryptographic libraries, etc.) and want to be sure that memory-safety issues don't turn into security problems
- Clients of those libraries, such as messaging apps, web browsers, anyone that handles confidential personal information
- Low-level systems software that cannot afford to go wrong, such as firmware or OS kernels
Yes, this vision is distilled from many discussions with folks who care deeply about security and the impact of memory (un)safety. The auditability section I've recently added is in direct response to requests from security-minded folks who are responsible for ensuring that projects and organizations can make deliberate, incremental forward progress on memory safety.
Projects that have adopted this strict checking should be able to show that no memory-safety issues were introduced in the places covered by the strict checking, and that it was possible to enable this checking over large swaths of code without sacrificing other goals of the project (whether performance, developer productivity, or something else).
Doug