[Pitch] Opt-in Strict Memory Safety Checking

Hm, I kind of have the opposite gut reaction: the implementor of the SerialExecutor should be paying extra attention to make sure that they implement things in a safe manner, and they should either mark their type as @unsafe or else assert that they're confident they've implemented it correctly and make it @safe(unchecked). It should be fine to use a well-implemented custom executor if that executor is confident in its own soundness.

This is more or less what we've done by saying actors without a custom executor are safe: this relies on the assumption/assertion that the default executor does not admit races!

1 Like

Right. I think this means we would have to introduce a ~Escapable equivalent to UnownedSerialExecutor (that would be used the same way) and a serialExecutor property that returns one of those, with a lifetime dependency on self.

Yes, this is absolutely the goal. I don't think we can get there without introducing some lifetime-safe version of unownedSerialExecutor.
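
To make that concrete, here is a rough strawman of what a lifetime-safe counterpart might look like; the type name, property name, and the @lifetime spelling are all invented for illustration:

struct SerialExecutorRef: ~Escapable {
  // Same payload as UnownedSerialExecutor, but ~Escapable prevents the
  // reference from being stored or returned beyond the actor's lifetime.
  let unowned: UnownedSerialExecutor
}

// Strawman replacement for the `unownedExecutor` requirement: the
// returned reference carries a lifetime dependency on `self`.
@lifetime(borrow self)
nonisolated var serialExecutor: SerialExecutorRef { get }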

Elsewhere in the standard library, we end up using @safe(unchecked) to say "trust me". The default actor executor is mostly hidden from the user, so we don't have a place where we explicitly say that. Where should it be, though? On the conformance to SerialExecutor, e.g., something like this:

struct MyExecutor: @safe(unchecked) SerialExecutor { ... }

and is that something triggered by an @unsafe on SerialExecutor or one of its members, or something else?

Doug

2 Likes

Right, this makes total sense, but I still think this is notionally what we've done: we say the default executor is safe not because it's impossible for there to be a bug, but because we have taken a lot of care with its implementation and are highly confident in it being a safe abstraction over unsafe primitives.

I think yes on both counts. I could also see an argument for the @unsafe being on just enqueue(_:) as the core thing that must be implemented safely to make the executor safe overall, though arguably the check/precondition methods are equally important to get right, so perhaps it does make sense to apply it to the whole protocol.

My concern with this is that it infects everything that uses SerialExecutor, when really that's not what we want: SerialExecutor by itself is fine, it's the conformance where we need to be extra careful.

Doug

1 Like

Gotcha, in that case marking some or all of the individual requirements makes sense to me.

1 Like

A couple of other protocol requirements on SerialExecutor can also lead to data races if implemented incorrectly:

func isSameExclusiveExecutionContext(other: Self) -> Bool

    /// This method must be implemented with great care, as wrongly returning
    /// `true` would allow code from a different execution context (e.g. thread)
    /// to execute code which was intended to be isolated by another actor.
    ///
    /// This check is not used when performing executor switching.
    ///
    /// This check is used when performing ``Actor/assertIsolated()``,
    /// ``Actor/preconditionIsolated()``, ``Actor/assumeIsolated()`` and similar
    /// APIs which assert about the same "exclusive serial execution context".

func checkIsolated()

    /// During executor comparison, the Swift concurrency runtime attempts to compare
    /// current and expected executors in a few ways (including "complex" equality
    /// between executors (see ``isSameExclusiveExecutionContext(other:)``), and if all
    /// those checks fail, this method is invoked on the expected executor.
    ///
    /// This method MUST crash if it is unable to prove that the current execution
    /// context belongs to this executor. At this point usual executor comparison would
    /// have already failed, though the executor may have some external tracking of
    /// threads it owns, and may be able to prove isolation nevertheless.
    ///
    /// A default implementation is provided that unconditionally crashes the
    /// program, and prevents calling code from proceeding with potentially
    /// not thread-safe execution.
    ///
    /// - Warning: This method must crash and halt program execution if unable
    ///     to prove the isolation of the calling context.

The default implementations of both of these requirements are safe, because they are conservative.

Should we say that all of these requirements must be marked @unsafe (in the protocol definition), and unless they are all marked @safe(unchecked) in the conformance, the conformance itself will be considered unsafe?
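
Concretely, that might look something like this sketch (strawman syntax, with requirement bodies elided):

// In the standard library:
protocol SerialExecutor: Executor {
  @unsafe func enqueue(_ job: consuming ExecutorJob)
  @unsafe func isSameExclusiveExecutionContext(other: Self) -> Bool
  @unsafe func checkIsolated()
}

// In user code:
struct MyQueueExecutor: SerialExecutor {
  // Each @safe(unchecked) asserts that this requirement is implemented
  // soundly; if any were missing, the conformance as a whole would be
  // considered unsafe.
  @safe(unchecked) func enqueue(_ job: consuming ExecutorJob) { ... }
  @safe(unchecked) func isSameExclusiveExecutionContext(other: Self) -> Bool { ... }
  @safe(unchecked) func checkIsolated() { ... }
}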

Another idea might be to introduce some kind of runtime bookkeeping to ensure all tasks on a given serial executor are, in fact, executed serially. It would be unfortunate to have to introduce overheads, but we do for ARC and runtime bounds checking. It might be possible to implement efficiently - if the serial executor is operating correctly, that bookkeeping data will never be under contention, at least.
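
As a very rough sketch of the kind of bookkeeping I mean (using the Synchronization module's Atomic; the wrapper itself is invented):

import Synchronization

// Hypothetical: the runtime flips this flag around every job it runs on a
// serial executor. If the executor really is serial, the compare-exchange
// never fails and the flag is never contended.
final class SerialityChecker {
  private let running = Atomic<Bool>(false)

  func beginJob() {
    let (exchanged, _) = running.compareExchange(
      expected: false, desired: true, ordering: .acquiring)
    precondition(exchanged, "two jobs running concurrently on a serial executor")
  }

  func endJob() {
    running.store(false, ordering: .releasing)
  }
}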

It's unfortunate to have to do that, because that means that any code interacting with these requirements must deal with them as-if they are unsafe. They aren't, really, but the conforming type has to be extra careful in implementing them.

The SerialExecutor protocol is starting to feel very special here. Where else in the language can you do something as benign as conforming a type to a protocol, where you aren't trafficking in unsafe types anywhere, but which can undermine the memory-safety properties of the language? I'm almost inclined to say that this protocol is special enough to the language that we could require the conformance to be explicitly marked @unsafe or @safe(unchecked) but otherwise leave it unchanged.

Doug

1 Like

I spent some more time trying to break Swift (for science).

  1. Actor.unownedExecutor might not always return the same executor

    In order for a custom actor executor to be safe, the executor of course needs to work correctly, the returned reference needs to have the correct lifetime, and the runtime always needs to dispatch to the same executor.

    Example
    actor MyActor {
        let executorA = DispatchSerialQueue(label: "ExecA")
        let executorB = DispatchSerialQueue(label: "ExecB")
        var value = 50
    
        nonisolated var unownedExecutor: UnownedSerialExecutor {
            Bool.random() ? executorA.asUnownedSerialExecutor() : executorB.asUnownedSerialExecutor()
        }
    
        func reset() { value = 50 }
        func increment() { value += 1 }
        func decrement() { value -= 1 }
    }
    
    let instance = MyActor()
    
    for _ in 0..<100 {
        await instance.reset()
        await withDiscardingTaskGroup { group in
            for _ in 0..<50 {
                group.addTask {
                    try! await Task.sleep(nanoseconds: 200)
                    await instance.increment()
                }
            }
            for _ in 0..<50 {
                group.addTask {
                    try! await Task.sleep(nanoseconds: 200)
                    await instance.decrement()
                }
            }
        }
        let result = await instance.value
        print("\(result) \(result != 50 ? "[!]" : "")")
    }
    

    One way to improve this could be to make manually-written unownedExecutor implementations @unsafe, and provide an alternative as a macro directly on a storage declaration:

    actor MyActor {
    
      @ActorExecutor
      let executor = MySerialExecutor()
    
    }
    

    In order to uphold the actor's part of the contract, I think it would be sufficient for the macro to validate that the storage is a let constant, so it always returns the same executor instance.
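
    Expansion-wise, the macro wouldn't need to generate much. Hypothetically (the exact spelling of the safety assertion is up to the proposal):

    // Hypothetical expansion of @ActorExecutor inside MyActor:
    @safe(unchecked)  // justified: `executor` is a `let`, so the same
                      // instance is returned for the actor's lifetime
    nonisolated var unownedExecutor: UnownedSerialExecutor {
      executor.asUnownedSerialExecutor()
    }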

  2. GlobalActor.shared might not always return the same actor.

    Same principle as above.

    Example
    @globalActor
    struct BadGlobalActor: GlobalActor {
    
        actor ActorType {}
    
        // New instance every time.
        static var shared: ActorType { ActorType() }
    }
    
    @BadGlobalActor
    final class MyClass {
        var value = 50
    
        func reset() { value = 50 }
        func increment() { value += 1 }
        func decrement() { value -= 1 }
    }
    
    let instance = await MyClass()
    
    for _ in 0..<100 {
        await instance.reset()
        await withDiscardingTaskGroup { group in
            for _ in 0..<50 {
                group.addTask {
                    try! await Task.sleep(nanoseconds: 200)
                    await instance.increment()
                }
            }
            for _ in 0..<50 {
                group.addTask {
                    try! await Task.sleep(nanoseconds: 200)
                    await instance.decrement()
                }
            }
        }
        let result = await instance.value
        print("\(result) \(result != 50 ? "[!]" : "")")
    }
    

    Again, a macro on a storage declaration seems like a good solution to me. Since the existing API is named shared and that's still the best name for a singleton, I wonder if the macro could just validate that the storage is a let constant and mark it as safe.

    @globalActor
    struct GoodGlobalActor: GlobalActor {
    
        actor ActorType {}
    
        @GlobalActorExecutor    // Just checks that this is a 'let' constant.
        static let shared: ActorType = ActorType()
    }
    
2 Likes

The thing about those functions is that they are not intended to be called by regular code - they are called by the runtime, or by wrappers such as Actor.assertIsolated().

So I think we could get away with just marking them as unsafe, even if it's not quite correct.

1 Like

Do we think this problem is general enough that all protocols deserve two 'flavors' of @unsafe, e.g., @unsafe(conform) and @unsafe(use)? It doesn't seem that outlandish to me for a protocol author to say: "this library makes the assumption that conformers implement the methods correctly in ways that cannot be statically verified, and if conformers get it wrong there can be memory safety issues". But the assumption on the consumer side is that all implementors have ensured their implementations are fully safe. And if a particular implementor is unable to guarantee correctness of their conformance, and requires their client to hold it in a particular way to guarantee safety, then they ought to mark the conformance, or perhaps the whole type, as @unsafe.

As an aside: what exactly does it mean for a conformance to be unsafe? If I have:

struct S: @unsafe P {}

where does that get diagnosed for clients? At any point where we do an S-to-P conversion? At any point where a requirement of P is used on concrete S? Anywhere else?

1 Like

Defining conformance to a protocol always promises to faithfully implement the semantics of that protocol, or things go badly... but generally "badly" doesn't mean there's a memory-safety issue. SerialExecutor seems fairly special in this manner. Your @unsafe(conform) suggestion is a good way to generalize from SerialExecutor to other protocols that might have this behavior. I guess I'd like to understand whether there are more such protocols out there.

It's essentially anywhere that you need the conformance. I've added an example to the proposal document as well, but effectively it's this:

func acceptP<T: P>(_: T.Type) { }

func passUnsafe() {
  acceptP(S.self) // warning: use of @unsafe conformance of 'S' to protocol 'P'
}

It's the same general approach we use to diagnose conformances whose availability is more restricted than the type or protocol.

Doug

1 Like

Hi all,

I built some toolchains that implement this proposal. With these, you'll need to enable two experimental features: AllowUnsafeAttribute and WarnUnsafe, and you'll get diagnostics for every use of unsafe constructs described in the proposal.

The diagnostics will look something like this:

swift/stdlib/public/core/EmbeddedRuntime.swift:397:13: warning: global function 'swift_retainCount' involves unsafe code; use '@safe(unchecked)' to assert that the code is memory-safe
395 |
396 | @_cdecl("swift_retainCount")
397 | public func swift_retainCount(object: Builtin.RawPointer) -> Int {
    |             `- warning: global function 'swift_retainCount' involves unsafe code; use '@safe(unchecked)' to assert that the code is memory-safe
398 |   if !isValidPointerForNativeRetain(object: object) { return 0 }
399 |   let o = UnsafeMutablePointer<HeapObject>(object)
    |           |                              `- note: call to unsafe initializer 'init(_:)'
    |           `- note: reference to unsafe generic struct 'UnsafeMutablePointer'
400 |   let refcount = refcountPointer(for: o)
    |                  |                    `- note: reference to let 'o' involves unsafe type 'UnsafeMutablePointer<HeapObject>'
    |                  `- note: call to global function 'refcountPointer(for:)' involves unsafe type 'UnsafeMutablePointer<Int>'
401 |   return loadAcquire(refcount) & HeapObject.refcountMask
    |          |           `- note: reference to let 'refcount' involves unsafe type 'UnsafeMutablePointer<Int>'
    |          `- note: call to global function 'loadAcquire' involves unsafe type 'UnsafeMutablePointer<Int>'
402 | }
403 |

and should have Fix-Its to add @unsafe or @safe(unchecked) as appropriate to silence the diagnostics. Give it a spin if you're interested, and I would of course appreciate feedback on how it goes.

Doug

3 Likes

I tried it. I have a few comments:

Steer towards @unsafe instead of @safe(unchecked)

In my experience with -fbounds-safety and related efforts, we have always regretted volunteering unsafe solutions in diagnostics and fixits. If the fixit says "you must use __unsafe_forge_bidi_indexable to perform this conversion", people will do that instead of looking for safe solutions to their problems. I want to advise that saying "you should use @safe(unchecked) here" will result in people who don't understand what they're doing (and whom we must account for) using @safe(unchecked) incorrectly.

I can think of 3 reasons that the diagnostic should steer people towards @unsafe instead of @safe(unchecked):

  • @safe(unchecked) is less safe than @unsafe. For auditors, it's better to somewhat overestimate the set of unsafe things than to somewhat underestimate it.
  • It is a source break to replace @safe(unchecked) with @unsafe, but the other way around isn't.
  • Through progressive disclosure, users who do not read documentation are more likely to learn about @safe(unchecked) on their own after butting their heads on @unsafe than the other way around.

Add an optional "precondition" string parameter/documentation to @unsafe

@safe(unchecked) means "this will never cause memory corruption no matter how you use it (trust me)". On the other hand, @unsafe means "this can cause memory corruption if you use it wrong". We should have a built-in or near-built-in way to help people hold @unsafe symbols correctly. This could take the form of a DocC attribute on the function, or a string parameter on the @unsafe attribute, like @unsafe(mustEnsure: "the index is within bounds"). "mustEnsure" cannot be an expression (in the general case) because what you must ensure is often not expressible in code. For instance, while you can type out the expression to check for bounds safety, you couldn't type out an expression to check for lifetime safety.
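
On a declaration, that might read like this (hypothetical spelling, illustrative function):

// Hypothetical: the attribute spells out the caller's obligation.
@unsafe(mustEnsure: "`pointer` refers to at least `count` initialized elements")
func sum(_ pointer: UnsafePointer<Int>, count: Int) -> Int {
  var total = 0
  for i in 0..<count {
    total += pointer[i]
  }
  return total
}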

unsafe as a keyword to use unsafe symbols

I ran a quick survey with code auditors to see how they feel about this and the biggest point that came up is that they would like to know immediately where the unsafe is happening (the exact words: "a human reviewing the code should have a clear, definitive understanding of what is unsafe"), which the current implementation only does at a very coarse granularity. One Swifty way this could be done is with an "unsafe" effect keyword. For instance, with the current toolchain, this builds:

@unsafe func test(x: inout CInt) {
	withUnsafePointer(to: &x) { ptr in
		print(ptr)
	}
}

If test wasn't so trivial, we could lose track of the unsafe operations that are being performed, or it could be easy to mistakenly add a new one somewhere without noticing and cause issues with that. As I understand it, these are the same reasons that we require try and await keywords when calling functions that can throw or suspend. If we accept that memory safety is at least as important as knowing what may or may not throw in a function already marked throws, it seems to me that the consistent course of action would be that you need to mark the expressions where unsafety can occur. (With the same rules as try/await, where any number of unsafe things can occur in the sub-expression.)

I realize that this is not always possible without very intrusive language changes that are probably not warranted. For instance, we probably don't want methods in protocols to need to be decorated with @unsafe, and all uses to be unsafe myProto.stuff(), just so that there can hypothetically be implementations of stuff() that are unsafe (especially when it comes to interoperation with modules that don't enable strict memory safety and might never bother to annotate protocol members). The rules could be:

  • substituting a generic parameter with a type that has an unsafe implementation of that protocol would be an unsafe operation
  • turning a concrete type into an existential with an unsafe conformance would be an unsafe operation

It would still be possible to dynamically cast your way to an unsafe implementation of a protocol without using unsafe, which is unfortunate but probably fine.
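
To illustrate both rules and the dynamic-cast hole (Q and P are placeholders):

struct Q: @unsafe P { ... }

func generic<T: P>(_ value: T) { ... }

unsafe generic(Q())              // substituting T == Q uses the @unsafe conformance
let erased: any P = unsafe Q()   // forming the existential uses it as well

// The hole: laundering through Any requires no `unsafe` marker.
let anything: Any = Q()
let recovered = anything as? any P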

I think it's unlikely that an unsafe function would often also be throwing or async. Most unsafe operations are shortcuts to remove boilerplate, and async and throws both add a lot of boilerplate. As a result, I don't think that you would often run into something like let bar = unsafe try await funnyOperation().

Has that been considered?

9 Likes

I like the proposal!

A few considerations.

  1. There will be some extra noise for the "@unsafe" / "@safe(unchecked)" declarations but that's probably inevitable.
  2. Once this is well established we may remove "Unsafe" from the type/function names as that would be redundant.
  3. Will it be possible to have a similar marker for closures, or is it only for functions?
  4. Technically a function that calls another function that's marked "@safe(unchecked)" is also "safe unchecked" on its own, although I could see that the intention here is to have some non-propagating (non-viral) safety marker, so it's probably okay.
  5. When auditing code for safety we'd need to search for both terms, I wonder if we could have used some "@unsafe(xxxx)" instead of "@safe(unchecked)" with "xxxx" being something to indicate: "we believe the code to be safe but that's not proven by the compiler".
  6. For C headers we might have some ASSUME_SAFE_UNCHECKED_START ASSUME_SAFE_UNCHECKED_END brackets to ease this mode adoption.
  7. Is there a way to trap on (really) unsafe contracts? If there is - we could probably trap inside functions marked with "@safe(unchecked)" to make them fail deterministically.
  8. I'd like those warnings to be errors by default, with a way to opt-out to make them warnings.

Yes, suggesting @safe(unchecked) at all can lead to overuse. One way we've tried to make such suggestions discoverable is to put them into notes, in our order of preference, with scary text where needed. For example, trying to access a member of an optional has these notes:

1 | func f(s: String?) {
2 |   let x = s.uppercased
  |           |- error: value of optional type 'String?' must be unwrapped to refer to member 'uppercased' of wrapped base type 'String'
  |           |- note: chain the optional using '?' to access member 'uppercased' only for non-'nil' base values
  |           `- note: force-unwrap using '!' to abort execution if the optional value contains 'nil'
3 | }

We could do the same, e.g. the first note is "mark @unsafe if this function might ever violate memory safety on any input" and the second is "mark @safe(unchecked) if the implementation has been verified to present a memory-safe interface for all inputs" (or whatever). I don't think we should hide @safe(unchecked) more than that, though.

Sure, although it's worse for users because @unsafe propagates out, and an extraneous @unsafe can cause unnecessary work.

We've previously taken the stance that new warnings don't count as a "source break", because they aren't changing semantics and can be suppressed / ignored by clients.

This makes sense, and we already have it for @safe(unchecked). I think I'd want to call it "message" so that the language isn't leaning too heavily on what should be here, but I think it's good practice to say how to correctly use the function beyond what's in the normal signature.

FWIW, these are already effectively the rules we have for unsafe types and conformances. With my branch, you can see uses of @unsafe conformances being tagged as unsafe in, e.g.,

16 | func testInt(i: Q) {
   |      `- warning: global function 'testInt' involves unsafe code; use '@safe(unchecked)' to assert that the code is memory-safe
17 |   acceptP(i)
   |   `- note: @unsafe conformance of 'Q' to protocol 'P' involves unsafe code
18 |   acceptAnyP(i)
   |              `- note: @unsafe conformance of 'Q' to protocol 'P' involves unsafe code
19 | }

where:

struct Q: @unsafe P {
  @unsafe typealias Ptr = UnsafeType
}

func acceptP<T: P>(_: T) { }
func acceptAnyP(_: any P) { }

Right. We'd need to extend the runtime to account for this.

I'm not sure I believe that. To get a sense of the @unsafe and @safe(unchecked) surface area of the standard libraries, I went ahead and auto-applied fix-its throughout. The pull request is here, and there are definitely a bunch of throws functions on Unsafe(Mutable)BufferPointer and friends. Remember that it isn't just @unsafe functions that are unsafe: it's generic code that's been given an unsafe type like UnsafeMutableBufferPointer.

Doug

I'm trying to decide how different your suggestion of an unsafe effect is from an unsafe { ... } block. I don't want unsafe to be part of a function's type the way other effects (throws and async) are, because we don't want to create a coloring problem here: that explodes the complexity and makes the interoperability and incremental-rollout story much, much harder.

Is the idea that treating unsafe like an effect at the use site would make it easier to learn and understand because it's like try and await? I guess it also means that you only get an expression inside the unsafe, which would force you to limit the scope of the unsafe code. On the other hand, we still might want to restrict it so that it doesn't "escape" any unsafe types, e.g., we'd want to disallow the dreaded:

let ptr = unsafe array.withUnsafeBufferPointer { $0 }

So, is your intent that the unsafe effect is essentially "this one expression can have unsafe", to force the scope of the unsafety to be limited down to an expression, or am I misunderstanding?

Doug

1 Like

Disagree. Making stuff like this opt-out will only frustrate and add friction to the workflow of the average Swift user. Stuff like this should be explicitly opt-in to maintain the goal of "progressive disclosure".

1 Like

Correct me if I'm wrong, but when it comes to offering chained optionals or force-unwrapping, I don't know that compiler implementers have an incentive to propose one over the other. Engineers need to familiarize themselves with both solutions, and the feedback they get from running the program (and from crash logs) helps them build experience and make the right decision most of the time. If engineers make the wrong decision, it's not the compiler's problem, either. It makes sense to propose both and just let things happen.

When it comes to security, this feedback loop does not exist. Memory corruption, even when it happens 100% of the time, can easily go undetected. (This is common enough that we gave it a name: "casual memory corruption"). Memory corruption bugs that turn into exploits are usually triggered under circumstances that are only exercised for the purpose of an attack. Whereas you can't really get force-unwrapping wrong and never know, you definitely can get @safe(unchecked) wrong and only find out when your program is abused in an exploit. Engineers overwhelmingly will not learn that they are using @safe(unchecked) wrong from field results.

At the same time, we're doing this because we kind of want it to be the compiler's problem that engineers continue to write memory safety bugs. This points to being extra careful about what we propose in diagnostics. Suggesting @safe(unchecked) because @unsafe is worse for users seems like it's prioritizing convenience over safety in the safety-first mode.

I don't want to battle on that hill because it's a really small hill, but I'd be more comfortable if we waited for feedback telling us these diagnostics really should include @safe(unchecked) as an alternative.


Security people are going to encourage people who enable strict memory safety to upgrade the warning to an error. If we document that swapping @safe(unchecked) and @unsafe are not source breaks, somebody is going to do it and break somebody else's build. This will be a bad look.


I'm happy with any label for a message parameter. FWIW, while I think a message parameter is valuable in @unsafe, I don't really know what I would do with the message in @safe(unchecked). API users don't really have any reason to read it, and auditors would get the same benefits (if any) from comments.


I see the point, but I also think that the standard library is skewed. I don't know if there's an easy way to measure this, but a good proxy would be to compare the occurrences of withUnsafeBufferPointer against try withUnsafeBufferPointer at large. I expect that we'll find a lot of try, and a lot more without it. Although, this is not necessarily useful to debate.


Yes, the intent is to be more precise about what is unsafe. (Analogously: you need to use try even inside do { ... } catch { ... } blocks and functions that rethrow.) In a big change, it's easy to accidentally add an unsafe call inside an existing unsafe block without it being noticed, and review tools have historically been bad at helping with this kind of thing. It's really convenient for auditors that they can tell at a glance exactly where the unsafety is. I agree that we don't want to create a coloring problem.

I'm not sure I understand the let ptr = unsafe array.withUnsafeBufferPointer { $0 } part. Leaving aside the obvious problem, the use of unsafe looks fine to me there. Even if it doesn't stop anything, it's doing its job if it draws attention.

3 Likes

The block version of unsafe could be unsafe do, which would also match a previously suggested syntax for applying async and try to a block.
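
For example (strawman syntax):

// Strawman: one `unsafe do` block covers several unsafe operations,
// mirroring the previously pitched `try do` / `async do` spellings.
unsafe do {
  let buffer = UnsafeMutableBufferPointer<UInt8>.allocate(capacity: 64)
  defer { buffer.deallocate() }
  buffer.initialize(repeating: 0)
}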

1 Like

One way we've tried to make things like this discoverable is to put suggestions like this into notes with our order of preference and scary text if it needs it.

This would be much more meaningful if notes were surfaced in a useful way to the vast majority of Swift users. That is, Xcode users. Before relying on notes to convey useful information, please ensure we can actually see them.

Also, message isn't really a useful label in @unsafe; @unsafe(unless:) would make it clearer both what authors should put in there and what users should do.

2 Likes

Yes, although we do have some choices here. For example, a declaration like this:

func withUnsafeBytes<R, E>(body: (UnsafeRawBufferPointer) throws(E) -> R) throws(E) -> R {
  // ...
}

The proposal and implementation currently require that this be declared @unsafe in the strict checking mode, because it has an @unsafe type in its type signature. But we don't have to do this: it's already the case that using this API is considered unsafe because of the unsafe type in the signature, so we could say that the @unsafe is inferred (not required). That reduces the annotation burden for code that's using unsafe types, but potentially makes it less clear what part of a program is unsafe.

As for @safe(unchecked), that's something that can't go away, although perhaps it would be an effect or block that goes into the definition and isn't seen in the interface.

I wouldn't want to do this, because I don't think " well established" means that everyone has enabled the strict checking, and taking away the "Unsafe" means that such code won't be as obvious about the unsafe behavior it does have.

Closures are unnamed, so they aren't referenced from anywhere else, making an @unsafe marking on them less useful.

Yes, that's intentional. We don't want this to be viral once someone has attested that their use of unsafe constructs presents a safe interface.

The xxxx here would have to somehow negate the "un", i.e., @unsafe(sanitized) or @unsafe(encapsulated). I'm open to this direction if we can find a word where the double negative doesn't feel forced.

What would this do? Right now, imported C follows the same conventions as Swift code that hasn't adopted strict memory safety, so a C construct is considered unsafe only if it deals with unsafe types.

I don't know what this would mean in practice. Do you have an example in mind?

Warnings make it easier to incrementally adopt this feature and maintain it as (e.g.) the packages you depend on add new @unsafe annotations. I don't feel strongly about the default here.

I agree that it is prioritizing convenience (as well as discoverability). I don't know to what extent making @safe(unchecked) harder to learn about will actually prevent folks from overusing it.

Okay, so it helps us be more precise as to what is actually unsafe in the code, and fits well with existing expression-level constructs (try and await). Grabbing a random @safe(unchecked) function out of my pull request, let's see how it would look:

  static func getProcessName(pid: pid_t) -> String {
    let buffer = unsafe UnsafeMutableBufferPointer<UInt8>.allocate(capacity: 4096)
    defer {
      unsafe buffer.deallocate()
    }
    let ret = unsafe proc_name(pid, buffer.baseAddress, UInt32(buffer.count))
    if ret <= 0 {
      return "<unknown>"
    } else {
      return unsafe String(decoding: buffer[0..<Int(ret)], as: UTF8.self)
    }
  }

As we probably expected, it's going to be fairly noisy to use unsafe types like UnsafeMutableBufferPointer with this approach. That's probably a good thing: it's indicating everywhere that we have an unsafe operation, and "count of uses of unsafe" is a decent proxy for how many unsafe operations there are.

Regarding the discussion about the compiler suggesting @safe(unchecked) (or not): if we're to mark unsafe expressions with unsafe as you're suggesting, would the compiler suggest the addition of that keyword as a Fix-It? I'd assume "yes", but my other expectation was that @safe(unchecked) would go away entirely if we had this unsafe block / expression marking. That puts us back at the place where something like the getProcessName function above uses unsafe in its implementation but is not itself @unsafe. I'm happy with that, but I want to point out that it does make the "encapsulation" of unsafety implicit again, with a Fix-It based workflow potentially making more things safe than it should.

One of the thoughts from earlier in the thread was that an unsafe block could prohibit any unsafe types from escaping, so it would catch this particular issue.

What specific IDEs do is outside the scope of the proposal, but FWIW that example I gave with optionals looks like this in Xcode:

[screenshot: Xcode showing the optional-unwrapping error with both notes inline]

I think we're better off following the precedent of @available(..., message: ...) than trying to be clever with the name of this label, but I don't care that much overall about the name.

Doug

3 Likes