[Prospective Vision] Improving the approachability of data-race safety

I don't disagree with this! But I don't lay the blame for the current state of things on Swift 6 itself — Apple has known what they were doing with Swift, and had most of the language tools to fix things, for years. And instead, they are still shipping APIs that were known-broken under strict concurrency 2 years ago, without replacement, in Xcode 16.

And if the political thing to do is for Swift to change to allow the rest of Apple to stand still, that's one thing (but it'd be nice for say, Craig, to have the guts to come out and say so). But it's a shame, because the things Swift 6 brings to the table are important.

(And I don't have confidence that the proposed MainActor-only dialect will actually allow you to use all of Apple's APIs anyway; like I said, at least AVFoundation is definitely still broken. Feel free to try annotating all your code explicitly with @MainActor to try it out though, I'd love to hear from someone who's actually tried something like this)

8 Likes

A few thoughts on the matter:

  • async/await code gives the impression of being sequential while it is not. It is too easy to forget that after each and every await, everything you've checked before could be invalid:
if x == 1 {
    doSomething()
    if y == "2" {
        doSomethingElse()
        let z = await t()
        // hmm. should we recheck x & y here?
        // maybe it is important for doYetAnotherThing()?
        doYetAnotherThing(x, y, z)
    }
}
  • actors take this to a whole new level: say you have a big class and you decide to refactor it into several individual classes, and then you decide those classes should be individual actors. Boom: the previously easy-to-reason-about synchronous code that called one method from another now has to become asynchronous code that is hard to reason about (see the sketch after this list).

  • async/await (at least what we use out of the box, without customisation) forces multithreading onto what could have been purely single-threaded code. Callback- and "promise"-based approaches do not introduce or enforce multithreading – in that model you have to go one extra step and explicitly reach for (opt in to) external things (GCD, etc.) to introduce actual multithreading.
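
Here is a minimal sketch of the refactoring described in the second bullet (all type names are hypothetical): once Pricing becomes an actor, what used to be a plain synchronous call from Catalog turns into an await, i.e. a suspension point after which previously checked state may no longer hold.

// Sketch only: Pricing and Catalog are made-up types illustrating the refactoring.
actor Pricing {
    private var prices: [String: Int] = [:]
    func price(for item: String) -> Int { prices[item, default: 0] }
}

actor Catalog {
    private let pricing = Pricing()

    func price(of item: String) async -> Int {
        // Previously: `pricing.price(for: item)` with no await.
        // Now a cross-actor call, so the caller suspends here.
        await pricing.price(for: item)
    }
}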

In theory I very much like the premise of automatic compile-time checking of concurrency issues, but what we have now is far from ideal. Before, I was at least getting low-level data races and runtime crashes – those highlighted the problematic areas, and it was thus (in a peculiar way) easier to isolate and fix the issues. And the "pyramid of doom" was an automatic "slow down and think of an easier way" feedback mechanism. Now with actors and async/await we just (have to) add an extra "await", which doesn't produce the perceived increase in complexity that would otherwise tell you to stop and double-check... And while we no longer have the easy low-level races, we now have hard concurrency logic errors which are inhumanly hard to spot during peer review for anything but trivial short snippets.

I'm afraid it is impossible to not have such a bias at this stage...

6 Likes

What if logging is used across different threads in a server environment?

TBH you should be very sure when using @unchecked, and it's not hard to guess when that's the case.


Though of course I do understand everyone's experience is different, I agree with lots of replies here that the documentation and examples should be better.

1 Like

I think your framing is wrong here, but regardless of what I think, you're not offering any hard evidence for your sweeping, overly general statements. You are getting specific answers on specific points, but your position always seems to be that even if the answer you got is right, it's "too hard".

The remark that most people will never catch up is just arbitrary and unsubstantiated.

We all understand that NSImage and many other Apple-specific APIs have not caught up with strict concurrency checking, but that's not a problem with Swift, and extrapolating hard statements about Swift 6 from it is obviously illogical.

That's not true. The whole point is that considering some things "natural" is incorrect: they might seem natural, they feel natural, but they are not, and strict concurrency checking exposes the counterintuitive and hidden problems behind apparently simple things.


Now, it's perfectly possible that, in practice, something that is not technically concurrency-safe is used, in a particular place, in a particular case, in a safe way. But for the Swift compiler to understand that, it would need to do a very complex, long, and costly analysis of the whole program's code, including the code of all libraries (including closed-source ones), which is, for all intents and purposes, impossible. It is impossible in practice, and there's no point in complaining about it: the vision document actually mentions it in passing, but maybe there should be more emphasis on this basic fact.

The solution adopted by Swift is based on local analysis: an assumption is made about the rest of the world, given how the boundaries of the local scope are annotated, and then only the local code is analyzed in full. This means that if something has not yet been adapted and annotated properly, the local assumptions will not reflect practical reality, and that is certainly a problem, but it is, again, a practical problem, not an essential one. Several Apple-specific APIs have not caught up yet, but that's simply because time, effort, and resources are limited, and a real-world team must prioritize some things over others, which of course creates problems for those who rely on things that were not prioritized. The best we can do is:

  • point out what's missing;
  • look for other temporary solutions (one such workaround is sketched below).
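
For example, here is a minimal sketch of one temporary workaround (the module and type names are hypothetical, not real APIs): importing a not-yet-annotated library with @preconcurrency relaxes the Sendable diagnostics that originate from its declarations until the library itself is updated.

// Sketch only: LegacySDK, LegacyReport and LegacyLoader are made-up names.
@preconcurrency import LegacySDK

@MainActor
final class ReportScreen {
    // LegacyReport is assumed to be a non-Sendable class from LegacySDK.
    var report: LegacyReport?

    func refresh() async {
        // Moving the un-annotated LegacyReport value into main-actor code can
        // trigger Sendable diagnostics under strict checking; @preconcurrency
        // downgrades or silences the ones that involve LegacySDK's types.
        report = await LegacyLoader.shared.fetchReport()
    }
}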

Whining about missing APIs is not going to be useful in the long run, nor is it going to generate a productive conversation (in fact, it's going to create a bad environment and wrong impressions for newcomers), so I'd suggest instead providing info and suggestions about what's missing (so that the appropriate people can look into it) and exploring alternatives, while reporting on them so other people can benefit.


I think you are instead missing the fundamental issue that things are actually complex, even if they don't look that complex to you due to your limited view of the problem (which is as limited as anybody else's, to be clear).

Unhelpful documentation and error messages are very frustrating when they don't clearly refer to the thing that's missing in the local code area where they appear. That is an objective issue, not a personal perspective, and as such it is more important to solve and more impactful if improved upon.


There's a lot wrong with passive-aggressive remarks like these; they are not helpful for the conversation, they create a negative and unwelcoming environment, and as such they should probably be ignored. But I also feel this kind of frustration in other contexts (that have nothing to do with programming), so I'd like to comment on a couple of things:

  • everyone has biases and cognitive dissonance, at different levels, including you of course, so there's really no use in wondering about such things; I'd suggest instead doing more self-analysis to understand which of your biases are affecting your perspective, and how to overcome them; I don't mean to be patronizing, but wondering about "other invested people"'s cognitive dissonance is a sign that you're being uncharitable;
  • there's no such thing as a "regular dev", just as there's no such thing as "normal people"; it seems to me that you're projecting your very own personal experience onto the world of Swift developers, assuming that they look more or less like you, but they don't; there are other comments from other people on specific issues, but the tone and approach is very different, and that's going to be better for the conversation, and more helpful in relation to the feedback about the OP.
8 Likes

Right - this is the issue.

I care about 'in practice'

Folks trying to prove things with a compiler have to care about a much harder problem.

Which is more important? Making the developer do hard things so that the compiler can solve technically difficult problems - or making the 'in practice' case work nicely.

Some developers/teams would absolutely opt in to a lot of pain to get compiler guarantees that would be valuable for them. I wouldn't.

I just want to build apps and make a living - and I honestly don't care that my programs are not provably correct. After all - not having that provable guarantee has been good enough for obj-c and swift up to now. It's not like developers haven't been building useful programs thus far.

The current swift 6 vision seems to be that I'll take my medicine and be thankful even if the cure is worse than the disease.

4 Likes

It's not unsubstantiated: it's based on multiple interactions where even the folks who are closely plugged into the latest Swift Evolution changes struggle to understand the implications.

I would put out a challenge on this.

Document what developers need to know.

Not in a series of evolution proposals, but in a clear section (or sections) of the Swift language guide.

I suspect that the project of trying to document how the system works would do a great job of exposing how complex and convoluted it is.

But perhaps I'm wrong. Perhaps documentation will make it all clear. I'll be happy to admit I'm wrong if that's the case.

I'm obviously not in charge of the Swift vision process, but if I were, my edict would be:

Don't even start talking about the future until you have clearly documented the present.

7 Likes

Some proposals in this pitch will make the language immensely better for prototyping, small scripts, and truly single-threaded applications. It's indeed very cumbersome today to write small single-threaded programs where you don't care about parallel execution. So for this use case, the proposed changes sound like a massive win.

I am, however, unsure about the impact this will have on app codebases that are bigger than a few thousand lines of code, where I've (subjectively) perceived most of the negative feedback about Swift 6 coming from.

I'm not sure these kinds of nontrivial app projects would benefit from having everything on the Main Actor by default in the long run. It seems like we'd be trading one set of problems (cryptic compiler errors) for a different one (inadvertently blocking the Main Actor for long periods of time). To address the former, it's possible to make the compiler smarter and the error messages more helpful. But the compiler can't really help diagnose the latter as of now (can it?).

I fear that after months of development some apps could find themselves in a situation where performance gets bad enough that the app regularly drops frames, but there's no single critical path to blame; rather a sort of "death by a thousand cuts" of thousands of tiny little things all being executed on the Main Actor. Perhaps this concern is unsubstantiated; it's hard to know without actually trying to build a project like that.

Lastly, would there be some sort of diagnostic mode for transitioning a module between the two dialects? Seems like it could be a common issue: an app is built using Main Actor by default, but some time down the road the developers realize they use parallelism more than they thought at first, and want to transition to the nonisolated-by-default mode. What would be the approach for this?

1 Like

I think this is the foundational distinction in the topic being discussed. Concurrency checks aim to make such code more likely to be correct, because now a cold-blooded machine is the one analyzing it.

You've been talking about biases, and I have to agree that they are in place, but as said above, you have biases too. For instance, you've gotten used to certain ways of handling concurrency tasks that are convenient for you because you know them well. That's a bias. Not bad or good, it just exists.

But please try to account for two major things that are true for almost anyone at some point:

  1. Concurrency is hard. It is a complex topic that requires years of experience to reason about at even a decent level. Having the compiler on your side here is a long-term win for many. Yes, there are things to polish right now, but that's OK given the scale of the problem the language is attempting to tackle.
  2. We are all human, and that means we make mistakes. The more sophisticated the system and the more engineers involved in the problem, the more likely they are to miss something, simply because there are so many things to be aware of. Leaving the burden of analyzing such nuances to a machine relieves us of this in many ways.

Try to see the compiler as a co-pilot that suggests something might be wrong, while you give it hints about where you want some property (such as the ability to pass a value between "threads") to hold true. You'll be surprised how often you can stop fighting the compiler and start thinking with it.
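
As a minimal sketch of that kind of hint (the types are hypothetical): declaring a type Sendable is the programmer's claim that its values may pass between isolation domains, and the compiler then verifies the claim against every stored property.

// Sketch only: SearchQuery and SearchService are made-up types.
struct SearchQuery: Sendable {
    let text: String
    let limit: Int
}

actor SearchService {
    private var history: [SearchQuery] = []

    func run(_ query: SearchQuery) -> [String] {
        // `query` may hop into the actor because the compiler has checked
        // that SearchQuery really is Sendable.
        history.append(query)
        return ["result for \(query.text)"]
    }
}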

As for existing code, especially in the Apple ecosystem, some parts (maybe large parts) can be problematic to express in Swift 6. Nobody said you should rush into full Swift 6 mode, though. Keep using the compiler and adjust parts gradually.


As for the vision document, I'm really happy to see the road forward. I do have doubts about adding one more compiler mode, though; it might end up complicating things further. To me it seems that fixing nonisolated async functions to keep the caller's isolation is one of the biggest changes that could simplify using Swift 6, resolving a large portion of the struggles.

3 Likes

You know what, I believe the opposite!

I think many projects have far too much concurrency in them. This has two important effects, both relevant to our discussion here. First, it makes their behaviors extremely hard/impossible to model with Swift's concurrency system. Second, it introduces lots and lots of context switches while background threads do teeny tiny amounts of work to satisfy a need from the UI.

This, for me, was unexpected. I suppose I shouldn't have been surprised by this, but I did not see it coming. Stricter rules push people in the direction of simpler concurrency usage. And when you go that route, your designs don't just get simpler, they can get faster too!

This, of course, is not a rule. It's absolutely possible to accidentally shift too much long running synchronous work onto the main thread. That's a very valid concern. You might even be doing it on purpose, because you feel you have no choice given the compiler's constraints. But, this is why unsafe opt-outs exist. I have had to use them for performance-sensitive code. We'll always need escape hatches, forever. But I certainly think less is better!

But that said, I'm not arguing in favor of MainActor-by-default. I'm still kinda skeptical of that idea, even though I acknowledge that it could really help in a bunch of cases.

7 Likes

What does "in practice" mean here? When the developer tested it? For the majority of users? For all users? How can one be reasonably sure that code works for the majority of users without knowing what the issues are in the first place?

See, this is an example of a bad code solution that many would say works "in practice":

import Foundation

let formatter: DateFormatter = {
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
    return formatter
}()

formatter.date(from: "2024-01-10T15:58:10.918Z") // <--- Works?

The last line creates the correct Date all the time... unless the user has a locale with a non-Gregorian calendar (entire countries!). Yet it's not feasible to test all the code in every possible locale all the time.
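
For reference, the commonly recommended fix for fixed-format date strings is to pin the formatter's locale so the user's settings can't affect parsing:

// Pin the locale for fixed-format dates so a user's calendar/locale settings
// cannot change how the string is parsed.
formatter.locale = Locale(identifier: "en_US_POSIX")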

Similarly, in the concurrency world, it's impossible to test all the code in every situation. Requests that arrive in a specific order in my super-fast home office may arrive out of order for a user trying to use my app on a Metro station with a single bar of network connectivity.

How could I know how big of an issue these things will be if I'm unaware they exist at all? This is the kind of place where compilers can be helpful.


Oh, you're right. I'm tired of seeing code spawning tiny tasks to change one tiny UI-related value and then switching back:

func handle(error: any Error) {
    print(error)
    Task { @MainActor in
        showError = true
    }
    errorCounter += 1
}

When I see those I take it as a signal that something is not being modeled right (concurrency-wise), most often due to too much concurrency, as you say. Having everything on the main actor would probably help get rid of those tiny context switches.

I was thinking more about a scenario where lots of small or medium-sized work items (instead of one huge synchronous block) are added to the main queue faster than the main thread can clear them: seemingly minor things like logging, analytics, and initializing different modules that simply add up over time as a project grows.

Then task prioritization would kick in, and lower-priority tasks that the programmer (wrongly) assumed would execute instantly or near-instantly could unexpectedly take a noticeable amount of time before they get to run. Yet profiling in Instruments wouldn't reveal any single block stalling the main thread.

This is not an issue right now because most things end up being executed in the concurrent thread pool, and you need A LOT of work being done at once before you start running into contention issues there. There's almost always an idle thread in the pool that can be used immediately. But I wonder if this could become an instant performance downgrade for apps when switching to use the Main Actor by default (with dropped frames and minor but noticeable UI hitches).

3 Likes

I agree. Everything should be Main Actor by default unless you start using concurrency. I am strongly against executing async functions on the Main Actor by default.

I read this as wanting async to imply parallelism in the sense of DispatchQueue.global(…).async { }. While many people reach for concurrency to exploit parallelism, it is important to note that concurrency and the actor model do not actually require any sort of parallelism. In fact, you can pass a compiler argument to limit the concurrency runtime to a single thread.

On the topic of parallelism, now that custom executors have been in the concurrency model for a while, is it time to revisit the relationship between the main actor and the main thread? On Darwin (and Linux?), the main actor is strongly associated with the process’s main thread. But there are several ways in which this relationship falls apart:

  • Windows processes don’t have a main thread.
  • POSIX applications can exit their main thread without tearing down the entire program.
  • Darwin daemons are explicitly encouraged to exit their main threads by calling dispatch_main().

Is it necessary—or even possible—to separate the main actor from the main thread in order to make Swift concurrency approachable for more kinds of programs and a wider variety of platforms?

3 Likes

I'd like to make sure I understand this.

Are you saying you are specifically against the change to inherit isolation for non-isolated async functions? Or something more broad about all async functions?

I'm not so sure it was the actors themselves that made this kind of transformation complex. Moving a synchronous system to async will have similar state-management problems; you would just have to use different tools to manage them.

Help me understand this! I think it is possible to write code that uses async/await without any background threads being involved and without your code ever leaving the main thread.
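
As a minimal sketch (the function names are hypothetical, not from any earlier post): when every function involved is isolated to the main actor, the awaits suspend without the code ever hopping to a background thread, and the main thread stays free to do other work while suspended.

// Sketch only: both functions run on the main actor; no background threads needed.
@MainActor
func loadGreeting() async -> String {
    try? await Task.sleep(nanoseconds: 1_000_000_000)   // suspension point, no thread hop
    return "Hello"
}

@MainActor
func showGreeting() async {
    let greeting = await loadGreeting()                  // same isolation as the caller
    print(greeting)
}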

1 Like

Yes, in my ideal world async would work like that. I understand that Swift concurrency is different and this is not going to change.

In my opinion, async functions are the opposite of synchronous functions and should always run on a different thread than the caller. But since we have isolation and actors (not threads), if an async function has its isolation defined, of course it should run in that isolation (possibly the same isolation as the caller). So what I want is not achievable in Swift, and what I meant is that I am against the proposed change to inherit isolation for nonisolated async functions. I see no point in running an async function on the same thread as its caller.

1 Like

FWIW DispatchQueue.async also doesn't behave this way. A highly recommended pattern from the libdispatch team is to async to the queue you're already on.
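
A small sketch of that pattern (the queue label is made up): code already running on a serial queue enqueues its follow-up work on the very same queue instead of bouncing through a global queue.

import Dispatch

// Sketch only: a serial queue that owns some piece of state.
let stateQueue = DispatchQueue(label: "com.example.state")

stateQueue.async {
    // ... do some work owned by this queue ...
    stateQueue.async {
        // Follow-up work, still serialized on the same queue, executed later.
    }
}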

3 Likes

TBH I didn't know that.

As a joke, not intended to offend anyone, let's deprecate the async keyword, and introduce two new keywords instead: parallel and perpendicular.

parallel means that the function is executed in a different thread than its caller. It can do whatever it should, for as long as it takes, without a need to await, yield, or anything like that.

perpendicular means that the function is executed concurrently in the same thread as its caller, possibly main thread. In my humble and most likely wrong opinion, this is akin to putting sticks in the wheels - in this context the keyword perpendicular makes a lot of sense.

Another important way Swift Concurrency differs from Dispatch is the conscious design choice that the callee, rather than the caller, decides where it runs, because it knows its own semantics and requirements at least as well, and often better than, the caller.
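
A minimal sketch of that design choice (names are hypothetical): the callee declares its own isolation, so the call site simply awaits, and the runtime hops to the main actor because the callee requires it, not because the caller chose to.

// Sketch only: updateBadge decides where it runs; callers just await it.
@MainActor
func updateBadge(to count: Int) {
    print("badge is now", count)   // stand-in for touching UI state
}

func finishUpload() async {
    // This function is nonisolated; the hop to the main actor below happens
    // because the callee asked for it, then execution resumes here.
    await updateBadge(to: 3)
}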

4 Likes