True, an async entry point is unlikely to make sense in a JS environment. However, there are cases where users can't control whether the entry point is async or not. A clear example is testing frameworks: the entry points for swift-testing and XCTest are provided by the framework (or, more precisely, by SwiftPM).
So unfortunately it's difficult to say we can simply ask users not to use async entry points in JS environments.
I'm already working on removing it and using the type-based approach Joe suggested, which actually works quite nicely.
That's not quite right either. If you are inside a task, then the first one returns the default executor, right? So it's this:
- default actor + no task executor -> default executor
- default actor + task executor -> task executor
- custom actor executor + no task executor -> custom actor executor
- custom actor executor + task executor -> custom actor executor
The only time that var currentExecutor: (any Executor)? { get } returns nil is if you are not inside a task, right?
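The four cases above can be sketched as a small resolution function. These names and types are purely illustrative models of the runtime's state, not real Concurrency APIs:

```swift
// Illustrative model only: this enum stands in for runtime state and is
// not a real Concurrency API.
enum ResolvedExecutor: Equatable {
    case defaultExecutor
    case taskExecutor
    case customActorExecutor
}

// Resolution order sketched from the four bullet points above:
// a custom actor executor always wins, then the task executor preference,
// otherwise the default executor.
func resolveCurrentExecutor(hasCustomActorExecutor: Bool,
                            hasTaskExecutorPreference: Bool) -> ResolvedExecutor {
    if hasCustomActorExecutor { return .customActorExecutor }
    if hasTaskExecutorPreference { return .taskExecutor }
    return .defaultExecutor
}
```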
Yes I agree that we should do the former and try one executor after the other.
That's not how it works in the runtime so I'm attempting to clarify that.
When running "on a default actor", the actor is the executor. We happen to submit its jobs to the global pool, but it is technically not true that the global concurrent executor is "the executor we're on". E.g. when comparing executors: if you used the unownedExecutor of a default actor as the executor of another actor, the runtime compares them, sees the same identity (the original actor), and therefore there's no suspension and the "switch" succeeds, and things like that.
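The "using the unownedExecutor of a default actor as the executor of another actor" situation can be sketched with the SE-0392 custom actor executor hook. This is a simplified illustration of the setup, not of the runtime's comparison internals:

```swift
// Actor A is a plain default actor; its unownedExecutor identity is the
// actor itself.
actor A {}

// Actor B adopts A's executor, so the runtime considers both actors to have
// the same executor identity and can skip the "switch" when hopping
// between them.
actor B {
    nonisolated let shared: UnownedSerialExecutor

    // SE-0392 customization point: return the shared executor.
    nonisolated var unownedExecutor: UnownedSerialExecutor { shared }

    init(sharing other: A) {
        self.shared = other.unownedExecutor
    }
}
```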
Not only is it the runtime's behavior, it is also how @al45tair is suggesting this property is implemented.
If we were to choose to return the global executor here... we would have to think about this really deeply, but perhaps we could?
It would not be a SerialExecutor, which may prevent all the issues I am worried about (incorrect isolation assumptions via assumeIsolated calls).
I think we are talking past each other. I understand that in the runtime there is no real executor for default actors. However, we are talking about two things here:
- What the proposed currentExecutor property should return
- How Task.yield()/sleep() should behave
Now in the proposal currentExecutor is documented like this:
/// Get the current executor; this is the executor that the currently
/// executing task is executing on.
public static var currentExecutor: (any Executor)? { get }
If we are inside a task there is always going to be a current executor, because in the end we need to run the job somewhere, regardless of whether we are isolated or whether the actor has a custom executor. Keep in mind this API is on Task and not Actor. So in my opinion the logic for currentExecutor should be like this:
- If we are isolated and the actor has a custom serial executor return it
- If there is a task executor preference return it
- Return the default executor, i.e. the value of the proposed var defaultExecutor: any TaskExecutor
Now for Task.sleep() I agree with @al45tair's previous proposal that we should go through the list in the same order and pick the first that supports scheduling.
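The "go through the list and pick the first that supports scheduling" idea can be sketched as follows. SleepCandidate and its supportsScheduling flag are illustrative stand-ins, not proposed API:

```swift
// Hypothetical sketch of the ordering for Task.sleep(): walk the candidate
// executors in the same order as currentExecutor and take the first one
// that can actually schedule timed work.
struct SleepCandidate {
    let name: String
    let supportsScheduling: Bool
}

func pickScheduler(from candidates: [SleepCandidate]) -> SleepCandidate? {
    candidates.first { $0.supportsScheduling }
}
```

For example, if the current custom actor executor cannot schedule timed work but a task executor preference can, the sleep would be scheduled on the task executor.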
Yeah I think we can agree to that -- it works out because we're not saying this is what we're "isolated to" but just "running on", and since it's not a serial executor, I don't think people will expect it to be != another actor's "running on" executor, since both may be on the default executor.
That's viable I think; it's unclear whether that's the current implementation, as it sounded like it would return nil today. But I think it would be OK to return the default one like this.
OK, I've updated the PR again following some of the comments above. I'm also removing EventableExecutor for now. I think it's the right direction, but I'm not entirely happy with the API and for scheduling reasons I think it's best to defer it to a separate proposal.
I feel a bit late to the discussion, but thank you for your work on this! It would be a great addition. I recently ran into this because I run Swift inside Qt and PyQt applications, and discovered that anything explicitly running on the MainActor would always be blocked.
I'm sad to see that the runtime switching of the main executor is gone, since in some cases I can't control how the app is started (a third-party binary executable), but I'd still want to ensure the MainActor ties into the Qt event loop. I tried using the C++ dispatch hooks, but the one for the main actor isn't working (see "Concurrency runtime never calls swift_task_enqueueMainExecutor_hook", Issue #63104 on swiftlang/swift, GitHub).
If it's not too late I'd really like to see some way of setting the Main executor at runtime. If it helps I'd be happy to provide more details about how I use Swift in other applications that aren't started by a Swift process.
I'm guessing we're talking about plug-ins of some kind? In which case what happens if two people write plug-ins in Swift, and one of you wants an executor that's compatible with Qt, but the other wants e.g. one that runs SDL's event loop instead? How would you go about squaring such a circle?
Yes, in theory two plugins with different needs would be in conflict with each other. I work in VFX and animation where Qt is the dominant framework by a large margin, and the Swift “plug ins” are loaded after the Qt application has started and any plugin would want to tie into the Qt event loop anyway.
In our C++ code that has threads managed by a library like oneTBB we need to explicitly call a Qt API to run a function back on the main thread. For the time being we’re doing the same with our Swift code, but I’d love to keep using the MainActor APIs and the rest of the concurrency model and have it all work together.
I think what you're after should be possible — with the type-based thing I'm currently proposing, the @main function actually makes a call into the runtime to set the executors (by passing the executor factory), and you could do that. Note though that if the owners of the application rewrote it in Swift and picked some other executor, you would then crash.
If I understand right, setting the executors would always happen in a @main function, so when the Swift code is compiled as a dynamic library and its @main is called the executor would still be set? If so that would work for our needs. I was under the impression from the document that it would only work where the host process is Swift and compiled as an executable.
If our 3rd party applications were rewritten in Swift we’d presumably want to use whatever executor they’ve already set and so would remove the executor factory from our code.
Thanks for clarifying!
AFAIK, dynamic libraries do not have an @main method. It's the executable that has that.
I wondered about that. We compile Swift code as dynamic libraries with the entry point defined using @_cdecl, which gets loaded and called by Python. If the Swift compiler doesn't allow @main in a dynamic library then we'd need a different way to set the main executor, or substantially change how we run Swift in our applications.
In such a case, you would be responsible for calling the appropriate runtime function to create the executors from the executor factory, since you aren't going through an async @main. That's fine, but it will be fragile against other people trying to do the same thing.
At the very least it would be possible. Thank you!
I am concerned with the addition of clock conversion APIs as part of this proposal. IMO it needs to be implemented as a separate thing in a separate proposal. I implemented conversion between Clocks in a pet project of mine. Whatever the implementation is, it will always be lossy, in the sense of losing time while capturing the reference time points for those clocks. Either some time is lost when reference points for the clocks are recorded at application launch (this is what CoreFoundation does for Date setup), or at the point of conversion, where now is captured for both clocks. How is this addressed? What is the expected precision of that calculation?
Also, how are "sleeping" gaps between SuspendingClock and ContinuousClock going to be addressed in this conversion? The only algorithm I could come up with that is reasonably precise for my use cases and avoids inconsistency gaps is literally this:
func convert<C: Clock>(to clock: C) -> C.Instant where C.Instant: DurationBasedInstant {
let theirNow = clock.now
let ourNow = Self.now
return theirNow + (self - ourNow)
}
This loses up to 5 nanoseconds on a good day, and who knows how much on a bad one.
What I'm trying to say is that clock conversion is complex and deserves an isolated proposal with rigorous investigation and a careful implementation.
I was originally trying to avoid that whole area, not least for the reason you give, but without being able to perform clock conversions it's impossible to make Task.sleep() work.
So we have to address it in this proposal, whether we want to or not.
Obviously since we need to support arbitrary clocks, we can't capture the clock offsets at start-up, not that that would work in general anyway (the clocks might drift relative to one another over time). A consequence is that we are indeed, in some cases, going to ask for the current time of both clocks, and do the calculation you show above — and that does indeed mean that you might "lose" a few nanoseconds. Is this a big deal? That depends what you're using the clocks for, but if you're using them to calculate durations for Task.sleep(), then you'll discover that a typical scheduling quantum is a lot longer than 5ns. More like 10 or 15ms, in fact. And it's more than accurate enough for that. Particularly so when the guarantee given by most schedulers is that you will wait for at least the time you asked for.
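The "ask for the current time of both clocks" calculation can be sketched with the standard ContinuousClock and SuspendingClock. This is an illustration of the technique under discussion, not the stdlib's actual implementation:

```swift
// Sketch of the "read both nows" conversion described above. The gap
// between the two reads is the (few-nanosecond) inaccuracy being discussed.
func convert(_ instant: ContinuousClock.Instant,
             to clock: SuspendingClock) -> SuspendingClock.Instant {
    let theirNow = clock.now
    let ourNow = ContinuousClock.now
    // Apply the offset from "our" now to "their" now.
    return theirNow.advanced(by: ourNow.duration(to: instant))
}
```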
As for suspending versus continuous clocks, in general an executor wants to try to match the clock it's been asked to use against the underlying facilities that it has available, which is why I added clock traits, so that the executor can tell whether you would like to wait on a continuous or suspending basis.
(p.s. you can see the implementation here: swift/stdlib/public/Concurrency/Clock.swift at main · swiftlang/swift · GitHub)
Basically you've implemented exactly the same logic as I did (even the variables are similarly named, lol) :D
I understand that, timing-wise, it's worse for your proposal to be blocked on something else. Consider, though, that you are adding a high-level API that is going to look like it's available for everyone. At the very least I would suggest making the conversion internal, or an SPI of some form. Exposing translation of clock units into Duration is actually fine, because it's orthogonal to clock accuracy.
Alternatively, maybe the clock API should be renamed to something like convertWithReducedAccuracy(to:) and hidden from public API. What do you think?
PS. After writing my previous post I came up with a slight improvement to the logic:
let ourNow = now
let otherNowWithLoss = clock.now
let callDelay = now - ourNow  // second read of our clock
let otherNow = otherNowWithLoss - callDelay / 2
let otherDuration = otherNow.duration(to: instant)
...
This removes some of the inaccuracy of the two calls, but does not eliminate it completely.
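A runnable version of this midpoint idea, again using the standard clocks and again only a sketch, attributes half of the round-trip delay to the other clock's reading:

```swift
// Midpoint compensation: our clock is read before and after the other
// clock, so the other clock's reading is assumed to have happened roughly
// halfway through the round trip.
func convertCompensated(_ instant: ContinuousClock.Instant,
                        to clock: SuspendingClock) -> SuspendingClock.Instant {
    let ourNow = ContinuousClock.now
    let otherNowWithLoss = clock.now
    let callDelay = ourNow.duration(to: ContinuousClock.now)
    // Shift the other clock's reading back by half the round-trip delay so
    // it pairs up with ourNow.
    let otherNow = otherNowWithLoss.advanced(by: .zero - callDelay / 2)
    return otherNow.advanced(by: ourNow.duration(to: instant))
}
```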
It can't be internal, unfortunately, because some (external, third-party) executor implementations may need to do the conversion. Given that fact, it would be much better if the code doing the conversion was in the library, because then we have the opportunity to improve it in future if we wish.
Also, I think you're over-egging the accuracy element here. If you have the kind of accuracy requirement where 1ns is going to make a difference (remembering that for a 2GHz core, that's two clock cycles; even at 5GHz you only get five), you're likely using a specific clock already that you know has that kind of precision. Note that both Duration and Instant are associated types on Clock, so you need to know that your clock can do that. For all you know, they could just be counts of seconds, never mind nanoseconds. This isn't just a theoretical objection either — on Windows, most clocks actually have a period of 15.625ms, and even at the highest precision they're only offered at 100ns precision.
I'll add that there's nothing stopping clocks that are strongly related (e.g. frequency locked and therefore only related by a known offset) from implementing an accurate conversion for the specific case where they're converting among themselves.
I'm not sold on reading one clock twice to gain a tiny bit of accuracy. Why?
- It won't necessarily work (whether or not you gain from this depends on the exact behaviour of the clocks in question).
- It will take longer. Some clocks may be expensive to read.
If you care about ultimate accuracy, you will need to ensure that you're using a clock that gives you the results you want.
I'm open to adding some text to the documentation comments for the convert() method explaining that accuracy will be clock dependent.