Why must it be so difficult to refactor async-colored code?

sometimes you have a one-pass algorithm that you must convert to a two-pass algorithm.

```swift
for id:ID in self.ids {
    try await doSomething(with: id)
}
```

this is a bit of a struggle if the algorithm involves anything async.

```swift
// what one might want to write — but no such async overloads exist,
// so neither of these compiles:
let things:[Thing] = try await self.ids.map {
    try await doSomething(with: $0)
}
let things:[Thing] = try await self.ids.reduce(into: []) {
    $0.append(try await doSomething(with: $1))
}
```

```swift
// what you must write instead:
var things:[Thing] = []
for id:ID in self.ids {
    things.append(try await doSomething(with: id))
}
```

we, the humans of swift, may be wise enough to remember that there is no such thing as an async `map`/`reduce`, but that doesn’t stop GitHub Copilot from hallucinating one. rather than hate the AI for being an AI, i wonder if we would be better off adding async overloads for these common functional idioms to the standard library?

Ideally the 'asyncness' of the closure arguments to `map` et al. would be like `throws`, in that it's irrelevant and supported either way.

unlike `throws`, it is allowed to overload on `async`, which i take as an indication that some form of `reasync` is unlikely to be supported.
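As a minimal illustration of that rule (with hypothetical names), the compiler accepts two declarations that differ only in `async`, and resolves the call based on context:

```swift
func fetch() -> Int { 1 }          // synchronous overload
func fetch() async -> Int { 2 }    // async overload — legal to declare both

func syncCaller() -> Int {
    fetch()           // only the synchronous overload is viable here
}

func asyncCaller() async -> Int {
    await fetch()     // in an async context, the async overload is preferred
}
```

The equivalent pair differing only in `throws` would be rejected as a redeclaration.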

I've written and use this. Feel free to give me code review on it:

```swift
import Foundation  // for ProcessInfo

public enum AsyncIterationMode {
    /// Serial iteration performs each step in sequence, waiting for the previous one to complete before performing the next.
    case serial
    /// Concurrent iteration performs all steps in parallel, and resumes execution when all operations are done.
    /// When applied to `asyncMap`, the results are returned in the original order.
    case concurrent(priority: TaskPriority?, parallelism: Int)

    public static let concurrent = concurrent(priority: nil, parallelism: ProcessInfo.processInfo.processorCount)
}
```

```swift
public extension Sequence {
    func asyncForEach(mode: AsyncIterationMode = .concurrent, _ operation: @escaping (Element) async throws -> Void) async rethrows {
        switch mode {
        case .serial:
            for element in self {
                try await operation(element)
            }
        case .concurrent:
            _ = try await asyncMap(mode: mode, operation)
        }
    }

    func asyncMap<NewElement>(
        mode: AsyncIterationMode = .concurrent,
        _ transform: @escaping (Element) async throws -> NewElement
    ) async rethrows -> [NewElement] {
        switch mode {

        case .serial:
            var result: [NewElement] = []
            for element in self {
                result.append(try await transform(element))
            }
            return result

        case let .concurrent(priority, parallelism):
            return try await withThrowingTaskGroup(of: (Int, NewElement).self) { group in
                var i = 0
                var iterator = self.makeIterator()
                var results = [NewElement?]()

                func submitTask() throws {
                    try Task.checkCancellation()
                    if let element = iterator.next() {
                        group.addTask(priority: priority) { [i] in (i, try await transform(element)) }
                        i += 1
                        results.append(nil) // reserve a slot so `results[index] = result` below is in bounds
                    }
                }

                // add initial tasks
                for _ in 0..<parallelism { try submitTask() }

                // submit more tasks, as each one completes, until we run out of work
                while let (index, result) = try await group.next() {
                    results[index] = result
                    try submitTask()
                }

                return results.compactMap { $0 }
            }
        }
    }
}
```
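For context, a hypothetical call site for the above, reusing `ids` and `doSomething(with:)` from the original post:

```swift
// hypothetical usage: map concurrently, preserving the original order
let things: [Thing] = try await self.ids.asyncMap(mode: .concurrent) {
    try await doSomething(with: $0)
}
```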

If you're using the Swift Async Algorithms package, then you can use the `async` property on a regular `Sequence` to turn it into an `AsyncSequence`. Once you have an async sequence, you can easily use the async version of `map`. For example:

```swift
let array = await Array((0..<5).async.map { await doAsyncStuff(with: $0) })
```

It should be noted that the `AsyncSequence` version of `map` is lazy by default, so if you want an array you'll have to use the `Array(_:)` initializer.
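In case your dependencies don't already provide such an initializer, a minimal sketch of one, assuming only the standard library, might look like:

```swift
// minimal sketch: collect an AsyncSequence into an Array, in order,
// propagating any error thrown while iterating
extension Array {
    init<Source: AsyncSequence>(_ source: Source) async rethrows where Source.Element == Element {
        self.init()
        for try await element in source {
            self.append(element)
        }
    }
}
```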

I strongly object to changing Swift just to make it easier for LLMs to generate code. Swift should be designed for humans.


Converting a `Sequence` into an `AsyncSequence` seems very heavy when the desired result is the original `Sequence` type. Personally, I have async overloads of some `map`, `flatMap`, etc. methods in an extension and use them instead.
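A minimal sketch of that kind of extension, assuming a serial, in-order implementation:

```swift
extension Sequence {
    // serial async map: awaits each transform before starting the next
    func asyncMap<T>(_ transform: (Element) async throws -> T) async rethrows -> [T] {
        var result: [T] = []
        result.reserveCapacity(underestimatedCount)
        for element in self {
            result.append(try await transform(element))
        }
        return result
    }
}
```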

Maybe there is little runtime overhead for the `Sequence` -> `AsyncSequence` -> `Array` path, but I've avoided it.

Absolutely agree on NOT adjusting the language or standard library to fit LLMs.


we humans are not as smart as we think we are; we have simply accumulated this substance called “real world experience” which allows us to outperform an AI in certain tasks. in this situation, the real world experience is the knowledge that no such async-colored overload exists in the standard library, which is gained by trying to write `try await x.map { try await $0.y() }` and then realizing it does not compile.

oftentimes, observing how an AI reacts to some design is helpful, because its failures highlight implicit knowledge that experienced humans are relying on, which may not be obvious to an AI, or to a human newcomer for that matter. in this situation, it is worth considering whether the AI might be onto something: a reasonable user might expect such an async overload to exist.