A case study for `reasync`

Since some echoes suggest that the need for `reasync` is still doubtful to some, I would like to present a case study that demonstrates this need. The short version: I can already obtain the bulk of the functionality with a sed script, and that functionality is something my code requires in order to get out of the libdispatch ghetto.

Since this case study involves a bit of code to, well, study, I thought I would start with some credentials. I have an electrical engineering background; while my first programming experiments were with Applesoft BASIC on the Apple ][e, my professional career so far has been 15 years of writing multimedia infrastructure code in C and C++; this code frequently involves the network and the filesystem, if not both simultaneously (e.g. sendfile()). I am looking at Swift as an improvement over C++ in almost all ways; in particular, the way Swift attempts to capture programmer intent ought to make merge operations much more straightforward. I have defended the thesis that you always ought to be able to invert control of your code if necessary (tenth comment), even in the absence of closures, and I have actually done so myself on a number of occasions (e.g. when integrating an XML parser with a SAX-based API, when porting an Apache plug-in to nginx, or, outside of work, when coding against the raw JavaScript async filesystem APIs). That is, until I tried to invert control of the algorithm I’m about to present, for concurrency purposes (yes, I have “succeeded”; yes, the resulting code is awfully complex; no, I won’t share it). Ironically, that was a few weeks before the first review of concurrency features; since then, I have become a convert to the async/await model, and to the Swift approach to that concept in particular: I think no one should ever have to invert control of their code ever again, even with the help of closures.

This is the code for a Countdown solver (if you’ve seen an earlier version, it has been updated somewhat: in particular, appending a new floor is now completely separate from adding a new operation to that floor).

//
//  Created by Pierre Lebeaupin on 22/04/2023.
//  Copyright © 2023 Pierre Lebeaupin. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

import Foundation;

enum Op: String {
    case additive = "+", multiplicative = "*";
    
    func neutralValue() -> UInt32 {
        return ((self == .additive) ? 0 : 1);
    }
    
    func combine(_ a: UInt32, _ b: UInt32) -> UInt32 {
        return ((self == .additive) ? (a + b) : (a * b));
    }

    func uncombine(_ a: UInt32, _ b: UInt32) -> UInt32 {
        return ((self == .additive) ? (a - b) : (a / b));
    }
}


struct ReadyValue {
    let value: UInt32;
    let op: Op?;
    let description: () -> String;
    // Note that as a closure, it can depend on more than
    // the stored properties, contrary to a computed
    // property.

    init(_ inValue: UInt32) {
        value = inValue;
        op = nil;
        description = { return inValue.description;};
    }
    
    init(value inValue: UInt32,
         op inOp: Op,
         description indescription: @escaping () -> String) {
        value = inValue;
        op = inOp;
        description = indescription;
    }
}

func resolve(_ startGoal: UInt32,
             _ incb: @escaping (_ result: String) -> Void,
             _ primitives: UInt32...)
{
    var l = [ReadyValue]();
    var composedCount = 0;
    var otherComposedCount = 0;
    
    for element in primitives {
        l.append(ReadyValue(element));
    }

    {
    exploreAdditionOfFloor(kind: .additive);
    exploreAdditionOfFloor(kind: .multiplicative);
    }(); // https://bugs.swift.org/browse/SR-12243

    func exploreAdditionOfFloor(kind: Op) {
        guard otherComposedCount == 0 else {
            return;
        }
        otherComposedCount = composedCount;
        composedCount = 0;

        {
        exploreAdditionOfRightNode(skip: 0);
        }(); // https://bugs.swift.org/browse/SR-12243

        composedCount = otherComposedCount;
        otherComposedCount = 0;

        func exploreAdditionOfRightNode(skip: Int) {
            var decomposedValue = [false: kind.neutralValue(),
                                   true: kind.neutralValue()];
            // Siblings are the other right nodes on the same floor
            // Downstream ones are those created while this one is active
            var downstreamSiblingsRecruitmentSkip = Int(0); // placeholder
        
            {
            iteratePossibleLeftNodes(startingFrom: skip, { _ in return nil;});
            }(); // https://bugs.swift.org/browse/SR-12243

            /*
            The point of iterate_possible_left_nodes() is to enumerate all the possible ways
            to recruit candidates into the current operation, out of the pool of possible
            candidates; that pool has been determined by explore_addition_of_right_node().

            Obviously, the number of possibilities to pick a non-strict subset out of N
            elements is 2 to the Nth power: each candidate in the pool can either be
            recruited (1) or not (0) by this composition. Some of these possibilities are
            invalid, of course, but we will explore the solution space that way.

            By counting the possibilities from 00...0 to 11...1, we would in fact enumerate
            all the possible recruitments for this composition. For practical reasons, we
            will go backwards, from 11...1 to 00...0.

            What we will do in practice is evaluate the first candidate, first recruiting
            it, which means we start with:
            - 1x...x
            then adjust the recruitment location, and delegate to ourselves. That way,
            assuming this function performs its job for N-1, it will perform the job from
            11...1 to 10...0; in other words, half the job for N. Then, we remove that
            candidate from the composition, and inhibit it from any future recruitment for
            this composition; then we start over the loop.
            The result is that the next candidate will be evaluated and hired at first and
            recursive delegation will occur, resulting in these possibilities being
            explored:
            - 01...x
            That is, from 011...1 to 010...0.
            At the next loop iteration, the possibilities explored will start with 001, etc.
            up until we realize we excluded all candidates; at which point the only
            combination left is 00...0, which we don't need to explore at all. Unless, of
            course, there were only one or two candidates in the pool, in which case that
            test triggers even earlier.
             
            There is additional accounting to avoid a downstream sibling (a composition on
            the same floor) recovering candidates in a way that results in a composition
            that we ourselves explored earlier, but that is the basic principle.
            */
            
            func iteratePossibleLeftNodes(startingFrom: Int,
                                          _ walkOperands: @escaping
                                            (_ action: (_ value: ReadyValue,
                                                        _ reverse: Bool)
                                                -> Void?)
                                            -> Void?) {
                let emptyComposition = (walkOperands({_,_ in return ();}) == nil);

                /* Why that odd upper bound? If that conditional is true, emptyComposition is
                true, which means we are recruiting the senior contribution. If by doing so we'd
                end up skipping a composed contribution (the other part of the conditional means
                there is at least one left) as a result of the initial if, otherComposedCount
                would never reach 0 no matter how many siblings we'd add, and so we'd never be
                able to close the floor anyway. So it'd be pointless to keep going. */
                for candidateReverseOffset in
                        startingFrom ..< ((emptyComposition && (otherComposedCount > 0)) ? startingFrom+1 : (l.count - composedCount) ) {
                    if emptyComposition {
                        downstreamSiblingsRecruitmentSkip = candidateReverseOffset;
                    }

                    /* Note: new compositions are added at the end of the array, i.e. at the high offsets,
                     while primitives (if any left) will be at the low offsets. As a result, the candidates
                     to be recruited as a first priority are found at the high offsets first, though not so
                     high that we would end up recruiting from the same floor (forbidden). */
                    let rightChild = l.remove(at: (l.count - composedCount - 1) - candidateReverseOffset);
                    if let _ = rightChild.op {
                        otherComposedCount -= 1;
                    }
                    defer {
                        if let _ = rightChild.op {
                            otherComposedCount += 1;
                        }
                        l.insert(rightChild, at: ((l.count+1) - composedCount - 1) - candidateReverseOffset);
                    }
                    
                    for phase in 0...1 {
                        let reverse = (phase == 1);
                        { (_ valueComponent: inout UInt32) in
                            valueComponent = kind.combine(valueComponent,
                                                          rightChild.value);
                        }(&decomposedValue[reverse]!);
                        defer {
                            { (_ valueComponent: inout UInt32) in
                                valueComponent = kind.uncombine(valueComponent,
                                                                rightChild.value);
                            }(&decomposedValue[reverse]!);
                        }
                        
                        let selfNode = {(_ action: (_ value: ReadyValue,
                                                    _ reverse: Bool)
                                         -> Void?)
                            -> Void? in
                            action(rightChild, reverse) ?? walkOperands(action);
                        };
                        
                        iteratePossibleLeftNodes(startingFrom: candidateReverseOffset,
                                                 selfNode);
                        
                        // close current composition
                        var num = 0;
                        guard (selfNode({_,_ in guard num == 0 else { return (); }; num += 1; return nil;}) != nil)
                                && ((kind == .additive)
                                    ? decomposedValue[false]! > decomposedValue[true]!
                                    : ((decomposedValue[false]! % decomposedValue[true]!) == 0)) else {
                            continue;
                        }
                        
                        let realizedValue = kind.uncombine(decomposedValue[false]!,
                                                           decomposedValue[true]!);
                        let description = { () -> String in
                            var current = "(";
                            var isFirst = true;
                            var numRev = 0;
                            
                            selfNode({(_ value: ReadyValue,
                                       _ freverse: Bool) -> Void? in
                                guard !freverse else {
                                    numRev += 1;
                                    return nil;
                                }
                                
                                if !isFirst {
                                    current += " ";
                                    current += kind.rawValue;
                                    current += " ";
                                }
                                isFirst = false;
                                current += value.description();
                                
                                return nil; // keep going;
                            });
                            
                            if numRev > 0 {
                                isFirst = true;
                                current += " ";
                                current += ((kind == .additive) ? "-" : "/");
                                current += " ";
                                if numRev > 1 {
                                    current += "(";
                                }
                                
                                selfNode({(_ value: ReadyValue,
                                           _ freverse: Bool) -> Void? in
                                    guard freverse else {
                                        return nil;
                                    }
                                    
                                    if !isFirst {
                                        current += " ";
                                        current += kind.rawValue;
                                        current += " ";
                                    }
                                    isFirst = false;
                                    current += value.description();
                                    
                                    return nil; // keep going;
                                });
                                if numRev > 1 {
                                    current += ")";
                                }
                            }
                            current += ")";
                            
                            return current;
                        };
                        
                        guard l.count > 0 else {
                            if realizedValue == startGoal {
                                incb(description());
                            }
                            continue;
                        }
                        
                        composedCount += 1;
                        l.append(ReadyValue(value: realizedValue,
                                            op: kind,
                                            description: description));
                        defer {
                            l.remove(at: l.count - 1);
                            composedCount -= 1;
                        }
                        
                        exploreAdditionOfRightNode(skip: downstreamSiblingsRecruitmentSkip);
                        exploreAdditionOfFloor(kind: ((kind == .additive) ? .multiplicative : .additive));
                    }
                }
            }
        }
        
    }
}
//
//  main.swift
//  AlternateSolver
//
//  Created by Pierre Lebeaupin on 22/04/2023.
//  Copyright © 2023 Pierre Lebeaupin. All rights reserved.
//

import Foundation

resolve(6372, {print($0);}, 10, 5, 8, 3, 25, 1, 42);

If you’d like to understand how it works, here is an explanation (feel free to skip to the part where I transform the code if this is too much). A potential solution such as (1+3)*7 is best formalised as a binary tree, where leaves are the primitive amounts, and interior nodes are operations (either one of +, -, *, or /); every node has a value: either that of the primitive or the result of the operation. For a candidate tree to be a solution, the value of the root node must equal the goal.
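
To make that formalisation concrete, here is a minimal sketch of such a binary tree (the type and names are mine for illustration, not the solver’s):

```swift
// Minimal sketch of the binary-tree formalisation: leaves are primitive
// amounts, interior nodes are operations, and every node has a value.
// (Illustrative types, not taken from the solver.)
indirect enum SolutionTree {
    case leaf(UInt32);
    case node(Character, SolutionTree, SolutionTree);

    var value: UInt32 {
        switch self {
        case .leaf(let v):
            return v;
        case .node(let op, let left, let right):
            switch op {
            case "+": return left.value + right.value;
            case "-": return left.value - right.value;
            case "*": return left.value * right.value;
            default:  return left.value / right.value;
            }
        }
    }
}

// (1 + 3) * 7: a solution if and only if the goal is 28.
let candidate = SolutionTree.node("*",
                                  .node("+", .leaf(1), .leaf(3)),
                                  .leaf(7));
```

For a candidate like this one, checking it against the goal is just comparing `candidate.value` to the target.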

Actually exploring the solution space in this way, however, would result in solutions that are equivalent to each other: for instance, ( (1 + 3) + 4 ) * 7 is obviously equivalent to ( (1 + 4) + 3 ) * 7, and to ( (3 + 4) + 1 ) * 7. But those are three different binary trees (and I am not even mentioning the equivalences where two operands are just swapped). The key observation, since addition is associative, is to group “unbroken” additions into a single addition operation which can have from 2 to many operands: (1 + 3 + 4) * 7. The additions must be unbroken: there is no further regrouping to be done in the case of ((1 + 3) * 7) + 4, for instance, since that tree has no equivalent, or at least none that are a consequence of associativity.

Multiplications, being associative as well, must be grouped in the same fashion. And to integrate subtraction and division, an operand is allowed to contribute to its operation in a reverse fashion: 4 - 1 becomes (4 + rev(1)), and 10 / 2 becomes (10 * rev(2)). But all of this would be pointless if groupings were not canonicalised, so we must mandate additive operations to only take as operands either primitive ones, or the results of multiplicative operations, or a mix, and mandate multiplicative operations to only take as operands either primitive ones, or the results of additive operations, or a mix; by browsing the problem space this way, we already eliminate a significant chunk of redundancy. By forcing operands to be ordered (we will see what that means in a bit), we eliminate yet more, and the redundancies that remain can usually be handled on a case-by-case basis.
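
The forward/reverse bookkeeping can be sketched with the solver’s own Op enum (redeclared here so the snippet is self-contained; the realize() helper is mine, a simplification of what the solver does with its decomposedValue dictionary):

```swift
// Sketch of the forward/reverse contributions described above: each operand
// feeds either the forward (false) or the reverse (true) running total, and
// the operation's value is the forward total "uncombined" with the reverse
// one. Op is copied from the solver; realize() is an illustrative helper.
enum Op: String {
    case additive = "+", multiplicative = "*";

    func neutralValue() -> UInt32 {
        return ((self == .additive) ? 0 : 1);
    }

    func combine(_ a: UInt32, _ b: UInt32) -> UInt32 {
        return ((self == .additive) ? (a + b) : (a * b));
    }

    func uncombine(_ a: UInt32, _ b: UInt32) -> UInt32 {
        return ((self == .additive) ? (a - b) : (a / b));
    }
}

func realize(_ kind: Op, _ operands: [(UInt32, Bool)]) -> UInt32 {
    var decomposed = [false: kind.neutralValue(), true: kind.neutralValue()];
    for (value, reverse) in operands {
        decomposed[reverse] = kind.combine(decomposed[reverse]!, value);
    }
    return kind.uncombine(decomposed[false]!, decomposed[true]!);
}
```

So 4 - 1 is modelled as (4 + rev(1)) and 10 / 2 as (10 * rev(2)); realize() recovers 3 and 5 respectively.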

None of the above is breaking new ground, and I have seen implementations of that idea that rely on the Python itertools module for instance. But here is what I came up with on top of that.

I build up my candidate tree, now non-binary, from the bottom up: first I add a floor, which is a collection of operations that have the same distance from the root operation; for instance, in ((2 + 3) * 4) + 7, 2+3 is on the bottom floor, the multiplicative operation implicating 4 is on the first elevation, and the additive operation implicating 7 is on the second and last elevation. Note that as a result of the mandate, a given floor can never contain a mix of operations: this is obviously the case for the floor of the root operation, since it is alone on that floor, and a floor that contains only additive operations forces the floor below, which is made of the operations directly feeding into those, to be made only of multiplicative operations, and vice versa; so we have just recursively proven the property for all floors. As a result, the type of operation alternates between floors: if we seed the bottom floor with an additive operation, additive operations will only occur on even elevations (and the bottom floor), and multiplicative operations will only occur on the odd-numbered elevations.

But for now we only have the bottom floor, we don’t even know how high the root operation will be. So we start by adding an empty additive operation to that floor, and see where that takes us; once that is done, we instead add an empty multiplicative operation, and see where that takes us: this is the first bifurcation.

Inside either of these bifurcations, we take a node from the available operands list to add it to the operation, starting from the first available one, but we will also later try adding a further available node as the senior operand instead; that is the second bifurcation. In either case, the node can be added as either a forward or reverse operand: that is a further bifurcation. And now that a node has been added as a contribution to the operation, we can explore either adding a further node from the available ones (and further bifurcate from that depending on which we pick), or freezing the operation, which is another bifurcation still (of course, we can’t freeze an operation that only has a single contribution: that bifurcation path is closed off when we detect that in code, so assume there are at least two contributions). And once we’ve frozen the operation, we can choose to add another operation to that floor (which must be of the same type as a result), or close that floor and open a new one just above, in which case the operations from the just-closed floor become available as operands. And the choices can now recursively bifurcate from these…

There are a bunch of checks and whatnot performed everywhere. For instance, you can’t close an elevation unless all operations from the floor below that elevation have been recruited as operands in that elevation: otherwise, you’d have an available node that just couldn’t be recruited anymore. Why? Because it couldn’t be recruited by a higher floor either, since that would result in its distance from the root being inconsistent with that of its neighbours from the same floor (such a possibility will be explored, but when it comes time to do so, not by hijacking the function dedicated to exploring the possibilities resulting from its current elevation). As another example, to avoid exploring (1 + 2) * (3 + 4) then (3 + 4) * (1 + 2), we don’t just force the multiplication to recruit its operands in order, we also prevent the second addition from recruiting 1 if the first addition, which was earlier in the same floor, has recruited 3 as its senior member. And these checks sometimes combine: if those additions are on an elevation already, with one multiplication in the bottom floor, the first addition cannot recruit anything other than that multiplication as its senior member. Any other senior member would prevent any future addition on the same floor from recruiting the first multiplication, in turn preventing any makeup of that floor from closing; so we might as well cut off this whole bifurcation early.
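
Stripped of all those checks, the core enumeration of recruitments can be sketched as a toy model of the counting described in the code’s big comment (names of my own; the real iteratePossibleLeftNodes() interleaves this with the forward/reverse and floor bookkeeping):

```swift
// Toy sketch of the recruitment enumeration: every non-empty subset of the
// candidate pool is visited exactly once, going from "recruit everyone"
// (11...1) down to just before "recruit no one" (00...0).
func enumerateRecruitments(_ pool: [String],
                           startingFrom: Int = 0,
                           recruited: [String] = [],
                           _ visit: ([String]) -> Void) {
    for candidate in startingFrom ..< pool.count {
        // Recruit this candidate, then delegate to ourselves for the rest...
        enumerateRecruitments(pool,
                              startingFrom: candidate + 1,
                              recruited: recruited + [pool[candidate]],
                              visit);
        // ...then drop it and inhibit it from any future recruitment for
        // this composition, by moving on in the loop.
    }
    if !recruited.isEmpty {
        visit(recruited);
    }
}

var seen = [[String]]();
enumerateRecruitments(["a", "b", "c"]) { seen.append($0); };
// Visits, in order: abc, ab, ac, a, bc, b, c
```

That is 2³ - 1 = 7 recruitments for a pool of three; the solver prunes many of them with the checks above instead of visiting them all.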

Here is an example of a state we can find ourselves in:

|_a: a*4*…
|_a: 2+3_|
Available primitives: [7]
Skipped operands for the current operation: []
Skipped primitives for the current floor: []

In this representation, operations in a given floor are designated with a, b, c, etc.; operations in an elevation refer to recruits from the floor immediately below them using these letters (no need to specify the floor, since they can only recruit from the one immediately below them). Here, the ground floor is closed: this is necessary anyway in order to have a floor above it. The multiplication in the first elevation is not closed yet: it can still recruit operands. When starting from this state, we can do one of the following:

  • Recruit 7 as a forward member of the currently open operation (the multiplication on the first elevation):

    |_a: a*4*7*…
    |_a: 2+3_|
    Available primitives: []
    

    At which point we cannot recruit any more, so the only possibility left is to close that operation, and other checks result in the tree being considered complete, so the only thing left is to compare the result of the newly minted root node with the goal.

  • Recruit 7 as a reverse member of the currently open operation (the multiplication on the first elevation):

    |_a: (a*4/7)*…
    |_a: 2+3_|
    Available primitives: []
    

    At which point we cannot recruit any more, so the only possibility left is to close that operation; but that is rejected as 7 does not evenly divide 20, which means exploration of that alternative is over.

  • Close the current operation as-is:

    |_a: a*4_
    |_a: 2+3_|
    Available primitives: [7]
    

    At which point a check detects we do not have enough recruitable operands to start a new operation on the same floor, so we close the floor and open another:

    |
    |_a: a*4_|
    |_a: 2+3_|
    Available primitives: [7]
    

    I don’t think it represents a spoiler if I reveal that only one operation can be made to fit in that floor, whose recruits can only be a and 7; a cannot be recruited as a reverse contribution (the result would be negative regardless of how 7 contributes), so the only two valid final states are:

    |_a: a+7_|
    |_a: a*4_|
    |_a: 2+3_|
    Available primitives: []
    
    |_a: a-7_|
    |_a: a*4_|
    |_a: 2+3_|
    Available primitives: []
    

    And in each case, the recursion terminates by comparing the result of the root node with the goal.

That is of course a simple case; a complex one can be:

|_a: a*10_b: 8*25…
|_a: 1+4+5_|
Available primitives: []
Skipped operands for the current operation: [9]
Skipped primitives for the current floor: [6]

Which can be completed in a number of ways.

Honestly, describing the algorithm in prose will only take you so far: I recommend you run the code under the debugger and play with it. First step through the first recruitments; then put a breakpoint for when the goal is reached, and look at what kind of function stack can lead there and how the solution tree is browsed to print the result; then try my favourite technique: put a one-off breakpoint on each line (easy with a script), so you can just continue each time and be sure the next breakpoint hit will be a situation you haven’t seen before.

A few notes on the implementation proper:

  • it is well known that you can take a list of non-binary trees and reversibly transform it into a binary tree, where a right-child relationship corresponds to that between a node and its senior child, and a left-child relationship corresponds to that between a node and its next sibling in the non-binary tree. Since I represent the list of contributions of a given operation as a linked list, this binarization is a natural way to express the various bifurcations when exploring the solution space:

    • exploreAdditionOfRightNode() is what we call to explore the bifurcation where we add a new operation, since it (more specifically, its senior node) will end up being the child in a right-child relationship to be determined later,
    • iteratePossibleLeftNodes() is what we call when an operation is open to a new node being recruited, resulting in a new left-child relationship. Unless we are recruiting a senior node, in which case no left-child relationship is involved, so the function is indeed slightly misnamed: an earlier version of that function was used only for non-senior nodes, so the name was deserved; since then, the implementations for recruiting a node, regardless of whether it will have a senior role or not, have been merged… but the function retained its name.

    In effect, that naming scheme is best thought of as an allusion to this binarization; where it makes the most sense is when browsing an actual solution in order to print it, where the closures will come from the relevant functions.
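
That binarization can be sketched as follows (illustrative types of my own, not the solver’s; in the solver, the tree only exists implicitly, through the chain of closures):

```swift
// Sketch of the binarization alluded to above: a node's senior (first)
// operand becomes its right child, and each operand's next sibling in the
// non-binary tree becomes that operand's left child.
indirect enum NAry {
    case leaf(String);
    case op(String, [NAry]); // operator, then operands, senior first
}

indirect enum Binary: Equatable {
    case node(String, left: Binary?, right: Binary?);
}

func binarize(_ tree: NAry, nextSibling: Binary? = nil) -> Binary {
    switch tree {
    case .leaf(let name):
        return .node(name, left: nextSibling, right: nil);
    case .op(let name, let operands):
        // Fold the operand list right to left, so that each operand's
        // left child ends up being its next sibling.
        var senior: Binary? = nil;
        for operand in operands.reversed() {
            senior = binarize(operand, nextSibling: senior);
        }
        return .node(name, left: nextSibling, right: senior);
    }
}

let binarized = binarize(.op("+", [.leaf("1"), .leaf("3"), .leaf("4")]));
```

For (1 + 3 + 4), the senior operand 1 becomes the right child of the + node, with 3 and then 4 hanging off it through left-child links.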

  • In effect, the main recursion is between iteratePossibleLeftNodes() and itself, though not necessarily directly. iteratePossibleLeftNodes() can invoke:

    • itself directly
    • exploreAdditionOfRightNode(), which ends up delegating to iteratePossibleLeftNodes()
    • exploreAdditionOfFloor(), which calls exploreAdditionOfRightNode(), which ends up delegating to iteratePossibleLeftNodes()

    In particular, it is up to iteratePossibleLeftNodes() to close the current operation prior to delegating to either exploreAdditionOf() function, so it can be thought of as the bulk of the logic; the other two are best thought of as the way of allocating on the stack the necessary additional information for a floor and for an operation, respectively.

  • In order to support the checks, many intermediate results and related bookkeeping values are kept around. Some, appropriately, are temporaries kept on the stack which are not captured by any anonymous closure; this is the case for the ongoing totals of an operation that is not closed yet. The total for a closed operation is a bit of an outlier, as it is stored in the structure for a recruit: this allows the same structure field to serve as the cached total for a composite recruit, and as the source of truth in the case of a primitive recruit. This was done for efficiency reasons: no redundant storage in the case of primitive recruits, and no code divergence when the goal is to obtain this value without probing the recruit any deeper, saving an unpredictable branch and cycles in general. However, storage in the linked list of contributions is immutable, for reasons we will see later.
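
That dual role can be illustrated in miniature (a rough sketch with a made-up type, not the solver’s exact ReadyValue):

```swift
// Sketch of the dual-role value field: the same stored property is the
// source of truth for a primitive recruit and the cached total for a
// composite one, so reading a recruit's value never branches on its nature.
struct Recruit {
    let value: UInt32; // primitive amount, or total cached when composed
    let isComposite: Bool;
}

let primitive = Recruit(value: 7, isComposite: false);
let composite = Recruit(value: 5 * 4, isComposite: true); // total cached once

// Uniform access, no branch, regardless of each recruit's nature:
let total = [primitive, composite].reduce(UInt32(0)) { $0 + $1.value };
```

The solver gets the same effect through ReadyValue’s two initializers, with the description closure playing the role of the `isComposite` discriminator.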


Ready to transform that code for concurrency? OK, here we go:

//
//  Created by Pierre Lebeaupin on 17/01/2021.
//  Copyright © 2021 Pierre Lebeaupin. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

import Foundation;
            
            
@Sendable func exploreAdditionOfFloorAsync(_ l: inout [ReadyValueNonGCD],
                                               _ composedCount: inout Int,
                                               _ otherComposedCount: inout Int,
                                               _ dispatcher: DispatcherAsync,
                                           kind: Op) async {
    guard otherComposedCount == 0 else {
        return;
    }
    
    otherComposedCount = composedCount;
    composedCount = 0;
    
    await exploreAdditionOfRightNodeAsync(&l,
                                          &composedCount,
                                          &otherComposedCount,
                                          dispatcher,
                                          kind: kind,
                                          skip: 0);

    composedCount = otherComposedCount;
    otherComposedCount = 0;
}

@Sendable func exploreAdditionOfRightNodeAsync(_ l: inout [ReadyValueNonGCD],
                                               _ composedCount: inout Int,
                                               _ otherComposedCount: inout Int,
                                               _ dispatcher: DispatcherAsync,
                                               kind: Op,
                                               skip: Int) async {
    var referenceDecomposedValue = [false: kind.neutralValue(), true: kind.neutralValue()];
    // Siblings are the other right nodes on the same floor
    // Downstream ones are those created while this one is active
    var downstreamSiblingsRecruitmentSkip = Int(0); // placeholder
    
    await dispatcher(&l,
                     &composedCount,
                     &otherComposedCount,
                     &referenceDecomposedValue,
                     &downstreamSiblingsRecruitmentSkip,
                     kind,
                     /* startingFrom: */ skip,
                     { _ in return nil;});
}

/*
The point of iterate_possible_left_nodes() is to enumerate all the possible ways
to recruit candidates into the current operation, out of the pool of possible
candidates; that pool has been determined by explore_addition_of_right_node().

Obviously, the number of possibilities to pick a non-strict subset out of N
elements is 2 to the Nth power: each candidate in the pool can either be
recruited (1) or not (0) by this composition. Some of these possibilities are
invalid, of course, but we will explore the solution space that way.

By counting the possibilities from 00...0 to 11...1, we would in fact enumerate
all the possible recruitments for this composition. For practical reasons, we
will go backwards, from 11...1 to 00...0.

What we do in practice is evaluate the first candidate, first recruiting it,
which means we start with:
- 1x...x
then adjust the recruitment location, and delegate to ourselves. That way,
assuming this function performs its job for N-1, it will perform the job from
11...1 to 10...0; in other words, half the job for N. Then, we remove that
candidate from the composition, and inhibit it from any future recruitment for
this composition; then we start the loop over.
The result is that the next candidate will be evaluated and recruited first,
and recursive delegation will occur, resulting in these possibilities being
explored:
- 01...x
That is, from 011...1 to 010...0.
At the next loop iteration, the possibilities explored will start with 001, etc.
up until we realize we excluded all candidates; at which point the only
combination left is 00...0, which we don't need to explore at all. Unless, of
course, there were only one or two candidates in the pool, in which case that
test triggers even earlier.

There is additional accounting to avoid a downstream sibling (a composition on
the same floor) recovering candidates in a way that results in a composition
that we ourselves explored earlier, but that is the basic principle.
*/
@Sendable func iteratePossibleLeftNodesAsync(_ l: inout [ReadyValueNonGCD],
                                             _ composedCount: inout Int,
                                             _ otherComposedCount: inout Int,
                                             _ decomposedValue: inout [Bool:UInt32],
                                             _ downstreamSiblingsRecruitmentSkip: inout Int,
                                             _ resultReceiver: @escaping @Sendable (_ result: String) -> Void,
                                             _ startGoal: UInt32,
                                             _ kind: Op,
                                             _ startingFrom: Int,
                                             _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?,
                                             _ dispatcher: DispatcherAsync) async {
    let emptyComposition = (walkOperands({_,_ in return ();}) == nil)

    /* Why that odd upper bound? If that conditional is true, emptyComposition is
    true, which means we are recruiting the senior contribution. If by doing so we'd
    end up skipping a composed contribution (the other part of the conditional means
    there is at least one left) as a result of the initial if, otherComposedCount
    would never reach 0 no matter how many siblings we'd add, and so we'd never be
    able to close the floor anyway. So it'd be pointless to keep going. */
    for candidateReverseOffset in
            startingFrom ..< ((emptyComposition && (otherComposedCount > 0)) ? startingFrom+1 : (l.count - composedCount) ) {
        if emptyComposition {
            downstreamSiblingsRecruitmentSkip = candidateReverseOffset;
        }

        /* Note: new compositions are added at the end of the array, i.e. at the high offsets,
         while primitives (if any left) will be at the low offsets. As a result, the candidates
         to be recruited as a first priority are found at the high offsets first, though not so
         high that we would end up recruiting from the same floor (forbidden). */
        let rightChild = l.remove(at: (l.count - composedCount - 1) - candidateReverseOffset);
        if let _ = rightChild.op {
            otherComposedCount -= 1;
        }
        defer {
            if let _ = rightChild.op {
                otherComposedCount += 1;
            }
            l.insert(rightChild, at: ((l.count+1) - composedCount - 1) - candidateReverseOffset);
        }
        
        for phase in 0...1 {
            let reverse = (phase == 1);
            { (_ valueComponent: inout UInt32) in
                valueComponent = kind.combine(valueComponent, rightChild.value);
            }(&decomposedValue[reverse]!);
            defer {
                { (_ valueComponent: inout UInt32) in
                    valueComponent = kind.uncombine(valueComponent, rightChild.value);
                }(&decomposedValue[reverse]!);
            }
            
            let selfNode = {@Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void? in
                return action(rightChild, reverse) ?? walkOperands(action);
            };
            
            await dispatcher(&l,
                             &composedCount,
                             &otherComposedCount,
                             &decomposedValue,
                             &downstreamSiblingsRecruitmentSkip,
                             kind,
                             /* startingFrom: */ candidateReverseOffset,
                             selfNode);
            
            // close current composition
            var num = 0;
            guard (selfNode({_,_ in guard num == 0 else {return ();}; num += 1; return nil;}) != nil)
                    && ( (kind == .additive) ? decomposedValue[false]! > decomposedValue[true]! :
                            ((decomposedValue[false]! % decomposedValue[true]!) == 0) ) else {
                        continue;
                    }
            
            let realizedValue = kind.uncombine(decomposedValue[false]!, decomposedValue[true]!);
            let description = { @Sendable () -> String in
                var current = "(";
                var isFirst = true;
                var numRev = 0;

                selfNode({(_ value: ReadyValueNonGCD, _ freverse: Bool) -> Void? in
                    guard !freverse else {
                        numRev += 1;
                        return nil;
                    }
                    
                    if !isFirst {
                        current += " ";
                        current += kind.rawValue;
                        current += " ";
                    }
                    isFirst = false;
                    current += value.description();
                    
                    return nil; // keep going;
                });
                        
                if numRev > 0 {
                    isFirst = true;
                    current += " ";
                    current += ((kind == .additive) ? "-" : "/");
                    current += " ";
                    if numRev > 1 {
                        current += "(";
                    }
                 
                    selfNode({(_ value: ReadyValueNonGCD,
                               _ freverse: Bool) -> Void? in
                        guard freverse else {
                            return nil;
                        }
                                
                        if !isFirst {
                            current += " ";
                            current += kind.rawValue;
                            current += " ";
                        }
                        isFirst = false;
                        current += value.description();
                    
                        return nil;
                    });
                    if numRev > 1 {
                        current += ")";
                    }
                }
                current += ")";
                
                return current;
            };
            
            guard l.count > 0 else {
                if realizedValue == startGoal {
                    resultReceiver(description());
                }
                continue;
            }
            
            composedCount += 1;
            l.append(.init(value: realizedValue, op: kind, description: description));
            defer {
                l.remove(at: l.count - 1);
                composedCount -= 1;
            }
            
            await exploreAdditionOfRightNodeAsync(&l,
                                                  &composedCount,
                                                  &otherComposedCount,
                                                  dispatcher,
                                                  kind: kind,
                                                  skip: downstreamSiblingsRecruitmentSkip);
            await exploreAdditionOfFloorAsync(&l,
                                              &composedCount,
                                              &otherComposedCount,
                                              dispatcher,
                                              kind: ((kind == .additive) ? .multiplicative : .additive));
        }
    }
}
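The recruitment enumeration described in the comment above can be demonstrated in isolation. This sketch (my own names and types, independent of the solver) recruits the first remaining candidate, delegates to itself for the rest, then excludes that candidate and loops:

```swift
// Emits every non-empty subset of pool[start...], in the order described:
// recruit pool[start] first (1x...x), recurse, then exclude it (01...x) and loop.
func enumerateRecruitments(_ pool: [Int], _ current: [Int] = [], _ start: Int = 0,
                           _ emit: ([Int]) -> Void) {
    for candidate in start ..< pool.count {
        var recruited = current
        recruited.append(pool[candidate])
        emit(recruited)
        // Delegate: explore everything that still recruits pool[candidate].
        enumerateRecruitments(pool, recruited, candidate + 1, emit)
    }
    // The all-excluded combination 00...0 is never emitted, matching the text.
}
```

For a pool of N candidates this emits 2 to the Nth power minus 1 combinations, matching the "non-strict subset" count in the comment, minus the empty one.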
// This file has been generated automatically from AsyncExplorers.swift. Do not modify directly.
//
//  Created by Pierre Lebeaupin on 17/01/2021.
//  Copyright © 2021 Pierre Lebeaupin. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

import Foundation;
            
            
@Sendable func exploreAdditionOfFloorPseudoReasync(_ l: inout [ReadyValueNonGCD],
                                               _ composedCount: inout Int,
                                               _ otherComposedCount: inout Int,
                                               _ dispatcher: DispatcherPseudoReasync,
                                           kind: Op) {
    guard otherComposedCount == 0 else {
        return;
    }
    
    otherComposedCount = composedCount;
    composedCount = 0;
    
    exploreAdditionOfRightNodePseudoReasync(&l,
                                            &composedCount,
                                            &otherComposedCount,
                                            dispatcher,
                                            kind: kind,
                                            skip: 0);

    composedCount = otherComposedCount;
    otherComposedCount = 0;
}

@Sendable func exploreAdditionOfRightNodePseudoReasync(_ l: inout [ReadyValueNonGCD],
                                               _ composedCount: inout Int,
                                               _ otherComposedCount: inout Int,
                                               _ dispatcher: DispatcherPseudoReasync,
                                               kind: Op,
                                               skip: Int) {
    var referenceDecomposedValue = [false: kind.neutralValue(), true: kind.neutralValue()];
    // Siblings are the other right nodes on the same floor
    // Downstream ones are those created while this one is active
    var downstreamSiblingsRecruitmentSkip = Int(0); // placeholder
    
    dispatcher(&l,
               &composedCount,
               &otherComposedCount,
               &referenceDecomposedValue,
               &downstreamSiblingsRecruitmentSkip,
               kind,
               /* startingFrom: */ skip,
               { _ in return nil;});
}

/*
The point of iterate_possible_left_nodes() is to enumerate all the possible ways
to recruit candidates into the current operation, out of the pool of possible
candidates; that pool has been determined by explore_addition_of_right_node().

Obviously, the number of possibilities to pick a non-strict subset out of N
elements is 2 to the Nth power: each candidate in the pool can either be
recruited (1) or not (0) by this composition. Some of these possibilities are
invalid, of course, but we will explore the solution space that way.

By counting the possibilities from 00...0 to 11...1, we would in fact enumerate
all the possible recruitments for this composition. For practical reasons, we
will go backwards, from 11...1 to 00...0.

What we do in practice is evaluate the first candidate, first recruiting it,
which means we start with:
- 1x...x
then adjust the recruitment location, and delegate to ourselves. That way,
assuming this function performs its job for N-1, it will perform the job from
11...1 to 10...0; in other words, half the job for N. Then, we remove that
candidate from the composition, and inhibit it from any future recruitment for
this composition; then we start the loop over.
The result is that the next candidate will be evaluated and recruited first,
and recursive delegation will occur, resulting in these possibilities being
explored:
- 01...x
That is, from 011...1 to 010...0.
At the next loop iteration, the possibilities explored will start with 001, etc.
up until we realize we excluded all candidates; at which point the only
combination left is 00...0, which we don't need to explore at all. Unless, of
course, there were only one or two candidates in the pool, in which case that
test triggers even earlier.

There is additional accounting to avoid a downstream sibling (a composition on
the same floor) recovering candidates in a way that results in a composition
that we ourselves explored earlier, but that is the basic principle.
*/
@Sendable func iteratePossibleLeftNodesPseudoReasync(_ l: inout [ReadyValueNonGCD],
                                             _ composedCount: inout Int,
                                             _ otherComposedCount: inout Int,
                                             _ decomposedValue: inout [Bool:UInt32],
                                             _ downstreamSiblingsRecruitmentSkip: inout Int,
                                             _ resultReceiver: @escaping @Sendable (_ result: String) -> Void,
                                             _ startGoal: UInt32,
                                             _ kind: Op,
                                             _ startingFrom: Int,
                                             _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?,
                                             _ dispatcher: DispatcherPseudoReasync) {
    let emptyComposition = (walkOperands({_,_ in return ();}) == nil)

    /* Why that odd upper bound? If that conditional is true, emptyComposition is
    true, which means we are recruiting the senior contribution. If by doing so we'd
    end up skipping a composed contribution (the other part of the conditional means
    there is at least one left) as a result of the initial if, otherComposedCount
    would never reach 0 no matter how many siblings we'd add, and so we'd never be
    able to close the floor anyway. So it'd be pointless to keep going. */
    for candidateReverseOffset in
            startingFrom ..< ((emptyComposition && (otherComposedCount > 0)) ? startingFrom+1 : (l.count - composedCount) ) {
        if emptyComposition {
            downstreamSiblingsRecruitmentSkip = candidateReverseOffset;
        }

        /* Note: new compositions are added at the end of the array, i.e. at the high offsets,
         while primitives (if any left) will be at the low offsets. As a result, the candidates
         to be recruited as a first priority are found at the high offsets first, though not so
         high that we would end up recruiting from the same floor (forbidden). */
        let rightChild = l.remove(at: (l.count - composedCount - 1) - candidateReverseOffset);
        if let _ = rightChild.op {
            otherComposedCount -= 1;
        }
        defer {
            if let _ = rightChild.op {
                otherComposedCount += 1;
            }
            l.insert(rightChild, at: ((l.count+1) - composedCount - 1) - candidateReverseOffset);
        }
        
        for phase in 0...1 {
            let reverse = (phase == 1);
            { (_ valueComponent: inout UInt32) in
                valueComponent = kind.combine(valueComponent, rightChild.value);
            }(&decomposedValue[reverse]!);
            defer {
                { (_ valueComponent: inout UInt32) in
                    valueComponent = kind.uncombine(valueComponent, rightChild.value);
                }(&decomposedValue[reverse]!);
            }
            
            let selfNode = {@Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void? in
                return action(rightChild, reverse) ?? walkOperands(action);
            };
            
            dispatcher(&l,
                       &composedCount,
                       &otherComposedCount,
                       &decomposedValue,
                       &downstreamSiblingsRecruitmentSkip,
                       kind,
                       /* startingFrom: */ candidateReverseOffset,
                       selfNode);
            
            // close current composition
            var num = 0;
            guard (selfNode({_,_ in guard num == 0 else {return ();}; num += 1; return nil;}) != nil)
                    && ( (kind == .additive) ? decomposedValue[false]! > decomposedValue[true]! :
                            ((decomposedValue[false]! % decomposedValue[true]!) == 0) ) else {
                        continue;
                    }
            
            let realizedValue = kind.uncombine(decomposedValue[false]!, decomposedValue[true]!);
            let description = { @Sendable () -> String in
                var current = "(";
                var isFirst = true;
                var numRev = 0;

                selfNode({(_ value: ReadyValueNonGCD, _ freverse: Bool) -> Void? in
                    guard !freverse else {
                        numRev += 1;
                        return nil;
                    }
                    
                    if !isFirst {
                        current += " ";
                        current += kind.rawValue;
                        current += " ";
                    }
                    isFirst = false;
                    current += value.description();
                    
                    return nil; // keep going;
                });
                        
                if numRev > 0 {
                    isFirst = true;
                    current += " ";
                    current += ((kind == .additive) ? "-" : "/");
                    current += " ";
                    if numRev > 1 {
                        current += "(";
                    }
                 
                    selfNode({(_ value: ReadyValueNonGCD,
                               _ freverse: Bool) -> Void? in
                        guard freverse else {
                            return nil;
                        }
                                
                        if !isFirst {
                            current += " ";
                            current += kind.rawValue;
                            current += " ";
                        }
                        isFirst = false;
                        current += value.description();
                    
                        return nil;
                    });
                    if numRev > 1 {
                        current += ")";
                    }
                }
                current += ")";
                
                return current;
            };
            
            guard l.count > 0 else {
                if realizedValue == startGoal {
                    resultReceiver(description());
                }
                continue;
            }
            
            composedCount += 1;
            l.append(.init(value: realizedValue, op: kind, description: description));
            defer {
                l.remove(at: l.count - 1);
                composedCount -= 1;
            }
            
            exploreAdditionOfRightNodePseudoReasync(&l,
                                                    &composedCount,
                                                    &otherComposedCount,
                                                    dispatcher,
                                                    kind: kind,
                                                    skip: downstreamSiblingsRecruitmentSkip);
            exploreAdditionOfFloorPseudoReasync(&l,
                                                &composedCount,
                                                &otherComposedCount,
                                                dispatcher,
                                                kind: ((kind == .additive) ? .multiplicative : .additive));
        }
    }
}
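The pseudo-reasync twin above was generated mechanically from the async original; that is the sed script mentioned at the top. The post does not show the script itself, but a transformation of roughly this shape suffices (a hypothetical reconstruction of its core substitutions):

```shell
# Hypothetical reconstruction of the generator's core substitutions
# (the post's actual sed script is not shown). Applied to the async
# source as: sed -e ... AsyncExplorers.swift > PseudoReasyncExplorers.swift
# Demonstrated here on a sample line:
printf '    await exploreAdditionOfFloorAsync(&l) async {\n' \
  | sed -e 's/Async/PseudoReasync/g' \
        -e 's/await //g' \
        -e 's/) async {/) {/g'
# → "    exploreAdditionOfFloorPseudoReasync(&l) {"
```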
//
//  Created by Pierre Lebeaupin on 17/01/2021.
//  Copyright © 2021 Pierre Lebeaupin. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

import Foundation;

@available(macOS 12.0.0, iOS 15.0, *)
struct ReadyValueNonGCD:Sendable {
    let value: UInt32;
    let op: Op?;
    let description: @Sendable () -> String;
    // Note that as a closure, it can depend on more than the stored properties, contrary to a
    // computed property.

    init(_ inValue: UInt32) {
        value = inValue;
        op = nil;
        description = { return inValue.description;};
    }
    
    init(value inValue: UInt32, op inOp: Op, description indescription: @escaping @Sendable () -> String) {
        value = inValue;
        op = inOp;
        description = indescription;
    }
}

@available(macOS 12.0.0, iOS 15.0, *)
func resolveCoreNonGCD(_ injectedDispatcher: DispatcherAsync, _ primitives: [UInt32]) async {
    var referenceL = [ReadyValueNonGCD]();
    var referenceComposedCount = 0;
    var referenceOtherComposedCount = 0;
            
            
    for element in primitives {
        referenceL.append(.init(element));
    }
        
    await exploreAdditionOfFloorAsync(&referenceL,
                                     &referenceComposedCount,
                                     &referenceOtherComposedCount,
                                     injectedDispatcher,
                                     kind: .additive);
    await exploreAdditionOfFloorAsync(&referenceL,
                                     &referenceComposedCount,
                                     &referenceOtherComposedCount,
                                     injectedDispatcher,
                                     kind: .multiplicative);
}
//
//  Created by Pierre Lebeaupin on 16/07/2021.
//  Copyright © 2021 Pierre Lebeaupin. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

import Foundation;
import os;

typealias DispatcherAsync = (_ l: inout [ReadyValueNonGCD],
                        _ composedCount: inout Int,
                        _ otherComposedCount: inout Int,
                        _ decomposedValue: inout [Bool:UInt32],
                        _ downstreamSiblingsRecruitmentSkip: inout Int,
                        _ kind: Op,
                        _ startingFrom: Int,
                        _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?) async -> Void;

typealias DispatcherPseudoReasync = (_ l: inout [ReadyValueNonGCD],
                        _ composedCount: inout Int,
                        _ otherComposedCount: inout Int,
                        _ decomposedValue: inout [Bool:UInt32],
                        _ downstreamSiblingsRecruitmentSkip: inout Int,
                        _ kind: Op,
                        _ startingFrom: Int,
                        _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?) -> Void;

// Despite https://github.com/apple/swift-evolution/blob/main/proposals/0309-unlock-existential-types-for-all-protocols.md
// I am unable to do the following, even with Xcode 13.2.1: I get the error that:
// "Protocol 'AsyncStringIteratorProtocol' can only be used as a generic constraint because it has Self or associated type requirements"
// Update: with Xcode 14.2 I get "No type for 'Self.AsyncIterator' can satisfy both 'Self.AsyncIterator == any AsyncStringIteratorProtocol' and 'Self.AsyncIterator : AsyncIteratorProtocol'"
#if false
protocol AsyncStringIteratorProtocol: AsyncIteratorProtocol where Element == String {
}

protocol AsyncStringSequence: AsyncSequence where Element == String, AsyncIterator == any AsyncStringIteratorProtocol {
}
#endif

struct AnyAsyncIterator<A>: AsyncIteratorProtocol {
    typealias Element = A;
    mutating func next() async throws -> A? {
        return await inner.next();
    }

    fileprivate var inner: AsyncStream<A>.AsyncIterator;
}

struct AnyAsyncSequence<A>: AsyncSequence {
    typealias Element = A;
    typealias AsyncIterator = AnyAsyncIterator<A>;
    __consuming func makeAsyncIterator() -> AnyAsyncIterator<A> {
        return .init(inner: inner.makeAsyncIterator());
    }
    
    fileprivate var inner: AsyncStream<A>;
}
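As the comment and the `#if false` block above note, an existential `any AsyncSequence` with a pinned `Element` is not expressible, so these wrappers erase to the concrete `AsyncStream` instead. The same pattern, reduced to a self-contained demo (the demo names are mine; it assumes top-level `await`, i.e. Swift 5.7+):

```swift
// Self-contained demo of erasing an async sequence to AsyncStream.
struct DemoAnySequence<A>: AsyncSequence {
    typealias Element = A
    let inner: AsyncStream<A>
    func makeAsyncIterator() -> AsyncStream<A>.AsyncIterator {
        return inner.makeAsyncIterator()
    }
}

let demo = DemoAnySequence(inner: AsyncStream(Int.self, { continuation in
    for i in 1...3 { continuation.yield(i) }
    continuation.finish()
}))

// Consumers only ever see DemoAnySequence<Int>, not the producer's type.
var sum = 0
for await i in demo { sum += i }
```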

@available(macOS 12.0, iOS 15.0, *) struct Channel: BatchIteratorProtocol {
    enum PayloadOrOutOfBand<Payload, OutOfBand> {
        case payload(Payload)
        case outOfBand(OutOfBand)
    }
    
    struct Sink {
        var limiter: AsyncStream<Void>.Iterator;
        var sinkHole: AsyncStream<PayloadOrOutOfBand<Task<Void, Never>, AsyncStream<Void>.Continuation>>.Continuation;
        
        mutating func yield(microbatch: Task<Void, Never>) async {
            sinkHole.yield(.payload(microbatch));
            _ = await limiter.next();
        }
    }
    
    var source: AsyncStream<PayloadOrOutOfBand<Task<Void, Never>, AsyncStream<Void>.Continuation>>.Iterator;
    var limiterContinuation: AsyncStream<Void>.Continuation;

    init(exec: @Sendable @escaping (_ sink: inout Sink) async -> Void) async {
        var tempsource = AsyncStream(PayloadOrOutOfBand<Task<Void, Never>, AsyncStream<Void>.Continuation>.self,
                                     bufferingPolicy: .bufferingNewest(1),
                                     { (firstContinuation) in
            /*let localRoot =*/ Task {
                var param = Sink(limiter: AsyncStream(Void.self,
                                                      bufferingPolicy: .bufferingNewest(1),
                                                      {
                    firstContinuation.yield(.outOfBand($0));
                }).makeAsyncIterator(),
                                 sinkHole: firstContinuation);
            
                _ = await param.limiter.next();
                
                await exec(&param);
                
                param.sinkHole.finish();
            }
            
            //firstContinuation.yield(.outOfBand(.outOfBand(localRoot)));
        }).makeAsyncIterator();
        
        var tempLimiterContinuation: AsyncStream<Void>.Continuation?;
        //var tempRootTask: Task<Void, Never>?;
        //while (tempLimiterContinuation == nil) || (tempRootTask == nil) {
        // The two types of out of band data may come in any order…
        guard case let .outOfBand(drop) = await tempsource.next() else {
            fatalError();
        }
        
        tempLimiterContinuation = drop;
            /*switch drop {
            case let .payload(inContinuation):
                guard tempLimiterContinuation == nil else {
                    fatalError();
                }
                tempLimiterContinuation = inContinuation;
            case let .outOfBand(inTask):
                guard tempRootTask == nil else {
                    fatalError();
                }
                tempRootTask = inTask;
            }*/
        //}
        
        //rootTask = tempRootTask!
        limiterContinuation = tempLimiterContinuation!;
        source = tempsource;
    }
    
    mutating func provide() async -> Task<Void, Never>? {
        limiterContinuation.yield(());
        guard let rawDrop = await source.next() else {
            return nil;
        }

        guard case let .payload(drop) = rawDrop else {
            fatalError();
        }

        return drop;
    }
    
    /*mutating func invalidate() {
        rootTask.cancel();
    }*/
}


@available(macOS 12.0, iOS 15.0, *) func resolveAsync(_ startGoal: UInt32, _ primitives: UInt32...) async -> AnyAsyncSequence<String> {
    return .init(inner: .init(String.self) { resultContinuation in
        let _ = Computer(exec: { () -> Channel in
            return await Channel(exec: { (sink) in
                let resultReceiver = { @Sendable (result: String) -> Void in resultContinuation.yield(result);};

                @Sendable func iteratePossibleLeftNodesFakeDispatchAsync(_ l: inout [ReadyValueNonGCD],
                                                                         _ composedCount: inout Int,
                                                                         _ otherComposedCount: inout Int,
                                                                         _ decomposedValue: inout [Bool:UInt32],
                                                                         _ downstreamSiblingsRecruitmentSkip: inout Int,
                                                                         _ kind: Op,
                                                                         _ startingFrom: Int,
                                                                         _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?) async {
                    await iteratePossibleLeftNodesAsync(&l,
                                                        &composedCount,
                                                        &otherComposedCount,
                                                        &decomposedValue,
                                                        &downstreamSiblingsRecruitmentSkip,
                                                        resultReceiver,
                                                        startGoal,
                                                        kind,
                                                        startingFrom,
                                                        walkOperands,
                                                        iteratePossibleLeftNodesFakeDispatchAsync);
                }
        
                @Sendable func iteratePossibleLeftNodesFakeDispatchPseudoReasync(_ l: inout [ReadyValueNonGCD],
                                                                                 _ composedCount: inout Int,
                                                                                 _ otherComposedCount: inout Int,
                                                                                 _ decomposedValue: inout [Bool:UInt32],
                                                                                 _ downstreamSiblingsRecruitmentSkip: inout Int,
                                                                                 _ kind: Op,
                                                                                 _ startingFrom: Int,
                                                                                 _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?) {
                    iteratePossibleLeftNodesPseudoReasync(&l,
                                                          &composedCount,
                                                          &otherComposedCount,
                                                          &decomposedValue,
                                                          &downstreamSiblingsRecruitmentSkip,
                                                          resultReceiver,
                                                          startGoal,
                                                          kind,
                                                          startingFrom,
                                                          walkOperands,
                                                          iteratePossibleLeftNodesFakeDispatchPseudoReasync);
                }

                /* We need to maintain our own task group so that we may properly signal to resultContinuation
                 when no more result is to be expected. */
                await withTaskGroup(of: Void.self) { group in
                    func iteratePossibleLeftNodesDispatch(_ l: inout [ReadyValueNonGCD],
                                                          _ composedCount: inout Int,
                                                          _ otherComposedCount: inout Int,
                                                          _ decomposedValue: inout [Bool:UInt32],
                                                          _ downstreamSiblingsRecruitmentSkip: inout Int,
                                                          _ kind: Op,
                                                          _ startingFrom: Int,
                                                          _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?) async {
                        // reseat this divergence over a copy of the whole state
                        let paramL = l;
                        let paramComposedCount = composedCount;
                        let paramOtherComposedCount = otherComposedCount;
                        let paramDecomposedValue = decomposedValue;
                        let paramDownstreamSiblingsRecruitmentSkip = downstreamSiblingsRecruitmentSkip;
            
                        let task = Task {
                            /* Note that it is safe to key on the current thread, for the reason that
                             there is no suspension point between .begin and .end: this implies that this
                             is a single partial function which therefore is still on the same thread
                             from beginning to end, and which cannot be interspersed with other such
                             partial functions on the same thread either. */
                            os_signpost(.begin, log: LogWrapper.gObservationHandle, name: "micro-batch", signpostID: OSSignpostID(log: LogWrapper.gObservationHandle, object: Thread.current));
                            
                            var childL = paramL;
                            var childComposedCount = paramComposedCount;
                            var childOtherComposedCount = paramOtherComposedCount;
                            var childDecomposedValue = paramDecomposedValue;
                            var childDownstreamSiblingsRecruitmentSkip = paramDownstreamSiblingsRecruitmentSkip;

                            iteratePossibleLeftNodesPseudoReasync(&childL,
                                                                  &childComposedCount,
                                                                  &childOtherComposedCount,
                                                                  &childDecomposedValue,
                                                                  &childDownstreamSiblingsRecruitmentSkip,
                                                                  resultReceiver,
                                                                  startGoal,
                                                                  kind,
                                                                  startingFrom,
                                                                  walkOperands,
                                                                  iteratePossibleLeftNodesFakeDispatchPseudoReasync);
                            
                            os_signpost(.end, log: LogWrapper.gObservationHandle, name: "micro-batch", signpostID: OSSignpostID(log: LogWrapper.gObservationHandle, object: Thread.current));
                        }
                        
                        group.addTask {
                            return await withTaskCancellationHandler(operation: {return await task.value;},
                                                                     onCancel: {task.cancel(); return;});
                        }
                        
                        await sink.yield(microbatch: task);
                    }
                            
                    func iteratePossibleLeftNodesDispatchLoop(_ l: inout [ReadyValueNonGCD],
                                                               _ composedCount: inout Int,
                                                               _ otherComposedCount: inout Int,
                                                               _ decomposedValue: inout [Bool:UInt32],
                                                               _ downstreamSiblingsRecruitmentSkip: inout Int,
                                                               _ kind: Op,
                                                               _ startingFrom: Int,
                                                               _ walkOperands: @escaping @Sendable (_ action: (_ value: ReadyValueNonGCD, _ reverse: Bool) -> Void?) -> Void?) async {
                        /* 1: Peek a bit: if we find out from startingFrom that no further left node is
                         going to be able to be added anyway (and remember: closing the composition as
                         it stands is performed in our caller), don't bother spawning a micro-batch.
                         2: Use a workload estimator heuristic to try and spawn Goldilocks-sized
                         micro-batches:
                         - 6 may be too many computations to fit in a tick (1/60th of a second)
                         - 4 already means too few possibilities to explore
                         - 3 is right out
                         Note that among other properties, this estimator value is monotonic. */
                        guard startingFrom < (l.count - composedCount),
                              l.count + (walkOperands({_,_ in return ();}) != nil ? 1 : 0) == 5 else {
                            await iteratePossibleLeftNodesAsync(&l,
                                                                &composedCount,
                                                                &otherComposedCount,
                                                                &decomposedValue,
                                                                &downstreamSiblingsRecruitmentSkip,
                                                                resultReceiver,
                                                                startGoal,
                                                                kind,
                                                                startingFrom,
                                                                walkOperands,
                                                                iteratePossibleLeftNodesDispatchLoop);
                            return;
                        }
                        
                        await iteratePossibleLeftNodesDispatch(&l,
                                                               &composedCount,
                                                               &otherComposedCount,
                                                               &decomposedValue,
                                                               &downstreamSiblingsRecruitmentSkip,
                                                               kind,
                                                               startingFrom,
                                                               walkOperands);
                    }
        
                    await resolveCoreNonGCD(iteratePossibleLeftNodesDispatchLoop, primitives);

                    // outstanding tasks in the group are awaited at that point, according to the spec
                    // https://github.com/DougGregor/swift-evolution/blob/structured-concurrency/proposals/nnnn-structured-concurrency.md
                };
                resultContinuation.finish();
            });
        });
    });
}
//
//  Created by Pierre Lebeaupin on 13/03/2022.
//  Copyright © 2022 Pierre Lebeaupin. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

import Foundation


@available(macOS 12.0, iOS 15.0, *) protocol BatchIteratorProtocol {
    mutating func provide() async -> Task<Void, Never>?;
}

@available(macOS 12.0, iOS 15.0, *) struct Computer<BatchIterator: BatchIteratorProtocol> {
    init(exec: @Sendable @escaping () async -> BatchIterator) {
        Task {
            await withTaskGroup(of: Void.self) { group in
                var outstandingTasks = UInt(0);
                var iterator = await exec();

                while (true) {
                    guard outstandingTasks < ProcessInfo.processInfo.activeProcessorCount else {
                        _ = await group.next();
                        outstandingTasks -= 1;
                        continue;
                    }
                    
                    guard let newBatch = await iterator.provide() else {
                        break;
                        /* No need to manage a wind down: outstanding tasks in group are awaited
                         before withTaskGroup() is allowed to return. */
                    }
                    group.addTask {
                        return await withTaskCancellationHandler(operation: {return await newBatch.value;},
                                                                 onCancel: {newBatch.cancel(); return;});
                    }
                    outstandingTasks += 1;
                }
        
                /* Note that as a result, the code will adapt in case activeProcessorCount changes.
                 At least, it will do so eventually, not with any sort of timeliness: for example,
                 if activeProcessorCount increases somehow, we will have to wait for one task to
                 complete before the code will work towards making outstandingTasks equal the new
                 value. */
            };
        }
    }
}
//
//  Created by Pierre Lebeaupin on 16/07/2021.
//  Copyright © 2021 Pierre Lebeaupin. All rights reserved.
//
// (BSD 3-clause license text identical to the first file header above.)
//

import Foundation

enum Op: String {
    case additive = "+", multiplicative = "*";
    
    func neutralValue() -> UInt32 {
        return ((self == .additive) ? 0 : 1);
    }
    
    func combine(_ a: UInt32, _ b: UInt32) -> UInt32 {
        return ((self == .additive) ? (a + b) : (a * b));
    }

    func uncombine(_ a: UInt32, _ b: UInt32) -> UInt32 {
        return ((self == .additive) ? (a - b) : (a / b));
    }
}
//
//  Created by Pierre Lebeaupin on 17/01/2021.
//  Copyright © 2021 Pierre Lebeaupin. All rights reserved.
//
// (BSD 3-clause license text identical to the first file header above.)
//

import Foundation
import os

class LogWrapper {
    static let gObservationHandle = (((getenv("net_wanderingcoder_observed")?[0] ?? 0) == 0) ? .disabled
                                        : OSLog(subsystem: "net.wanderingcoder.projects.AsyncCountdown.AlternateParallelized",
                                                category: "PointsOfInterest"));
}

os_log(.`default`, log: LogWrapper.gObservationHandle,
       "Force resolution of log object while we still have one thread");
/* Can't even solve that with pthread_once or dispatch_once:
 they are forbidden from Swift, which redirects me to lazy
 global properties, which are not thread-safe. */


let fallback = {
    let completion = DispatchGroup();

    completion.enter();

    let pseudoMainQueue = DispatchQueue(label: "pseudo main queue");
    
    let proc = ResolveObject(6372,
                             pseudoMainQueue, {print($0);}, { completion.leave();}, 10, 5, 8, 3, 25, 1, 42);

    //let proc = ResolveObject(4471,
    //                         pseudoMainQueue, {print($0);}, { completion.leave();}, 75, 10, 2, 6, 7, 1, 1);

    //let proc = ResolveObject(952,
    //                         pseudoMainQueue, {print($0);}, { completion.leave();}, 25, 50, 75, 100, 3, 6);

    proc.start();

    completion.wait();
};


#if true
if #available(macOS 12.0, *) {
    let handle = Task.detached {
        let seq = await resolveAsync(4471, 75, 10, 2, 6, 7, 1, 1);
        
        for try await result in seq {
            print(result);
        }
    }
    
    try! await handle.value;
} else {
    fallback();
}
#else
fallback();
#endif

The principle is to take advantage of the need to explore the first alternative of a bifurcation as an opportunity to send a trainee to do so instead of doing it ourselves; rather, we will directly consider the other alternative of the bifurcation as if the first was forbidden to us. When we run out of trainees, we suspend until at least one of them comes back. A trainee will come back once he has explored his assigned alternative: when he returns to his original bifurcation, instead of continuing to the other alternative he will come back directly to us, lacking the context to explore the whole tree.
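
The principle can be sketched in miniature (hypothetical names, none of the solver's machinery): at each bifurcation the first alternative is handed to a child task while we continue with the second alternative ourselves, joining with the delegated result only afterwards.

```swift
import Foundation

// Hypothetical illustration of the trainee principle: count the leaves of a
// binary tree of the given depth. The first alternative of each bifurcation
// is delegated to a child task (the "trainee"); we explore the second
// alternative ourselves, and only join with the trainee at the end.
func countLeaves(depth: Int) async -> Int {
    guard depth > 0 else { return 1 }
    async let delegated = countLeaves(depth: depth - 1) // first alternative: trainee
    let explored = await countLeaves(depth: depth - 1)  // second alternative: ourselves
    let trained = await delegated
    return trained + explored
}
```

What this sketch lacks, and what the Channel/Computer machinery below provides, is the limiter: suspending when all trainees are already out in the field.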

To keep complexity to a minimum, we will carefully choose the kind of bifurcation we want to support, and only consider sending a trainee when on this specific kind. Since we have three explorer functions in practice, might as well choose one of these three.

Unfortunately, while the solution of bifurcating upon considering the addition of a new floor is attractive, it would not be efficient. Why? Because bifurcations of this kind are few and, in particular, can be unpredictably far between. Imagine we get to one such bifurcation, and we estimate that a huge part of the solution space remains to be explored behind it, such that we hesitate to send a trainee alone in there: he might take so long that everyone else ends up waiting for him, such that a significant time could be spent with concurrency reduced to a single task. At the same time, we also hesitate to explore it ourselves: while some further bifurcations of the same type (appending yet another floor) are close, some are very far, such as cases where we build complex operands first, and some parts we couldn’t delegate at all, such as when we recruit all operands in a single operation on the floor we’ve chosen to explore (remember that each operand can contribute in reverse or not…). In fact, we do not want to delegate explorations that are too small either, as then the overhead of sending a trainee would dwarf his contribution (even with efficient task spawning, some overhead remains, such as the duplication of our own state).

Instead, we only consider spawning a new task whenever we would call iteratePossibleLeftNodes(), as we will never regret not doing so: if we choose to explore it ourselves it will soon enough delegate to iteratePossibleLeftNodes() again anyway, directly or indirectly.

But remember that iteratePossibleLeftNodes() captures mutable local variables from exploreAdditionOfRightNode(), and in fact from resolve() itself. Will we be able to make that function sendable while it keeps doing so? Not directly. Instead, we unnest the functions and pass these variables as inout (borrowed) parameters, sub-borrowing them further as the recursive calls require. When we actually spawn a new task, we copy these parameters, then make the function inside the subtask borrow the copy.
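
The shape of that transformation can be sketched on a made-up toy state (none of these names exist in the solver): the formerly captured locals become an explicit inout parameter, and the spawn point snapshots them so the child task can borrow its own copy.

```swift
import Foundation

// Hypothetical toy version of the unnesting transformation.
struct SearchState {
    var partial: [UInt32];
    var composedCount: Int;
}

// Before unnesting, this body would have captured `state` from an enclosing
// function, which prevents making it sendable. After unnesting, the state is
// borrowed explicitly instead.
func step(_ state: inout SearchState, _ value: UInt32) {
    state.partial.append(value);
    state.composedCount += 1;
}

// At the spawn point, the borrowed state is copied; the subtask then borrows
// its own copy, leaving ours free to keep mutating.
func spawnPoint(_ state: inout SearchState) -> Task<Void, Never> {
    let snapshot = state;
    return Task {
        var childState = snapshot;
        step(&childState, 42);
    }
}
```

This is only the skeleton of the idea; the actual explorers carry five such pieces of state and thread them through every recursive call.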

This makes the code less straightforward, in particular resulting in a number of inout parameters being carried around. Fortunately, some of the data is immutable and does not need to be copied; this is the case, for instance, for the linked list of contributions of each composition (including the currently open one).

Even then, the size of the alternative to explore is hard to predict (it can be computed, but doing so is rather memory-intensive). Instead, to decide whether to spawn a subtask or not, we compute an estimator that we use as a heuristic; it works rather well, resulting in subtasks that run (on target hardware) for about 3ms on average, with the 95th percentile duration being less than 9ms and the 99th percentile about 11ms. That means we can launch subtasks without bothering to check for cancellation inside them, as they will always finish soon enough anyway.

Since we ourselves need to suspend when we have reached our limit of subtasks in flight (besides the cancellation considerations, remember that each subtask carries its own state: the copy of the state we provided it with, plus whatever it has started allocating itself, so having too many of them in flight could be a memory issue), our code needs to be async. On the other hand, the subtasks don’t need to suspend, so they can just as well run sync code. This is the point of the PseudoReasync functions: to be sync while otherwise using the same types (with the same concurrency annotations, in particular) as the async code, so that the two integrate seamlessly, for instance when it comes time to generate the description of a solution tree.
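
What this looks like on a made-up pair of functions (hypothetical names; the real explorers are far larger): the async version and its PseudoReasync twin differ only by the async/await keywords and the name suffix, which is exactly the transformation the sed script performs.

```swift
// Hypothetical async explorer and a helper it awaits.
func formatAsync(_ v: UInt32) async -> String {
    return String(v);
}

func describeAsync(_ values: [UInt32]) async -> String {
    var parts: [String] = [];
    for v in values {
        parts.append(await formatAsync(v));
    }
    return parts.joined(separator: " + ");
}

// Derived mechanically by stripping " async" and "await ", and renaming
// Async to PseudoReasync; everything else, types included, stays identical.
func formatPseudoReasync(_ v: UInt32) -> String {
    return String(v);
}

func describePseudoReasync(_ values: [UInt32]) -> String {
    var parts: [String] = [];
    for v in values {
        parts.append(formatPseudoReasync(v));
    }
    return parts.joined(separator: " + ");
}
```

Note that this only works because every call the async version awaits has a sync twin under the renamed identifier; awaiting anything async-only would leave the derived copy uncompilable.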

And the only sane way to maintain the PseudoReasync functions is to put the async explorer functions in their own file and run a sed script that duplicates this code into a new file with the async and await keywords removed (and the function names changed):

sed -E -e 's/ async//g' -e 's/await//gi' -e 's/Async/PseudoReasync/g' < AsyncExplorers.swift > PseudoReasyncExplorers.swift

This is the only sane way (short of having reasync) because these functions need to be kept in sync (no pun intended) in every other way: they must manipulate the same types in the same fashion and maintain the same invariants and cooperate in doing so.

Hence, the need for reasync. It is not just necessary, it is unavoidable, if you don’t want this kind of workaround to proliferate.

Now it’s time to address some likely questions:

  • reasync, like rethrows, only makes sense for higher-order functions, not for obtaining a sync copy of an arbitrary async function, so how is reasync relevant here?

Except that you can always turn an ordinary function into a higher-order one, in the worst case by having it take as parameter a function to invoke at some point, to which you pass a dummy function. In fact, this is already the case for my explorers, and I did not even need to involve a dummy function: I have a rule that whenever I am tempted to name a function according to what is expected of it, rather than according to what it performs, I deduce that this function needs to be injected instead, either by itself or as a member function within a protocol. As a result, the dispatch function that decides whether to spawn a new subtask or not is injected, and is of course async; meanwhile, inside the subtask I inject a dummy sync dispatch function (one that unconditionally delegates), since I am not going to spawn subsubtasks anyway.
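
A minimal sketch of that setup (hypothetical names, not the project's): the recursive step is injected rather than called by name, so the same explorer body can be driven either by a real dispatcher or by a dummy one that unconditionally delegates back.

```swift
// Hypothetical explorer whose recursion is injected: `dispatch` decides how
// the next level gets explored.
func explore(_ depth: Int, _ dispatch: (Int) -> Int) -> Int {
    guard depth > 0 else { return 1 }
    // Each bifurcation hands both alternatives to the injected dispatcher,
    // which may or may not choose to spawn a subtask for them.
    return dispatch(depth - 1) + dispatch(depth - 1);
}

// Dummy dispatcher: unconditionally delegates back to the explorer, which is
// what the sync subtasks do, since they never spawn subsubtasks.
func dummyDispatch(_ depth: Int) -> Int {
    return explore(depth, dummyDispatch);
}
```

With a reasync `explore`, a single definition could accept both an async dispatcher (in the main task) and this sync dummy (in the subtasks).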

Regardless, there are many other reasons for why such functions ought to be higher order functions with an injected dependency (testability could be one of these), so reasync does fulfil the need with only modest requirements on the part of functions for which we need a sync and an async representation.

  • Couldn’t you just run the async function inside the subtask, rather than involve a second representation of the same code, with the only change being the exclusive use of the architectural stack?

I could… if I were willing to take a 15% performance hit: that is the improvement I measured when going from async to sync for the subtasks. Your mileage may vary, but I wouldn’t be surprised to see a comparable difference between sync and async in other scenarios.

  • If you mind the loss of performance in the subtasks, why don’t you mind such a loss in the main task?

Because the main task, regardless of whether it is sync or async, takes so little execution time out of the total that its consumption is best considered as being epsilon anyway. Seriously, ever since [SR-15785] TaskGroup addTask is blocking with poor performance · Issue #58062 · apple/swift · GitHub was fixed, I have failed to see a single call stack captured in the middle of the main task when running under the Time Profiler instrument. Not a single one. I only ever catch subtasks being on a CPU.

  • OK, but do you need the main task to be async?

I do, in order to use Swift-native concurrency. I have another sync copy of the code that uses libdispatch, whose main difference is that it does not make use of concurrency annotations; instead I mark that code as preconcurrency. But this code is obviously higher maintenance (what with the completion groups and all that jazz), and less featured: e.g. it uses a semaphore whose token count is provided at main task launch, so there is no avenue for the number of subtasks to adapt to any dynamic variation in activeProcessorCount (extremely unlikely on Apple platforms, but not so unlikely elsewhere).

  • Such a text transformation step will be surprising to the unwary coder, not to mention being unportable. Should you be preprocessing source code in this way?

While unseemly, this is preferable to syncing the code by hand, by far. Besides, Windows hosts can use Cygwin, MinGW, or any other distribution that includes sed for Windows. So I can live with it in the meantime. If you’re really bothered by this hack, then keep in mind that it is gone as soon as reasync is available.

  • Couldn't the code of that study be less complex?

Complexity is very much necessary to the point of this study: if the code were significantly less complex, it would be much less trouble to keep a sync and an async version in sync, and reasync would not be necessary in that case. Here, it very much is.

6 Likes

I have created a new issue on GitHub to formalise the request, though the big reason I did so was to provide the project as an attachment.

1 Like

I hope you gain the support needed to make this a reality. Your point is well made. And this feature is very much needed. ASAP.

In case that wasn't enough food for discussion, I thought I should mention a few additional lessons from that project:

  • As you can see in this call:

    await dispatcher(&l,
                     &composedCount,
                     &otherComposedCount,
                     &decomposedValue,
                     &downstreamSiblingsRecruitmentSkip,
                     kind,
                     /* startingFrom: */ candidateReverseOffset,
                     selfNode);
    

    one of the issues with injecting functions instead of calling them directly by name is that you lose the benefits of named parameters. Now, I can understand losing some of these benefits, such as default parameter values. And I have to admit even just having the parameter-name sigil raises some issues: are the parameter names part of the function type? Part of the signature? Both? Neither? But I definitely lost some legibility going from my functions simply calling each other to having to involve an injected function so as to handle subtask spawning.

  • The dependency injection does not go far enough for me. To me, a function I inject, such as iteratePossibleLeftNodesDispatchLoop(), should not even need to refer to iteratePossibleLeftNodesAsync or iteratePossibleLeftNodesPseudoReasync by name, but should instead receive them as parameters. But the Swift type system won't allow that:

    • if I try to directly add such a function type as a parameter to the Dispatcher* typealias, I end up with a type that refers to itself, and that is disallowed in a typealias, contrary to, say, a class, or an enum marked indirect.
    • if I instead wrap the function type in a struct, because a struct definition is allowed to refer to itself in such a way, the issue then is that I force the function to be escaping. This is not just a performance consideration: if my function captures a structured-concurrency variable that is not allowed to escape, as is precisely the case here, it can't be made escaping, even when the need is only to satisfy a recursion in the type system.

    As a result, the dispatcher functions have to know more about iteratePossibleLeftNodesAsync than I am comfortable with… though I will also admit they need access to many of these parameters for workload-estimation purposes anyway.
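To illustrate the first point with a toy example (names hypothetical): argument labels belong to a function declaration, not to its type, so they vanish as soon as the function is passed around as a value:

```swift
// A toy stand-in for one of the explorer functions.
func explore(startingFrom offset: Int, depth: Int) -> Int {
    return offset + depth
}

// The type of `explore` is (Int, Int) -> Int: labels are not part of the
// type, so an injected copy cannot be called with them.
let injected: (Int, Int) -> Int = explore

let direct = explore(startingFrom: 3, depth: 4)  // labels document the call
let viaInjection = injected(3, 4)                // labels gone; the best we
                                                 // can do: /* startingFrom: */ 3
```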
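And a sketch of the second point, the type-system recursion, again with hypothetical names: the typealias form is rejected outright, while the struct form compiles but makes the stored function implicitly escaping:

```swift
// Rejected by the compiler:
//   typealias Dispatcher = (Dispatcher, Int) -> Int
//   // error: type alias 'Dispatcher' references itself
// A nominal type may refer to itself, so wrapping the function in a struct
// breaks the cycle…
struct Dispatcher {
    // …but a stored closure property is implicitly escaping, which rules out
    // capturing structured-concurrency state that must not escape.
    let run: (Dispatcher, Int) -> Int
}

// The wrapper does enable the recursion we were after:
let countdown = Dispatcher { dispatcher, n in
    n <= 0 ? 0 : n + dispatcher.run(dispatcher, n - 1)
}
```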

1 Like

Part of the study will be to see how we can make the code evolve (language features enabling the creation of more maintainable and mergeable code are important, after all), but in the meantime, I have one more likely question to address:

  • Is this really the target use case for async Swift in the first place? Your code doesn't even involve device I/O, directly or indirectly.

If this were something like a cellular automaton algorithm such as Conway's, which can be bitsliced and vectorized with relative ease, I would agree with you: a GPU compute API such as OpenCL or Metal compute, or even a vector-based API (think HPC with 16384-element vectors), would be better suited than async Swift.

But when I developed this Countdown algorithm, I realized it was an ideal showcase for thread-pool-based APIs such as Grand Central Dispatch: the first GCD prototype came together with relative ease (the first attempt passed all but one unit test, which was easily fixed), and the performance improvements exceeded my expectations. By contrast, I triple dog dare anyone to regularize this code well enough to make it run remotely well on GPU: you'd have to remove recursion, for one, since recursion is either unsupported or supported only to a limited depth by all the GPU compute APIs I know of, and then you'd have to worry about all those branches and other data-structure accesses that would result in late and/or not-fully-filled warps, etc.; making this algorithm work with a vector-based API is even harder to imagine.

And then the initial async Swift proposals dropped, and I ported over my code. But while with GCD I could use the same code for the main task and the subtasks, here I need to involve separate processing… unless I have reasync, or unless I take a 15% performance hit compared to the GCD version. And it is clear async Swift ought to be able to subsume all GCD use cases without significant regressions: what is the point of async Swift if it isn't? So yes, this is squarely in at least one of the target use cases for async Swift: subsuming GCD.

1 Like

Are there really doubts out there? We've had to define a ton of overloads in our projects to support async and non-async contexts. Our Parsing package already leverages @rethrows to propagate a parser's ability to throw, and we'd love to provide similar @reasync functionality.

12 Likes

Agreeeeee

Why don't you ask that question to the search function? Remember, no pileup on the person who's in all likelihood only the messenger anyway.

I’m sorry if there’s been a misunderstanding, but I don’t see a pile on and am not engaging in one. I agree with your original post and was simply asking for context that other posts typically provide. The only other reasync post that comes up for me in search is this brief one, where people seem to be unanimously in favor.

Oh, right, sorting the search results by latest can help (I always forget it's not the default).

I honestly can't find anything, though I'm probably not wielding the search tool very well :slightly_smiling_face: If you can provide links/context I think it'd be helpful. I'm curious what kind of pushback there would be on reasync, since it seems pretty useful.

2 Likes

Over the weekend, I have implemented support for interrupting the processing. But we're going to see a failed attempt at doing so first, and look at the successful one tomorrow.

The principle of the first attempt was to throw across the main task so as to terminate it, and to do so without having to wait for a subtask to complete first. (Here we are fortunate enough that subtasks complete within 1/60th of a second, but not all processing will be as amenable to micro-batching, so let's assume longer-lived subtasks, easily arranged by changing the estimator threshold at which we decide to spawn a subtask.) So I naively replaced this:

await sink.yield(microbatch: task);

by this:

raiseMultiplexingGroup.addTask { await sink.yield(microbatch: task); }
try await raiseMultiplexingGroup.next();

with raiseMultiplexingGroup having been seeded with a task that will throw upon interruption.

But the compiler wouldn't let me, complaining that the function's mutable capture of sink (necessary, since it contains an AsyncStream.Iterator) cannot be sent. And then I realised that whichever way I adjusted this, it was never going to work, if for no other reason than that the compiler cannot allow control to return to my function, even just for traversing it with a throw, as long as that sub-borrow of sink was active. And in a sense, that was a good thing, because just throwing through the main task was never going to accomplish my goal anyway, as we're going to see tomorrow.

But still, that was something I only realised after pulling my hair out to come up with this not-obvious-at-all-even-in-retrospect piece of code:

let unblock = await withCheckedContinuation { messenger in
    b.addTask {
        return try await withCheckedThrowingContinuation { temp in
            // Use a Dispatch source so we don't have to worry about providing a raw signal handler
            signal(SIGINT, SIG_IGN);

            messenger.resume(returning: RaiseMediator(
                continuation: temp,
                source: DispatchSource.makeSignalSource(signal: SIGINT,
                                                        queue: DispatchQueue.global(qos: .userInitiated))));
        }
    }
}

(RaiseMediator being an actor that wraps one of the continuations). Though maybe I'll get to reuse something like that at some point…

1 Like

As hinted yesterday, the key insight to implementing interruption is that the first order of business is to terminate the AsyncStream providing the results, so as to free up the client; unwinding the main task was never going to accomplish that: reaching resultContinuation.finish() that way would be held up until the in-flight subtasks had themselves terminated… which was precisely the intent, so that enumeration wouldn't terminate before we could guarantee all results would be in.

Resolving that conundrum requires the introduction of an actor, not just to debounce termination, but to gate the transmission of results to the output: that way, while the subtasks in flight may remain active for a little while, any results they send after the stream is terminated (after from the viewpoint of the actor, that is: remember we can't assume a single clock everyone agrees on in SMP)… are dropped.

/* Both debounces the signal and ensures only one of the signal or the resumption occurs.
 Also gates the yielding of results on termination not having been reached yet.
 The source could ensure that when paired with a serial queue, but not in a way that would
 satisfy async Swift. */
@available(macOS 12.0, iOS 15.0, *) actor RaiseMediator {
    var continuation: AsyncThrowingStream<String, any Error>.Continuation?;
    var source: DispatchSourceProtocol?;
    var cancelHandler: @Sendable () -> Void;

    init(continuation incontinuation: AsyncThrowingStream<String, any Error>.Continuation,
         source insource: DispatchSourceProtocol,
         cancelHandler incancelHandler: @escaping @Sendable () -> Void) {
        continuation = incontinuation;
        source = insource;
        cancelHandler = incancelHandler;
        
        insource.setEventHandler(handler: .init(flags: [], block: {
            Task {
                await self.raise(error: InterruptedError());
            }
        }));
        
        insource.activate();
    }
    
    func yield(_ result: String) async {
        continuation?.yield(result);
    }

    func raise(error: any Error) async {
        conclude(byThrowing: error);
    }
    
    func finish() async {
        conclude(byThrowing: nil);
    }
    
    func conclude(byThrowing: (any Error)?) {
        guard let c = continuation else {
            return;
        }
        if let error = byThrowing {
            c.finish(throwing: error);
        } else {
            c.finish();
        }
        
        source?.cancel();
        source = nil;
        continuation = nil;
        if let _ = byThrowing {
            // handler must be called last, as a general rule for callbacks
            cancelHandler();
        }
    }
}

OK, that handles the first order of business in a timely fashion; but now that no result is going to be sent, there is no point in burning processor cycles looking for them in the first place, and that is where the cancelHandler comes in. Since we do not want to complicate the hot code in the subtasks, the in-flight ones will keep running to completion, so the only task we need to message is the main one. And since the aim is merely to relinquish resources that are not scarce, there is no need to wake that task before it was scheduled to wake anyway: we can just rely on a synchronous check.

Unfortunately, it would have been too cumbersome to expose a handle to that task and make it available from the correct context in order to cancel it, so I created a dedicated actor instead:

/* Allows "cancelling" from contexts where a handle to the task is not available. */
@available(macOS 12.0, iOS 15.0, *) actor RemotePseudoCancellator {
    var raised: Bool;
    
    init() {
        raised = false;
    }
    
    func raise() async {
        raised = true;
    }
    
    func checkRaised() async throws {
        if raised {
            throw RemoteCancellationError();
        }
    }
}

And now the code where I ended up stuck on a compiler error last post can be:

await sink.yield(microbatch: task);
try await cancellator.checkRaised();

But wait, there is a subtlety. Sending a result is now done through an actor, which means performing an async call. But the subtasks are sync. Can't we just do { result in Task { await resultMediator.yield(result); } }? No! Because then, in the case where we don't cancel, we risk a late-coming result "escaping" the guard of resultsScopingGroup, and so the AsyncThrowingStream carrying the results being finished before (from the viewpoint of the actor, again) that last result has made it. Unfortunately, this isn't as simple as changing to resultsScopingGroup.addTask(), as resultsScopingGroup is not addressable from within a subtask, so I need to create a per-subtask innerResultsScopingGroup and add to that instead.
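A minimal sketch of that per-subtask inner group, with hypothetical names and a plain actor standing in for the mediator: the group guarantees every yielded result is awaited before the subtask returns, so no result can slip past the stream's termination.

```swift
import Foundation

actor ResultSink {
    private(set) var received: [String] = []
    func yield(_ result: String) { received.append(result) }
}

// Stand-in for one subtask: instead of a fire-and-forget Task { }, every
// async yield goes through a per-subtask group, so the subtask does not
// return until all of its results have actually reached the actor.
func runSubtask(id: Int, into sink: ResultSink) async {
    await withTaskGroup(of: Void.self) { innerResultsScopingGroup in
        for part in 0..<3 {
            innerResultsScopingGroup.addTask {
                await sink.yield("subtask \(id), part \(part)")
            }
        }
        // The group implicitly awaits all of its children here: no
        // late-coming result can "escape" past the end of the subtask.
    }
}
```

The design choice is the same one structured concurrency makes everywhere: scope the lifetime of the yields to the scope of the subtask, rather than letting them float as detached work.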

And we should be good. Note how the only change to the explorer functions is the addition of rethrows (and the try keyword where appropriate), allowing them to behave both as throwing and as non-throwing functions depending on the call site.

1 Like

Before we get to today's work, I want to amend the proclamation that rethrows was the only change to the explorer functions: I also modified them to remove any reliance on defer. While that change was not strictly necessary, those defers ran counter to my conviction that only the strictest minimum of work should be done as part of a "non-local jump", and the existing code under the defers did not meet that bar: it was merely a convenient way to put internal accounting and deaccounting next to each other, none of it accounting for actual resources. That was acceptable in code that did not throw anyway, but it no longer is.

Today we will see how to take advantage of our new support for interruption. Indeed, so far our behaviour was almost indistinguishable from just letting the process be terminated upon SIGINT; by handling the signal ourselves, we can keep the process around for further work (e.g. another computation) and/or extract some data as part of handling the signal. Today we will do the latter, by showing on this event the state we had reached by the time the processing was interrupted.

Given that processing happens in parallel, such a state is not trivial to define. In fact, we are not going to report the full state, but a characteristic part of it: "the state before which all existing results have been reported; after that state, results may have been reported (owing to in-flight tasks), probably weren't, but there is no guarantee either way". In order to materialise that state, we will dedicate a task:

Task {
    let sequentialBatches = AsyncThrowingStream(Task<String, any Error>.self) { batchesSequencer in
        /* Omitted the bulk of the code */
    }
            
    do {
        for try await batch in sequentialBatches {
            try await progressRecorder.record(batch.value);
        }
        await resultMediator.finish()
    } catch {
        /* Note that in case of an interrupt, the continuation was already finished by throwing,
         but that's OK, because it's the job of resultMediator to debounce this kind of situation.
         */
        await resultMediator.raise(error: error);
    }
}

This task pulls double duty: it takes over the role of making sure all results have been accounted for before finishing the stream, but it does so by sequentially waiting, in strict creation order, for the next subtask to complete, recording its output when it does. The subtasks were modified to yield, as output, a dump of their state at completion. That way, we know we have explored every earlier possibility, earlier in the sense of the sequential algorithm shown in the first post, since not only that task but also all those that preceded it in the algorithm are known to have reported any potential result. The progressRecorder actor is completely trivial:

@available(macOS 12.0, iOS 15.0, *) actor Recorder<Inner> {
    var stored: Inner
    
    init(_ val: Inner) {
        stored = val;
    }
    
    func record(_ val: Inner) async {
        stored = val;
    }
}
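The ordered draining that the task above performs can be reduced to a sketch like this (hypothetical names; the real subtasks dump solver state rather than strings): handles are sequenced through a stream in creation order, then awaited one by one, so whatever is recorded last always has all of its predecessors reported.

```swift
import Foundation

// Even if later tasks finish first, their outputs are recorded in strict
// creation order, because we await each handle in the order it was yielded.
func drainInOrder(_ inputs: [Int]) async -> [String] {
    let sequentialBatches = AsyncStream<Task<String, Never>> { batchesSequencer in
        for value in inputs {
            batchesSequencer.yield(Task { "state after \(value)" })
        }
        batchesSequencer.finish()
    }

    var recorded: [String] = []
    for await batch in sequentialBatches {
        recorded.append(await batch.value)
    }
    return recorded
}
```

The stream here is only a sequencing device; awaiting `batch.value` is what serialises the recording without serialising the work itself.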

For better logic hygiene, I modified RaiseMediator to throw instead of silently dropping results yielded after interruption. Given how RaiseMediator currently works (first sampling the progressRecorder, and only then creating the error), I don't believe it makes a difference, but I don't want to rely on an assumption that could too easily be invalidated by accident: there could be more sources of interruption in the future. By throwing, I make sure that any Task or TaskGroup in charge of accounting for results gets pierced by a thrown error if even just one result went unreported because of interruption, so that it does not itself report success (to that end, I made sure to add try await innerResultsScopingGroup.waitForAll() at the end of the TaskGroup).

func yield(_ result: String) async throws {
    guard let c = continuation else {
        throw error ?? LateYieldError();
    }
    c.yield(result);
}
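That piercing behaviour can be reduced to a small sketch (hypothetical names apart from LateYieldError, and a trivial stand-in workload): one throwing child is enough to make waitForAll rethrow, so the enclosing accounting scope cannot report success.

```swift
struct LateYieldError: Error {}

// A child task that throws pierces the group when awaited, so the enclosing
// scope cannot report success if even one yield failed.
func accountForResults() async -> Bool {
    do {
        try await withThrowingTaskGroup(of: Void.self) { group in
            group.addTask { /* a result that made it through */ }
            group.addTask { throw LateYieldError() }  // a result that didn't
            try await group.waitForAll()
        }
        return true     // success: every result accounted for
    } catch {
        return false    // at least one result went unreported
    }
}
```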

But on the whole, there is not much else to it (except of course the code to dump the state, omitted for brevity). In particular, I still do not try to make the error be thrown out of a task that was waiting for something else: as you can see, the new task does not react directly to the interruption. Instead, it will end up terminating as a side effect of the stream finishing soon afterwards, much in the same way that the now-superseded resultsScopingGroup would no longer get new tasks added to it in the previous version of the code.

((1 + (75 * (6 + 1))) * (10 + 7) / 2)
(1 + (10 * ((6 * (75 + 1)) - (2 + 7))))
^C|_a: -1)+7)_b: +2)+75)_c: +
Available primitives: []
Skipped operands for the current operation: [10]
Skipped primitives for the current floor: [6, 1]
1 Like

A simple addition for this week: use the state dump we have so conveniently kept available, and display it upon SIGINFO. As you may or may not be aware, in the BSD tradition a program is encouraged to react to SIGINFO (generated by the terminal driver upon ctrl-T, the same way SIGINT is generated upon ctrl-C) by displaying its progress information. Let's do it.

Since the useful lifetime of reacting to SIGINFO exactly matches that of keeping the progress information in the first place, and seeing no basis on which to separate the concerns, I added this wholesale to the Recorder actor:

@available(macOS 12.0, iOS 15.0, *) actor Recorder<Inner> {
    var stored: Inner;
    var source: DispatchSourceProtocol?;
    
    init(_ val: Inner) {
        stored = val;
        let tempsource = DispatchSource.makeSignalSource(signal: SIGINFO,
                                                         queue: DispatchQueue.global(qos:.userInitiated));

        source = tempsource;

        tempsource.setEventHandler(handler: .init(flags: [], block: {
            Task {
                await self.report();
            }
        }));
        
        tempsource.activate();
    }
    
    func record(_ val: Inner) async {
        stored = val;
    }
    
    func report() async {
        print(stored);
    }
    
    func invalidate() async {
        guard let s = source else {
            return;
        }
        
        s.cancel();
        source = nil;
    }
}

That's it. The only other change is the call to await progressRecorder.invalidate() at the conclusion of the Task that wraps sequentialBatches, so as to break the reference cycle between that actor and the DispatchSource.

Of course, printing in this fashion means that the state in question may be printed slightly ahead of results that precede it, since those are queued in the AsyncThrowingStream that provides them while the state isn't (here), but that's par for the course for functionality triggered by a signal handler in the first place. In practice, no one will notice.

((1 + (75 * (6 + 1))) * (10 + 7) / 2)
(1 + (10 * ((6 * (75 + 1)) - (2 + 7))))
load: 1.37  cmd: AsyncCountdown 2321 waiting 0.99u 0.00s
|_a: -1)+7)_b: +2)+10)_c: +
Available primitives: []
Skipped operands for the current operation: [75]
Skipped primitives for the current floor: [6, 1]

(1 + (10 * ((75 * 6) - ((7 - 1) / 2))))
(1 + (10 * ((75 * (7 - 1)) - (6 / 2))))
(1 + (6 * (7 + (10 * (75 - 1)) - 2)))
((1 + (75 * (6 + 1))) * (10 + 7) / 2)
(1 + (10 * ((6 * (75 + 1)) - (2 + 7))))
load: 1.37  cmd: AsyncCountdown 2321 waiting 2.25u 0.01s
|_a: *a)*6)_b: *
|_a: -1)+7)_|
Available primitives: []
Skipped operands for the current operation: [75, 10, 2, 1]
Skipped primitives for the current floor: []

(1 + (10 * ((75 * 6) - ((7 - 1) / 2))))
(1 + (10 * ((75 * (7 - 1)) - (6 / 2))))
(1 + (6 * (7 + (10 * (75 - 1)) - 2)))
#etc.

I fully agree reasync is needed. In fact, I wonder why it has been somewhat ignored so far: no proposal or serious discussion about it (although I don't think there's any doubt it would be useful).

That being said, this post has way too many unrelated details in it. I wonder if anybody will read it fully. A more concise, to-the-point pitch would be welcome, and I'd really love to see what challenges the community sees ahead, and perhaps even a proposal: I know it would save many boilerplate definitions for a lot of us.

10 Likes