Actors 101

It’s not uncommon for words to have a narrower meaning in a specific domain than they do in plain English. “Heap” and “stack” are synonyms in the dictionary too, but they don’t mean the same thing in computing.

Parallelism in the same universe would be useless without the concurrency required to consume the parallel outputs. This is akin to sitting in front of two television sets and watching two different news programs simultaneously.

The following definition is from Modernizing Grand Central Dispatch Usage:

Parallelism

Simultaneous execution of closely related computations

Concurrency

Composition of independently executed tasks

Also, out of curiosity, I asked Flexi the following question. 🙂

In computing, what is the difference in meaning between the following words: concurrent and parallel?

It gave me the following answer.

Great question! Both "concurrent" and "parallel" computing involve multiple tasks happening at the same time, but they have some differences:

Concurrent Computing:

  • Definition: Multiple tasks are in progress at the same time, but not necessarily executing simultaneously.
  • Example: Think of a single-core processor that switches between tasks quickly. It might work on Task A for a bit, then switch to Task B, and so on. It gives the illusion that tasks are happening at the same time, but they are actually taking turns.

Parallel Computing:

  • Definition: Multiple tasks are executed simultaneously, often using multiple processors or cores.
  • Example: Imagine a multi-core processor where each core handles a different task at the same time. Task A runs on Core 1, Task B runs on Core 2, and so on. This means tasks are truly happening at the same time.

Key Differences:

  • Concurrency is about dealing with lots of things at once (interleaving tasks).
  • Parallelism is about doing lots of things at the same time (simultaneous execution).

I hope this helps you understand the difference!
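The split described in that answer can be sketched in Swift itself: a task group expresses closely related computations that the runtime may run in parallel across cores, while `async let` composes independent tasks that interleave even on a single core. A minimal sketch (the function names are illustrative, not from the thread):

```swift
import Foundation

// Parallelism: closely related computations (squaring each element)
// that the runtime may execute simultaneously on several cores.
func parallelSumOfSquares(_ numbers: [Int]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for n in numbers {
            group.addTask { n * n }   // each child task may land on its own core
        }
        return await group.reduce(0, +)
    }
}

// Concurrency: two independent tasks composed together. Even on a
// single core they make interleaved progress; neither blocks the other.
func concurrencyDemo() async -> (Int, Int) {
    async let a = parallelSumOfSquares([1, 2, 3])   // 1 + 4 + 9
    async let b = parallelSumOfSquares([4, 5])      // 16 + 25
    return await (a, b)
}
```

On a one-core machine both functions still work; only the wall-clock time changes, which is exactly the point the answer is making.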

TN2028

Wow, I’d forgotten that I wrote that O-:

ibex10, Lemme add a textbook recommendation, namely Mac OS X Internals: A Systems Approach by Singh. It’s also super old, but it’s an excellent explanation of many of the technologies that continue to underpin Apple platforms.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

Dear @eskimo,

Thank you for the book recommendation.

I have finished reading the TN2028, and I quite liked it.

But, how much of it is still relevant? Especially the last section, titled Mac OS X Kernel Threading?

Thank you 🙂

But, how much of it is still relevant?

All the Mac OS 9 stuff is definitely irrelevant (unless you’re @jrose :-).

As to the Mac OS X stuff, that’s mostly still relevant, but it’s no longer the full story. Most notably, it doesn’t cover Dispatch (GCD), and Dispatch is layered on top of new infrastructure (workloops) that represents a significant enhancement over what was possible on Mac OS X 10.0. For example, with this Dispatch infrastructure, the kernel can start a new thread in a user-space process in response to an event, which is pretty cool.

Especially the last section, titled Mac OS X Kernel Threading?

That section is still reasonably relevant. The only big change is that the kernel funnel no longer exists. Mac OS X switched to using fine-grained locks in the 10.4-ish timeframe.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

More Interesting Topics

Measuring Actor performance

Post

Summary
Try This
import Foundation

@main
enum Test {
    static func main () async {
        testStateHolderLocked()
        await testStateHolderActor()
        await testStateHolderActor2()
    }
}

// [https://forums.swift.org/t/i-was-playing-with-measuring-actor-performance/75005]

final class StateHolderLock {
    // Note: storing an os_unfair_lock inline like this relies on the class
    // instance giving the lock a stable address; OSAllocatedUnfairLock is
    // the safer modern wrapper for new code.
    var lock = os_unfair_lock_s()

    init() {}

    var sum = 0

    var onNewValueReceived: ((Int) -> Void)!

    func handleValueRecieved(_ val: Int) {
        os_unfair_lock_lock(&lock)
        sum += val
        os_unfair_lock_unlock(&lock)
        onNewValueReceived(val)
    }
}

final actor StateHolderActor {
    init() {}

    var sum = 0

    nonisolated(unsafe) var onNewValueReceived: ((Int) -> Void)!

    func handleValueRecieved(_ val: Int) {
        sum += val
        onNewValueReceived(val)
    }
}

let iterations = 1_000_000
func testStateHolderActor() async {
    await measure ("Actor:") {
        let actor = StateHolderActor()
        
        var sum = 0
        actor.onNewValueReceived = { val in
            sum += val
        }
        
        for _ in 0 ..< iterations {
            await actor.handleValueRecieved(1)
        }
        
    }
}

func testStateHolderLocked() {
    measure ("Locked:") {
        let actor = StateHolderLock()
        
        var sum = 0
        actor.onNewValueReceived = { val in
            sum += val
        }
        
        for _ in 0 ..< iterations {
            actor.handleValueRecieved(1)
        }
        
    }
}

// [https://forums.swift.org/t/i-was-playing-with-measuring-actor-performance/75005/2]

func testStateHolderActor2 () async {
    await measure ("Actor2 :") {
        let actor = StateHolderActor()
        
        var sum = 0
        actor.onNewValueReceived = { val in
            sum += val
        }
        
        func run (actor: isolated StateHolderActor) async {
            for _ in 0 ..< iterations {
                actor.handleValueRecieved(1)
            }
        }
        
        await run (actor: actor)
    }
}

func measure (_ prefix: String, _ f: () -> Void) {
    let d = ContinuousClock ().measure {
        f ()
    }
    print (prefix, d)
}

func measure (_ prefix: String, _ f: () async -> Void) async {
    let d = await ContinuousClock ().measure {
        await f ()
    }
    print (prefix, d)
}

Guaranteeing an actor executes off the main thread

Post

Summary
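The linked post isn’t reproduced here, but the behaviour in question is easy to observe: a plain (non-`MainActor`) actor’s isolated code runs on the cooperative thread pool rather than the main thread, and `Thread.isMainThread` reports where a call actually landed. A sketch (whether this is a hard guarantee is exactly what the linked thread digs into):

```swift
import Foundation

actor Worker {
    // Actor-isolated work: by default this executes on the cooperative
    // thread pool, so we expect Thread.isMainThread to be false here.
    func runsOnMainThread() -> Bool {
        Thread.isMainThread
    }
}
```

Calling `await Worker().runsOnMainThread()` from any context hops to the actor’s executor first, so in practice it reports `false`.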

How to share isolation between two actors

Post
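The post itself isn’t quoted here, but the standard tool for this is an `isolated` parameter: a function whose parameter is declared `isolated SomeActor` runs on that actor’s executor, so callers can hand their isolation to a shared helper without extra suspension points (the same trick `testStateHolderActor2` above uses). A minimal sketch with illustrative types:

```swift
actor Counter {
    var value = 0
    func increment() { value += 1 }
}

// Because `counter` is an isolated parameter, this function's body runs
// on Counter's executor; calls into the actor are synchronous here.
func bump(_ counter: isolated Counter, times: Int) {
    for _ in 0..<times {
        counter.increment()   // no await: we already hold the isolation
    }
}
```

From outside, `await bump(c, times: 3)` pays for a single hop onto the actor instead of one hop per increment.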

How to “fire and forget” work on an actor?

Post

Try This
@main
enum Actors {
    static func main () async throws {
        let u = FooActor ()
        await print (u.x)
        u.incrementEventually()
        u.incrementEventually()
        u.incrementEventually()
        for _ in 0..<5 {
            await print (u.x)
            await hibernate (seconds: 3)
        }
    }
}

func hibernate (seconds v: Int) async {
    try! await Task.sleep(until: .now + .seconds(v))
}

// [https://forums.swift.org/t/how-to-fire-and-forget-work-on-an-actor/75157]


actor FooActor {
    var x = 0

    func incrementX () {
        self.x += 1
    }
    
    nonisolated func incrementEventually() {
        Task {
            await hibernate (seconds: 3)
            await self.incrementX()
        }
    }
}

Dude, come on. Immediately above this, you shared the following quote from Wikipedia:

CPU cores are hardware, are they not?

And what is this difference!? I keep asking that. Can you show a simple example of a program that behaves differently on a multicore machine than it does on a single-core machine with multithreading?

(Performance, of course, is very different, but we're not talking about that; there seems to be this belief that the difference is important to program logic, not processing throughput.)

The reason I started arguing about diction is that the argument I'm seeing seems to be based entirely on just that: a stipulated divide in terminology, using two different words ("concurrency" and "parallelism") to distinguish them, is introduced, and then this choice to use different words seems to be the only basis of all the subsequent claims. Why are these two things different? Because they're defined to be different. Okay, but why do we define them to be different? If you're going to base an argument on definitions, I have to respond with an argument based on definitions. I don't think the words we use are really important at all, and if that's the case, what point of any of this can be made without referring back to this chosen terminology?

This is why I asked: would it be incorrect for a multicore machine to implement its "parallelism" by sending clock timing pulses serially through each core so that they also execute an instruction in a single clock cycle, but they never do so "simultaneously", it's actually serial and the hardware is rapidly switching through each core (the same thing software does on a single core with threads)? Would that computer have to be reclassified as a "concurrent" but not "parallel" computer?

But you cut off the rest of the paragraph that quote appeared in:

I didn't say "wrong", I said suspicious. I only mentioned this at all because people started arguing about word choice. If you follow back to the other thread where this originated, the post where I mentioned "parallel" was about there being no guaranteed order of execution between different lines of code (particularly tasks starting), and that's true of "concurrency" the way it's been defined too. This is evidence that this use of terminology is actively confusing people about what the order guarantees of "multithreaded" code are (is this why people think "tasks should start in order", because it's called "Swift Concurrency" instead of "Swift Parallelism"?). There's a reason I'm pushing back against it and it's not because I've never encountered the concept of academic jargon before.

Understanding the difference between threads and cores is important, but for the opposite reason people here are saying: it removes a distinction we might imagine exists that isn't there. Multiple threads on a single-core computer run in parallel. They aren't forced to run serially (one finishing before the other starts) just because there's only one core. Hence, threads and cores are not the same. There's a difference between multithreaded and multicore, and what this means for the programmer is that the cores really don't matter (except for throughput). The number of parallel streams in your program is set by the threads you create, not by the cores you have available. If I should have said they run "concurrently", okay, fine; the point is still that there's no guaranteed order of execution between them, whether they run on different cores or not.
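The ordering point is easy to see in code: two tasks created back to back have no guaranteed start or completion order, on one core or many. A sketch (the labels are arbitrary):

```swift
// Two tasks created back to back: Swift guarantees neither that one
// starts nor that one finishes before the other, regardless of cores.
func completionOrder() async -> [String] {
    await withTaskGroup(of: String.self) { group in
        group.addTask { "A" }
        group.addTask { "B" }
        var order: [String] = []
        for await label in group {
            order.append(label)   // may arrive as A,B or B,A
        }
        return order
    }
}
```

Both tasks always complete, but any code that depends on "A" coming first is relying on a guarantee the language does not give.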

Coming from GCD, something that helped with my understanding was the fact that the new system sticks to one thread per core. That makes sense.

I’m on my third try with it, so all the documentation links are a welcome assist.