Distributed Worker Pool Example - Support for Specific Hardware

I'm not quite sure what you're asking for with:

> I mean, you write the code in the actor, so it's up to you to write whatever "this code needs gpu stuff" in an actor that will be run on a node that "has the gpu" (whatever that specifically means, e.g. a high-gpu instance on EC2 or something else etc).

This is perhaps either weirdly phrased, or misunderstands what actors are doing (or I don't understand the sentence)? Actors are not going to "use the GPU" in some magical way. Distributed actors are just a communication mechanism: whatever code you already have that requires/makes use of GPU acceleration would just be sitting there as usual ("in the actor"), and the actor only serves as a nice way to discover and communicate with any such actor(s).

In practice the actor code is just:

// on node start-up, decide based on the node's hardware / role
// (e.g. a config flag or env var you set on the high-gpu instances)
if <I'm a high-gpu instance> {
  ... = HighGPUWorker(system: cluster, ...)
} else {
  // normal node... don't spawn high gpu workers
}

distributed actor HighGPUWorker {
  init(system: ActorSystem, ...) {
    self.actorSystem = system
    // check in with the receptionist so other nodes can discover this worker
    system.receptionist.register(self, withKey: .highGPUWorkers)
  }
}

// others are listening for .highGPUWorkers
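
To make that listening side a bit more concrete, very roughly (the listing(of:)-style receptionist lookup and the work(on:) distributed method below are illustrative placeholders rather than the exact API, so check the docs/talk for the real names):

// somewhere on any node that wants to offload a GPU-heavy job;
// `system` is the same cluster actor system the workers registered with
for await worker in await system.receptionist.listing(of: .highGPUWorkers) {
  // a plain distributed call; the GPU-specific code runs inside the worker,
  // on whichever high-gpu node that worker was created on
  let result = try await worker.work(on: job)
  ...
}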

You may want to watch the talk we did about this recently: [Video] Distributed Actors announced at Scale by the Bay; maybe that'll help with wrapping your head around the usage patterns :slight_smile:
