Proposal for Swift Actors and performance concurrency futures

We at Brighten Ai would like to advocate for an iterative step towards the actor model that several people around Swift have proposed. We've implemented a large, distributed AI system based on our own actor model, and gotten substantial performance and scale wins by doing three things:

  1. Provide a coarse-grained actor model first, with separation of concerns for actor functions: their definitions live separately from the actors that register to receive those function calls asynchronously.

  2. Provide a fine-grained concurrency model allowing queue-safe, very-high-frequency, lock-free allocations, similar to a generational GC.

  3. Provide a concurrency-safe logging model. Concurrency-safe logging isn't just about not crashing; it's about safely and performantly managing high-frequency logging across n concurrent threads.
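To make point 3 concrete, here is a minimal sketch of what a concurrency-safe logger can look like using standard Swift actors. This is illustrative only, not our actual implementation: the `Logger` actor serializes writes from any number of concurrent tasks, and the fire-and-forget `log` helper keeps the hot path from blocking.

```swift
// Hypothetical sketch: an actor serializes log writes from n concurrent
// tasks, so high-frequency logging is safe without explicit locks.
actor Logger {
    private var lines: [String] = []

    // All mutation happens on the actor's serial executor.
    func write(_ line: String) { lines.append(line) }

    func snapshot() -> [String] { lines }
}

let logger = Logger()

// Fire-and-forget call site: enqueue the write and keep going,
// so the caller's hot path never blocks on the logger.
func log(_ line: String) {
    Task { await logger.write(line) }
}
```

The actor's serial executor plays the role of the dedicated queue; ordering across producers is not guaranteed, only data-race freedom.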

Happy to chat more if people are interested in our work.


Further, we propose that a future Swift version could include a second invisible pointer in `self` method invocations, pointing to an operation class that would help manage concurrent memory allocations and logging. It's that sugar, plus some API affordances, that we believe could unleash Swift performance by moving ARC away from high-frequency allocations and into a role for cross-actor allocations only.

We have the actor model and concurrency now, and get ultra-high performance, but we believe syntactic sugar around this, plus the second invisible pointer, would unlock high-concurrency Swift performance for the community without resorting to rampant use of "unsafe", etc. We cover ours with API, and are three years into a stable implementation.
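For readers wondering what "moving ARC away from high-frequency allocations" could look like in practice, here is a hedged sketch of one well-known technique in that space: a per-queue bump (arena) allocator. The name `BumpArena` and the design are purely illustrative assumptions, not the proposal's actual mechanism; the point is that confining an allocator to one queue or actor makes it lock-free, and freeing everything at once resembles a generational GC's young-space reclaim.

```swift
// Hypothetical sketch: a bump allocator confined to a single
// queue/actor. Allocation is a cursor advance (no ARC traffic, no
// locks), and reclaim drops the whole region at once.
final class BumpArena {
    private let buffer: UnsafeMutableRawPointer
    private let capacity: Int
    private var offset = 0

    init(capacity: Int = 1 << 20) {
        self.capacity = capacity
        self.buffer = .allocate(byteCount: capacity, alignment: 16)
    }

    deinit { buffer.deallocate() }

    // O(1) allocation: align the cursor, then advance it.
    func allocate<T>(_ type: T.Type) -> UnsafeMutablePointer<T>? {
        let align = MemoryLayout<T>.alignment
        let aligned = (offset + align - 1) & ~(align - 1)
        let next = aligned + MemoryLayout<T>.stride
        guard next <= capacity else { return nil }
        offset = next
        return buffer.advanced(by: aligned)
            .bindMemory(to: T.self, capacity: 1)
    }

    // Generational-style reclaim: everything dies together.
    func reset() { offset = 0 }
}
```

Because the arena is single-queue by construction, it needs no atomics; cross-actor data would still go through ARC-managed allocations, matching the division of labor described above.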

The core team are working on a concurrency proposal which should be available “soon”.

Bits and pieces have landed in the compiler repository already, and it appears to be designed around actors. I’m sure your experience will be very valuable once that discussion starts.

So: yes, it’s a good idea and being worked on right now.

EDIT: By the way, it’s really cool that you’ve been doing this. I had no idea that forks/extensions of Swift like this even existed! And you’ve been doing it for the last 3 years? That’s super interesting, and I’m sure you’ve collected a lot of insight in that time which will be useful when the proposal is revealed.


We've indeed already been speaking with a few of the core team members in the past few weeks, and would be happy to share our ideas with anyone else on the core team that would like to chat.

Think of this as part of that engagement.

What we have been chatting about is the modalities of concurrency we use within our system — concurrent streaming, actors, and ultra-high-performance "go wide" parallel CPU compute — and what we had to do to get ARC out of the way. We figure it's good food for thought for the core team as they imagine the language sugar that spans these ideas to promote simple, ultra-high-performance Swift.



I can't seem to find much documentation on the website you linked to. It's very brief, high-level stuff about patented AI services (which is fine) rather than technical info and API examples for the concurrency stuff. So... it's kind of difficult to find anything to discuss, to be honest.

Are you able to share any more information about the modifications you've made to the language/compiler to support concurrency, what it looks like and why it was designed that way?


If you are a core team member, let's just voice chat about it sometime; that's higher bandwidth. Our product is an AI product; we mention Swift because we love the language and want Swift widely adopted as a replacement for C++ in big codebases.


If this post is aimed at the core team, that's totally fine*. I just want to point out that that's not the impression the first post gives.

If it's aimed at the larger community (to gauge interest), I think you can still post some information in this thread (or better yet, links to always-up-to-date documentation). Or, if the feature is more holistic than that, a manifesto. We've seen a fair share of megathreads before, so it wouldn't be anything new.

Then again, asynchrony is currently in flux, so I can't really recommend what to do.

* Maybe? I dunno :woman_shrugging:. Not core team, not admin.

I’m not on the core team. It sounds like you’re already engaging with them, but it would be helpful to have some kind of documentation for the wider community.

There are lots of ways to do that - traditional written manuals are great for reference, but video presentations on YouTube can also be good (doesn’t need to be fancy - just talking over a PowerPoint if you like). The point is that there are lots of ways to get this information out there, you can choose one that suits you.


OK sure, let's take actors or "fine-grained same compute, go wide on many cores" first. Any preference on which we talk about?

Just to avoid any misunderstanding, I'm not aware of any conversations anyone on the core team has had with John outside of public threads on these forums, and the project is not engaged with him or his company in any official way.


Hey John! Was hoping you would pop up! Happy to show you around our code base and what's been working for us actor-wise, and, really importantly, I'd like to advocate for some work on memory allocation alongside your actor work.

I hadn't reached you yet with our more private advocacy, Zoom calls, etc. The idea has just been to talk about what we had to do to make things fast, and what kinds of things have worked for us.

We're an open-source project; if you have ideas or code you'd like to contribute, there are regular procedures for that that don't require a sales pitch. I cannot look at proprietary code that you haven't contributed under the normal project license, and we will never make that sort of dependency part of the project. You are, of course, welcome to build proprietary systems on top of what we offer.


I think we should spend time here discussing the idea, then. It's much lower bandwidth, but I want to encourage your work, and am happy you are here.


Great, I'm always happy to talk about concurrency.


Alright, let's break it down, if you don't mind, into three piles:

Coarse-grained multi-compute
Fine-grained same compute, go wide on many cores
Streaming async compute

We can talk about other things, but these three are resonating with us.

By coarse-grained multi-compute, I mean our use of actors.

We made a DSL that makes it easy to bind a struct describing the actor to a set of functions, and to associate a queue with it.


What we've noticed is that, in our big codebase, this really helps us get some concurrency with very little effort from our team. It's coarse-grained, at the module level. But I want to raise another point: we let each actor sign up for whatever collection of actor functions it wants, and those are defined separately, often far away from the producer or consumer of the API. We find that great because we can sign up several actors to receive the same actor function, and, just as importantly, in big projects we don't get massive recompiles as often that way. Like protocols, in a way, but method by method.
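The registration pattern described above can be sketched in plain Swift. This is a minimal illustration under assumptions of mine, not your DSL: `RenderFrame`, `Registry`, and `Renderer` are made-up names, with the actor-function shape declared far from both producer and consumer, and several actors able to subscribe to the same function.

```swift
// Hypothetical sketch: the "actor function" is declared once, on its
// own, away from whoever produces or consumes it.
struct RenderFrame {
    let frameIndex: Int
}

// A minimal registry: any number of actors can sign up to receive the
// same message type, method by method, protocol-free.
final class Registry {
    private var handlers: [(RenderFrame) async -> Void] = []

    func register(_ handler: @escaping (RenderFrame) async -> Void) {
        handlers.append(handler)
    }

    // Deliver the call to every registered handler.
    func publish(_ message: RenderFrame) async {
        for handler in handlers { await handler(message) }
    }
}

actor Renderer {
    private(set) var framesSeen = 0
    func handle(_ message: RenderFrame) { framesSeen += 1 }
}
```

Because consumers depend only on the message struct, not on any concrete actor type, changing an actor's internals wouldn't ripple recompiles through the modules that merely publish the call.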

We wanted you to hear that: we are getting big wins from that simple stuff, and we love that the consumers of actors aren't really dependent on the actors themselves.

What do you think so far?
