Ah, I see what you mean... I'll admit I'd forgotten about this part of erl on localhost; it's been like a decade since I poked around with Erlang in practice.
I can explain what's going on then... and let's use this to improve the docs while we're at it.
I'll post here and then follow up with documentation improvements.
I think this will be of benefit to everyone interested in distributed systems in Swift. I'd also love to take any contributions you might want to come up with -- thank you in advance! I look forward to using these challenges as a reason to drive our docs and user experience to the next level.
So, let me explain a bit of what's happening first.
First, there is no "privileged" node in the ClusterSystem; they're all equal. So the model of how Erlang nodes and Swift cluster nodes work is really the same -- it's a bunch of nodes listening on some TCP ports.
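To make that concrete, here's a minimal sketch of two equal peers joining each other. This is illustrative only: the `bindPort` setting name is an assumption of mine, while `join`, `Endpoint`, and `joined(within:)` are the same calls as in the snippets later in this post:

```swift
// Sketch only -- `settings.bindPort` is a placeholder name; the real
// settings property may be named differently.
let a = ClusterSystem("a") { settings in
    settings.bindPort = 7337
}
let b = ClusterSystem("b") { settings in
    settings.bindPort = 7338
}

// Either side may initiate the join; both "racing" to join is fine too.
a.cluster.join(Endpoint(host: "127.0.0.1", port: 7338))
b.cluster.join(Endpoint(host: "127.0.0.1", port: 7337))

// Both nodes end up "up" -- neither one was privileged in any way.
try await a.cluster.joined(within: .seconds(5))
try await b.cluster.joined(within: .seconds(5))
```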
I see what you're referring to though, and it's an interesting thought, but we'd have to write a daemon (like Erlang does) or have some external synchronization mechanism... Long story short:
Erlang does the same thing, but thanks to some tricks you may not have even realized it. Erlang starts nodes on a randomized port, and when you use short names, e.g. `erl -sname a` and `erl -sname b`, it binds them to the following ports:
```shell
% epmd -d -names
epmd: up and running on port 4369 with data:
name b at port 53335
name a at port 53282
```
^ This is perhaps an idea we should totally steal... Even Akka didn't have this, so we could definitely have a leg up and offer as good an experience as erl does here.
The Erlang port mapper daemon (epmd) by default runs on port 4369, and since there is a single instance of it, all short-name nodes connect to it.
We could do the same: provide a `clusterd` which binds on a well-known port, and then spawn all nodes in "local clusterd discovery" mode.
We have plug-in ability for node discovery (see the Documentation), so we'd just set `cluster.discovery = .clusterd` -- I very much like this idea; here's a ticket for it.
The actual node ports are something we can find out by inspecting the Erlang port mapper daemon. It's true we don't have that equivalent, so the local "connect the local ones" step is up to us. I think this is as simple as having a small "app node" which binds on a well-known port.
That is really the same in the Swift cluster -- just that we don't have the daemon equivalent, so at startup, yes, you have to write a line or two of "join" commands, but after that it's all just cluster events.
To be clear, this would be the case for Erlang as well -- that's just how TCP ports work.
And to repeat this again: there is no strict requirement to be "THE seed node" -- literally all nodes can try to join all other ones, even "racing" the joining, and the cluster will form properly.
You can have them start concurrently; there is no ordering requirement. I'd structure this as:
```shell
app --port 7337 --name a --seed-port=7337,7338,7339
app --port 7338 --name b --seed-port=7337,7338,7339
app --port 7339 --name c --seed-port=7337,7338,7339
# however many nodes you want here in "seed ports" ^^^
```
You see, I'm being lazy and didn't even filter out the "self" port from the list, because the cluster knows there's no need to join itself. In code I'd do this:
```swift
let system = ClusterSystem(name) { ... }

for port in seedPorts {
    system.cluster.join(Endpoint(host: "127.0.0.1", port: port))
}

// this waits until this node becomes "up", i.e. has "joined",
// since nodes move through the "joining" -> "up" status
try await system.cluster.joined(within: .seconds(5))
```
^ note also that `try await cluster.joined` is a bit simpler than what you have with the `ensure...` approach. I believe this is again a case of insufficient docs -- you probably saw that pattern in the distributed philosophers sample, which spawns many nodes from the same process. We should improve the docs on this as well, and I made a ticket for it.
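By the way, `seedPorts` in that join loop has to come from somewhere; assuming the hypothetical `--seed-port=7337,7338,7339` flag from the example above, parsing it could be as simple as:

```swift
// Sketch: turn the comma-separated `--seed-port` value into [Int].
// Malformed entries are simply dropped by compactMap.
func parseSeedPorts(_ raw: String) -> [Int] {
    raw.split(separator: ",").compactMap { Int($0) }
}

let seedPorts = parseSeedPorts("7337,7338,7339")  // [7337, 7338, 7339]
```

No need to filter out the node's own port here either, since as noted above the cluster knows not to join itself.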
I think your feedback is actually very valuable... we seem much more difficult to use, and as if there were some special nodes, but that's not the technical reality of it. Erlang's superior UX just made all this seem much harder than it actually is... I'd definitely be on board with doing the helper daemon process, so then we would:
```shell
clusterd &    # or maybe your "app --clusterd"?
./node --name A
./node --name B
```
and their join code would become:
```swift
ClusterSystem(name) {
    $0.cluster.discovery = .clusterd
}
```
This post got pretty long so I'll make another one for the other question.