November 10, 2021

Attendees: @kmahar, @patrick, @adam-fowler, @tomerd, @fabianfett, @tachyonics, @varland

Action Items

  • [carry over] @0xTim GitHub actions
  • [all] If you have any opinions on the static linking pitch, please chime in on thread
  • @tomerd to write a proposal for static linking
  • @adam-fowler and @patrick to kick off a design for Swiftly, hoping to have something in review for next meeting
  • @adam-fowler to move to a proposal for MQTTNIO
  • @tomerd to talk to @ktoso to figure out what his existing meeting schedule looks like and whether we could adjust the meeting time to suit him better

Discussion

  • MultipartKit accepted at incubating level
  • Graphiti - @adam-fowler reached out; the authors were positive on the idea of pitching, but are prioritizing adding async/await support first
  • Missing Foundation APIs on Linux
    • Let's discuss more with @0xTim here since he brought this up, but it is a known issue actively being worked on.
  • XCTest support for structured concurrency
    • This was merged last week and should come out in the next patch release for Linux. However, there is another PR pending to allow automatically discovering async tests so we'll need that one as well.
  • FYI, SwiftNIO Swift support policy
  • UUID support without Foundation?
    • There is a swift-extras library for this, though it is not currently maintained.
  • Generic connection pool
    • Should we revive this? There was a pitch a while back here.
    • What would the ideal pool look like? Perhaps some mix of the AsyncHTTPClient pool, the Redis driver pool, and the PostgresNIO pool (see the rough sketch after these notes).
    • Some differences between existing pools in terms of connection reuse strategy across event loops.
    • Every use case has some specific needs that may make it hard to do something totally generic. We might be able to have a shared pool for some cases, but more difficult things like multiplexing might not be able to fit into a shared model.
    • The end result could just be a document / white paper that describes different implementations and aspects to consider, and links to all the different examples.
    • @patrick is starting on a pool for the MongoDB driver soon and so will be thinking about this.
  • Meeting time
    • Now that everyone but @ktoso has had a time change, the current meeting time is not great. We may need to adjust it.
  • FYI: @tomerd will be traveling some in Dec/Jan, may miss some meetings throughout.
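
A rough sketch of what a shared, generic pool abstraction could look like is below. This is not an existing SSWG or SwiftNIO API; the protocol names, `withConnection`, and the `preferredEventLoop` parameter are all hypothetical, and the reuse strategy (per-event-loop vs. shared across loops, multiplexing) is deliberately left to conforming implementations.

```swift
import NIOCore

// Hypothetical sketch only: names and shapes are illustrative, not an agreed design.
protocol PooledConnection: AnyObject {
    var eventLoop: EventLoop { get }
    var isActive: Bool { get }
    func close() -> EventLoopFuture<Void>
}

protocol ConnectionPool {
    associatedtype Connection: PooledConnection

    /// Lease a connection, run `body` with it, and return it to the pool afterwards.
    /// How strongly the pool prefers `preferredEventLoop` is an implementation decision.
    func withConnection<Result>(
        preferredEventLoop: EventLoop?,
        _ body: @escaping (Connection) -> EventLoopFuture<Result>
    ) -> EventLoopFuture<Result>

    /// Close all pooled connections and reject further leases.
    func shutdown() -> EventLoopFuture<Void>
}
```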

Foundation-free is nice. I would also point out GitHub - karwa/uniqueid (random and time-ordered UUID generation in Swift) from @Karl as a candidate there.

Right, UniqueID does not depend on Foundation, and it would be nice if it found some use on the server.

Support for time-ordered IDs (UUIDv6) could be particularly interesting. Because they have good locality, they allow you to replace hash tables and hash lookups with simple sorted arrays and binary search, which can yield significant time and memory savings. Hash lookups are O(1) on average, but the hash function has a non-trivial cost that you can avoid entirely with a time-ordered UUID.
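
To make the locality point concrete, here is a minimal, hypothetical sketch (not part of UniqueID; `SortedTable` and its API are invented for illustration): records are keyed by any Comparable, time-ordered identifier, kept in a sorted array, and looked up with binary search instead of hashing.

```swift
// Works with any Comparable key whose ordering matches creation order,
// which is exactly what a time-ordered UUID gives you.
struct SortedTable<Key: Comparable, Value> {
    private var entries: [(key: Key, value: Value)] = []

    // Keeps the array sorted. With time-ordered keys, new entries almost
    // always land at the end, so this is effectively an append.
    mutating func insert(_ value: Value, for key: Key) {
        var low = 0, high = entries.count
        while low < high {
            let mid = (low + high) / 2
            if entries[mid].key < key { low = mid + 1 } else { high = mid }
        }
        entries.insert((key: key, value: value), at: low)
    }

    // Classic binary search: O(log n) comparisons, no hashing, no buckets.
    func value(for key: Key) -> Value? {
        var low = 0, high = entries.count - 1
        while low <= high {
            let mid = (low + high) / 2
            if entries[mid].key == key { return entries[mid].value }
            if entries[mid].key < key { low = mid + 1 } else { high = mid - 1 }
        }
        return nil
    }
}
```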

UUIDv6 is currently in RFC draft form, but it is compatible with (will not collide with) existing UUIDs, and there are plenty of implementations in other languages. The difference from UUIDv1 is so small that IMO it is very unlikely to change: it's literally the same timestamp data as UUIDv1, including the weird October 15, 1582 epoch, just with the bits in a different order.
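
For reference, here is a small sketch of the shared timestamp (based on the RFC 4122 definitions, not UniqueID's code): both versions encode a 60-bit count of 100-nanosecond intervals since that 1582 epoch, and only the bit layout within the UUID differs.

```swift
// 141_427 days separate 1582-10-15 (the UUID epoch) from 1970-01-01 (the Unix epoch).
let uuidEpochOffsetSeconds: UInt64 = 141_427 * 86_400   // 12_219_292_800

// The 60-bit UUIDv1/UUIDv6 timestamp: 100-nanosecond intervals since 1582-10-15 00:00:00 UTC.
func uuidTimestamp(unixSeconds: Double) -> UInt64 {
    let hundredNanosSinceUnixEpoch = UInt64(unixSeconds * 10_000_000)
    let sinceUUIDEpoch = uuidEpochOffsetSeconds * 10_000_000 + hundredNanosSinceUnixEpoch
    return sinceUUIDEpoch & 0x0FFF_FFFF_FFFF_FFFF   // the timestamp field is 60 bits wide
}

// UUIDv1 lays this value out as time_low | time_mid | time_hi (low bits first);
// UUIDv6 stores the same bits most-significant first, so sorting the raw UUID
// bytes sorts by creation time.
```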

If we can get some good usage data, I'd be happy to present that to the IETF in hopes of pushing UUIDv6 towards formal standardisation.

Currently, there is one issue with the implementation - it uses atomics to create a spinlock in userspace, which, as Linus Torvalds will tell you, is something you should never do:

I repeat: do not use spinlocks in user space, unless you actually know what you're doing. And be aware that the likelihood that you know what you are doing is basically nil.

There's a very real reason why you need to use sleeping locks (like pthread_mutex etc).

...

Because you should never ever think that you're clever enough to write your own locking routines.. Because the likelihood is that you aren't (and by that "you" I very much include myself - we've tweaked all the in-kernel locking over decades, and gone through the simple test-and-set to ticket locks to cacheline-efficient queuing locks, and even people who know what they are doing tend to get it wrong several times).

There's a reason why you can find decades of academic papers on locking. Really. It's hard.

I have a draft which switches to a pthread_mutex/os_unfair_lock. I'll clean it up and push it... hopefully sometime this week.

I'm not sure whether an unfair lock is necessarily the best thing to use on a server: unfair locks have good throughput because a single thread can hold the lock for longer, but they have higher latency, because other threads spend longer waiting for it. Still, it's better than a spinlock.
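
Roughly, the shape of such a lock looks like this (a hypothetical sketch, not the actual UniqueID change): os_unfair_lock on Darwin, pthread_mutex_t elsewhere, allocated on the heap so the primitive's address never changes.

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

final class SystemLock {
#if canImport(Darwin)
    private let primitive: UnsafeMutablePointer<os_unfair_lock> = {
        let ptr = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        ptr.initialize(to: os_unfair_lock())
        return ptr
    }()
    func lock() { os_unfair_lock_lock(primitive) }
    func unlock() { os_unfair_lock_unlock(primitive) }
#else
    private let primitive: UnsafeMutablePointer<pthread_mutex_t> = {
        let ptr = UnsafeMutablePointer<pthread_mutex_t>.allocate(capacity: 1)
        precondition(pthread_mutex_init(ptr, nil) == 0)
        return ptr
    }()
    func lock() { precondition(pthread_mutex_lock(primitive) == 0) }
    func unlock() { precondition(pthread_mutex_unlock(primitive) == 0) }
#endif

    deinit {
        #if canImport(Darwin)
        primitive.deinitialize(count: 1)
        #else
        _ = pthread_mutex_destroy(primitive)
        #endif
        primitive.deallocate()
    }

    // Scoped helper so callers cannot forget to unlock.
    func withLock<T>(_ body: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try body()
    }
}
```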

The update has been pushed. 1.0.3 uses os_unfair_lock/pthread_mutex and adds DocC-based documentation.

Did you do any performance comparisons?