Right, UniqueID does not depend on Foundation, and it would be nice if it found some use on the server.
Support for time-ordered IDs (UUIDv6) could be particularly interesting. Because they have good locality, they allow you to replace hash tables and hash lookups with simple sorted arrays and binary search, which can yield significant time and memory savings. Hash lookups are O(1) on average, but computing the hash has a non-trivial cost that a time-ordered UUID lets you avoid entirely.
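As a minimal sketch of what that looks like in practice (the type and method names here are illustrative, not part of UniqueID's API): keep the records sorted by ID and look them up with a plain binary search. With time-ordered IDs, new entries almost always land at the end, so inserts are effectively appends.

```swift
// Illustrative sketch: a sorted-array index keyed by any Comparable ID
// (e.g. a time-ordered UUID with a comparable representation).
struct SortedIDIndex<ID: Comparable, Value> {
    private var entries: [(id: ID, value: Value)] = []

    // Insert in sorted position. Time-ordered IDs usually sort after every
    // existing entry, so this is typically an append, not a mid-array shuffle.
    mutating func insert(_ value: Value, for id: ID) {
        let index = firstIndex(notBefore: id)
        entries.insert((id: id, value: value), at: index)
    }

    // Classic binary search: O(log n) comparisons, no hashing of the key.
    func value(for id: ID) -> Value? {
        let index = firstIndex(notBefore: id)
        guard index < entries.count, entries[index].id == id else { return nil }
        return entries[index].value
    }

    // Lower bound: index of the first entry whose ID is not less than `id`.
    private func firstIndex(notBefore id: ID) -> Int {
        var low = 0, high = entries.count
        while low < high {
            let mid = (low + high) / 2
            if entries[mid].id < id { low = mid + 1 } else { high = mid }
        }
        return low
    }
}
```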
They're currently in RFC draft form, but they're compatible (will not collide) with existing UUIDs and there are plenty of implementations in other languages. The difference from UUIDv1 is so small that IMO they are very unlikely to change (it's literally the same timestamp data as UUIDv1, including the weird October 15, 1582 epoch, just with the bits in a different order).
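To make "the bits in a different order" concrete, here's a rough sketch of converting a version-1 UUID to version-6 layout, assuming the field layout described in the draft and a plain 16-byte representation (the function name and representation are illustrative, not UniqueID's API). The 60-bit timestamp is simply re-emitted most-significant-bits-first so the IDs sort chronologically; the clock sequence and node bytes are untouched.

```swift
// Illustrative only: reorder a v1 UUID's timestamp into v6 layout.
func uuidV1BytesToV6(_ bytes: [UInt8]) -> [UInt8] {
    precondition(bytes.count == 16)

    // Reassemble the 60-bit timestamp from the v1 field order:
    // time_low (octets 0-3), time_mid (octets 4-5),
    // time_hi (low 12 bits of octets 6-7; the top 4 bits are the version).
    let timeLow   = bytes[0..<4].reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    let timeMid   = (UInt64(bytes[4]) << 8) | UInt64(bytes[5])
    let timeHigh  = (UInt64(bytes[6] & 0x0F) << 8) | UInt64(bytes[7])
    let timestamp = (timeHigh << 48) | (timeMid << 32) | timeLow

    // Re-emit the same 60 bits most-significant-first, with version 6.
    // Octets 8-15 (clock sequence and node) are copied unchanged.
    var v6 = bytes
    v6[0] = UInt8(truncatingIfNeeded: timestamp >> 52)
    v6[1] = UInt8(truncatingIfNeeded: timestamp >> 44)
    v6[2] = UInt8(truncatingIfNeeded: timestamp >> 36)
    v6[3] = UInt8(truncatingIfNeeded: timestamp >> 28)
    v6[4] = UInt8(truncatingIfNeeded: timestamp >> 20)
    v6[5] = UInt8(truncatingIfNeeded: timestamp >> 12)
    v6[6] = 0x60 | UInt8(truncatingIfNeeded: (timestamp >> 8) & 0x0F)
    v6[7] = UInt8(truncatingIfNeeded: timestamp)
    return v6
}
```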
If we can get some good usage data, I'd be happy to present that to the IETF in hopes of pushing UUIDv6 towards formal standardisation.
Currently, there is one issue with the implementation - it uses atomics to build a spinlock in userspace (the pattern is sketched below, after the quote), which, as Linus Torvalds will tell you, is something you should never do:
I repeat: do not use spinlocks in user space, unless you actually know what you're doing. And be aware that the likelihood that you know what you are doing is basically nil.
There's a very real reason why you need to use sleeping locks (like pthread_mutex etc).
...
Because you should never ever think that you're clever enough to write your own locking routines.. Because the likelihood is that you aren't (and by that "you" I very much include myself - we've tweaked all the in-kernel locking over decades, and gone through the simple test-and-set to ticket locks to cacheline-efficient queuing locks, and even people who know what they are doing tend to get it wrong several times).
There's a reason why you can find decades of academic papers on locking. Really. It's hard.
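For context, the pattern being criticised is roughly the following (sketched here with the swift-atomics package purely for illustration; this is not UniqueID's actual code): a compare-and-swap loop that burns CPU while waiting, and degrades badly if the lock holder gets descheduled by the kernel.

```swift
import Atomics  // swift-atomics package; used here only to illustrate the pattern

// Rough sketch of a user-space spinlock built from atomics - the anti-pattern
// Linus is describing, not something you should actually ship.
final class SpinLock {
    private let flag = ManagedAtomic<Bool>(false)

    func lock() {
        // Busy-wait until we win the CAS. If the current holder is preempted,
        // every waiter keeps spinning and wasting CPU until it runs again.
        while !flag.compareExchange(expected: false, desired: true,
                                    ordering: .acquiring).exchanged {
            // spin
        }
    }

    func unlock() {
        flag.store(false, ordering: .releasing)
    }
}
```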
I have a draft which switches to pthread_mutex/os_unfair_lock. I'll clean it up and push it... hopefully sometime this week.
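For anyone curious, a sleeping-lock wrapper along those lines could look roughly like this - os_unfair_lock on Darwin, pthread_mutex elsewhere. This is a hedged sketch, not the actual draft; the class name and structure are illustrative.

```swift
#if canImport(Darwin)
import Darwin   // os_unfair_lock on Apple platforms
#else
import Glibc    // pthread_mutex_t on Linux
#endif

// Sketch of a platform-appropriate sleeping lock (not UniqueID's real code).
final class GeneratorLock {
    #if canImport(Darwin)
    // os_unfair_lock must live at a stable address, so heap-allocate it.
    private let unfairLock: UnsafeMutablePointer<os_unfair_lock> = {
        let lock = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        lock.initialize(to: os_unfair_lock())
        return lock
    }()
    func withLock<T>(_ body: () throws -> T) rethrows -> T {
        os_unfair_lock_lock(unfairLock)
        defer { os_unfair_lock_unlock(unfairLock) }
        return try body()
    }
    deinit {
        unfairLock.deinitialize(count: 1)
        unfairLock.deallocate()
    }
    #else
    // Same idea with pthread: heap-allocate so the mutex has a stable address.
    private let mutex: UnsafeMutablePointer<pthread_mutex_t> = {
        let mutex = UnsafeMutablePointer<pthread_mutex_t>.allocate(capacity: 1)
        pthread_mutex_init(mutex, nil)
        return mutex
    }()
    func withLock<T>(_ body: () throws -> T) rethrows -> T {
        pthread_mutex_lock(mutex)
        defer { pthread_mutex_unlock(mutex) }
        return try body()
    }
    deinit {
        pthread_mutex_destroy(mutex)
        mutex.deallocate()
    }
    #endif
}
```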
I'm not sure whether an unfair lock is necessarily the best thing to use on a server - unfair locks have good throughput because a single thread can hold the lock for longer, but higher latency, because other threads spend longer waiting for it. Still, it's better than a spinlock.