As Swift evolves, more and more use cases arise. One of these is compute, or performant mathematics. Having the ability to quickly use linear algebra and numerical methods is very important, especially in areas such as HPC, graphics, and real-time applications. Moreover, advances in these technologies usually benefit systems programming as well (for example, loop vectorization).
This leads us to a crossroads. There are two possible paths for Swift here:
Wrap performant libraries such as BLAS implementations and offer a Swift interface to them. This is possible today, and doing so reaps the benefits of SIMD-aware libraries, performing computations as quickly as the underlying hardware allows.
Support SIMD code generation directly whenever the hardware is capable of it. This would allow pure-Swift compute libraries to arise, leading to a revolution in low-level libraries all around.
Moreover, this would increase our confidence in the correctness of these tools: Swift is a much stronger language for the compiler to statically analyze, and an easier language for test suites to check effectively.
Which one should it be? I know that LLVM exposes SIMD capabilities in Clang. Can we have the same in Swift? Or is SIMD awareness too hard to implement, and we should just wrap existing C libraries?
CC @huon (I know you worked on simd for Rust, so this conversation might interest you :)
The part of me that just wants to get my computing done says to wrap existing packages, because I personally don't want to be writing an eigenvalue solver, for example. But I agree with the sentiment that wrapping a C library can be more work than recreating something that matches your current design. I still dread interfacing with LAPACK, BLAS, and even FFTW when I have to.
I guess, then, I'd lean towards rebuilding these tools from the ground up. As you say, a modern language like Swift will make such code more sound; perhaps even better, it provides an opportunity to rethink how these algorithms get used with Swift data structures and programming practices.
More explicitly: it's essential to have both in a language targeting serious computation. You can't ignore the enormous wealth of existing Fortran and C and C++ numerical software. You also need to be able to implement new operations efficiently within the language.
Wrapping BLAS and LAPACK is not terribly painful. It's a lot of boilerplate, but it's largely mechanical. You really want to do this before you undertake writing your own stuff from the ground up (I've worked on Apple's BLAS for a decade, so I have some experience here). Being able to easily write tests against an existing wrapped implementation is invaluable, and it takes a few engineer-years to bring a BLAS to maturity, so you need something to use in the meantime.
On that topic: the simd module is only available on macOS. If that is the way forward for the simd vector types in Swift, are there plans to open source the relevant parts of Accelerate? Currently, Swift maths libraries also targeting Linux and Windows either have to maintain two implementations or not use simd at all.
If the plan is not to use simd, then providing at least first-class aligned vector types (but not methods) on all platforms may be useful.