As Swift evolves, more and more use cases arise. One of these is compute, i.e. high-performance mathematics. Being able to run linear algebra and numerical methods quickly is very important, especially in areas such as HPC, graphics, and real-time applications. Moreover, advances in these technologies usually benefit systems programming as well (loop vectorization, for example).
Which leads us to a crossroads. There are two possible paths Swift could take here:
Wrap performant libraries such as BLAS implementations and offer a Swift interface to them. This is possible today, and it reaps the power of SIMD-aware libraries to do computations as quickly as the underlying hardware allows.
Have direct support for SIMD code generation whenever the hardware is capable of it. This would allow pure-Swift compute libraries to arise, leading to a revolution in low-level libraries all around.
Moreover, this would increase our confidence in the correctness of these tools: Swift is a much stronger language for the compiler to statically analyze, and an easier language for test suites to check effectively.
Which one should it be? I know that LLVM exposes SIMD capabilities in Clang. Can we have the same in Swift? Or is SIMD awareness too hard to implement, and we should just wrap existing C libraries?
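For concreteness, here is a rough, untested sketch of what the two paths look like in practice, using Accelerate's cblas_daxpy as a stand-in for "wrap an existing BLAS" (the exact integer widths in the imported CBLAS signatures vary between SDKs, so treat the details as illustrative only):

```swift
import Accelerate  // Apple platforms only

// Path 1: wrap an existing SIMD-aware library. Compute y = 2*x + y via BLAS.
let x: [Double] = [1, 2, 3, 4]
var y: [Double] = [10, 20, 30, 40]
cblas_daxpy(Int32(x.count), 2.0, x, 1, &y, 1)

// Path 2: the same operation as we would like to write it in pure Swift,
// relying on the compiler (or first-class SIMD types) to make it just as fast.
for i in x.indices {
    y[i] += 2.0 * x[i]
}
```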
CC @huon (I know you worked on simd for Rust, so this conversation might interest you :)
can the compiler team tell me if I’m wrong, but doesn’t LLVM already support first-class SIMD and vectorization? we would just need to expose it in the language then
publishing wrappers of C libraries kind of takes away any community motivation to develop Swift replacements. this is why i always say Swift’s excellent C interop has been both a blessing and a curse
layout is definitely a big hurdle. As someone who works with wrapped C libraries a lot, I can tell you that not enough people think about layout when they go “oh i’ll just use a C library”
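to make that concrete, here’s the kind of mismatch i mean (a sketch; the printed sizes are what i’d expect on a 64-bit Apple platform, double-check on yours):

```swift
import simd  // Apple platforms; only here to illustrate the layout gap

// "a vector of three floats" is not one single thing:
struct PlainFloat3 { var x, y, z: Float }

// plain struct: 12 bytes of data, 4-byte alignment (expected: 12 12 4)
print(MemoryLayout<PlainFloat3>.size, MemoryLayout<PlainFloat3>.stride, MemoryLayout<PlainFloat3>.alignment)

// simd float3: padded to a 16-byte, 16-byte-aligned hardware vector (expected: 16 16 16)
print(MemoryLayout<float3>.size, MemoryLayout<float3>.stride, MemoryLayout<float3>.alignment)

// hand a buffer of one to a C API expecting the other and you silently
// read or write the wrong bytes. that's the trap.
```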
The part of me that just wants to get my computing done says to wrap existing packages, because I personally don't want to be writing an eigenvalue solver, for example. But I agree with the sentiment that wrapping a C library can be more work than recreating something that matches your current design. I still dread interfacing with LAPACK, BLAS, and even FFTW when I have to.
I guess then, I'd lean towards re-building these tools from the ground up. As you say, modern languages like Swift will make such code more sound. But perhaps even better than that, it's an opportunity to rethink how these algorithms get used with Swift data structures and programming practices.
More explicitly: it's essential to have both in a language targeting serious computation. You can't ignore the enormous wealth of existing Fortran and C and C++ numerical software. You also need to be able to implement new operations efficiently within the language.
Wrapping BLAS and LAPACK is not terribly painful. It's a lot of boilerplate, but it's largely mechanical. You really want to do this before you undertake writing your own stuff from the ground up (I've worked on Apple's BLAS for a decade, so I have some experience here). Being able to easily write tests against an existing wrapped implementation is invaluable, and it takes a few engineer-years to bring a BLAS to maturity, so you need something to use in the meantime.
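To give a flavour of that boilerplate, here's a sketch of a wrapper for a single routine, dgemm through CBLAS (written from memory; the imported integer types differ a bit between SDKs, so don't take the details as gospel):

```swift
import Accelerate

/// C = A * B for row-major matrices stored as flat [Double] arrays.
/// `a` is m x k, `b` is k x n, and the result is m x n.
func matmul(_ a: [Double], _ b: [Double], m: Int, n: Int, k: Int) -> [Double] {
    precondition(a.count == m * k && b.count == k * n)
    var c = [Double](repeating: 0, count: m * n)
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                Int32(m), Int32(n), Int32(k),
                1.0, a, Int32(k),   // A and its leading dimension
                b, Int32(n),        // B and its leading dimension
                0.0, &c, Int32(n))  // C and its leading dimension
    return c
}
```

Repeat that across a few hundred entry points and four element types and you have the mechanical part; the payoff is that it immediately gives you a reference implementation to test from-scratch Swift code against.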
On that topic: the simd module is only available on macOS. If that is the way forward for the simd vector types in Swift, are there plans to open source the relevant parts of Accelerate? Currently, Swift maths libraries also targeting Linux and Windows either have to maintain two implementations or not use simd at all.
If the plan is not to use simd, then providing at least first-class aligned vector types (but not methods) on all platforms may be useful.
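For reference, this is the kind of thing that only builds on Apple platforms right now (a sketch, with names as I remember them from the simd module):

```swift
import simd  // currently only resolves on Apple platforms

let a = float4(1, 2, 3, 4)
let b = float4(10, 20, 30, 40)
let sum = a + b      // a single vector add on 16-byte-aligned storage
let d = dot(a, b)    // dot product

// On Linux or Windows this doesn't compile, so a cross-platform maths
// library needs a second, scalar implementation of the same operations.
```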
Fwiw, you can use the underlying LLVM simd builtins on Linux by passing the -parse-stdlib flag to swift.
It's quite unwieldy to use, of course, but it could be used to build a nicer-to-use simd library usable on Linux (although that might not work with SwiftPM right now: Compiler flags in Package.swift).
I'm not saying whether or not Swift should come with simd support built in, but you can play around with this or use it if you really need it.
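Roughly what that looks like, for the curious (sketching from memory; the Builtin spellings are compiler internals and not guaranteed to stay stable):

```swift
// Build with: swiftc -parse-stdlib MyVec.swift
// -parse-stdlib suppresses the implicit stdlib import and makes the
// compiler-internal Builtin module visible, so re-import Swift explicitly.
import Swift

// A wrapper around a hardware vector of four 32-bit floats. The builtin
// vector types are spelled like Builtin.Vec4xFPIEEE32, and the operations
// on them are similarly low-level, which is the unwieldy part.
struct MyFloat4 {
    var _value: Builtin.Vec4xFPIEEE32
}
```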
I have imagined designing fixed-size arrays as a SIMD primitive. (My last design had a @vector attribute for array types.) Right now I’m waiting for a data-parallelism manifesto to go further.
I don't know, but @CTMacUser is probably right: a manifesto would almost certainly answer the questions asked in this thread.
(Maybe we can even write one ourselves. But I'm no expert on either simd or llvm. I can contribute on the usability side of things, though, if we decide to work on one.)