What should be the road forward for compute with Swift?

As Swift evolves, more and more use cases arise. One of these is compute: high-performance mathematics. Having the ability to quickly use linear algebra and numerical methods is very important, especially in areas such as HPC, graphics, and real-time applications. Moreover, advances in these technologies usually benefit systems programming as well (like loop vectorization).

This leads us to a crossroads. There are two possible paths for Swift here:

  1. Wrap performant libraries such as BLAS implementations and offer a Swift interface to them. This is possible today, and doing so harnesses SIMD-aware libraries to perform computations as quickly as the underlying hardware allows.

  2. Directly support SIMD code generation whenever the hardware is capable of it. This would allow pure-Swift compute libraries to arise, leading to a revolution in low-level libraries all around.
    Moreover, this would increase our confidence in the correctness of these tools: Swift is a much stronger language for the compiler to statically analyze, and an easier language for test suites to check effectively.
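To make option 2 concrete, here is a hypothetical sketch (the function name follows the BLAS `saxpy` convention, but this is plain Swift, not a wrapper): a scalar loop of exactly the kind that loop vectorization targets. With first-class SIMD support, the compiler could turn this into vector instructions automatically.

```swift
// y = a * x + y, written as a plain scalar loop.
// A vectorizing compiler could process several elements per instruction.
func saxpy(_ a: Float, _ x: [Float], _ y: inout [Float]) {
    precondition(x.count == y.count, "vectors must have equal length")
    for i in x.indices {
        y[i] = a * x[i] + y[i]
    }
}

var y: [Float] = [1, 1, 1, 1]
saxpy(2, [1, 2, 3, 4], &y)
print(y) // [3.0, 5.0, 7.0, 9.0]
```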

Which one should it be? I know that LLVM exposes SIMD capabilities in Clang. Can we have the same in Swift? Or is SIMD awareness too hard to implement, and we should just wrap existing C libraries?


CC @huon (I know you worked on simd for Rust, therefore this conversation could interest you :)

7 Likes

can someone on the compiler team tell me if i’m wrong, but doesn’t LLVM already support first-class SIMD and vectorization? we would just need to expose it in the language then

These options seem orthogonal. We can start by wrapping BLAS, then if and when it becomes feasible we can consider implementing a pure-Swift version.

3 Likes

publishing wrappers of C libraries kind of takes away any community motivation to develop Swift replacements. this is why i always say Swift’s excellent C interop has been both a blessing and a curse

9 Likes

Good point. But it'd change the whole approach and motivation for library makers if we knew whether or not SIMD was in the cards for Swift.

To me it's much easier to work in Swift:

  • Wrapping a C or Fortran compute library isn't trivial. These libraries have large API surfaces and very specific layout requirements.
  • The design of the Swift library has to fit the design of the wrapped implementation.
  • Testing a C wrapper doesn't give me as much confidence as when testing a pure Swift library.
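To illustrate the layout point: Fortran libraries like BLAS and LAPACK expect column-major storage, so any Swift wrapper has to pick a convention and convert at the boundary. A minimal sketch (the `ColumnMajorMatrix` type is hypothetical, just for illustration):

```swift
// A matrix stored column-major, as Fortran routines expect.
// Element (r, c) lives at index c * rows + r in the flat buffer.
struct ColumnMajorMatrix {
    var storage: [Double]
    let rows: Int
    let cols: Int

    init(rows: Int, cols: Int) {
        self.rows = rows
        self.cols = cols
        self.storage = Array(repeating: 0, count: rows * cols)
    }

    subscript(r: Int, c: Int) -> Double {
        get { storage[c * rows + r] }       // column-major indexing
        set { storage[c * rows + r] = newValue }
    }
}

var m = ColumnMajorMatrix(rows: 2, cols: 2)
m[0, 1] = 5
// In memory: [m[0,0], m[1,0], m[0,1], m[1,1]] = [0, 0, 5, 0]
```

Getting this mapping wrong silently transposes your matrices, which is exactly the kind of subtle bug wrapper authors have to guard against.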
1 Like

layout is definitely a big hurdle. as someone who works with wrapped C libraries a lot, i can tell you not enough people think about layout when they go “oh i’ll just use a C library”

That's a really tough question.

The part of me that just wants to get my computing done says to wrap existing packages, because I personally don't want to be writing an eigenvalue solver, for example. But I agree with the sentiment that wrapping a C library can be more work than recreating something that matches your current design. I still dread interfacing with LAPACK, BLAS, and even FFTW when I have to.

I guess, then, I'd lean towards rebuilding these tools from the ground up. As you say, a modern language like Swift will make such code more sound, but perhaps even better, it provides an opportunity to rethink how these algorithms are used with Swift data structures and programming practices.

1 Like

Some support already exists, e.g.:

import simd

func add(lhs: float4, rhs: float4) -> float4 {
  return lhs + rhs // generates a single vector add instruction
}

print(add(lhs: float4(1, 2, 3, 4), rhs: float4(4, 3, 2, 1)))

let value = float2(1, 2)
let first = value[0]  // elements are accessed by subscript
let second = value[1]
1 Like

More explicitly: it's essential to have both in a language targeting serious computation. You can't ignore the enormous wealth of existing Fortran, C, and C++ numerical software. You also need to be able to implement new operations efficiently within the language.

Wrapping BLAS and LAPACK is not terribly painful. It's a lot of boilerplate, but it's largely mechanical. You really want to do this before you undertake writing your own stuff from the ground up (I've worked on Apple's BLAS for a decade, so I have some experience here). Being able to easily write tests against an existing wrapped implementation is invaluable, and it takes a few engineer-years to bring a BLAS to maturity, so you need something to use in the meantime.
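A sketch of the testing strategy described above: validate a new pure-Swift kernel against an existing trusted implementation. Here `referenceDot` is a hypothetical stand-in for a wrapped BLAS routine such as `cblas_ddot` (in a real test you would call through the wrapper); `swiftDot` is the new implementation under test.

```swift
// Stand-in for a wrapped, trusted BLAS dot product (e.g. cblas_ddot).
func referenceDot(_ x: [Double], _ y: [Double]) -> Double {
    return zip(x, y).map(*).reduce(0, +)
}

// The new pure-Swift implementation being brought to maturity.
func swiftDot(_ x: [Double], _ y: [Double]) -> Double {
    precondition(x.count == y.count, "vectors must have equal length")
    var sum = 0.0
    for i in x.indices {
        sum += x[i] * y[i]
    }
    return sum
}

// Cross-check the new code against the trusted reference.
let x = (0..<100).map { Double($0) }
let y = (0..<100).map { Double($0) * 0.5 }
assert(abs(swiftDot(x, y) - referenceDot(x, y)) < 1e-9)
```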

7 Likes

On that topic: the simd module is only available on macOS. If that is the way forward for the simd vector types in Swift, are there plans to open source the relevant parts of Accelerate? Currently, Swift maths libraries also targeting Linux and Windows either have to maintain two implementations or not use simd at all.

If the plan is not to use simd, then providing at least first-class aligned vector types (but not methods) on all platforms may be useful.

6 Likes

Fwiw, you can use the underlying LLVM simd builtins on Linux by passing the -parse-stdlib flag to swift.

It's quite unwieldy to use, of course, but it could be used to build a nicer-to-use simd library usable on Linux (although that might not work with SwiftPM right now: Compiler flags in Package.swift).

I'm not saying whether or not Swift should come with simd support built-in, but you can play around with this or use it if you really need it.

1 Like

if it is that simple i don’t understand why this is not already exposed in the stdlib, can someone who works on it please explain

I have imagined designing fixed-size arrays as a SIMD primitive. (My last design had a @vector attribute for array types.) Right now I’m waiting for a data-parallelism manifesto to go further.

Is anyone working on one?

I don't know, but @CTMacUser is probably right: a manifesto would almost certainly answer the questions asked in this thread.

(Maybe we can even make one. But I'm no expert on simd or LLVM. I can contribute on the usability side of things, though, if we decide to work on one.)