TaylorTorch: A modern Swift wrapper for LibTorch

I’m thrilled to introduce TaylorTorch: a modern Swift wrapper for LibTorch, designed to resurrect the vision of a powerful, end-to-end deep learning framework in pure Swift! :rocket:

Inspired by recent deep dives into "differentiable wonderlands" (a nod to the excellent book by Simone Scardapane), I challenged myself to see if we could bring back the spirit of Swift for TensorFlow, but this time powered by the battle-tested PyTorch backend.

TaylorTorch is the result: it bridges the elegance of Swift's first-class automatic differentiation with the raw power of LibTorch's C++ engine.
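To make "first-class automatic differentiation" concrete: this is a Swift language feature (via the `_Differentiation` module shipped with recent toolchains), independent of TaylorTorch itself. A minimal illustration, not TaylorTorch code:

```swift
import _Differentiation

// A plain Swift function marked as differentiable.
@differentiable(reverse)
func square(_ x: Double) -> Double {
    x * x
}

// The compiler synthesizes the reverse-mode derivative: d/dx x^2 = 2x.
let g = gradient(at: 3.0, of: square)
print(g)  // 6.0
```

TaylorTorch builds on this language-level autodiff while delegating tensor computation to LibTorch.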

This project stands on the shoulders of giants. It wouldn't be possible without the pioneering work of the Swift for TensorFlow team. Crucially, it's also thanks to the continued efforts of the Swift community and the dedicated team at PassiveLogic, who have matured the language's auto-diff capabilities into what they are today. :folded_hands:

What's inside this experimental but "batteries-included" alpha? :battery:


  • A Familiar, Swift-Idiomatic API: Compose complex models using a protocol-oriented design and a Sequential builder that feels right at home in Swift.

  • Rich Set of Layers: Linear, Conv2D/3D, Multi-Head Attention, BatchNorm, GNN layers, and more are ready to go.

  • First-Class Graph Learning: Built-in components for Graph Neural Networks, inspired by DeepMind's Graph Nets.

  • Working Examples: Get started immediately with examples for MNIST (vision), sequence-to-sequence translation, and the Karate Club problem (GNNs).
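To give a flavor of the Sequential builder mentioned above, here is a hypothetical sketch; the layer names and initializer labels below are assumptions for illustration, not confirmed TaylorTorch API:

```swift
import TaylorTorch  // hypothetical import

// A small MLP composed with a result-builder-style Sequential,
// in the protocol-oriented spirit described above.
let model = Sequential {
    Linear(inputSize: 784, outputSize: 128)  // assumed initializer labels
    ReLU()
    Linear(inputSize: 128, outputSize: 10)
}
```

See the repository's working examples (MNIST, seq2seq, Karate Club) for the actual API surface.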

This is a passion project and a potential testbed for Swift's compiler, but I'm excited about the road ahead—expanding operator coverage, adding GPU/Metal support, and building a richer model zoo.

If you're interested in the future of differentiable programming in Swift, I'd love your feedback and ideas. Check out the project: GitHub - pedronahum/TaylorTorch

13 Likes

@pedronahum nice work! I'm trying to build this project (on Linux), but I see you have some hard-coded paths in the Package.swift. I have already forked the repo and started some work to define those paths using environment variables. However, could you provide some instructions on where you get the PyTorch headers? Are you using the PyTorch git repo? I didn't find the directory structure that's present in the Package.swift file.

Hi @tjadejong,

I haven't tested the library on Linux yet (only Mac so far), but I compiled PyTorch from source (CPU only). There are three PyTorch paths in the Package.swift file; pytorchInstallDir is the most important one, and the two extra paths are derived from it. For the headers, have a look at pytorchApiIncludeDir.
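The environment-variable approach @tjadejong mentions could look roughly like the sketch below. The variable name PYTORCH_INSTALL_DIR and the derived paths are assumptions for illustration; the actual Package.swift uses hard-coded constants named as in the source:

```swift
// swift-tools-version:5.9
// Sketch: derive the LibTorch paths from an environment variable
// instead of a hard-coded string, with a fallback default.
import PackageDescription
import Foundation

// Root of the compiled LibTorch install (assumed variable name).
let pytorchInstallDir = ProcessInfo.processInfo.environment["PYTORCH_INSTALL_DIR"]
    ?? "/usr/local/libtorch"

// The two derived paths, mirroring how Package.swift derives them
// from pytorchInstallDir.
let pytorchApiIncludeDir = "\(pytorchInstallDir)/include/torch/csrc/api/include"
let pytorchLibDir = "\(pytorchInstallDir)/lib"
```

These values would then feed the target's header-search and linker settings.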

For Linux, however, I think I would also need to change the "Common compiler & linker settings". So some extra work may be needed to make it work on a Linux box. Will also take a look.

Best,

Hi @tjadejong,

I started to add CI (macOS and Ubuntu) and Codespaces. It may take a day or two to wrap up. Will confirm here when it is ready.

Thanks for your feedback!

1 Like

@pedronahum ah, that would be great! Please let me know if you need any help setting up the Ubuntu one. I have been trying to get a CPU-only PyTorch build going, but I'm having some problems: the makefile picks up the AMD ROCm install automatically and ignores all compiler flags, i.e. USE_ROCM=0. I have also tried building with ROCm enabled, but that crashes. The documentation in the PyTorch README is also a bit on the short side. Could you maybe share the build parameters that you pass to make? That would already help me a lot :slight_smile:

1 Like

@pedronahum would it also be possible to enable the Discussions part on GitHub? Might be handier to keep discussions like this close to the code?

Hi @tjadejong ,

GitHub Discussions are now open. Below please find the cmake command I used. Please note that I used the clang/clang++ compilers (due to C++ interop). Although this worked on my Mac, it may need extra adjustments on Linux. But I am on it.

export CC=clang
export CXX=clang++

mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release \
      -DBUILD_SHARED_LIBS=ON \
      -DUSE_CUDA=OFF \
      -DUSE_MPS=OFF \
      -DUSE_DISTRIBUTED=OFF \
      -DBUILD_TEST=OFF \
      -DCMAKE_INSTALL_PREFIX=../install \
      -DPYTHON_EXECUTABLE=$(which python3) \
      ..

1 Like

Thanks! Didn’t have time today but will try soon and will let you know the results!

Hey @pedronahum sorry it took me so long to comment.

I looked over your repo! Since you have the MNIST example, I was curious: would you have a performance comparison against, say, a Python MNIST run with similar parameters?

It would also be nice to see the code side by side, to see what Swift provides on top of what the Python API can do!

2 Likes

Hey @JaapWijnen,

No worries at all. The work you guys have been doing on the compiler motivated me to learn more about Swift :slight_smile:, so all good.

I just started to benchmark the results (TaylorTorch and SwiftIR). Hopefully I can summarize all the findings soon. Overall, I am learning a lot about how PyTorch and JAX are compiled and integrated into Python; lots of work is still needed to match the overhead reductions these Python libraries have mastered.

In the case of TaylorTorch (Mac, LibTorch CPU), to my surprise the forward pass of TaylorTorch is 20% faster than PyTorch, yet the gradient is 5x slower. With the help of @tjadejong, TaylorTorch is now available on Linux. But more work is needed to bring GPU capabilities.

In the case of SwiftIR, the performance (forward and gradient) is approximately 10% slower than JAX. But this is more about how JAX integrates the runtime. I am currently testing SwiftIR with TPUs, and I am observing a similar pattern versus JAX. Also, in case you are interested, OpenXLA Shardy has been implemented in SwiftIR, so we can now test on clusters.

Let me also think about examples where we can illustrate the benefits of having the front-to-back support of the Swift compiler.

Best,

Pedro N.

1 Like

Quick update: the MNIST performance gap versus PyTorch is being tracked here: MNIST Benchmark vs Pytorch · Issue #6 · pedronahum/TaylorTorch · GitHub. Already found some quick wins (not yet deployed in main). Getting closer to PyTorch for the backward pass :slight_smile: