I’m thrilled to introduce TaylorTorch: a modern Swift wrapper for LibTorch, designed to resurrect the vision of a powerful, end-to-end deep learning framework in pure Swift!
Inspired by recent deep dives into "differentiable wonderlands" (a nod to the excellent book by Simone Scardapane), I challenged myself to see if we could bring back the spirit of Swift for TensorFlow, but this time powered by the battle-tested PyTorch backend.
TaylorTorch is the result: it bridges the elegance of Swift's first-class automatic differentiation with the raw power of LibTorch's C++ engine.
This project stands on the shoulders of giants. It wouldn't be possible without the pioneering work of the Swift for TensorFlow team. Crucially, it's also thanks to the continued efforts of the Swift community and the dedicated team at PassiveLogic, who have matured the language's auto-diff capabilities into what they are today.
What's inside this experimental, but "batteries-included" alpha?
A Familiar, Swift-Idiomatic API: Compose complex models using a protocol-oriented design and a Sequential builder that feels right at home in Swift.
Rich Set of Layers: Linear, Conv2D/3D, Multi-Head Attention, BatchNorm, GNN layers, and more are ready to go.
First-Class Graph Learning: Built-in components for Graph Neural Networks, inspired by DeepMind's Graph Nets.
Working Examples: Get started immediately with examples for MNIST (vision), sequence-to-sequence translation, and the Karate Club problem (GNNs).
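To give a flavor of the protocol-oriented design, here is a purely hypothetical sketch of what composing and differentiating a small model might look like. The layer and builder names (`Sequential`, `Linear`, `ReLU`) are taken from the feature list above, but the signatures and module name are assumptions, not the actual TaylorTorch surface:

```swift
import TaylorTorch       // hypothetical module name
import _Differentiation  // Swift's first-class autodiff

// Compose a small MLP with a result-builder-style Sequential.
// Parameter labels here are illustrative only.
let model = Sequential {
  Linear(inputSize: 784, outputSize: 128)
  ReLU()
  Linear(inputSize: 128, outputSize: 10)
}

// Forward pass and gradients via Swift's native autodiff.
let logits = model(input)
let grads = gradient(at: model) { m in
  softmaxCrossEntropy(logits: m(input), labels: labels)
}
```

The appeal of this style is that the model is an ordinary Swift value, so Swift's `Differentiable` machinery can derive the tangent structure for you.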
This is a passion project and a potential testbed for Swift's compiler, but I'm excited about the road ahead—expanding operator coverage, adding GPU/Metal support, and building a richer model zoo.
If you're interested in the future of differentiable programming in Swift, I'd love your feedback and ideas. Check out the project: GitHub - pedronahum/TaylorTorch
@pedronahum nice work! I’m trying to build this project (on Linux), but I see there are some hard-coded paths in the Package.swift. I’ve already forked the repo and started some work to define those paths via environment variables. However, could you provide some instructions on where you got the PyTorch headers? Are you using the PyTorch git repo? I didn’t find the directory structure referenced in the Package.swift file.
I haven’t tested the library on Linux yet (only macOS so far), but I compiled PyTorch from source (CPU only). There are three PyTorch paths in the Package.swift file; pytorchInstallDir is the most important one, and the two other paths are derived from it. For the headers, have a look at pytorchApiIncludeDir.
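For anyone attempting the environment-variable approach mentioned above, a minimal sketch of how the manifest could resolve those paths might look like this. The variable name `PYTORCH_INSTALL_DIR`, the fallback default, and the target settings are all assumptions; only `pytorchInstallDir` and `pytorchApiIncludeDir` come from the thread, and the derived layout assumes a standard LibTorch install tree:

```swift
// swift-tools-version:5.9
// Package.swift (sketch, not the actual TaylorTorch manifest)
import PackageDescription
import Foundation

// Resolve the LibTorch install location from the environment, with a
// fallback default. The derived include/lib paths assume the usual
// layout of a LibTorch distribution.
let pytorchInstallDir = ProcessInfo.processInfo.environment["PYTORCH_INSTALL_DIR"]
    ?? "/usr/local/libtorch"
let pytorchApiIncludeDir = "\(pytorchInstallDir)/include/torch/csrc/api/include"
let pytorchLibDir = "\(pytorchInstallDir)/lib"

let package = Package(
    name: "TaylorTorch",
    targets: [
        .target(
            name: "TaylorTorch",
            cxxSettings: [
                .unsafeFlags(["-I\(pytorchInstallDir)/include",
                              "-I\(pytorchApiIncludeDir)"])
            ],
            linkerSettings: [
                .unsafeFlags(["-L\(pytorchLibDir)",
                              "-ltorch", "-ltorch_cpu", "-lc10"])
            ]
        )
    ]
)
```

Because the manifest is ordinary Swift, `ProcessInfo` works at manifest-evaluation time, which keeps machine-specific paths out of the repo.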
For Linux, however, I think I would also need to change the “Common compiler & linker settings”, so some extra work may be needed to make it work on a Linux box. I’ll also take a look.
@pedronahum ah, that would be great! Please let me know if you need any help setting up the Ubuntu build. I have been trying to get the PyTorch build going for CPU only, but I’m having some problems: the makefile picks up the AMD ROCm install automatically and ignores all compiler flags, i.e. USE_ROCM=0. I have also been trying to build with ROCm enabled, but that crashes. The documentation in the PyTorch README is also a bit on the short side. Could you maybe share the build parameters that you provide to make? That would already help me a lot.
GitHub Discussions are now open. Below please find the cmake command I used. Please note that I used the clang/clang++ compiler (due to C++ interop). Although this worked on my Mac, it may need extra adjustments on Linux, but I am on it.
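(The exact command wasn’t captured in this thread. As a hedged starting point, a CPU-only LibTorch build with clang might look like the sketch below; USE_CUDA, USE_ROCM, and BUILD_PYTHON are standard PyTorch CMake options, but the install prefix and job count are placeholders you should adapt. Note that, as discussed above, some setups may ignore USE_ROCM=OFF if ROCm is auto-detected.)

```shell
# Clone PyTorch with submodules, then configure a CPU-only build with clang.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
mkdir build && cd build
cmake .. \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX="$HOME/libtorch" \
  -DBUILD_PYTHON=OFF \
  -DUSE_CUDA=OFF \
  -DUSE_ROCM=OFF
cmake --build . --target install -j"$(nproc)"
```

After the install step, the prefix directory should contain the include/ and lib/ trees that the Package.swift paths discussed earlier point at.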