SwiftIR: Revisiting "Swift as Syntactic Sugar for MLIR" with Modern Swift Features

Hi all,

Remember the 2019 discussion about "Swift as Syntactic Sugar for MLIR"? I've been exploring this vision using modern Swift features that didn't exist back then.

Background: From Passenger to Driver

Some of you might know my previous work on TaylorTorch, where I added automatic differentiation to LibTorch for Swift. While that project works, Swift is ultimately just a passenger - we can call LibTorch's APIs, even add differentiation protocols, but how tensors are optimized, compiled, or executed is entirely at LibTorch's discretion. Swift has no say in the compilation pipeline.

What I Built

SwiftIR is a Swift framework for building, manipulating, and executing MLIR code with full type safety. Instead of wrapping someone else's compiler, Swift now owns the compilation pipeline - from IR generation through optimization to hardware execution.

Key achievements:

  • Swift controls the entire pipeline - Not just wrapping, but owning compilation

  • OpenXLA/PJRT integration working - Same battle-tested runtime powering JAX/TensorFlow

  • 20+ working examples (CNNs, ResNets) executing on CPU via XLA compilation

  • SPIR-V generation in progress - MLIR opens doors to Vulkan/Metal compute shaders

  • Type-safe MLIR construction - Catch errors at compile time, not at runtime

  • Multiple abstraction levels: string-based MLIR, type-safe DSL, and macro vision

Quick Example

```swift
import SwiftIR

let tensorType = TensorType(shape: [4])

let module = StableHLOModule(name: "add_module") {
    StableHLOFunction(
        name: "add",
        parameters: [
            Parameter(name: "a", type: tensorType),
            Parameter(name: "b", type: tensorType)
        ],
        returnType: tensorType
    ) {
        Add("a", "b", type: tensorType)
    }
}

let mlir = module.build()  // Generate MLIR from DSL
let client = try PJRTClient.createCPU()
let result = try client.compile(mlir).execute(
    inputs: [Tensor([1,2,3,4]), Tensor([5,6,7,8])]
)
print(result) // [6, 8, 10, 12]
```

Unlike TaylorTorch where we worked within LibTorch's constraints, SwiftIR gives Swift developers direct control over the entire stack - we decide the optimizations, the lowering strategies, and the execution model.

Looking for Feedback

This is a research project exploring what's possible when Swift owns its ML stack. Coming from TaylorTorch, where Swift was limited by C++ design decisions, I'm excited to explore what we can achieve with full control. I'd love to hear:

  • Is OpenXLA integration valuable for Swift developers?

  • Interest in SPIR-V/GPU compute beyond ML?

  • What limitations in wrapped C++ frameworks have you hit?

  • Anyone interested in collaborating on the macro system or GPU backends?

  • Thoughts on the API design and the "levels of abstraction" approach?

  • Should Swift aim to own more of its ML/compute stack?

GitHub: https://github.com/pedronahum/SwiftIR. 20+ working examples included (all using OpenXLA's runtime).

Best,

Pedro N.


Interesting, have you written more about the technical reasons why someone would want to use this as opposed to your TaylorTorch, or PyTorch itself?

I think potential users would be interested in a blog post exploring those likely tradeoffs, particularly if you could benchmark the three approaches and show some results. Maybe even something you could submit to the official Swift blog, to bring more attention and help to your effort. :smiley:


Hi @Finagolfin,

Thanks for the quick feedback – much appreciated! You've given me some good points to consider about how to better articulate the benefits of Swift-MLIR integration. I’ve got some homework to do :slight_smile:

For context, it's worth noting that the PyTorch team has already gone down this path with Torch-MLIR, which demonstrates the value of these integrations beyond just theoretical benefits.

For those unfamiliar, MLIR (Multi-Level Intermediate Representation) is a compiler infrastructure framework that excels at handling complex optimizations and code generation across diverse hardware targets. While it's gained prominence in machine learning circles, its real strength lies in representing computations at multiple abstraction levels – making it useful well beyond ML applications.

What makes MLIR particularly compelling is its dialect system, which lets developers define custom operations and transformations. This makes building domain-specific compilers with operation fusion much more straightforward. Additionally, MLIR provides a unified approach to targeting specialized accelerators, which is increasingly important in today's heterogeneous computing landscape.

The ultimate goal here is to enhance Swift's capabilities for use cases where custom compilation and optimization are critical to performance.


I am familiar with MLIR at a very zoomed-out level, but not with Torch-MLIR, as I have nothing to do with ML.

I think it may help your effort to explain two aspects to your ML audience, as an outsider peering from afar:

  1. What are some concrete benchmarks of SwiftIR that show better performance or power usage, or at least the potential for it, particularly on CNNs or other currently popular ML models?
  2. Why would someone want to use SwiftIR instead of Torch-MLIR, i.e., what are the exact research directions you have already explored and others you hope to explore?

These kinds of questions are similar to what the Android workgroup, which I'm involved with, is thinking about articulating right now, so we're in the same boat. :wink:

We are looking to blog about these topics for Android; I'm suggesting you do the same somewhere more public than these forums.


Update: Graph Compilation for Differentiable Swift

I've been working on graph compilation via tracing—using Swift's existing autodiff as-is, but capturing the computation into MLIR for XLA compilation:

```
Swift Source (@differentiable)
↓
DifferentiableTracer (captures ops)
↓
Swift's AD generates pullbacks (unchanged!)
↓
Tracers emit MLIR operations
↓
Complete forward+backward graph
↓
XLA compilation + fusion
↓
Single optimized kernel via PJRT
```

The DifferentiableTracer rides along with normal @differentiable code, observes Swift's AD generating pullbacks, and hands the complete forward+backward graph to XLA. No compiler changes needed—it builds on top of the existing autodiff foundation.
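
To make the tracing idea concrete, here is a minimal, self-contained sketch under stated assumptions: the TraceTensor type, the emit helper, and the textual op name below are hypothetical illustrations, not SwiftIR's actual API. The pattern is that a value type carries a concrete number for Swift's autodiff plus the SSA name of the MLIR value it emitted, so ordinary @differentiable code records the graph as a side effect of running. (Requires a toolchain that ships the experimental _Differentiation module.)

```swift
import _Differentiation

// Hypothetical sketch (not SwiftIR's API): a tracer value pairs a
// concrete Float for Swift's autodiff with the SSA name of the MLIR
// value it emitted.
struct TraceTensor: Differentiable {
    var value: Float                 // drives Swift's AD as usual
    @noDerivative var ssa: String    // e.g. "%1" in the emitted MLIR
}

var emitted: [String] = []           // stand-in for a real MLIR builder
var nextID = 0

func emit(_ op: String, _ args: [String]) -> String {
    nextID += 1
    let name = "%\(nextID)"
    emitted.append("\(name) = \(op) \(args.joined(separator: ", "))")
    return name
}

@differentiable(reverse)
func tracedAdd(_ a: TraceTensor, _ b: TraceTensor) -> TraceTensor {
    // Swift's AD differentiates the `value` path; the `ssa` path is
    // @noDerivative, so emitting MLIR is an inert side effect.
    TraceTensor(value: a.value + b.value,
                ssa: emit("stablehlo.add", [a.ssa, b.ssa]))
}

let x = TraceTensor(value: 1, ssa: "%arg0")
let y = TraceTensor(value: 2, ssa: "%arg1")
let (z, pb) = valueWithPullback(at: x, y, of: tracedAdd)
let (dx, dy) = pb(TraceTensor.TangentVector(value: 1))
print(z.value, dx.value, dy.value)   // 3.0 1.0 1.0
print(emitted)                       // ["%1 = stablehlo.add %arg0, %arg1"]
```

The real DifferentiableTracer presumably handles shapes, many more ops, and tracing of the backward pass as well; the sketch only shows why no compiler changes are needed: the tracer is just another Differentiable type flowing through standard autodiff.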

I've started measuring the benefit of this approach, and initial results are very encouraging. For example, in a building simulation exercise, this approach shows significant improvements at high iteration counts.

On the project health side: GitHub Actions CI is now in place (including installation scripts for dependencies), and unit test coverage has improved significantly—making it easier for others to get started and contribute.


Update: TensorBoard Profiling, Functional Transformations & Jupyter/Colab Support

Three major additions since the last update:

1. TensorBoard Profiling

SwiftIR now integrates with XLA's profiler. Add custom trace annotations and visualize execution in TensorBoard:

```swift
let profiler = try PJRTProfiler.create()
try profiler.start()

for epoch in 0..<numEpochs {
    try pjrtTrainStep(epoch) {
        try pjrtTraced("trace_graph") { /* build computation */ }
        try pjrtTraced("xla_execution") { /* run on device */ }
    }
}

try profiler.stop()
try PJRTProfiler.exportToFile(profiler.collectData(),
    filepath: "/tmp/profile/host.xplane.pb")
```

Then `tensorboard --logdir=/tmp/profile` shows the Profile tab with custom annotations alongside XLA's internal metrics. See the `Examples/` folder for complete working demos.

2. JAX-Style Functional Transformations

Now implemented and tested:

  • vmap - automatic batching

  • scan - sequential ops (RNNs, cumsum)

  • cond - differentiable conditionals

  • Functional PRNG with splittable keys

  • Tree operations for nested parameter structures

These compose with @differentiable and compile to efficient StableHLO. The Examples/ folder includes a physics simulation demonstrating these transformations with gradient computation through control flow.
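
For intuition about what these transformations mean, here is a library-free sketch of vmap and scan semantics over plain Swift arrays. The signatures are illustrative assumptions; SwiftIR's real versions trace into StableHLO ops rather than looping on the host:

```swift
// Illustrative semantics only (not SwiftIR's API): vmap lifts a
// per-example function to a batched one...
func vmap<T, U>(_ f: @escaping (T) -> U) -> ([T]) -> [U] {
    { batch in batch.map(f) }
}

// ...and scan threads state through a sequence, yielding every
// intermediate output (the building block for RNNs and cumsum).
func scan<State, X, Y>(
    _ f: @escaping (State, X) -> (State, Y),
    initial: State
) -> ([X]) -> (State, [Y]) {
    { xs in
        var state = initial
        var ys: [Y] = []
        for x in xs {
            let (next, y) = f(state, x)
            state = next
            ys.append(y)
        }
        return (state, ys)
    }
}

// Cumulative sum expressed as a scan.
let cumsum = scan({ (acc: Float, x: Float) in (acc + x, acc + x) }, initial: 0)
let (total, partials) = cumsum([1, 2, 3, 4])
print(total, partials)   // 10.0 [1.0, 3.0, 6.0, 10.0]
```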

3. Jupyter/Colab Support

New SwiftIRJupyter module—pure Swift implementation with no C++ dependencies. Works in Jupyter notebooks and Google Colab via a revived Swift kernel (https://github.com/pedronahum/swift-jupyter). Same AD, same StableHLO output, just string-based MLIR generation instead of C API bindings. Makes it easy to prototype and share notebooks.
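
For a flavor of the string-based path, here is a hedged sketch of generating the Quick Example's add module as plain text. The function name and exact textual form are my illustrations, not necessarily SwiftIRJupyter's output:

```swift
// Illustrative only: string-based MLIR generation needs nothing but
// Swift string interpolation, which is why it runs anywhere a Swift
// kernel does (Jupyter, Colab) with no C++ dependencies.
func addModuleMLIR(elements: Int) -> String {
    let t = "tensor<\(elements)xf32>"
    return """
        module @add_module {
          func.func @add(%a: \(t), %b: \(t)) -> \(t) {
            %0 = stablehlo.add %a, %b : \(t)
            return %0 : \(t)
          }
        }
        """
}

print(addModuleMLIR(elements: 4))
```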

Next up: Shareable Google Colab notebooks with TPU support.
