Hello! My name's Eugene Burmako. I work at Google on the Swift for TensorFlow team. I'd like to share something that we've been working on recently.
Swift works great as an infinitely hackable syntactic interface to semantics that are defined by the compiler underneath it. The two options today are LLVM (there's a running joke that Swift is just syntactic sugar for LLVM) and TensorFlow graphs (which is the contribution of early versions of Swift for TensorFlow).
Multi-Level Intermediate Representation (MLIR) generalizes both LLVM IR and TensorFlow graphs: it can represent arbitrary computations at multiple levels of abstraction, enabling domain-specific optimizations and code generation (e.g. for CPUs, GPUs, TPUs, and other hardware targets).
In https://docs.google.com/document/d/1UIPWl4lvBTozBD5OQ9SrxgcM7rA4pODMOjqQv3tm57w/edit, we've written down some thoughts on several ways to metaprogram MLIR in Swift - starting from treating MLIR programs as strings and then gradually increasing the level of language integration with Swift.
To experimentally evaluate the designs explored in the document, we've prototyped quasiquotes: a language feature that allows "quoting" snippets of code, which are then turned into data structures available to Swift programs. These data structures can be used for all sorts of purposes, including translation to MLIR: https://github.com/apple/swift/pull/26518.
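To give a flavor of the idea, here is a minimal sketch in plain Swift (not the prototype's actual API, whose names and shapes differ): a hand-built expression tree stands in for the data structure a quasiquote would produce automatically, and a small function lowers it to simplified MLIR-like text.

```swift
// Illustrative sketch only. A quasiquote such as #quote(2 + 3) would
// produce a tree like this automatically; here we model one by hand.
indirect enum Expr {
    case constant(Int)
    case add(Expr, Expr)
}

// Hypothetical lowering of the tree to simplified MLIR-style text.
// Returns the SSA name of the value computed by `e`, appending the
// instructions that compute it to `lines`.
func lower(_ e: Expr, into lines: inout [String], counter: inout Int) -> String {
    switch e {
    case .constant(let value):
        counter += 1
        let name = "%\(counter)"
        lines.append("\(name) = constant \(value) : i32")
        return name
    case .add(let lhs, let rhs):
        let l = lower(lhs, into: &lines, counter: &counter)
        let r = lower(rhs, into: &lines, counter: &counter)
        counter += 1
        let name = "%\(counter)"
        lines.append("\(name) = addi \(l), \(r) : i32")
        return name
    }
}

// Stand-in for the tree that #quote(2 + 3) might yield.
let tree = Expr.add(.constant(2), .constant(3))
var lines: [String] = []
var counter = 0
_ = lower(tree, into: &lines, counter: &counter)
print(lines.joined(separator: "\n"))
// %1 = constant 2 : i32
// %2 = constant 3 : i32
// %3 = addi %1, %2 : i32
```

The point of the language feature is precisely to remove the hand-construction step: the compiler reifies quoted Swift code into such trees, and libraries decide what to do with them.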
This code doesn't fully implement the theorized design yet, but we believe that it is already useful for experimentation. We are evaluating available approaches and garnering community feedback. Please let us know what you think!