SwiftAI — One API for Any LLM

Hi everyone! I built SwiftAI, an open source library for building LLM features in Swift.

I got into this when our team started building features with the FoundationModels SDK. If you’re doing the same, you’ve probably wondered how to handle cases where Apple’s LLM isn’t available (Apple Intelligence turned off, an older device, etc.) without adding complexity to your codebase. The motivation section in the repo covers why shipping production features on FoundationModels alone can be tricky.
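
For context, this is roughly the availability check you otherwise end up repeating at every call site when targeting the on-device model directly (a sketch based on my reading of the FoundationModels SDK; treat the exact case names as illustrative):

```swift
import FoundationModels

// Check whether Apple's on-device model can be used right now.
switch SystemLanguageModel.default.availability {
case .available:
    // Safe to create a LanguageModelSession and prompt the model.
    break
case .unavailable(let reason):
    // Apple Intelligence is off, the device isn't eligible, the model
    // hasn't finished downloading, and so on. Every feature that prompts
    // the model needs a fallback branch like this one.
    print("On-device model unavailable: \(reason)")
}
```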

SwiftAI’s first release solves this with a unified API and pluggable LLM backends: you write against one simple interface and can swap Apple’s on-device model for a cloud LLM (OpenAI today, with Anthropic, Mistral, and Gemini planned), with type-safe structured output and built-in tool support.
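
To make that concrete, here is a minimal sketch of the “one interface, pluggable backends” idea with type-safe structured output. Every name below (`LLM`, `AppleOnDeviceLLM`, `OpenAILLM`, `reply(to:returning:)`, `makeLLM`) is illustrative, not SwiftAI’s actual API; see the repo for the real one.

```swift
import Foundation

// Illustrative only, not SwiftAI's real API: one protocol, many backends.
protocol LLM {
    var isAvailable: Bool { get }
    /// Prompt the model and decode its reply into a type-safe structure.
    func reply<Output: Decodable>(to prompt: String,
                                  returning type: Output.Type) async throws -> Output
}

struct AppleOnDeviceLLM: LLM {
    // A real backend would check Apple Intelligence / device eligibility here.
    var isAvailable: Bool { false }
    func reply<Output: Decodable>(to prompt: String,
                                  returning type: Output.Type) async throws -> Output {
        fatalError("stub: would call the FoundationModels SDK")
    }
}

struct OpenAILLM: LLM {
    let apiKey: String
    var isAvailable: Bool { true }
    func reply<Output: Decodable>(to prompt: String,
                                  returning type: Output.Type) async throws -> Output {
        fatalError("stub: would call the OpenAI API and decode the JSON reply")
    }
}

// Feature code depends only on `any LLM`, so backends can be swapped freely.
struct CityFacts: Decodable {
    let name: String
    let population: Int
}

func cityFacts(for city: String, using llm: any LLM) async throws -> CityFacts {
    try await llm.reply(to: "Give basic facts about \(city).", returning: CityFacts.self)
}

// Fall back to a cloud model when the on-device one isn't available.
func makeLLM() -> any LLM {
    let onDevice = AppleOnDeviceLLM()
    return onDevice.isAvailable ? onDevice : OpenAILLM(apiKey: "YOUR_KEY")
}
```

The point of a design like this is that the availability decision lives in one place (`makeLLM`) while feature code depends only on `any LLM`.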

> The vision is to make building GenAI in Swift as simple as SwiftUI made building UIs.

If you’re building AI features, I’d love to hear your pain points and ideas. You can also contribute in any way — code, design, docs, or examples.

🏗️ Repo

📄 System Design — for anyone who wants to understand the internals of SwiftAI.

Thanks! I’ve updated the post to make things clearer. Right now, SwiftAI supports only two backends: Apple’s on-device model and OpenAI. I plan to add more backends once I implement streaming and prewarming. Contributions are very welcome—if you’d like to build a new backend, that would be fantastic. Otherwise, which backend would you like to see added next?