All fair points.
We are solving for our business needs first, of course, while also contributing back to the community. We're trying to encourage more engagement in open source, but we understand that creates dual expectations (so thanks for the effort, everyone!).
On compiler features: it depends on what you want to do. If you want to build a distributed network traffic optimizer, for example, you just write your normal Swift code plus a ~20-line atom implementation. There are likely a hundred such everyday code problems to be solved for every deep learning model.
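To give a feel for what a ~20-line atom might look like, here's a minimal sketch. Note this is purely illustrative: the `Atom` protocol, its method names, and `LatencyBucketAtom` are all my own hypothetical stand-ins, not the project's actual API.

```swift
import Foundation

// Hypothetical sketch: a minimal protocol for a unit of work
// that a distributed runtime could schedule on remote nodes.
protocol Atom {
    associatedtype Input
    associatedtype Output
    func run(_ input: Input) -> Output
}

// Illustrative atom for a traffic optimizer: bucket round-trip
// latency samples into a histogram that a scheduler could use
// to pick routes. The whole conformance fits in a few lines.
struct LatencyBucketAtom: Atom {
    let bucketMs: Double
    func run(_ samples: [Double]) -> [Int: Int] {
        var buckets: [Int: Int] = [:]
        for s in samples {
            buckets[Int(s / bucketMs), default: 0] += 1
        }
        return buckets
    }
}

let atom = LatencyBucketAtom(bucketMs: 10)
let histogram = atom.run([3.2, 11.5, 12.9, 47.0])
print(histogram)  // counts per 10 ms latency bucket
```

The point is only the shape: the domain logic stays ordinary Swift, and the distributable unit is a small, self-describing wrapper around it.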
As for what we do in AI, we plan on open sourcing more of our toolkits. These aren't focused on traditional homogeneous deep learning model training, but on broader compute use cases: heterogeneous compute graphs, composability, shaped edge learning, physics-defined world models, ontological graphs, graph RAG, and formal inference.