Can my technology company use only Swift?

You can write it in asm. Or on a Turing machine simulator. Doesn't make it a wise choice.

So are SwiftUI's internals... written in C++. I wonder why... :thinking:

With Swift it's just too easy to shoot yourself in the foot and hit a lock (which can take an unbounded amount of time, hence an audio glitch). Retain/release, mutex locks, and many other things in the Swift runtime make it a minefield in those regards. Unless you don't mind the glitches (say, in an in-house experimental app), or you very carefully avoid pretty much all of Swift and limit yourself to a small subset of it (plus check the asm afterwards to be sure... and check again with every new compiler version).

On the contrary, this is typically a very good indication of a road to avoid... unless you're one of those very curious types ("why exactly is that road blocked?") or among those who have too much free time on their hands.

A good language for the task at hand means... you don't need any clever tricks to get the job done.

For questions about issues specific to Kotlin apps deployed to iOS, perhaps 1/100 of the overall iOS / Kotlin community could help... for Swift apps deployed to Android, divide that by another factor of 10. I'm pretty sure I'm being overly optimistic with those numbers.


That certainly hasn't been my experience with Swift. I don't like where this thread is going.

Let's do some fact-checking before making a conclusion based on numbers. Do you have a source showing the number of iOS apps made using Kotlin vs. the number of Android apps made using Swift?

The programming language is the least important factor in software development :slight_smile:

What you need is people with good communication skills and a high level of emotional intelligence, basic knowledge of computer science and math, and a strong desire to learn new things.


That was a Fermi-estimate ballpark figure. Google agrees:

"Android app in Swift" - 10 results
"iOS app in Kotlin"  - 10 results
"Android app in Kotlin" - About 150,000 results
"iOS app in Swift" - About 103,000 results

I know, but since Swift seems to be capable of anything and easy to learn and use, I want to try using the same language for everything if that language is legendary.

My goal is to create software infrastructure for my company that will remain up to date for as long as possible. I believe that in the long run, it's more important that the language is easy to learn and use. After all, ease of use is what makes code easy to edit and maintain, right?

Those numbers show roughly equal counts for Kotlin + iOS vs. Swift + Android. In your earlier comment, you said that Swift + Android had 1/10 as many users as Kotlin + iOS.

I don't mean to be nit-picky about specifics of an argument. But multiple assumptions were made about how Swift is "less popular" for specific use cases (including a unique form of mobile development), or that Swift will be more problematic to use.

This is also an example of an assumption being made, then used as the premise of an argument. While it is possible that Apple used C++ instead of Swift because it's more ergonomic, you didn't give a source to back that up. Then with "I wonder why... :thinking:", you implied that we all "know why Apple chose C++". Is this a correct interpretation of your statement?

That involves hiring people, no? It would be really challenging to hire a Swift developer for a Windows / Android project (unless you are a top-20 company), or a Windows/Android person who is passionate about Swift. Perhaps not impossible, but it reduces the hiring pool by a factor of 100 (another ballpark figure - my estimate).


@realkevinthegreat's argument is that Swift is easy to learn and use. If a Windows/Android developer is experienced enough to work professionally, couldn't they learn how to program in Swift? Then you don't need to pick from the small pool of developers who already have experience with the language.

I really don't know why. If I had to guess - that would be "for performance reasons".

They could, they just won't. If you've ever hired for your team, you'd know that a post like "Looking for an experienced developer willing to learn and write in Swift for Windows/Android" would attract significantly fewer relevant CVs (1/100 would be my optimistic estimate) than a more traditional "Senior C# Developer for Windows" / "Senior Kotlin Developer for Android". Unless you are one of the top companies, that divide-by-100 may well leave you with zero or very few CVs to choose from. So instead you hire a junior developer and train them, and in a year's time they jump ship for a more traditional project. Or three years from now the "peculiar choice of language made by the old decision maker" gets corrected anyway, "to match the industry standard" or for some other reason. YMMV.

That is an appeal to authority, a logical fallacy. I have never owned a company or employed other people, but I don't think someone needs that experience to make a valid conclusion here. PassiveLogic has grown rapidly to around ~100 employees, with many using Swift without prior experience with the language. Looking at the careers page, a variety of jobs do not require prior experience with Swift. For example, AI Graphical Application Developer only mentions Swift once:

Yet the AI graphical application will likely interface with Quantum Solver, a gigantic Swift code base. The employee could be brand new to Swift, but they can type import Quantum into a file and start writing basic code. The most important skills are general development skills, like "3+ years experience in software development" and "experience with OO and MVC design patterns", which are language-agnostic. While this is only one job example, it disproves the assumption that:

I understand that it would be somewhat difficult to hire people for Swift development. However, for now I plan to do all of this mostly by myself. Maybe in the future I will worry about hiring people to maintain my Swift code.


this is unnecessarily negative towards swift.

no single language is the best language for doing everything, and you shouldn’t try to look for one language that “does everything”. different languages are different tools and different tools are for different things.

for example, python is a great tool for doing things that have to interact with files and operating systems. you can write the same code to do the same thing in swift, but it’s going to be way easier to write it in python.

on the other end of the stack, CUDA/C++ is a great tool for doing things that have to run on a GPU. you could write the same code to do the same thing in swift, but it’s going to be way easier to write it in CUDA.

swift is a systems programming language. it mainly competes with (in order of popularity):

  1. C++
  2. C
  3. go
  4. rust

it would not be a good idea to use swift as a front end language unless you are doing something that really requires close-to-the-metal performance, like a browser game, in which case you could use something like SwiftWasm. in most cases, you’re better off using a common front end language, like typescript.

among the languages that swift competes with, i think that swift is a better option than many of the alternatives. although nobody here likes to admit it, swift is a C-based language, and it can interop with just about anything that speaks C. since swift compiles to machine code, it can also call assembly instructions, which the swift-png library uses to power hardware-accelerated PNG decoding in pure swift.
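for example, calling the C standard library from swift needs no wrappers at all (a minimal sketch; `Darwin`/`Glibc` are the platform libc modules, and the calls shown are plain POSIX):

```swift
#if canImport(Darwin)
import Darwin   // Apple platforms re-export libc/libm here
#else
import Glibc    // Linux
#endif

// C's cos(3) from libm, called directly from Swift:
let c = cos(0.0)
print(c)   // 1.0

// C structs map to Swift structs; C pointer parameters take inout arguments:
var tv = timeval()
gettimeofday(&tv, nil)   // POSIX call, Swift struct passed by address
print(tv.tv_sec > 0)     // true
```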

unlike rust, swift has a large intrinsic user base, since Apple protects and promotes it within its (largely closed) ecosystem.

it’s hard to hire swift developers, and it’s very hard to hire experienced swift developers. this is a reason to learn swift, because swift is a valuable skill, which is in very high demand right now.

this is likely why swift developers are among the most highly paid specialists among major languages, and why it was one of the fastest-appreciating languages by wage growth from 2021–22.

unfortunately many of the most experienced members of our community have an excessive amount of familiarity with the cutting edge of the language, where compilers and apps break, and have opted to take an intellectually honest approach to the shortcomings of the language, which all programming languages have.

this is good for the long term health of the language, but counterproductive (and misleading) to tell beginners just getting started with systems and server-side programming. maybe this is why new developers are 35% less likely to choose swift, but 72% more likely to choose C++, when compared to professional developers.


Then perhaps we should be discussing the pros and cons of you personally using Swift.

Swift's main benefit is being easy to understand and debug. If anything, it should excel in areas prone to bugs, which includes audio processes according to @tera's statement. Someone made a really cool audio framework in Swift and I had some discussions with him. Regarding mutex locks, I can't remember the last time I used one. The paradigm that works best for me is a DispatchSemaphore, which you can either signal or wait on.
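A minimal sketch of that signal/wait pattern (queue label and values are illustrative): a semaphore created with count 0 makes wait() block until the producer calls signal().

```swift
import Dispatch

// The consumer blocks on wait() until the producer signals that data is ready.
let dataReady = DispatchSemaphore(value: 0)
var shared = 0   // written before signal(), read only after wait()

DispatchQueue(label: "producer").async {
    shared = 42        // produce the value
    dataReady.signal() // wake the waiting thread
}

dataReady.wait()       // blocks until signal() is called
print(shared)          // 42
```

Note that wait() can still block for an unbounded time, so like any lock it belongs on ordinary threads, not inside a real-time render callback.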

Please elaborate, because this is the first I'm hearing about that (despite ~3000 hours of working with Swift and Apple platforms).

It depends on what your company is about. I think the important thing in a new team with a greenfield project is to set some guidelines for your growing team. You can use almost any language, but what you don't want is too many languages in production at the same time, unless you have teams focused on each ecosystem. Pick one or two main languages and don't look back until you have a product.

Without Apple's support, what would you choose?

Take this simple C++ ring buffer implementation:

#define count (10*512)

class CRingBuffer {
    float samples[count];
    int readPos = 0;
    int writePos = 0;
public:
    void put(float sample) {
        samples[writePos++ % count] = sample;
    }
    float get() {
        return samples[readPos++ % count];
    }
};
If you naïvely convert it to Swift:

class RingBuffer {
    static let count = 10*512
    var buffer: [Float]
    var writePos = 0
    var readPos = 0
    init() {
        buffer = [Float](repeating: 0, count: Self.count)
    }
    func put(_ v: Float) {
        buffer[writePos % Self.count] = v
        writePos += 1
    }
    func get() -> Float {
        let v = buffer[readPos % Self.count]
        readPos += 1
        return v
    }
}

you will have an implementation with retain/release in its get/put methods (which can take a mutex lock) -- not good for use in real-time code: see these old but good links: link1 (49:12 -- 50:50), link2 and link3. That's the kind of minefield I am talking about. If you are extra careful (switch to unsafe pointers and verify the resulting asm, which can change from one Swift version to another) - then yes, you can: link4 (29:00 -- 32:40, 37:50 -- 42:25).

Relevant thread from the past.


I non-naively converted the buffer to Swift, for reference, translated as literally as possible from the C++. The problem was allocating a Swift array inside a class, which is already reference-counted. Either change RingBuffer to a struct, or change buffer to a pointer that is deallocated in RingBuffer.deinit. I chose the latter. Second, force-inline put and get if you're exporting this type from a public library; you don't want the overhead of extra function calls when the assembly of RingBuffer.put and RingBuffer.get will obviously be tiny.
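A sketch of what that pointer-backed version could look like (my reconstruction, not necessarily the exact code referenced above; @inline(__always) is one way to force the inlining mentioned):

```swift
final class RingBuffer {
    static let count = 10 * 512
    // Pre-allocated raw storage: no Swift Array, so get/put emit no retain/release.
    private let buffer: UnsafeMutablePointer<Float>
    private var writePos = 0
    private var readPos = 0

    init() {
        buffer = .allocate(capacity: Self.count)
        buffer.initialize(repeating: 0, count: Self.count)
    }
    deinit {
        buffer.deallocate()
    }

    @inline(__always)
    func put(_ v: Float) {
        buffer[writePos % Self.count] = v
        writePos &+= 1
    }

    @inline(__always)
    func get() -> Float {
        let v = buffer[readPos % Self.count]
        readPos &+= 1
        return v
    }
}
```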

Putting that aside, the main point is that you should avoid Swift ARC types like Array and Dictionary inside an audio kernel function. The 2016 WWDC video (37:50 - 42:25) described a render function written entirely in Swift, using just pointers. That function touches only RAM reads/writes and ALU instructions, with all memory pre-allocated and no calls into swift_retain. It is easy to restrict yourself to these kinds of asm instructions in plain C, which is probably why Apple recommended using exclusively C/C++ for audio until 2016.

But to be helpful to someone trying to learn Swift, we can teach them about these performance characteristics. They might not find them super problematic. For audio code, you need to think like a C programmer whether you're writing the kernel in Swift or C++. The statement above carries the connotation that you, the reader, can't learn to work effectively within this mindset - and that if you do, there's no way it's sustainable in the long term.

To restate the concerns from that paragraph:

  1. When working with audio, avoid synchronization primitives. This can be managed by pre-allocating your memory into raw pointers. In real-time code, only work with memory backed by those pointers.
  2. Experimental, custom code will bring glitches. This is something all programmers face, in every context. Bugs are no fun, but there are many ways to search for bugs or performance regressions. Swift package tests make it easy to test whether a code base runs correctly, but other options are also available.
  3. The compiler's internals can change at any time. In general, the compiler gets faster over time. If a section of code runs fast now, it will probably run fast in the future. New features like explicit stack allocations are subject to change, but the documentation/forums are good at explaining these areas for change when they do appear.

Nice. You may want to change from Int32 to a bigger type or to UInt32, otherwise the counter will overflow after a few hours and the array index will go negative and crash. Also worth changing += to &+=.

I don't think that per se will fly: you need the same struct shared between the reader and the writer (which are on different threads), meaning it has to live either in a class or in a global variable. In both of these cases get/put will have retain/release if the underlying storage is left as a Swift array.

Very true.

That was the question I had at one point: what materially changed between WWDC 2015 and WWDC 2016 in this regard... I never found an authoritative answer.

I brought up realtime audio as an answer to "can I do everything in Swift". The answer I was conveying: "perhaps you could, but there are better tools for specific jobs".

A few things for your third bullet point:

  • realtime programming's peculiarity is that it's not about overall speed or average latency -- it's about worst-case behaviour. Oftentimes worst-case latency is sacrificed for average speed. Code can get much faster on average, but if there is an extra lock (where there was no lock previously), or if the worst-case behaviour changed from O(N) to O(N²) (even if the average behaviour improved from O(N) to O(log₂N)) - that would be a problem for realtime code.
  • Just recently we were discussing a speed regression reproduced in SwiftUI (Seven seconds to render SwiftUI - #33 by tera) observed on newer macOS versions. Perhaps there was a good reason for the slowdown; just note: newer is not necessarily faster.
  • "The compiler's internals can change at any time" - that's why the idea of a @realtime keyword discussed in the other thread mentioned above is so appealing; it would remove any guesswork or trial and error. With that keyword Swift could become better than C with regard to realtime programming.

You started to sound reasonable, good job.

This is an unwarranted, sarcastic remark about my character, not a refutation of my argument. Please make a best effort to stick to this document. It's okay to feel strongly about a particular stance, but that statement didn't contribute anything of value to the discussion.

I would have used Int, but stuck to literally translating the C++ code. In C and C++ (on most platforms), and always in Java, int is a 32-bit signed integer. This was an intentional choice for the sake of demonstration. Good catch on using &+= though, which should shave off a clock cycle. To prevent overflows, I prefer to reset the counter of a ring buffer whenever it reaches the maximum.
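That reset approach could be sketched like this (illustrative; wrapping each index at count means it can never overflow, at the cost of one comparison per access):

```swift
// A free-running counter overflows eventually; a wrapped index never does.
struct WrappingIndex {
    let count: Int
    private(set) var value = 0

    // Returns the current index, then advances, resetting at `count`.
    mutating func next() -> Int {
        let current = value
        value += 1
        if value == count { value = 0 }   // reset before overflow is possible
        return current
    }
}

var idx = WrappingIndex(count: 4)
let seq = (0..<6).map { _ in idx.next() }
print(seq)   // [0, 1, 2, 3, 0, 1]
```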

This is my fault for being a bit vague. I often use "fast" to describe latency, throughput, and worst-case complexity collectively. When working with just C pointers for audio kernels, there are no operations that can block. Worst-case behavior should be immune to changes in the compiler regarding synchronization, as no locks are present.

Setting that aside, in real-time computing environments where you can expect synchronization locks, absolutely optimize for worst-case performance. I did a ton of that in ARHeadsetKit and AR MultiPendulum, even making a frame take longer on average by offloading certain work to the GPU. In the end product, the only time my framework/app went under 60 fps was when the OS froze the run loop.

I think this is far enough off on a tangent that it's not contributing to the main purpose of the forum thread anymore. I don't think the unique performance patterns of real-time computing are even a concern for the original poster, unless I am mistaken.


Perhaps this is a productive talking point to restart discussion from. Swift happens to not be used much outside the Apple ecosystem, but that doesn't mean it should be avoided. Saying so is an appeal to popularity (argumentum ad populum): the idea that because everyone does something, it must be the best option. Here it appears in reverse: Swift is "unpopular", therefore it is the "worst" option.

The statement characterizes everyone who deviates from the norm - by using Swift on Linux/Windows/Android - as "hav[ing] too much free time on their hands". I happen to have lots of free time (this forum thread is proof) and use Swift extensively on Linux/Colab, but not everyone does. Perhaps the discussion here could be: could someone use Swift for a specific project X, given a finite amount of time?
