Can we make Swift 6 easier?

Apologies in advance if this has already been debated, but I didn’t see it if so.
For those new to the language (v6), all the various protections for concurrency / data races will appear very confusing, and in fact are often unnecessary for a simple app that invokes no actions off the main actor.

Assuming the compiler can detect that the app only runs in a single isolation domain, i.e. the main actor, can we silence a lot of the warnings and errors?
E.g. global variables could then be allowed without error messages that are confusing to a novice, since the compiler can assume @MainActor isolation.

Otherwise the improvements made to Swift risk making the language too esoteric and hard to get into.

10 Likes

This is very annoying in embedded mode, where MainActor appears to be unavailable. I don't think my code could compile with -swift-version 6 no matter what I do.

5 Likes

Don't want to sound pessimistic, but this process began even before Swift 6. IBM abandoned their server-side Swift efforts along with their Kitura framework, and JetBrains sunsetted both their AppCode IDE and CLion Swift plugin.

Please don't make Swift 6 hard to get into.

3 Likes

If your app doesn't have any concurrency, you'll be fine. You can write a simple CLI app without any concurrency, and Swift won't bother you with this at all. Yet once you start using it, you'll start getting the protections. And in that case you can also just isolate everything you have to the main actor and be OK.
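
Something like this minimal sketch is what I mean by "isolate everything to the main actor" — one annotation on the type and all of its state and methods run on the main actor (the `AppModel` and `loadItems` names are just stand-ins for illustration):

    @MainActor
    final class AppModel {
        var items: [String] = []

        func refresh() async {
            // We can suspend here and still resume back on the main actor.
            items = await loadItems()
        }
    }

    // Stand-in async loader for the sketch.
    func loadItems() async -> [String] {
        ["one", "two"]
    }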

9 Likes

It depends on what the app is doing and on what platform. Part of me says that on some platforms and target types (e.g. CLI on Linux or macOS) something like --assume-mainactor would be helpful. But another part says global vars are to be avoided anyway, so maybe it's a good thing that the compiler doesn't let you define them without being explicit about their isolation.

Other than that, if you don't define global vars in a simple CLI app, then you should be fine as @main will take care of default isolation.
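
To make that concrete, here is a rough sketch of the global-variable case (the diagnostic wording below is approximate, and `counter` / `increment` are made-up names):

    // var counter = 0
    // ^ rejected under Swift 6: roughly "var 'counter' is not concurrency-safe
    //   because it is nonisolated global shared mutable state".

    @MainActor var counter = 0   // explicit isolation: accepted

    @MainActor
    func increment() {
        counter += 1             // same isolation as the global, so this is fine
    }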

For GUI apps though, writing single-threaded code means not taking advantage of the available multicore CPUs, which is a shame. In fact Swift makes the entry into concurrency so much easier and safer compared to previous-gen languages.

1 Like

I don't think IBM abandoned server-side Swift because it had become too complex, and any guess as to why they did so would be speculation, since we don't know their reasoning. And JetBrains sunsetted AppCode because keeping up with Xcode is a nightmare (they were always behind) and simply not that profitable. Also, they focused on Fleet, which has great support for Swift and Xcode projects as well, so saying that JetBrains abandoned Swift is also not true.

Anyway, tying business decisions to a language and its complexity (I'd say C++ is extremely complex, for instance) is probably not the best indicator.

14 Likes

I'd argue that, starting out with a simple SwiftUI app, you shouldn't have to care the least little bit about isolation levels — and it should be possible to use "simple" asynchronous requests to get an image or data from a URL where it is absolutely clear there is no possibility of a data race.

1 Like

You cannot do that without knowing anything at all about concurrency, and that was never the case.

You can still do that pretty easily, though: you have everything isolated to the main actor (with SwiftUI finally by default in Xcode 15), and then network requests aren't isolated — and you don't even need to care about that.
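
A minimal sketch of that shape (the URL is a placeholder): the view is main-actor isolated, URLSession does its work elsewhere, and the compiler handles the hop back for you.

    import Foundation
    import SwiftUI

    struct ProfileView: View {
        @State private var bio = "Loading…"

        var body: some View {
            Text(bio)
                .task {
                    do {
                        let url = URL(string: "https://example.com/bio.txt")!
                        // The network request runs off the main actor…
                        let (data, _) = try await URLSession.shared.data(from: url)
                        // …and this assignment runs back on the main actor,
                        // because .task inherits the view's isolation.
                        bio = String(decoding: data, as: UTF8.self)
                    } catch {
                        bio = "Failed to load"
                    }
                }
        }
    }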

But has it changed from before concurrency? I'd say not a lot, and where it has, it got better. Let me explain what I mean by that.

Back with libdispatch, all these bits were kinda hidden and not obvious to developers, and a significant share probably didn't bother getting into the details for a pretty long time. But if you wanted to learn things decently, you had to get the idea that all UI code (in some magic way, tbh, for a novice) should be on the main thread, and you really had to ensure that. You didn't exactly see where it was running, but you had to operate under those assumptions. You might get away with a few basic things at first, but I still remember my first thread explosion, trying to async some loading during my first month of work, and it was total chaos.

Now, with the new concurrency in SwiftUI you are still concerned with the main thread — which is now the main actor. Instead of being a magical creature, it is right here, explicitly marking everything you touch. You get acquainted early on with a state of things that is crucial to understanding, and you are more likely to understand your code better. You can still make async requests in a safe way and load as much as you want without any concerns about thread explosion. So while it requires a bit more mindfulness at the start, it is for the better.

Also, I'd say that many have fallen into the illusion that writing apps for iOS is extremely easy, which is true only to some extent — you are still working with a highly sophisticated system. If you are a complete novice to programming, I'd say it is much better to start with the official Swift book and learn how to program just using Swift, then get to iOS, because otherwise everything will be complex no matter how simple the programming language is — you can't remove complexity from the system.

7 Likes

Would anything in SE-0420 help (leveraging support for passing isolation without using a global actor)?
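
For reference, this is the kind of thing SE-0420 enables — a sketch, with made-up names: the function runs on whatever actor the caller is isolated to, with no global actor involved and no hop when the caller is already isolated.

    // Runs on the caller's actor (or nonisolated if the caller is).
    func logEvent(
        _ message: String,
        isolation: isolated (any Actor)? = #isolation
    ) async {
        print("event:", message)
    }

    @MainActor
    func onTap() async {
        await logEvent("tapped")   // stays on the main actor
    }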

I am not using Swift concurrency in the first place. In wasm I can't run for too long or the browser will freeze, so I use the requestAnimationFrame API to call into Swift every frame; it keeps some global state, which Swift 6 is not happy about.

It's a completely single-threaded program; the compile errors are only getting in my way. I don't want them unless my program actually uses concurrency.

Do you mean that Swift starts doing this if even a single line of code vaguely relates to concurrency? That's not good: I mark trivial types as Sendable in case I want to reuse some of the code in a context with concurrency, without having to go and modify everything. No concurrency is possible, but I'm getting all the errors anyway, and I don't see a flag to turn it off.

3 Likes

Embedded systems are inherently concurrent systems - and I think Swift concurrency could also be valuable there.

I still need to find time to play around with it so I'm not sure about @MainActor specifically, but generally speaking embedded devices may not even have a main thread. In traditional desktop applications, the main thread is the thread on which main is called, and where its run-loop executes (because when main ends, the program ends). But on an embedded device, your program doesn't end when main ends: interrupts directly invoke functions from hardware events rather than from main's run-loop. They can also be nested, so interrupt handlers can themselves be interrupted by higher-priority interrupts.

Let's say you have a large struct which requires several stores to copy to memory - maybe an interrupt occurs between two stores, and your value is left in a torn state until the new interrupt is serviced and the processor resumes the original sequence of stores.

The handler for that new interrupt shouldn't be allowed to read that torn value; that's undefined behaviour. It's valuable to say that a piece of data belongs to one execution context and shouldn't be accessed from other contexts without some kind of serialisation.
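
A conceptual sketch of that torn-value hazard (the names and the ISR wiring are made up, and `nonisolated(unsafe)` is used here only to silence the very checking that would otherwise flag this):

    struct SensorReading {
        var timestamp: UInt32
        var value: UInt32
        var checksum: UInt32
    }

    // Opting out of isolation checking — exactly the checking that exists to
    // catch this pattern.
    nonisolated(unsafe) var latestReading = SensorReading(timestamp: 0, value: 0, checksum: 0)

    func updateFromMainLoop(now: UInt32, sample: UInt32) {
        // Copying the struct takes several separate stores. If an interrupt
        // fires between two of them, `latestReading` is momentarily torn.
        latestReading = SensorReading(timestamp: now, value: sample, checksum: now &+ sample)
    }

    // Hypothetical interrupt service routine, invoked directly by hardware,
    // never via main's run-loop.
    func onTimerInterrupt() {
        // May observe a half-updated value — the undefined behaviour described
        // above.
        _ = latestReading
    }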

I'm not sure what the plan is for all of this. It's possible we will need new language features to model this conveniently. Point is - concurrency and data isolation are not irrelevant to embedded systems, and in fact I would even argue the opposite - that they are exposed to the hardware's natural concurrency to a far greater degree than desktop applications typically are.

3 Likes

I think this statement is overly broad. People have ported Swift to microcontrollers, which are not intrinsically multiprocessors.

1 Like

Microcontrollers are concurrent systems, too. That's the point I was making.

Pre-emptive multitasking is all about code being interrupted so other code can run. Even if you don't have an operating system, you'll still have hardware interrupts (e.g. a button was pressed, a hardware timer triggered, or some buffer is empty and needs to be filled).

An interrupt is a signal (generally called an "interrupt request") to the CPU to immediately begin executing different code, code that is written to respond to the cause of the interrupt. "Immediately" can be as soon as the end of the current instruction, in the best case. The time between the generation of the interrupt request and the entry into the ISR is called the "interrupt latency," and faster (lower latency) is always better. The CPU will remember the location of the next instruction it was going to execute by storing that instruction address in a register or memory location, and will then jump directly to the code designated by the programmer for that particular interrupt. [...]

A microcontroller CPU will be designed to respond to a number of different interrupt sources (perhaps 10 to 100 sources, typically), and each source can have specific user-written code which executes when that interrupt triggers. The code that executes for an interrupt is called the "interrupt service routine" or ISR. The "now you see it, now you don't" heading [of this article] refers to what you would see if you were watching the code execute in slow motion. The program counter would be moving from one instruction to the next, and then when the interrupt triggered the PC would suddenly end up in some totally different area of the program (the entry point of the ISR). Then, when the ISR was complete, the PC would just as suddenly be pointing back to the next instruction, as if nothing had happened.

Embedded Related: Introduction to Microcontrollers - Interrupts

Nothing about this requires multiprocessors.

For instance, consider the PIC10F320 - a 6/8 pin MCU which you can buy for 59 Euro cents even without a bulk discount:

  • Only 35 Instructions to Learn:
    • All single-cycle instructions, except branches
  • Operating Speed:
    • DC – 16 MHz clock input
    • DC – 250ns instruction cycle
  • Eight-Level Deep Hardware Stack
  • Interrupt Capability 👈
  • Processor Self-Write/Read access to Program
  • Up to 512 Words of Flash Program Memory [Note: that's on the 322. The 320 only has 256]
  • 64 Bytes Data Memory
  • High-Endurance Flash Data Memory (HEF)
    • 128B of nonvolatile data storage
    • 100K erase/write cycles

Even this is a concurrent system (and it will probably never run Swift - it's designed for hand-written assembly; hence the "only 35 instructions to learn").

6 Likes

I didn't say concurrency doesn't apply to embedded systems, but since I can't use concurrency features on wasm, I would like to turn these errors off — or be able to take advantage of these features, so that at least the errors are useful.
I wasn't able to find any documentation on using concurrency with Embedded Swift, only that it's "partially implemented".

2 Likes

Is Swift 6 really significantly harder?

I tend to think that qualifying how "hard" a language is matters mainly at the beginning: how many concepts do you have to gather before you are able to write some basic code? Take Rust, for example, where you have to get the borrow checker before you'll be able to do so: it is steep at first, making it definitely harder to learn, then you can cruise for a while just with that knowledge, until you hit the next thing (e.g. lifetimes). Swift still has progressive complexity exposure "built in", which means you can start writing code (even concurrent code) pretty fast: if you write synchronous code, everything goes "as is", and with concurrency, for starters, you mostly need "main actor + async/await code" and the compiler will take care of most of that.
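
A rough sketch of that "main actor + async/await for starters" point (the program and `fetchGreeting` are made up; the explicit @MainActor is there just to make the isolation visible):

    @main
    struct HelloApp {
        @MainActor
        static func main() async {
            // Purely synchronous code needs no annotations at all…
            let name = "Swift"
            print("Hello, \(name)!")

            // …and the first step into concurrency is just async/await.
            let greeting = await fetchGreeting()
            print(greeting)
        }
    }

    func fetchGreeting() async -> String {
        "Hello from an async function"
    }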

There are two differences that can be seen if we compare with earlier versions (a comparison that matters much less for a newcomer anyway):

  1. Adoption for old projects.
  2. Exposing concurrency earlier in more explicit way.

The first case is for projects that have existed for a long time and haven't been using the whole spectrum of the latest language features. They might struggle to adopt it in one take, but they aren't expected to do so within a single language release anyway. For relatively new projects, especially ones that have had strict concurrency checks on all the time, the transition will be much easier; there will be breaking changes, but they will more likely be easy to address. And brand-new projects will be OK.

The second case is that once you reach concurrency, Swift exposes it to you in a more explicit manner than it did before with GCD. See this comment earlier in the discussion for more details.

6 Likes

One more company (RemObjects) found Swift to be "difficult" (warning: harsh criticism there): Swift Evolution.

Some context: RemObjects maintains a collection of their own compilers (codenamed Elements) for several languages, including Swift (codenamed Silver).

1 Like

Complexity for new programmers (or simply new language users) strikes me as a very different problem from complexity for a parallel implementation maintainer. Apple has obviously thrown a huge amount of resources behind the canonical compiler, and implementation complexity has not generally been a top-level concern for language evolution, so it's not surprising to me that an independent Swift compiler has struggled to keep up with the more implementation-heavy features. Subjective judgements about the value of those features aside, I do think it's the right balance to treat the user model (and experience) of the language as the more primary concern.

13 Likes

I wonder how they really feel ; )

2 Likes

Silver has never been Swift; fundamental types like Array and Set have drastically different semantics in Silver, to the extent that I would consider it a separate language that has only superficial similarities with Swift.

11 Likes