I have never really participated in the evolution process, but this limitation has nearly brought me to tears, and frankly, I'm mostly doing this out of desperation. Hopefully the informality here is ok :)
It is currently difficult to use SPM to distribute binary targets that have visible dependencies on other packages. This is because the package dependencies referenced in the textual interface are not built first by the build system.
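To make the limitation concrete, here's a minimal manifest sketch (all names, URLs, and the checksum are hypothetical). The binary target's `.swiftinterface` imports a module from `SomeDependency`, but `binaryTarget` has no way to declare that, so the build system doesn't know it must build `SomeDependency` before clients can import `MySDK`:

```swift
// Hypothetical Package.swift illustrating the problem.
import PackageDescription

let package = Package(
    name: "MySDK",
    products: [
        .library(name: "MySDK", targets: ["MySDK"])
    ],
    dependencies: [
        // The binary's public API uses types from this package...
        .package(url: "https://example.com/SomeDependency", from: "1.0.0")
    ],
    targets: [
        // ...but binaryTarget accepts no `dependencies:` parameter today,
        // so nothing ties the dependency to the binary.
        .binaryTarget(
            name: "MySDK",
            url: "https://example.com/MySDK.xcframework.zip",
            checksum: "<checksum>"
        )
    ]
)
```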
Here's a link to a bug describing the issue, which includes a reference to a forum post on the subject:
There are a number of workarounds proposed, including one that does indeed get builds to work. But, in all my testing, it also results in duplicating the contents of dependencies into the final target. This is very much not ideal.
Normally, a dependency would also affect linkage. But in this case, it's important that it not happen. This is really just to give the system enough information to successfully import the module.
It could be that the ideal solution is encoding these dependencies into the binary itself. But I worry about the practicality and scope of a change like that. And, since this is affecting users today, I think it still makes sense to provide this escape hatch even if one day such a feature became a reality.
I think generally speaking this makes sense, I know that workarounds for this have been discussed before and used by people (typically they involve creating a dummy source target which is pretty ugly).
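For context, the dummy-target workaround usually looks something like this sketch (names and URLs hypothetical): an empty source target pulls the dependency into the build graph, and a wrapper target joins it to the binary so clients only depend on one thing:

```swift
// Sketch of the "dummy source target" workaround.
targets: [
    .binaryTarget(
        name: "MySDKBinary",
        url: "https://example.com/MySDK.xcframework.zip",
        checksum: "<checksum>"
    ),
    // Empty source target whose only job is to force the package
    // that the binary's .swiftinterface imports to be built.
    .target(
        name: "MySDKShim",
        dependencies: [
            .product(name: "SomeDependency", package: "some-dependency")
        ]
    ),
    // Clients depend on this wrapper instead of the binary directly.
    .target(
        name: "MySDK",
        dependencies: ["MySDKBinary", "MySDKShim"]
    )
]
```

As noted above, this is pretty ugly, and in testing it can also duplicate the dependency's contents into the final product.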
I am wondering about this, though:
If e.g. I have an app that depends on the "top-level" binary target, don't I want linkage of its dependencies to happen as well? Otherwise, the app wouldn't build and users would need to manually depend on the other required dependencies. Did you just mean we wouldn't link the dependencies to the binary target itself?
Since Target.Dependency could be other source-based targets or products from other packages, I think we would need some validation here to ensure people are only expressing dependencies on other binary targets, or on products that consist only of binary targets, to avoid mistakes.
Thank you for this comment. You are definitely right. I think the general case should handle dependencies like all other packages, including linkage. I was getting a little too aggressive about solving my particular issue.
So, what if the binary has already linked in the package? I understand doing this is very problematic, and should be discouraged. But, it is certainly a thing that can be done. Could you live with something like this?
I'm building an SDK to support clients for my app's upcoming ExtensionKit support. It's kind of a beast.
- I will have 3 (and growing) extension executables within my own app
- I expect at least some other apps to have > 1 extension executable
- The SDK itself makes use of 6 packages, some of which themselves have transitive dependencies
- Types from those packages must be used by the SDK's public API
- Many extension authors will need a specialized XPC service to deal with a macOS sandboxing issue
The combination of all of these things has led me to build a dynamic framework, with the pre-built service included. It's hard to pull this off with SPM because of the dependency types used by the SDK's public API. Those packages have already been linked into the framework. So, it cannot build, but it can link - a strange place to end up. Hence the pitch.
I fully acknowledge the fragility of such a setup. But, I really don't want to force every client to configure and build their own XPC service and I also really would like to use SPM for delivery.
Swift programs will not run correctly if there are two different versions of the same module loaded at the same time. That’s why closed-source libraries can’t have arbitrary external dependencies: because there’s no way to enforce that the client won’t also depend on those libraries, and pick a different version. This applies whether or not the dependency’s types appear in the library’s public API.
This is not great! It’s definitely limiting! But if you’re going to distribute an ABI-stable library, all of your dependencies need to be ABI-stable too. And you can’t just turn on library evolution mode for those dependencies and call it a day; as discussed in the other thread, that does not magically make a package stable. The package authors have to include binary compatibility as part of their SemVer guarantees, an additional burden that most package authors do not want to take on (or haven’t even thought about, let’s be honest).
I don’t have an answer for this, but if you just focus on this proposal, your “dependencies” array would have to include exact versions, not just semver minimums, and that would make your library very difficult to work with anyway.
I’ll say it again: I don’t know what to do about this. Right now I think it’s going to have to be more package developers saying “this package supports library evolution mode”, meaning “we promise our semver includes binary compatibility, and SwiftPM has checked that our dependencies do too”, as an extra flag in Package.swift. There may be simpler schemes for private dependencies, where the dependency authors can promise that it’s okay to have more than one copy in a process, and SwiftPM can avoid collisions accordingly. And of course I could be missing something. But this is a Feature That Needs Design, not something that can just be turned on.
(This was less of an issue with Objective-C because in Objective-C there were fewer possible changes that preserved source compatibility but broke binary compatibility. Adding defaulted arguments to a function is the most obvious one, but it’s not the only one; renaming types can break NSCoding archives in both ObjC and Swift. Still, you are totally justified to criticize Swift for not making library evolution simple enough to be the default, or for encouraging compiled-from-source packages even though those can’t be safely used as dependencies in all circumstances. Closed-source framework authors are a small minority among Swift developers, but still an important one—at least partly because these same issues affect Apple as well.)
IIUC, limiting binary target dependencies to exact versions sounds like a reasonable restriction to enforce. Although it would make those packages harder to work with, this proposal would allow people to distribute closed-source binary packages that have dependencies, which is already a win for the libraries that right now cannot use SPM for distribution at all.
We hit this in a similar way - our example was posted here a couple months ago - and I think this proposal would help us be able to distribute binary packages as well.
One could argue that restricting the version in that way is even desirable, because at the moment clients may use the binary framework and also install the dependency package via SPM (or another management tool) at a different version in their apps anyway, which could itself be an issue IIUC.
I think it's worse than that, actually: I think you'd have to pin to exact versions of recursive dependencies, because your binary target needs not just a fixed API but a fixed ABI. Basically, ship your lockfile with the binary target. And that's going to run into version incompatibilities very quickly.
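To make that concrete, a manifest for such a binary target would end up looking something like this sketch (URLs and versions hypothetical), pinning not just the direct dependency but, in spirit, every recursive one:

```swift
// "Ship your lockfile with the binary target":
dependencies: [
    // The binary's ABI was built against exactly these versions,
    // so nothing looser than .exact() is actually safe.
    .package(url: "https://example.com/DirectDep", .exact("2.3.1")),
    .package(url: "https://example.com/TransitiveDep", .exact("0.9.4"))
]
```

Any other package in the graph that wants a different version of `DirectDep` now produces an unresolvable conflict, which is why this runs into version incompatibilities so quickly.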
Just quickly: this library I'm working on is not closed-source. I'm attempting to use a binary distribution for the same reason Sparkle does it (I think) - bundling XPC services. Though in practice I'm not sure that changes anything.
That is a significant overstatement. There are obviously very simple scenarios which will run into version incompatibilities very quickly. There are also perfectly reasonable real-world scenarios that never will. This isn't something which can just never actually work in practice. The flip side of "what if two people did this" is that you can get away with saying "we don't support two people doing this" a lot of the time. .exact() already exists and doesn't work outside of very constrained scenarios, but it does work in those scenarios.
That is a very interesting point @jrose. I guess we cannot assume that once we pin a version of a library, its dependencies are effectively pinned as well, because they could change within the same version. But that relies on the assumption that the dependency author follows semantic versioning and is careful about ABI-breaking changes - a source dependency will always be recompiled, but a binary target won't be. I guess we could say that even pinning the dependencies of a binary target would have to come as an "unsafe" feature, because there is no way to ensure compatibility in some cases, even if library authors are careful about these things.
From experience developing iOS apps, I agree with @jrose here. We had a few third party frameworks that shipped as xcframeworks. Some of them have been using third party dependencies and were shipping a lock file along with their framework which the adopters needed to make sure to link into their final product.
This was not only painful to set up and maintain, it also locked down all of the versions of that dependency until the third party had time to upgrade. Sometimes these frameworks depended on very common dependencies, which blocked our whole app from making further progress.
What I advocated for back then, to the vendors that provided these xcframeworks, was to either internalise all of their dependencies by vendoring them themselves or to drop them completely.
In general, I would be very, very cautious of anyone using a source package as a dependency of something they want to distribute as an xcframework built with library evolution. Jordan already pointed out all the problems that setup brings along, and from experience I can only say that your consumers will most likely run into them.
This makes sense in a general way, and it's definitely something to think about for a proposal like this, which is a general feature.
I still think this is worth supporting, even with limitations, because there are simple cases like ours: a binary package with a single dependency that has no sub-dependencies, whose author follows semantic versioning, so pinning the version is fine because source compatibility is ensured. We only support Darwin, so ABI is not an issue coming from the source-based side, and this has been working for us using CocoaPods. But, I can see why this, as a general feature, has to be carefully considered :)
I am a bit confused by your statement here. If I get your use-case right you have a library that you distribute as an xcframework using library evolution. That library is a product of an SPM package which has a source based dependency.
You are stating that you only support Darwin, where ABI is definitely an issue, and in the above case you would need to pin to a specific version. Otherwise your users could link a different version of the source-based dependency, which might be API-compatible but not ABI-compatible.
Maybe I am misunderstanding you here, if so please correct me! :)
Ah, I meant that with the same version of the dependency, ABI is NOT a problem even though the dependency is source-based. Being always built from source, it could be compiled with a different compiler version than the binary target, which may have been built with an older one - but that shouldn't be a problem on Darwin, because the ABI is supposed to be stable. At least, that is my understanding of a stable ABI... correct me if I am missing something.
It should also be "safe" to link against minor updated versions, as long as they are source- and ABI-compatible.
But the main point here is that we pin the version of the source-based dependency, which can be built with a different compiler version than the binary framework - so I was just saying that this is ABI-compatible on Darwin, as far as my limited understanding goes.
The Darwin ABI is stable with library evolution enabled. I don’t think there was a formal commitment to stability without library evolution enabled, but you are correct that that would need to be guaranteed as part of all this. (For example, we could never pick a better layout for structs; all structs in “normal” mode are effectively frozen, like C structs declared in header files.)
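As an illustration of that last point (names hypothetical), in library evolution mode only explicitly `@frozen` structs commit to a fixed layout; without library evolution, every struct behaves as if frozen:

```swift
// With -enable-library-evolution:
@frozen
public struct Point {          // Layout is a binary-compatibility promise:
    public var x: Double       // reordering or adding stored properties
    public var y: Double       // would break already-compiled clients.
}

public struct Size {           // Resilient (the default in evolution mode):
    public var width: Double   // clients access it indirectly, so stored
    public var height: Double  // properties can change in later versions.
}
// Without library evolution, both structs effectively behave like Point:
// their layouts are baked into clients, like C structs in a header.
```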
I'd like to thank everyone for their comments here. They've all been really helpful.
Binary targets are an escape hatch. They provide a means of delivering an artifact via SPM that it cannot produce (my use-case), or that the authors do not want it to produce (closed-source). I'm sure there are some other handy uses as well.
But of course binaries can be built without any of the restrictions or safety-mechanisms discussed here. Everything bad that has been described in this thread can and probably does already happen today.
I don't mean to say this should just be turned on, without any kind of careful design. Of course it needs that! But, I think there are cases where this can and will work correctly. I also bet it will allow the system to produce warnings/errors for many situations where it will not.
I believe that providing the ability for binaries to declare dependencies makes SPM, on the whole, better. And further, I think the problems presented in this thread are actually arguments for it.
(In case you are wondering, I have refactored my design to put my XPC service into a dedicated binary target that does nothing but deliver the service. This requires the use of a dylib, and is slightly more complex for consumers of the SDK, but it removes all other uses of binary target dependencies. I doubt I would have thought of it without this discussion.)