Because extensibility is a broad topic and there are a wide variety of needs, this proposal takes the approach of:
Providing a way for packages to implement extensions and letting them define the capabilities of those extensions
Providing a way for packages to vend extensions to other packages (or choose to keep them as a private implementation detail of the package)
Providing an initial set of extension capabilities that is narrowly focused on source generation and analysis, which seems to be one of the most common needs today
This will hopefully focus the discussion and allow something useful to be put in place soon, while providing a path for defining additional extension capabilities in the future. So besides the technical merits of the initially proposed capabilities, feedback and ideas about how this approach to extensions could scale to other kinds of capabilities would be very appreciated.
To keep the scope reasonably bounded, this proposal is deliberately not a reconsideration of the existing package manifest format or a proposal for large changes to SwiftPM's build system. While all of those things are open to change, this proposal intentionally keeps the scope fairly narrow and implementable.
Thank you for this new proposal, I really appreciate any work in this direction. I just skimmed over it and found the text hard to understand, so I focused more on the examples. From the examples, I have the impression that this will only add support for (limitation 1) code generators (limitation 2) written in Swift.
How about linters? Isn't that the most common build-script use case of them all? I'd love to finally be able to see linter warnings right within Xcode for Swift packages that I open by double-clicking on the 'Package.swift' file.
And what about command line tools that are not written in Swift? Or they may even be written in Swift, but their usage might be restricted to a command line interface. Do I have to write a Swift package that is able to invoke command line tools and forward their outputs so I can use them? Why don't we build this in?
Sorry if I misunderstood the proposal, but as I said, I also find it hard to follow and understand. Maybe it should have more usage examples nearer the top. Or maybe the detailed design is just way too complex as a first step.
Maybe it should be as simple as executing commands, as in '.buildScript(command: "swiftgen")', first, and the installation of the tool itself can be added in a later proposal. Don't know...
My mouth is watering at the thought of finally having this capability. I also really like the direction you’ve taken that encourages tools vended as Swift packages (clients don’t have to go chasing other tools to install), while still enabling arbitrary commands where they are needed.
Isn’t all that’s missing for this the contract that SwiftPM (and supporting IDEs like Xcode) will listen for diagnostics during execution of a build tool the same as it does during execution of the Swift compiler? How hard would this be to add to the proposal?
It could also help to have an official package that vends the diagnostics API so that producing well‐formed diagnostics is easier for any tools that will be written in Swift.
The way I understand it, if you were okay with assuming all your clients have the tool installed already, it would be as simple as this:
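For example, something along these lines. (A sketch against the draft PackageExtension API -- names like targetBuildContext and createCommand come from the draft and are still inconsistent there; it assumes swiftlint is already installed on the client's machine at the path shown.)

```swift
// PackageExtension.swift — sketch, not verbatim from the pitch.
import PackageExtension

// Run the preinstalled linter over the target's sources as a prebuild step.
// The executable path is an assumption about the client's environment.
commandConstructor.createCommand(
    displayName: "Running swiftlint for \(targetBuildContext.targetName)",
    executable: "/usr/local/bin/swiftlint",
    arguments: ["lint", "--path", "\(targetBuildContext.targetDirectory)"]
)
```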
While looking things up again to construct the above example, I did notice that the draft examples sometimes use different symbol names than the API elaboration. e.g. arguments vs commandline, addCommand vs createCommand. These errors aren’t really meaningful to the discussion of the design, but they will need to be fixed before the proposal is finalized and becomes perpetual documentation.
At the moment, Xcode objects to the presence of a tool and blocks the build. I work around this with environment variables that remove the tools from the manifest when building for iOS. But that won’t work if the tools are still needed for a build step...
So the proposal is intentionally very limited in scope; however, those aren't the limitations it actually has.
It is not just code generators, though there is a big focus on them. You can absolutely build a linter plugin using the prebuild hook. The same goes for generating documentation, which could be installed as a postbuild extension if you wanted to.
The proposal is not limited to tools implemented in Swift. Notice that one of the main use cases shown is protoc (protocol buffers), which is definitely not written in Swift. What has to be written in Swift, though, are the extension definitions -- these only construct the "commands" that determine how to run such a tool.
Binary tools can be automatically fetched thanks to binary targets; notice this:
/// Binary target that provides the prebuilt `protoc` executable.
which will make the protobuf extension download protoc for you before the build kicks off.
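In manifest terms, that comment sits on a declaration along these lines (a sketch only: the URL and checksum are placeholders, .extension is the draft spelling, and the draft's parameter names for extension dependencies vary):

```swift
// In the extension-vending package's Package.swift (sketch).
targets: [
    // Binary target that provides the prebuilt `protoc` executable.
    .binaryTarget(
        name: "protoc",
        url: "https://example.com/protoc/protoc-3.15.0.zip", // placeholder
        checksum: "<checksum of the artifact>"               // placeholder
    ),
    // The extension target depends on the binary tool it invokes,
    // so SwiftPM fetches protoc before the build kicks off.
    .extension(
        name: "ProtobufSourceGeneration",
        capability: .buildTool(),
        dependencies: ["protoc"]
    )
]
```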
They're supported -- you can write a prebuild extension that runs swiftlint etc. If you're asking about "it should show up natively in Xcode" that's not part of Swift evolution really, so we can't really discuss this here.
But yes, it is understood and an accepted limitation that these integrations are a bit bare-bones right now -- they are a tremendous help for a large number of SwiftPM projects though, and we would not want to delay adopting this feature as a first step only because of Xcode-specific details and things we can do in the future to improve the integration. Hope this helps clarify what this proposal is -- a first, small step on a long road towards improving the entire experience.
@ktoso @SDGGiesbrecht Thanks for those clarifications. I obviously didn't understand the descriptions correctly; now it's much clearer to me.
Just one suggestion: what about discussing, right within the proposal, how linters can already be integrated and what the current limitations are?
The questions I still have are:
Will the whole build of a target where I set up a linter fail if the linter produces a non-zero exit code?
Can a linter send outputs (I'm thinking errors and warnings) to a place where an IDE (such as Xcode) could read them to show them right within the IDE?
Of course, we have no influence on whether Apple will add this feature to Xcode. But I think we should at least think about it: either the proposal should already mention that build tools can forward messages to stdout, for example, and that SwiftPM will ensure that output reaches whatever tool invoked the build (e.g. Xcode), or such forwarding functionality should at least be listed among the future additions that were explicitly considered.
This looks very good already and I am happy to see this move forward. It is one of the key missing pieces for SwiftPM.
After reading the proposal I have a couple of questions left. If I understand the difference between prebuild() and buildTool() correctly, then with the former the tool itself needs to make sure it does not run when the inputs have not changed, while with the latter the build system (i.e. SwiftPM) will make sure to only execute it when the inputs have changed. Is that assumption correct?
What I found a bit surprising about the examples is that there is zero input when using extensions; instead the author has to encode the expected structure in their extension. In the SwiftGen example it is expected that swiftgen.yml is located at the root of the package.
I saw a bit of discussion in the Alternatives considered section with the reasoning that an API-based approach is clearer than defining input and output files in the client's package. However, isn't the API-based approach a lot more limiting, since tools can then only be used in the way the package extension author expects them to be used?
If it were possible to pass in a list of input files and a list of output files, would this not open up more use cases for these tools?
When thinking about the above, it occurred to me that this would be no problem if you were able to define local package extensions with third-party tools. If I need some customisation of how an extension works for my specific use case, is it possible to write a local extension that uses a third-party tool?
Lastly, is it possible to use generated files from extensions of dependencies? My feeling is that this proposal allows that, but I just want to double-check that it really does. An example could be a tool like mockolo. This tool generates mocks for protocols. An important feature of that tool is being able to generate mocks for protocols that inherit from other protocols across module boundaries. A setup could look like this:
A --> B --> C
Where a protocol X is declared inside C and A defines a protocol Y that inherits from X. Mockolo would now generate a Mocks.generated.swift for module C. Then it would need to generate a Mocks.generated.swift for module B using the generated file from module C, and in the end it generates a file for module A using both generated files from B & C.
Is it possible with the current proposal to access the generated files from the dependencies easily?
I'm very excited to see this proposal, and appreciate the pre- and post-build structure that's proposed. Are there any details about how failure conditions will or should be handled? I presume, for example, that if the relevant tool errors out (non-zero return or such) that the build will fail.
Is there a specific structure or pattern to how those failures would be exposed upward? For example, a typo in an OpenAPI/protoc/etc. spec that results in a failure to parse and generate relevant source files -- does the resulting error need to be returned from the extension code as a specific Diagnostic in order to be usefully exposed to the consumer of the SwiftPM library (either the CLI or, presumably, IDE tools such as Xcode)?
This is incredibly convenient and useful! Does any other Package Manager offer this? Not NPM afaik, Ruby Gems? Cargo? Or is it first of its kind? (Sorry if it was stated in the proposal, I skimmed through it)
Overall this looks really good and a promising start to a complex area.
I have a question about Example 2. In the example, the package that declares the extension doesn't define any dependencies. Is it valid in the initial proposal for other SwiftPM dependencies to be declared here for an executableTarget depended on by the extension to use (for example a JSON/YAML parser dependency)?
If so, how is this reconciled with the dependency graph for the package actually being built? Can they potentially be independent of each other? For example, if a code generator extension depends on MyYAMLParser 1.0.0..<2.0.0 but the package being built depends on MyYAMLParser 2.0.0..<3.0.0, will SwiftPM be smart enough to understand that these two dependency requirements aren't conflicting with each other (as they are used in different stages of the build process)?
I think the answer to your question is in the proposal as:
Package extension targets will not initially be able to depend on library targets or products. Note that the extension itself is only expected to contain a minimal amount of logic to construct a command to be run during the actual build. The tool that is invoked during the build can depend on an arbitrary number of other SwiftPM targets, or can indeed be provided as a binary artifact. It is a future goal to allow package extensions to depend on libraries, but that will require larger changes to how SwiftPM (and IDEs that use libSwiftPM) create their build plans and run their builds.
So extension targets cannot depend on things — yet.
If the “tool” that the extension uses would depend on some YAML lib, though, it would be the same versions and general build context as the entire build itself.
@abertelrud explained this to me a few times, and I believe this is because of how there is no way to express such “tiered” builds, if you will. And making that happen will require us to deeply change Xcode as well… so it’s something we’d like to allow, but it’s a huge amount of work, so it’s outside the scope of getting the ball rolling here.
As a workaround, if you needed such isolation today, one could provide the tool as a binary dependency and then it does not matter what it’s using internally. But yeah, if built together they currently can’t have isolated dependency trees.
Anders is the expert on this though, so maybe I’ve gotten the details here slightly wrong, but that’s the general idea.
Maven (for Java) has some constructs akin to this, but it's rigidly structured in how it expects the build process to go - I appreciate this plugin and pre-build/post-build model significantly more - the critical aspect is "do this before the compilation" (source generation) or "after" (docs/linters).
I have some specific replies to folks in the thread, but first some high level notes.
Generally I think this is really good! It's an important forward step and this definitely provides a bunch of useful tools. Unfortunately I also think it's probably not quite far enough along to tackle anything more complex than the simplest of use-cases, and it also has some areas where things are highly underspecified.
Here are some of my notes.
First, a couple of nits. Should the ExtensionCapability statics be functions, or lets? That is, should we be writing capability: .buildTool() or capability: .buildTool? The latter seems moderately nicer to me. Secondly, the argument label using: is extremely general. Do we want to be consuming it here? Or should we provide a more specific label, e.g. usingExtensions:?
More substantively, how do we intend for users to develop against the new PackageExtension API? Right now when writing a Package.swift it can be quite painful to get autocomplete on many platforms, as the library we're writing against doesn't necessarily exist in an easy-to-consume way. Xcode deals with this today: do we intend for Xcode to do the same for extension targets? What about the Swift LSP implementation?
I also have concerns about tool distribution. While it's somewhat elegant to force tools to be distributed within the SwiftPM package graph, it does make some build tools exceedingly painful to use. In particular, tools with complex dependencies will either need to be built as giant static executables or have unstated dependencies on the OS. As an example, consider a tool that used perl to generate source files. How would we distribute such a dependency today?
I think the above complaint is part of a broader deficiency in SwiftPM which is that it still does not handle the interface between SwiftPM-land and OS dependencies very well. System library targets already don't work very well, and here we will be adding another interface that is likely to make things difficult. I would like us to consider whether we need a way to specify what the dependencies are for various projects from system package managers more formally.
Finally, and this is the biggest one for me, there is this line in the proposal:
The proposal is very light on details here, and I think we need to expand on it. For example, currently binary targets are supported only on Apple platforms, and only as .zips containing XCFrameworks (yes, I am aware there is an exception for those distributed as paths instead of URLs, it's immaterial to this discussion).
Does this proposal plan to lift the Apple-platform-only limitation? If so, how? How does it plan to tackle the need to express what the other platforms it supports are? How will it interact with architecture differences on those platforms (e.g. 32-bit vs 64-bit Windows, ARM vs x86)? What about dependencies on the system: do we have any guidance for how to build tools so that they are most likely to work on multiple Linux distributions?
The proposal as currently written alludes to some of this complexity with:
This is, again, extremely light on detail. Is the plan that this support will follow in a different pitch? If so, when?
This proposal also fails to address the failure modes of this support. What happens if the binary build tool does not contain a binary for a given Swift platform? How will that manifest to the user? Should they be able to override the binary with one they know is equivalent on their platform?
If we shipped this without Linux support for binary targets, what will happen if you declare a manifest that uses package extensions on a platform that doesn't support them? Will SwiftPM fail? Or will it silently ignore the extensions?
I think we need substantially more detail on how this is going to work to understand the effectiveness of this proposal as a whole. Right now, as specified, it seems hard for Swift Protobuf to adopt it, and impossible for grpc-swift to adopt it (see below). swift-nio-ssl is a very long way from being able to adopt it due to the complexity of its build systems and the desire to avoid foisting them on all users.
This does not address @tachyonics' concern. To explain why, we need to disambiguate two different uses of "dependency" in SwiftPM.
SwiftPM uses the word "dependency" in two places. One is as a package-level modifier. Here you express a dependency on a package, e.g.:
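For instance, the familiar forms (the URL and names here are just illustrative):

```swift
// Package-level dependency: declares which packages, and which
// version ranges of them, this package may draw from.
dependencies: [
    .package(url: "https://github.com/apple/swift-nio.git", from: "2.0.0")
],
targets: [
    // Target-level dependency: declares which targets/products a
    // particular target actually builds against.
    .target(
        name: "MyTarget",
        dependencies: [.product(name: "NIO", package: "swift-nio")]
    )
]
```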
So you're right that extension targets cannot depend on libraries (either targets or products). However, there are two ways in which you're missing important details. First, they can (and indeed in many cases must) depend on executable targets or products. This means that Swift Packages that provide extensions may themselves depend on other packages.
As an example, consider grpc-swift. This provides a source generation tool that plugs into protoc, just as Swift Protobuf does. The package that vends that target (protoc-gen-grpc-swift) depends on SwiftNIO and other libraries for its runtime functionality.
The problem here is that SwiftPM has to produce a buildable package graph, not a separate one for each target. This means that package dependencies need to be sympathetic. We cannot get around this with target-based dependency resolution: if the individual packages declare incompatible dependency constraints, the entire graph is unbuildable, even if none of the specific targets we're trying to build hit the problem.
More broadly, grpc-swift demonstrates another limitation of the proposal: protoc-gen-grpc-swift has library dependencies. It does this because the interface between protoc and its plugins is itself a protobuf interface, and so it depends on protobuf for the serialisation and deserialisation. As a result, the only way to use grpc-swift as a build extension today is to ship protoc-gen-grpc-swift as a binary executable as well, in a separate target. That's not really ideal. @abertelrud how complex is lifting this constraint?
I should note that in my section above I said that @tachyonics' concern was valid. I should stress that while it's valid I don't think it's particularly bad. Having the same dependency graph for your build tools and your binary products is not that big a constraint. Indeed, Linux package manager ecosystems do this all the time. So I don't personally consider the constraint @tachyonics found particularly problematic.
The limitations around library targets are much more important to my mind.
It's great to see a pitch tackling this problem and I'm excited to see how it unfolds!
It took a few passes for me to wrap my head around dependencies for package extension targets. My understanding, from reading the pitch and the comments in this thread, is:
a package extension target may have two types of dependencies: binary targets and executable targets
a package extension with 'prebuild' capabilities may only depend on binary targets
an executable target may not depend -- either directly or transitively -- on a library or executable from another package
Is this correct? If so, the final point relating to dependencies was not clear from reading the pitch. Indeed, I have the same concerns as @lukasa around this; I won't repeat them here.
The pitch seems a little light on details around passing options to package extensions. It suggests that configuration files will be the only way to handle this initially:
This initial proposal provides only limited ways for a package target to configure the build tool extensions it uses...they can read custom configuration files as needed. Future proposals are expected to let package extensions define options that can be controlled in the client package's manifest.
Yet the Protobuf example makes use of targetBuildContext.options -- which isn't declared as part of the TargetBuildContext protocol. These options seem not to come from a configuration file. I'd like this area expanded upon, as I don't think configuration files work well when configuration differs per target (the configuration format must be rich enough to support per-target options, or the author of the package extension target must define some convention for naming configuration files in a per-target way).
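To illustrate that last point, with only configuration files available, the extension author ends up inventing something like this (a hypothetical convention sketched against the draft API; neither the property names nor the file-naming scheme come from the pitch):

```swift
// Inside the extension (sketch). The extension author must invent a
// convention to locate per-target configuration, since the client
// manifest has no way to pass options in this initial proposal.
import PackageExtension

// Hypothetical convention: "swiftgen-<targetName>.yml" per target,
// falling back to a package-wide "swiftgen.yml" at the package root.
let perTargetConfig =
    "\(targetBuildContext.packageDirectory)/swiftgen-\(targetBuildContext.targetName).yml"
let fallbackConfig =
    "\(targetBuildContext.packageDirectory)/swiftgen.yml"
```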
A few other minor comments:
packageExtension, packageExtensionTarget and extension are all used in the pitch to refer to the same thing, although the detailed design refers to it as extension.
The detailed design refers to dependencies for package extension targets as executables (i.e. extension(name:capability:executables:)); this seems to be called dependencies in usage elsewhere.
The SwiftProtobuf example uses targetBuildContext.inputPath, I think this is meant to be targetBuildContext.sourceFiles
Just to +1 your sentiment on it not necessarily being bad - it's actually usually something you want, especially when it comes to codegen.
For example, Apollo for GraphQL. They offer a codegen capability, and you generally want to use the same version of the codegen library that shipped with the runtime version you're running, just in case there is any difference in expectation or how the library works.
The dependency-related things I'll leave for Anders to get back to here soon (I'd probably add to the confusion rather than resolve it). gRPC definitely is one of "the" use cases, so we'll want to make sure it can work even in this very limited proposal that we were asked to scope down.
I can answer the options bit though:
It's one of the things that got scoped out as "can be added later", so that remaining targetBuildContext.options we need to trim from the proposal...
Here's the story about it:
The actual goal is to have them be nice and type-safe, but to achieve that the package manifest needs to be able to get types from an extension, and that's a bit much to solve in the initial proposal.
Rather than adding an untyped [String:String] which we would have to deprecate very soon and replace with some type-safe API for options, the current proposal tries to be very minimal, and just define those very limited hooks for now... We know it is limiting and we'll need to follow up soon.
We are aware that many plugins will want to allow specific per-target configuration which would be best done in the package manifest of the project using the extension.
We are purposefully leaving options out of this first proposal, and are going to revisit and add them in a future proposal.
In theory options could just be done as a dictionary of string key/values, like this:
// NOT proposed
.extension("Foo", options: ["Visibility": "Public"]) // not nice, not type-safe!
However, we believe this yields a pretty sub-optimal user experience. It is hard to know what the available keys are and what values are accepted. Is only "Public" correct in this example, or would "public" work too? Thus, we would rather explore a type-safe take on options and allow plugins to define some form of struct MyOptions: ExtensionOptions type, where ExtensionOptions is also Codable, and SwiftPM would take care of carrying this options type to the extension. This is a slightly difficult design to pull off well, because it requires a type added by the extension to be accessible to the package manifest, and it also opens up further design considerations.
Designing these type-safe options is out of scope for this initial proposal though, as it carries many complexities with respect to how the types are made available from the extension definition to the end user's package manifest, etc. It is an area we are interested in exploring and improving in the near future, so rather than lock ourselves into supporting untyped dictionaries of strings, we suggest introducing target-specific, type-safe extension options in a future Swift evolution proposal.
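For concreteness, the type-safe direction could eventually look something like this (hypothetical, explicitly NOT part of this proposal -- ExtensionOptions does not exist yet, and all names here are made up):

```swift
// Hypothetical future API — NOT proposed here.
// The extension vends a Codable options type...
struct FooOptions: ExtensionOptions {
    enum Visibility: String, Codable { case `public`, `internal` }
    var visibility: Visibility = .internal
}

// ...and the client manifest could then configure it type-safely,
// with autocomplete and compile-time checking of keys and values:
//
//   .extension("Foo", options: FooOptions(visibility: .public))
```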
Please note that we absolutely agree about the options... it's definitely something needed, it's more of a question if we can incrementally get there, rather than do it all in one proposal.
Perhaps this concern is too far off topic, but let me at least toss it out.
The recent spate of cyberattacks demonstrates a need to make validating a build all the way from the most removed tools and 3rd party components through product final test something that's straightforward and secure. Before we go down the build tool road too far, I'm asking that we consider how to build security into the overall Swift supply chain / build / test / deliver architecture.
I'm specifically not trying to address the issue of protecting against attack code intentionally embedded by a developer into their product - I think that's in the "too hard" bucket. If we can reach a state where we can protect against attacks by others, we'd have taken a giant step.
To that end, it's my opinion that certificates aren't enough, even after ignoring that they themselves get spoofed now and then. Without a way to recursively validate all elements back to bare metal - specifically including related test suites and the supporting test environments - we're open to attack. An identity certificate supports only an assumed level of trust that the associated product hasn't been tampered with, because the composite chain proving the state of the build's components isn't verified. Instead, you have to trust the certificate holder did that to the necessary degree.
By analogy, consider that although developer certificates are required in the App Store, all they establish is proof of the developer's identity, and otherwise they're content-free. If a Swift "Component Store" supporting a framework marketplace uses nothing more than the same identity-based security model, it won't be able to guarantee there's been no tampering. Instead, we need a way to know that the entire supporting tool and component chain was what the developer thought it was.