Package Manager Extensible Build Tools


This draft proposal introduces extensible build tools in the package manager. It would allow package authors to integrate community build tools into the build process.

This is a proposal for adding package manager support for extensible build
tools, i.e. executing tools at build time which were themselves produced by some
other package, and whose behavior the package manager does not understand
directly, but rather interacts with through a well-defined Swift protocol.

We expect this behavior to greatly enhance our capacity for building complex
software packages.


Motivation

There are large bodies of existing, complex software projects which cannot be
described directly in SwiftPM, but which other projects wish to depend upon.

The package manager currently supports two mechanisms by which non-package
manager projects can be used in other packages:

  • The system module map feature allows defining support for a package which
    already exists in an installed form on the system. In conjunction with system
    package managers, this can be used to configure an environment in which
    a package can depend on such a body of software.

  • The C language targets feature allows the package manager to build and include
    C family targets. This can provide similar features to the previous bullet,
    but through use of the standard C compiler features (like header and library
    search paths). The external project again needs to be installed using a
    different package manager.

These mechanisms have two major downsides:

  1. Using a package that depends upon them requires interaction with a
    system package manager. This increases developer friction.

  2. The system packages are global. This means project builds are no longer
    self-contained. That introduces greater complexity when managing the dependencies
    of multiple projects which need to run in the same environment, but may have
    conflicting dependencies. This is compounded by the package manager itself
    not understanding much about the system dependencies.

The package manager also currently has no support for running third-party tools
during the build process. For example, the Swift protobuf compiler is used to generate
Swift code automatically for models defined in ancillary sources. Currently
there is no way for a package to directly cause this compiler to be run during
the build; rather, the user must manually invoke the compiler before invoking the
package manager.
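As a concrete illustration of today's workflow, a user might run something like the following by hand before every build. The flags, paths, and module names are illustrative only, not anything SwiftPM prescribes:

```shell
# Hypothetical manual pre-build step: SwiftPM never sees or schedules
# this invocation, so it must be repeated whenever the .proto changes.
protoc --swift_out=Sources/Models/Generated Sources/Models/Proto/person.proto

# Only after the sources are generated can the package itself be built.
swift build
```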

Proposed solution

We will introduce a new type of target and product called “Package Extension”.
A Package Extension target should contain non-executable Swift source code,
which will be compiled into a dynamic library. This target will have access to
a new runtime module called PackageExtension.

A Package Extension target should declare a dependency on every executable
product that it needs. The executable products can be in the same package (as
the package extension) or they can be executable products in one of the
package's dependencies. An executable in the same package does not need to be
declared as a product.

In order to allow other packages to depend on a Package Extension, it must be
exported using the new Package Extension product type. This export is not
necessary if the Package Extension is used within the same package.

Initially, only executables will be allowed as tools to create build commands,
but we do plan on adding support for defining in-process build tools. This is
dependent on SwiftPM adopting llbuild’s C API, which is a very desirable goal.
Similarly, we will not allow a Package Extension target to depend on a library
target or product until we add the support for in-process build tools.

We will start with a very strict and minimal API for the new PackageExtension
module and evolve it as we discover new requirements. The process for evolving
the API will be the same as that of the Package.swift API, i.e., we will use the
Swift Evolution process.

The API in the PackageExtension module will be tied to the Swift Tools Version
declared in the manifest of the package the extension is in. This means that to
use an API added in Swift tools version X.Y.Z, the tools version of the package
must be at least X.Y.Z.
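This mirrors how PackageDescription is already versioned: the tools-version comment at the top of the manifest controls which API surface is available. A sketch:

```swift
// swift-tools-version:4.2
// The comment above declares the minimum Swift tools version for this
// package. Under this proposal it would also gate the PackageExtension
// API: a PackageExtension symbol introduced in, say, tools version 5.0
// would be unavailable to a package declaring 4.2.
import PackageDescription
```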

Detailed design

To allow declaring Package Extension targets and products, we will add the
following API in the Package.swift manifest:

extension Target {
    static func packageExtension(
        name: String,
        dependencies: [Dependency] = []
    ) -> Target
}

extension Product {
    static func packageExtension(
        name: String
    ) -> Product
}

We will add a new array parameter buildRules to regular and test target types
to allow declaring custom build rules. The initial API is described below:

final class BuildRule {
    /// The source files in this build rule.
    /// This is an array of glob patterns used to identify the input files for this
    /// build rule.
    /// Paths specified must be relative to the target path.
    var sources: [String]

    /// The package extension this build rule is defined in.
    // FIXME: Should we allow package extensions to declare more than one build
    // rule? If so, we should add another parameter for declaring the build rule
    // name in addition to the package extension.
    var packageExtension: String

    /// The options dictionary that will be available to this build rule.
    var options: [String: Any]

    /// Create a new build rule.
    static func build(
        sources: [String],
        withPackageExtension packageExtension: String,
        options: [String: Any]
    ) -> BuildRule
}
We propose the following API for the initial version of PackageExtension
runtime. These APIs are not final and will probably need some refinement
once we try some community build tools with an actual implementation of this
proposal. However, we hope the refinements will be minimal and will not require
another round of review.

/// Describes a custom build rule.
/// Package extensions must implement this protocol and create 
/// an instance using the convention described below. Currently, 
/// there can be only one build rule in a package extension.
/// FIXME: @_cdecl is not officially supported or documented. We need a 
/// supported method or introduce @cdecl in language through swift-evolution.
///     @_cdecl("createCustomBuildRule")
///     func createCustomBuildRule() -> Any {
///         return MyCustomBuildRule()
///     }
protocol CustomBuildRule {

    /// Called to construct tasks.
    func constructTasks(target: TargetBuildContext, delegate: TaskGenerationDelegate) throws
}

/// Describes the context in which a target is being built.
protocol TargetBuildContext {

    /// The name of the target being built.
    var targetName: String { get }

    /// The inputs to this target.
    var inputs: [Path] { get }

    /// The build directory for the target.
    /// Custom build rules are not allowed to produce outputs
    /// outside of this directory.
    var buildDirectory: Path { get }

    /// The custom options defined in the manifest file for this target.
    var options: [String: Any] { get }

    /// Finds the given tool.
    func lookup(tool: String) throws -> Tool
}

/// Interface used by a build rule to create custom tasks.
protocol TaskGenerationDelegate {

    /// Creates a command which will be executed as part of the build process.
    /// The tool and node instances must be created by the respective APIs.
    /// Custom implementations will be rejected at runtime.
    func createCommand(tool: Tool, arguments: [String], inputs: [Node], outputs: [Node], description: String)

    /// Creates a node for the given path.
    func createNode(_ path: Path) -> Node

    /// Adds a derived source file, which will be input to other build rules.
    func addDerivedSource(_ path: Path)

    /// Returns the diagnostics engine used for emitting diagnostics.
    var diagnostics: DiagnosticsEngine { get }
}

/// Represents a build tool.
/// The tools can be looked up from the build context.
/// Currently, a tool must be an executable dependency
/// to this package extension.
protocol Tool {}

/// Represents a build node.
/// Nodes should only be created using the task generation delegate.
protocol Node {}

/// Represents an absolute path on disk.
// FIXME: Should this be a struct instead?
protocol Path {

    /// The string value of the path.
    var string: String { get }

    /// Returns the basename of the path.
    var basename: String { get }

    /// Creates a new path by appending the given subpath.
    func appending(_ subpath: String) -> Path
}

/// An engine for managing diagnostic output.
protocol DiagnosticsEngine {

    /// Emits the given error.
    /// Note: Emitting an error will abort the build process.
    func emit(error: String)

    /// Emits the given warning.
    func emit(warning: String)

    /// Emits the given note.
    func emit(note: String)
}


Consider an example version of the swift-protobuf package:



let package = Package(
    name: "Protobuf",
    products: [
        .packageExtension(name: "PBPackageExt"),
    ],
    targets: [
        .target(
            name: "PBLib",
            dependencies: []),
        .target(
            name: "PBTool",
            dependencies: ["PBLib"]),
        .packageExtension(
            name: "PBPackageExt",
            dependencies: ["PBTool"]),
    ]
)


import PackageExtension

struct ProtobufBuildRule: CustomBuildRule {

    func constructTasks(target: TargetBuildContext, delegate: TaskGenerationDelegate) throws {

        // Create a command for each input file.
        for inputFile in target.inputs {

            // Compute the output file.
            let outputFile = target.buildDirectory.appending("DerivedSources/\(inputFile.basename)")

            // Construct the command line.
            var commandLine: [String] = []

            // Add the input file.
            commandLine += ["-c", inputFile.string]

            if case let extraFlags as [String] = target.options["OTHER_FLAGS"] {
                // Append any extra flags as-is.
                commandLine += extraFlags

                // Inform that `-v` is deprecated.
                if extraFlags.contains("-v") {
                    delegate.diagnostics.emit(warning: "-v is deprecated; use --verbose instead")
                }
            }

            // Add the output information.
            commandLine += ["-o", outputFile.string]

            // Create the command to generate the Swift source file.
            delegate.createCommand(
                tool: try target.lookup(tool: "PBTool"),
                arguments: commandLine,
                inputs: [delegate.createNode(inputFile)],
                outputs: [delegate.createNode(outputFile)],
                description: "Generating Swift source for \(inputFile.string)"
            )

            // Add the output file as a derived source file.
            delegate.addDerivedSource(outputFile)
        }
    }
}

@_cdecl("createCustomBuildRule")
func createCustomBuildRule() -> Any {
    return ProtobufBuildRule()
}

My package



let package = Package(
    name: "MyPkg",
    dependencies: [
        .package(url: "", from: "1.0.0"),
        .package(url: "", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "Tool",
            dependencies: ["SwiftyCURL"],
            buildRules: [
                .build(
                    sources: ["misc.proto", "ADT/*.proto"],
                    withPackageExtension: "PBPackageExt",
                    options: [
                        "OTHER_FLAGS": ["-emit-debug-info", "-warnings-as-errors", "-v"],
                    ]),
            ]),
    ]
)

Alternatives considered

We considered allowing a more straightforward capability for the package
manager to simply run “shell scripts” (essentially invoking arbitrary command
lines) at points during the build. We rejected this approach because:

  1. Even this approach requires us to either explicitly or implicitly document and
    commit to supporting a specific file system layout for the build artifacts
    (so that the scripts can interact with them). Adding support in this way
    makes it hard for script authors to know what is officially
    supported and what simply happens to work due to the current implementation
    details of the tool. That in turn could make it hard to evolve the tool if we
    wanted to change a behavior which numerous scripts had grown to depend on.

  2. It is hard for us to enforce that scripts don’t do things that are
    unsupported, since the script by design is intended to interact directly with
    the file system. This has similar problems as #1 and makes it harder for
    package authors to write “correct” packages.

Another alternative is to do nothing, requiring that all behaviors be explicitly
supported through some well-modeled behavior defined in the package
manifest. While we aim to support as many common behaviors as possible, we also
want to support as many complex behaviors as possible and recognize that we need
an extensible mechanism to support “special cases”. Our intention is that even
once we add extensible build tools, that we will continue to add explicit
manifest support for behaviors that we see become common in the ecosystem, in
order to keep packages simple to understand and author.

Although the more straightforward “shell script” capability would be simpler to
implement and could be added to the existing package manager without
significant implementation work, we feel that this would ultimately be more
likely to harm than help the ecosystem. We have several reasons for believing this:

  1. We know the package manager is currently missing critical features which
    would be needed by many packages. One of our design tenets has been that we
    should design so that roughly 80% of packages can be written with a
    straightforward, simple, and clean manifest that does not require
    advanced features. If we were to add a straightforward, but complex, script
    based extension mechanism, we expect that far too many packages would begin
    to take advantage of it due to these missing features. Due to the opaque
    nature of shell-script extensions, this would be very hard to then migrate
    past once we did gain the appropriate features, because the tools would be in
    a poor position to understand what the shell script did.

  2. The package manager currently always builds individual packages into discrete
    sandboxes, including the transitive closure of the package
    dependencies. While this works well for sandboxing effects, it is inherently
    not scalable when many separate packages are being worked on by the same
    developer. This approach also makes it hard for continuous integration
    systems which need to perform very reliable tests on many different packages.

    Our intention is to solve these problems by leveraging
    reproducible build techniques to expose
    the same user interface as we do today, but transparently cache shared build
    artifacts under the hood. This will rely on the ability of the package
    manager to have perfect knowledge of exactly what content is used by a
    particular part of a build. Shell script based hook mechanisms make this very
    difficult, since (a) by their nature they pull in a large number of
    dependencies (the shell, the tools used in the shell script, etc.), and (b)
    it is hard for the tool to reason about them.


Pardon the confused post above. I’m still parsing this.

My first impression is that while it provides a pathway for extensibility, which is a win, I’m against this particular implementation:

  • the API and the data model feel very complex for what it is doing
  • I don’t have a lot of comfort with the idea of putting in one partial-measure now (executable-only tools) and leaving the door open for more extensions (libraries) later. How much thought has gone into library-only tools?

I feel like the two-step approach that was being discussed in the SPM Static dependencies thread has a far smoother user experience.

My take on this proposal is generally negative. I’d favor this above NPM-style shell scripts, but not above:

  • custom loaders that allow alternate handlers for differing file patterns
  • a two step process as described in the linked thread above
  • local dependencies allowing authors to define and share their own build/test libraries instead of using SwiftPM as a build tool for larger, non-Package projects
  • a clearer separation between SwiftPM the package spec and resolver, and SwiftTools, an interface for building Swift code – which would help open the door to consistency and flexibility, while not making the package spec so complex

Edit: This is a complex problem and a complex domain, so I don’t mean to sound dismissive when I express concern over the package spec’s complexity. My concern is that locking into an API (and a general sub-system) with this much “concreteness” feels like it might cause a rigidity problem down the line – so I’m offering a few solutions which feel like steps backwards and at least create a few potential lines of discussion around several potential approaches to this solution. But I admit that I’m biased towards preservation of a very clean package spec.

Part of me feels like the last bullet point above is worth expanding upon, is this the right thread for that? Limiting what packages can do w/r/t their own builds (and passing that build configuration on to the package consumer) feels like it gives flexibility while also providing a sane, usable default experience for the majority of packages. If Package.swift describes dependencies and, say, “build.swift” describes your app and how to build it, you could also expose an API for building external packages:

// build.swift -- SwiftTools version specified by Package spec
// or defaulted to 'system' version
import SwiftTools
// enable a ProtobufTools plugin, also specified in Package spec
import ProtobufTools

let MyLib = SwiftTools.library(name: "MyLib", dependencies: [
  // where "abc" is a package and "xyz" is a product
  // with deps defined in respective Package.swift files
])

// local dependencies by reference
let MyApp = SwiftTools.executable(name: "MyApp", dependencies: [ MyLib ])

// defines `swift build MyLib`, `swift build MyApp`, `swift run MyApp`
SwiftTools.export([MyLib, MyApp])

// defines a task to compile protobuffer definitions
// ProtobufTools exposes options in pure Swift
// it's responsible for translating potential CLI calls, not SwiftPM

// defines a task called run-my-app, `swift run run-my-app`

Again just trying to think of ways to give some degree of flexibility to this system that keeps the package spec really simple. I know this is a wild departure from the thread at hand, but I’m concerned about lock-in to an API that might be better served by a slightly different approach, so at least it’s worth discussing.

This model has a bit more boilerplate but the mental model is way simpler and leads to a lot more flexibility. You can basically define a whole middleware chain around builds and tests, expose Foundation, LibC, XCTest and other system libraries from SwiftTools, and most build complexity is opt-in.

I am assuming you’re referring to the PackageExtension APIs. They need to be complex because different tools have different requirements, and we need to allow the tools to express those requirements. The example in the proposal is a simple one, but there are complex build tools. If you’re interested, you can look at this CMake module that implements Swift compiler support for building dylibs.

Yes, it will be very easy to extend this API to support tools that don’t have executables. We need to integrate llbuild into SwiftPM to support that, which is non-trivial amount of work but highly desirable. This is mentioned in the proposal:

Initially, only executables will be allowed as tools to create build commands,
but we do plan on adding support for defining in-process build tools. This is
dependent on SwiftPM adopting llbuild’s C API, which is a very desirable goal.

We absolutely want the package manifests to be clean. Can you explain why you think introduction of a new class (BuildRule) will make the package manifest unclean?

Your build.swift example looks more like a task runner to me. I think task runners should be part of package workspaces in future. We haven’t really had any discussions about task runners but this is a random example:

import PackageWorkspace

let workspace = Workspace(".")

// Defines a task called "publish".
workspace.define(task: "publish") {
    // Clean the workspace.
    try workspace.reset()
    // Build the package.
    let builtPackage = try workspace.packages[0].build(config: .release)

    // Upload the binary!
    try Process.execute("upload-script", "--file", builtPackage.binary)
}

// Defines a task called "pull" to update all packages from remote.
workspace.define(task: "pull") { args in
    for package in workspace.packages {
        try Process.execute("git", "pull", "origin", args["branch"] ?? "master")
    }
}

Note: We might want to do something completely different for task runners, this is just a random example.

So far I don’t really see a solution for how I can use a library to extend my Package.swift file while also declaring the library within that file.

The API complexity and model complexity is increased quite a bit. Core concepts went from packages, targets, products, and dependencies to packages, targets, products, dependencies, package extensions, custom rules, and tools. And that’s keeping the fringe stuff like pkg-config for system libraries, package providers, modulemaps, testing, etc, off the table.

It’s a lot. As the Package spec gets more and more specific, it feels like maybe we’re fighting a losing battle against the complexity of the problem, hence the desire on my side to expand the scope of the conversation to include a different approach that may enable more flexibility more simply, by combining both the workspace concept and the extensibility concept into one (kinda major) shift in direction.

Maybe a new proposal is in order. Either way, I’ve voiced my objections.

This whole paragraph is not clear to me. Can somebody explain?

Hey David!

Currently, SwiftPM generates an llbuild manifest and builds it using swift-build-tool. This means all build tools must be defined inside llbuild, or they must be invoked using the shell tool. llbuild is written in C++ but it has a C API (and there are Swift bindings now!). We can use these bindings in SwiftPM to provide the implementation for tools from package extensions (see this). So, the first step is to switch SwiftPM from swift-build-tool to the C API (or rather the Swift API).

PS: This is going to be awesome!


This proposal looks sound, and I can see the need for the complexity (which isn’t too onerous I think, and in any case shouldn’t get in the way when not required).

I’m wondering though about the way that custom build rules receive their inputs and report their outputs.

You seem to have gone for a model where the inputs are given explicitly in the manifest when the rule is used:

                    sources: ["misc.proto", "ADT/*.proto"]    <-- explicit inputs
                    withPackageExtension: "PBPackageExt",

as opposed, for example, to an approach where the tool simply declares a class of files that it operates on (*.proto, for example) and leaves it to the build system to throw inputs at it.

I have two possible issues with that.

  1. In complicated project setups it may lead to unnecessary boilerplate.

    When all you really want to be able to say is “any time you find a .proto
    file, run it through this tool please”, it would be better if the tool
    itself could express this, and then a project wanting to use it could just
    say “use this tool please”.

  2. I don’t see any obvious way in which custom tools will be able to operate on the output of other custom tools if you have multi-pass transformations.

    Let’s say I have one custom tool which outputs .proto files, and then I want a second tool to transform them into .swift files. I can see that the first tool reports its output with addDerivedSource (and I can see that’s essential for the underlying build system to be able to cache etc).

    How do you specify the inputs to the second phase as being the outputs from the first one? The manifest is real Swift of course, so perhaps there is a way to programmatically obtain the output of one .build item and set it as the sources: parameter of another - but even if this is possible, I can imagine that it might become messy.

Am I missing something here?
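For what it’s worth, one plausible answer (an assumption on my part, not stated in the proposal) is that chaining is implicit: files registered via addDerivedSource by an earlier rule become part of the target’s inputs, so a later rule whose sources glob matches them picks them up. A sketch of such a second-phase rule, where the tool name ProtoTool is hypothetical:

```swift
import PackageExtension

// Hypothetical second-phase rule. Assumption (not guaranteed by the
// proposal): files added via addDerivedSource by an earlier rule appear
// in `target.inputs` for later rules whose `sources` globs match them.
struct ProtoToSwiftRule: CustomBuildRule {
    func constructTasks(target: TargetBuildContext, delegate: TaskGenerationDelegate) throws {
        for input in target.inputs where input.string.hasSuffix(".proto") {
            let output = target.buildDirectory.appending("DerivedSources/\(input.basename).swift")
            delegate.createCommand(
                tool: try target.lookup(tool: "ProtoTool"),
                arguments: [input.string, "-o", output.string],
                inputs: [delegate.createNode(input)],
                outputs: [delegate.createNode(output)],
                description: "Transforming \(input.string)")
            delegate.addDerivedSource(output)
        }
    }
}
```

If the build system does not feed derived sources back this way, an explicit API for querying another rule’s outputs would indeed be needed, as the question suggests.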


Great proposal, I’m excited to delete some checked-in generated code!

I took a stab at designing a hypothetical SwiftProtobuf PackageExtension using the APIs in this proposal with the real SwiftProtobuf package and came up with a few questions.

For background, the SwiftProtobuf project contains a plugin executable protoc-gen-swift, implemented in pure Swift and exported as an executable product by Swift Package Manager. Adding a new packageExtension product implementing the proposed protocol to that package should be relatively straightforward. However, that package alone is not enough to generate Swift source code from a .proto file; it needs the Protobuf Compiler tool (protoc) as well.

To illustrate, a typical invocation of protoc to generate Swift source files looks like this (with variables filled in by the build system):

${ProtocToolPath} \
    --plugin=protoc-gen-swift=${ProtocGenSwiftToolPath} \
    --swift_out=${TargetGeneratedSourcesDir} \
    --swift_opt=ProtoPathModuleMappings=${ModuleMappingsFilePath} \
    --swift_opt=Visibility=Public \
    -I ${TargetSourcesDir}/Proto \
    -I ${TargetDependencyIncludePath} \
    -I ${PackageDependencyIncludePath} \
    -I ${SystemIncludePath}
# Produces ${TargetGeneratedSourcesDir}/example.pb.swift

Looking over the proposed API I’m not sure that all of these variables can be filled in.

ProtocToolPath is the path to the protoc tool executable. The protoc tool is typically installed on the system somewhere in PATH or downloaded into a project build dir and run from there [1]. If the tool is on the system PATH we need a way to express this (and escape the sandbox for it). If the tool is to be downloaded into the build dir we need a way to express that so that the package extension can find the tool.

[1] The Protobuf Gradle Plugin allows this to be configured.

ProtocGenSwiftToolPath is the path to the protoc-gen-swift tool, which is an executable product from the SwiftProtobuf package. This tool can be obtained with TargetBuildContext.lookup(tool: "protoc-gen-swift"), but a new API will be needed on the Tool protocol to get its path.

TargetGeneratedSourcesDir is a directory for generated sources that will be compiled into the current target. This can be obtained with TargetBuildContext.buildDirectory.appending("ProtobufGeneratedSources").string.

ModuleMappingsFilePath is the path to a generated file containing metadata for the Swift Protobuf plugin. It contains mappings of .proto file names to their corresponding Swift module names so that generated code contains the correct import statements. It will need to be generated prior to the above protoc invocation and take metadata for the current target and its transitive dependencies as arguments. It seems possible to create another Tool to generate this file, and extend the TargetBuildContext protocol to have a new property var dependencies: [TargetBuildContext] { get }. This tool would have an empty set of inputs in the current target.

TargetSourcesDir is the root of the sources directory for the current target, for example Sources/ExampleAPI. It’s required to allow protos to write their imports without regard to which target they’re in (similar to allowing chevron includes in C projects). It’s unclear whether this is easily obtainable from TargetBuildContext.inputs.

TargetDependencyIncludePath is a path to a directory in a target dependency containing .proto files that can be imported similar to include in a C target. It will require an API to walk a target’s dependencies and query metadata about attached Build Rules and Package Extensions.

PackageDependencyIncludePath is the same as a TargetDependencyIncludePath, except that it points to a directory in a package dependency’s checkout.

SystemIncludePath is a path to a directory containing “well-known” protos, similar to /usr/include. If protoc is installed on the system this will probably be a nearby system directory and it will thus need to be allowed from the sandbox. If protoc was downloaded into the build directory this will be an adjacent resource directory and the tool lookup API will need to be able to find it.
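To make the gaps concrete, here is roughly how a SwiftProtobuf package extension might try to assemble that invocation with the proposed API. This is a sketch, not working code: every commented-out line marks a value the current API surface cannot supply, and `plugin.path` and `target.dependencies` are the hypothetical additions discussed above:

```swift
import PackageExtension

// Sketch only: shows which pieces of the protoc invocation the proposed
// API can and cannot express today.
struct SwiftProtobufRule: CustomBuildRule {
    func constructTasks(target: TargetBuildContext, delegate: TaskGenerationDelegate) throws {
        // Expressible today: the generated-sources directory.
        let generatedDir = target.buildDirectory.appending("ProtobufGeneratedSources")

        // Expressible today: the plugin executable built by SwiftPM...
        let plugin = try target.lookup(tool: "protoc-gen-swift")
        // ...but not its on-disk path (`Tool` has no `path` member):
        // args += ["--plugin=protoc-gen-swift=\(plugin.path)"]

        let args = ["--swift_out=\(generatedDir.string)",
                    "--swift_opt=Visibility=Public"]

        // Not expressible today: the -I paths into target and package
        // dependencies would need something like `target.dependencies`,
        // and protoc itself cannot be looked up at all, because
        // lookup(tool:) only finds executables built by SwiftPM.
        _ = (plugin, args)
    }
}
```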

Summarizing my main questions:

  • How do you imagine supporting “system” tools and their associated resources?
  • How do you imagine supporting “downloaded” tools and resources (that SwiftPM can’t build)?
  • How do you feel about extending the API to allow querying transitive dependencies?
  • How do we express llbuild-level dependencies on inputs in transitive dependencies?

I feel like this is the main point of objection I have with this proposal. It doesn’t have a mechanism to configure tools beyond passing them CLI arguments. It doesn’t have a mechanism that allows authors to provide a library or API to end-users who might use these tools. This pushes the complexity outward to developers using these libraries, just like rpath and the like.

While we’ve been reassured that importing custom APIs into SwiftPM would be “easy”, I’m not so sure… and therefore I think pushing this type of workflow out of the package and into the workspace proposal is a far better approach.

Is this still in the works? I'm looking to use swiftlint with a project that relies on SPM and someone pointed me to this thread.

Specifically the issues I'm running into right now are:

  • There doesn't seem to be a way to generate Xcode projects that include the Run Script build phase that is required by swiftlint. Will this proposal include something like that, or would that have to be a separate proposal?
  • There's no way to swift build and run other tools at the same time AFAIK. I think this proposal covers that feature if I'm reading it correctly though.
  • Having access to the environment variables that Xcode provides would be exceptionally convenient. The one I'm utilizing right now is DWARF_DSYM_FOLDER_PATH, because that's the path where the built swiftlint executable is placed. Would we retain access to those environment variables under this proposal?


One major issue with this proposal (and indeed with most build systems in general, not just SwiftPM) is that for many build tasks it is not possible to statically know the paths of the output files produced by a task from the set of input file paths alone; there is also a dependency on the contents of those inputs.

For example, implementing a C compiler task is easy: there's always one input file, and always one output file. We can compute a suitable output file path based on the input file path, i.e. input.c produces output.o, and we pass both of these paths to the compiler. The contents of input.c are completely irrelevant when constructing the build graph.

However, other tools can be problematic, such as the protobuf compiler. Given a file such as input.proto (depending on the output language), any number of output files may be generated. You can only control the output directory, but you can't know which files the tool will generate there (from the file paths alone). To know this, you must also understand the content of input.proto.

With a solution requiring outputs to be listed at task construction time, you either have to provide a provision for developers to hardcode which output files a given protoc invocation + input file will generate (this is not scalable and pushes the problem to the wrong audience), or you have to forgo declaring some of the outputs to the build process (this harms parallelism and correctness, if it's even possible at all in a given scenario).

Essentially, we need some sort of two-part solution: a mechanism for rule authors to declare what WILL happen, to the build system, and a mechanism for the build system to report back to rule authors what DID happen, providing the opportunity to cycle back additional information into the build graph (i.e. newly discovered output nodes that now need to be attached to the task we just ran). This also makes ordering more difficult (how do you guarantee the discovered outputs don't affect tasks which already ran, or how do you know to defer tasks which might have been or will be affected?) but will need to be solved for proper integration of arbitrary build tools.
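One shape such a two-part mechanism could take is sketched below. None of these names exist in the proposal; this is purely an illustration of splitting a rule into a declaration phase and a discovery callback:

```swift
import PackageExtension

// Hypothetical sketch only: extends the proposal's CustomBuildRule idea
// for tools whose outputs depend on input *contents*, not just paths.
protocol DynamicOutputBuildRule {
    /// Phase 1: declare what WILL happen — the tool invocation and the
    /// output *directory*, since the exact output file names cannot be
    /// computed from the input path alone.
    func declareTask(input: Path, delegate: TaskGenerationDelegate) throws

    /// Phase 2: called after the task ran; the build system reports the
    /// files actually produced so they can be attached as output nodes
    /// and offered as inputs to not-yet-scheduled downstream tasks.
    func taskDidFinish(input: Path, discoveredOutputs: [Path], delegate: TaskGenerationDelegate)
}
```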


Rules should have dependency checks that allow skipping work. This could be done implicitly by the build system; MSBuild does this quite well by tracking all the inputs used to create a set of outputs, using a file tracker that inspects the files read and written by a given process (the rule tool). This could be tricky in some cases.
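A minimal version of that skip check can be written directly against recorded inputs and outputs by comparing modification times. This is a much cruder mechanism than MSBuild's process-level file tracker, which is mentioned above only as prior art:

```swift
import Foundation

/// Returns true if a task can be skipped: every output exists and is at
/// least as new as the newest input. Assumes the build system recorded
/// the exact inputs and outputs of the previous run.
func isUpToDate(inputs: [String], outputs: [String]) -> Bool {
    let fm = FileManager.default
    func mtime(_ path: String) -> Date? {
        guard let attrs = try? fm.attributesOfItem(atPath: path) else { return nil }
        return attrs[.modificationDate] as? Date
    }
    // If any input is missing (or there are none), we can't prove freshness.
    guard inputs.count == inputs.compactMap(mtime).count,
          let newestInput = inputs.compactMap(mtime).max() else { return false }
    // Every output must exist and be no older than the newest input.
    for output in outputs {
        guard let t = mtime(output), t >= newestInput else { return false }
    }
    return true
}
```

Content-hash comparison would be more robust than mtimes, but the shape of the check is the same.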

There is also the possibility of having the tools provide this information themselves, using a standardized format or API.
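For instance, the build system could invoke the tool with an agreed-upon flag and read one output path per line from its stdout. The `--list-outputs` flag in the usage note below is an assumption for illustration; no real tool is required to support it:

```swift
import Foundation

/// Ask a tool to enumerate the outputs it would produce, using a
/// hypothetical convention of one declared output path per line on stdout.
func queryOutputs(tool: String, arguments: [String]) throws -> [String] {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: tool)
    process.arguments = arguments
    let pipe = Pipe()
    process.standardOutput = pipe
    try process.run()
    process.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    let text = String(decoding: data, as: UTF8.self)
    // One output path per line; split(separator:) drops blank lines.
    return text.split(separator: "\n").map(String.init)
}
```

A protoc-like rule could then call something like `queryOutputs(tool: protocPath, arguments: ["--list-outputs", "input.proto"])` before constructing its task.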

I'm working on a custom compiler that outputs Swift code; in my case, having the compiler (a C++ tool) run from the Swift Package Manager would be great.

let package = Package(
    name: "MyPkg",
    dependencies: [
        .package(url: "zeroc-ice/Slice2Swift", from: "1.0.0"),
    ],
    // Install this rule for all targets
    rules: [.build(withPackageExtension: "Slice2Swift")],
    targets: [
        // Compile all Slice files (.ice) with default options
        .target(name: "MyPkg", dependencies: []),
        // Override the rule for the "Other" target
        .target(name: "Other", dependencies: [],
                rules: [.build(withPackageExtension: "Slice2Swift", args: ["-I.", "-DFOO"])]),
        // Compile some files with -DXXX and the remaining Slice files (.ice) with -DNO_XX
        .target(name: "More", dependencies: [],
                rules: [.build(withPackageExtension: "Slice2Swift", options: ["-DXXX"], inputs: ""),
                        .build(withPackageExtension: "Slice2Swift", options: ["-DNO_XX"], exclude: "")])
    ]
)

I think that for simple cases, installing a rule that applies to all targets would be nice, with the ability to override it per target. Having implicit inputs is also nice; for the outputs, the build system must query the tool.

How would I make the outputs of my tool the inputs of another? Maybe allow querying the installed rules, so that a tool can add its buildDirectory to the inputs of a second rule. But it would be much better if that could be discovered automatically, even if not in all cases.

A way to make these build rules even more useful - especially in the server-side world - is to have the ability to move the binaries post-build. Consider the following structure for a server:


So the server might import the view for server-side rendering, though the view might also be built into wasm for client-side rendering. Everything in Public/ gets served to the public first. So when building the separate views into their view.wasm (or whatever the case may be), the output could then be moved to the Public/ directory.

I'll soon be experimenting with a makefile, which feels dirty in a Swift context, and I thoroughly hope this proposal gets further!


I put down some thoughts around SPM plugins here and got directed to this thread.

I'd be very interested in code generation, and this seems like a workable approach. A couple of things to consider:

In the example MyPackage Package.swift file provided, the SwiftyCURL dependency is probably a requirement of the generated code rather than of the static code in the package itself. In such a situation, this dependency is not really modelled correctly:

  1. MyPackage is making an assumption that the generated code requires SwiftyCURL as a dependency. This requires some kind of knowledge of the code that is going to be generated - probably through documentation of the generator which is not a strong contract.
  2. MyPackage is making an assumption that version 1.0.0 of SwiftyCURL is acceptable for the generated code. Conversely, the generator has no way of safely using non-breaking API additions introduced in minor versions of its runtime dependencies.
  3. MyPackage is making an assumption that the generated code doesn't use any other dependencies than SwiftyCURL. Conversely, the code generator has no way of adding dependencies even if those additions are considered non-breaking.

This proposal provides a mechanism for generating the actual code, but it leaves dependency management to be handled by the consuming package rather than by the generator package, which seems more appropriate.
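A sketch of what generator-owned dependency management could look like, assuming a hypothetical `generatedCodeDependencies` manifest field (none of this API exists in PackageDescription today, and the SwiftyCURL URL is an invented example):

```swift
// Hypothetical: the *generator* package declares the runtime dependencies
// its generated code needs, so consumers don't have to guess them.
let generatorPackage = Package(
    name: "MyGenerator",
    // Surfaced to consuming packages when the rule runs, so version
    // requirements live next to the code that actually imposes them.
    generatedCodeDependencies: [
        .package(url: "https://github.com/example/SwiftyCURL", from: "1.0.0"),
    ],
    targets: [
        .target(name: "MyGenerator"),
    ]
)
```

This would address all three assumptions above: the consumer no longer names the dependency, the generator controls the version range, and the generator can add dependencies in a non-breaking way.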

Is there any update on this? Really need to be able to call cmake from SPM for the project I am working on currently at AWS.

No updates right now, but we are aware that extensibility is an important missing part of the Swift package story.
