I'm Arihant, a Swift developer and a WWDC25 Swift Student Challenge Winner. As an iOS Developer, I recently integrated automated testing using XCTest into our CI/CD pipelines. This hands-on experience has made me deeply appreciate robust testing infrastructure, and I am really eager to contribute to the core developer tools we rely on every day.
I am highly interested in the Globally Scoped Traits via runtime configuration schema project for GSoC 2026. Applying traits like .timeLimit or .retry globally across CI pipelines is a massive developer experience improvement, and I would love to help build it.
I've been looking through the swift-testing repository, specifically around how Trait and Configuration are currently handled during runner initialisation. As I draft my proposal, I wanted to get your initial thoughts on the override hierarchy for these global traits.
I'll also be on the lookout for good starter issues or small PRs that could help me familiarise myself with the codebase, and perhaps spark a few ideas for how we might implement the project plan.
I'm currently setting up my local build and hunting for a good first issue to get familiar with the codebase. I look forward to your feedback and to formalising a proposal around this schema!
I have been poking around the codebase and have had a few questions.
Should global traits act as a fallback (used only when the @Test itself has no trait) or as enforcement (a hard cap that always wins)? I see we have both defaultTestTimeLimit and maximumTestTimeLimit, defined in Configuration.swift. Should I mirror this two-level approach for all global traits?
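To make sure I'm reading the two-level idea correctly, here is a rough sketch using my own simplified stand-in types; the resolution logic is my assumption about how the fallback and the cap interact, not the library's actual code:

```swift
// Simplified stand-in for the two-level time-limit model.
// Field names mirror the defaultTestTimeLimit / maximumTestTimeLimit split.
struct TimeLimitConfig {
    /// Fallback: used only when a test declares no limit of its own.
    var defaultSeconds: Int?
    /// Hard cap: clamps even an explicit per-test limit.
    var maximumSeconds: Int?

    /// Resolve the effective limit for a single test.
    func effectiveSeconds(declared: Int?) -> Int? {
        let base = declared ?? defaultSeconds      // fallback step
        guard let cap = maximumSeconds else { return base }
        guard let resolved = base else { return cap } // cap alone still applies
        return Swift.min(resolved, cap)            // enforcement step
    }
}
```

So a test with no explicit limit falls back to the default, while an explicit limit can never exceed the cap.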
For the GlobalTraits schema, should we use a strongly-typed struct (with explicit fields like tags and timeLimit) or a more dynamic array like [any Trait]?
A struct is easier to discover and document, but an array would allow us to add new traits in the future without changing the Configuration API.
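As a sketch of that trade-off, with illustrative stand-in types rather than swift-testing's actual Trait API:

```swift
// Stand-in trait types for illustration only.
protocol GlobalTraitProtocol {}
struct TagTrait: GlobalTraitProtocol { var name: String }
struct TimeLimitTrait: GlobalTraitProtocol { var seconds: Int }

// Option 1: strongly typed. Every supported trait is a named, documented
// field; adding a new kind of trait changes the struct's API.
struct TypedGlobalTraits {
    var tags: [TagTrait] = []
    var timeLimit: TimeLimitTrait?
}

// Option 2: dynamic. New trait kinds need no API change, but every
// consumer has to downcast to find the trait it cares about.
struct DynamicGlobalTraits {
    var traits: [any GlobalTraitProtocol] = []
    var timeLimit: TimeLimitTrait? {
        for trait in traits {
            if let limit = trait as? TimeLimitTrait { return limit }
        }
        return nil
    }
}
```

The typed shape is self-documenting; the dynamic shape pushes a downcast onto every consumer.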
Also, I wanted to ask: once I've written my full proposal, would you be able to take a look and help me finalize it?
While I'm writing my proposal, are there any good first issues or small bugs in the Configuration or Runner areas of the repo that I could help with? I'd love to get a small PR in early to get familiar with the project's code style and CI process; when I checked the repo, I couldn't find any open issues labelled as such.
I think a fallback would make the most sense for something like timeouts, but it's worth exploring this a bit more in your proposal. For Swift evolution proposals, we often have an "Alternatives Considered" section which could be a good option to share your thoughts on which choice wins out here.
Again, I'd encourage you to lay out the pros/cons of each approach. You get to choose and justify your approach in the initial proposal
Happy to answer any questions you have but I likely won't have bandwidth to review a full proposal prior to submission, nor would it be fair to other applicants. Sorry!
We should be able to offer a few more options for first issue here. I'll discuss with the team about this next week and keep you updated (I've been out on vacation for this week).
I too think a fallback would be better, because it gives the test author more flexibility and, as usual, less code to write upfront. I can add the other option to the Alternatives Considered section.
Again, I'd encourage you to lay out the pros/cons of each approach. You get to choose and justify your approach in the initial proposal
Will try to explain my approaches in depth within the proposal.
Happy to answer any questions you have but I likely won't have bandwidth to review a full proposal prior to submission, nor would it be fair to other applicants. Sorry!
Completely understandable. I just need a little help with the structure of the proposal: small things like the expected number of pages, the depth of each approach, and the deliverables timeline that needs to go into it.
We should be able to offer a few more options for first issue here. I'll discuss with the team about this next week and keep you updated (I've been out on vacation for this week).
Starter issues would be a massive help in any case, but if none turn up, that's absolutely fine too. I've spent the past few days trying to understand Configuration, Runner.Plan, and the other files that may need modification to make this project work.
Hi @jerryjrchen,
I hope you had a great vacation!
I’ve spent the last few days finalizing my first full draft of the proposal, and I’ve made sure to address the specific trade-offs we discussed:
Alternatives Considered: I’ve added a dedicated section comparing the Fallback vs. Enforcement models. While I agree that a fallback is more flexible for local development, I’ve actually proposed a Dual Model in my draft. By using the boxed-storage pattern in Configuration.swift, I believe we can support both defaultGlobalTraits (fallback) and enforcedGlobalTraits (hard caps for CI). This mirrors how the library already handles defaultTestTimeLimit and maximumTestTimeLimit.
Struct vs. Array: I’m moving forward with a Struct-based API (GlobalTraits) in the proposal. After digging through Tag.swift and Runner.Plan.swift, I found that a struct provides much better type safety and discoverability for tools using @_spi(ForToolsIntegrationOnly). I've documented the "Extensibility" trade-offs you mentioned, explaining how we can still add new trait types in the future without a breaking change to the Configuration layout.
Injection Point: I’ve identified Runner.Plan._constructStepGraph as the primary injection point. I've mapped out how additive traits (tags) will resolve vs. environment mutators (timeouts).
My full proposal is now sitting at about 8 pages with code sketches for these areas. I’m really excited about the technical direction.
Whenever the team has had a chance to identify any "Good First Issues" in the Configuration or Runner areas, I’m ready to jump in! I’d love to get a small fix or test addition landed before the submission deadline to prove I can work effectively with the framework’s internals.
Hi Arihant, it looks to me like you've thought this through, so I don't have any concerns based on what you mentioned.
Just to reiterate, I don't have bandwidth to review full draft proposals in detail. Is there a specific area you still have questions about?
Thank you for the quick reply! That’s very encouraging to hear.
Since you asked about specific areas, I have two technical questions I'd love to get your perspective on:
Thread Safety & Sendability: Since Configuration and the test tree are Sendable, my current plan is to perform the global trait injection as a purely synchronous, deterministic step during Runner.Plan._constructStepGraph. Does the team have any concerns about mutating the test graph’s local trait arrays at this specific point, or should I be looking at a more "Lazy" resolution model during the test-action phase?
Experimental Flag Naming: I want to make sure I’m following the library’s preferred evolution pattern. Is there a specific naming convention I should use for the experimental flag (e.g., ExperimentalGloballyScopedTraits) to ensure it integrates well with the current build system?
Also, I just wanted to get your opinion on the boxed-storage pattern and the _constructStepGraph approach.
And I completely understand that you might not be able to do a full rundown of my proposal. It's just that its length concerns me, and I wanted your opinion on that.
Mutating the local traits during the step graph construction phase makes sense to me, although I'm not seeing how thread safety is relevant here.
Where are you planning on introducing the flag? I'd look for other examples in the planned domain to figure out what it should look like. If you haven't already included it, I'd also recommend including in the proposal an example of what a user will need to do to actually enable and configure a global trait.
Worth also noting that users are effectively "opting-in" to global traits when they enable one, so perhaps a flag might not be strictly necessary depending on the approach.
I'd recommend avoiding existentials if possible, especially since something like any Sendable which is open-ended. I believe it would be easier to start with some stricter types and then loosen the definitions later as needed when you actually need them.
I think I covered the step graph question above but please clarify if I missed something.
Thanks for the very specific feedback! I've spent some time digging into the ABI implications of your suggestions, and I've refined the technical approach in my draft to align with those principles.
That's a fair point. Since _constructStepGraph is a synchronous, deterministic phase that occurs before the multi-threaded test execution begins, "Data Races" aren't the primary concern. I was initially over-cautious because swift-testing is so fundamentally built on Sendable types and parallel execution, but I see now that performing the injection during the planning step ensures the test tree remains immutable and safe once execution starts.
I found that the library already uses two specific SPI patterns that fit perfectly:
@_spi(Experimental): I'll use this for the definition of the GlobalTraits struct and the internal storage logic.
@_spi(ForToolsIntegrationOnly): I'll use this for the mutating functions on Configuration (like applyGlobalTags).
Since these APIs are SPI-gated, I agree that a separate build-system flag is redundant.
Here is a proposed example of how a user might configure a global trait:
var configuration = Configuration()
// The user "opts-in" by explicitly setting a global cap
configuration.applyGlobalMaximumTimeout(seconds: 60)
// This configuration is then passed to the Runner, which applies
// the cap to every discovered test function automatically.
let runner = try await Runner(plan: Runner.Plan(configuration: configuration))
await runner.run()
I see your point regarding any Sendable being too open-ended and the performance hit that comes with existential boxing. I can shift the design to use a concrete GlobalTraits struct.
I plan to use an internal storage class (private final class _Storage: Sendable) to hold the traits. This lets us keep a stable, pointer-sized public struct while enjoying the benefits of stricter, non-existential types internally. But if you have something different in mind, we can do that as well.
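To show what I mean by the boxed-storage pattern, here is a simplified sketch (my own stand-in types, not swift-testing's actual implementation; the real _Storage would also need to be immutable or synchronized to honestly be Sendable):

```swift
// Public struct stays one pointer wide, so stored fields can be added
// to the box later without changing the struct's layout.
struct GlobalTraitsBox {
    private final class _Storage {
        var tags: [String] = []
        var timeLimitSeconds: Int?
        init() {}
        init(copying other: _Storage) {
            tags = other.tags
            timeLimitSeconds = other.timeLimitSeconds
        }
    }

    private var _storage = _Storage()
    init() {}

    // Copy-on-write before any mutation so value semantics hold.
    private mutating func _makeUnique() {
        if !isKnownUniquelyReferenced(&_storage) {
            _storage = _Storage(copying: _storage)
        }
    }

    var tags: [String] {
        get { _storage.tags }
        set { _makeUnique(); _storage.tags = newValue }
    }

    var timeLimitSeconds: Int? {
        get { _storage.timeLimitSeconds }
        set { _makeUnique(); _storage.timeLimitSeconds = newValue }
    }
}
```

Copies of the struct share one box until someone writes, at which point the writer gets its own copy.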
Also, a little question about edge cases of global trait application:
For traits like .tags, the union logic is clear. But for future traits like .retry(count: N), should the global enforcement always be a maximum (strictest wins), or should we allow the global configuration to decide the resolution strategy on a per-trait basis?
here is a proposed example as to how a user may call the global trait via configuration.

This example doesn’t make sense to me. The user I’m referring to is someone writing tests using Swift Testing and interested in adopting a global trait. They won’t be able to construct a runner to manually invoke tests.

should the global enforcement always be a 'Maximum' (strictest wins), or should we allow the global configuration to decide the resolution strategy on a per-trait basis?
One potential challenge with rules like "strictest wins" is that it can be hard for a user to override a global trait they don't want with a less strict version. Consider also how trait application behaves today with suite-level vs test-level traits.
We create a macro whose output is read at runtime to see whether any global traits have been defined. We will also need to ensure this macro can only be applied once per project, because it is globally scoped.
When the user's project compiles, the @TestGlobalTraits macro emits a metadata record into the binary. At runtime, this record is copied into the configuration. From there, as I explained above, we modify Configuration.swift and Runner.swift so the global traits are injected via _constructStepGraph.
If you have any other suggestions for how we might apply global traits, please feel free to share them. One other way I thought we could do this is by defining the traits in the Package.swift of the project.
I think the fallback-vs-enforced model works here: if a person has not set a trait on a test, the fallback global trait applies; at the same time, an enforced global trait can still be applied on top.
@TestGlobalTraits(
    // 1. A default (suggestion): "Use 1 minute if I forgot to set a limit."
    .default(.timeLimit(.minutes(1))),
    // 2. An enforced hard cap: "Do not let ANY test run over 10 minutes."
    .enforced(.timeLimit(.minutes(10)))
)
struct MyProjectSettings {}
This flow might help you see my vision clearly:
Phase 1: Declaration
A new @TestGlobalTraits macro allows developers to declare project-wide traits in a single file (e.g., MyProjectConfig.swift).
Phase 2: Discovery (The Link)
Compile-time: The macro generates a Metadata Record in a dedicated data segment (__DATA, __swift5_test_traits).
Runtime Start-up: The Runner (via Test+Discovery.swift) scans the binary for these records and populates the library's internal configuration.
Phase 3: Storage (The Vault)
We use a private _Storage class inside the Configuration struct. This ensures ABI Stability (fixed pointer size) even as we add new global trait fields.
Phase 4: Injection (The Engine)
Traits are injected into every test during the Deterministic Planning Phase (_constructStepGraph in Runner.Plan.swift).
Put simply, a global trait is just a suite-level trait: it interacts with an individual test the same way a suite's traits do, so the way we give precedence to a trait stays the same. The logic remains the same, and I will be leveraging _recursivelyApplyTraits to do this.
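With stand-in types (this is not Runner.Plan's actual implementation), the recursion I have in mind looks roughly like this: global traits behave like traits on an implicit root suite, prepended so that more local declarations appear later in the array and can take precedence.

```swift
// Minimal stand-in for a planned test tree; traits are just strings here.
final class TestNode {
    var name: String
    var traits: [String]
    var children: [TestNode]
    init(name: String, traits: [String] = [], children: [TestNode] = []) {
        self.name = name
        self.traits = traits
        self.children = children
    }
}

// Prepend inherited (outer) traits, then recurse so each child
// sees its parent's combined traits.
func recursivelyApplyTraits(_ inherited: [String], to node: TestNode) {
    node.traits = inherited + node.traits
    for child in node.children {
        recursivelyApplyTraits(node.traits, to: child)
    }
}
```

A leaf test ends up with [global, suite, local], so any "last one wins" resolution naturally favours the most local declaration.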
If I can put out an example:
Let's assume we have an additive trait like .tag, .bug, or perhaps .comment: these simply get appended to the test's trait array. That's very simple to implement, because someone applying a global trait of these types doesn't want it to override the local ones; that is the entire point of having global traits.
On the other hand, I think handling the execution traits is a bit more complex: a user might want a global default time limit but not a hard cap. In that case, the "default" and "enforced" split works well; it gives users enough flexibility while keeping them on a set path. We can discuss finer details, like whether the maximum limit takes precedence, if and when the project gets taken up.
As for the macro, it will be a slightly complex task, but I'm sure we can enforce that only one declaration is allowed per project; if someone tries to declare it twice, we can emit a compiler error.
I had a doubt about how we handle .enabled (veto) traits. Say there are 30 tests, we apply a condition at the global level, and a test declares the opposite at the test level; in that case, I think the lower level, i.e. the test level, should take precedence?
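The "lower level wins" idea could be sketched like this (simplified stand-in types, not swift-testing's ConditionTrait; the resolution order is my assumption):

```swift
// Simple enable/disable condition; real condition traits are richer.
enum TestCondition { case enabled, disabled }

// Walk from the most specific declaration outward; default to enabled
// when nothing is declared at any level.
func resolveCondition(global: TestCondition?,
                      suite: TestCondition?,
                      test: TestCondition?) -> TestCondition {
    test ?? suite ?? global ?? .enabled
}
```

So a global .disabled still applies to the 29 tests that say nothing, but the one test that explicitly opts back in wins.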
There are a lot of traits, but most of the logic for handling them will be repetitive.
As we approach the deadline, I have almost finalized my proposal, and everything we have discussed has been incorporated into it. I have tried to make the proposal easy to understand and have taken all of your feedback so far into consideration.
I would love to hear your thoughts on this approach, because this way there is very little custom logic and it adapts easily to the current system.
Speaking from personal experience, I don't know if I would want to rely on a global trait effectively taking precedence over a local test trait in certain circumstances. I think it would be fine to start simpler here and add more detail as needed.
Thank you for the detailed feedback! I completely agree with your perspective on keeping the precedence model simple. Establishing the straightforward "local wins" logic (where global traits act purely as fallbacks/defaults) is definitely the smartest and safest initial step. If time allows, we can further discuss the complexity of the enforced idea and how a person could override it.
As today is the final day for GSoC submissions, I had already uploaded my formal proposal to the portal prior to this exchange. Therefore, the PDF still includes the discussion on the 'enforced' behavior approach. However, I have made note of this simplification, and I am fully on board to scope the implementation down to this simpler 'Local Wins' model if the project is selected.
I really appreciate all the discussions and guidance I've received from you so far. I’ve learned a ton about the swift-testing architecture, and I sincerely hope we can make this project a reality!