Evolution process discussion

I guess not all of us feel that it was so drawn-out. Perhaps by Swift's standards.

Personally, I've felt for a while that Swift moves too quickly. Things go almost immediately from "good idea" to "let's ship this in the next release of the standard library". That will hopefully be addressed by the new preview library:

Adding these packages serves the goal of allowing for rapid adoption of new standard library features, enabling sooner real-world feedback, and allowing for an initial period of time where that feedback can lead to source- and ABI-breaking changes if needed.

At the same time, you can make a reasonable argument that Swift also moves too slowly (without it being a contradiction). We still have a lot of work to do in order to provide a pleasant interface to basic functionality.

Personally, I think the major job of the standard library is to provide interface types. Whatever other types or algorithms your module uses internally is its own business, but when it comes to interfacing with other modules, everybody needs to agree on what an Array is, for instance. However, the standard library is (more or less) OS-neutral, so it doesn't help with lots of the code we actually write. With that in mind, if you think Swift moves too slowly, I think you need to look beyond the standard library:

  • Foundation needs a radical refresh. It has an important role in the ecosystem: it provides the interface types for OS-level functionality (like the filesystem), just as the standard library does for generic types and algorithms. You definitely want all modules to have the same understanding of FS primitives. However, the current interface doesn't fit well with modern Swift, it's not very pleasant to use, and there's nothing we (the community) can do about it. I think Apple's arguments against an open evolution process have become weaker now that the Swift standard library is itself a Darwin system library; there's no reason for a new Foundation to be less open than the standard library. Maybe this time we'll even get an OrderedSet :scream:

  • We also need a new library (let's call it Basic) for important non-interface types and functionality. This would include useful collections like BTree, and modern interfaces for things like command-line parsing. The reason this shouldn't be part of Foundation is so that its version can float, and so that it can be back-deployed to older Darwin OSes.

In other words:

|          | Stable     | Floating       |
|----------|------------|----------------|
| Generic  | stdlib     | stdlib-preview |
| OS-level | Foundation | "Basic"        |

The major difference between a "Basic" package and stdlib-preview is that things in the preview package are expected to become stable and migrate to the stdlib eventually. Things in Basic wouldn't necessarily carry that expectation.
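To make the "interface types" concern concrete, here's a hedged sketch in Swift. The first half uses Foundation's real FileManager API for listing a directory; the second half imagines the kind of Swift-native surface a refreshed Foundation (or a "Basic" package) might offer instead. `Directory` and `contents()` are invented names for illustration, not real or proposed APIs.

```swift
import Foundation

// Today: listing a directory through Foundation's FileManager,
// an interface that predates modern Swift conventions.
let tmp = URL(fileURLWithPath: "/tmp")
let entries = try FileManager.default.contentsOfDirectory(
    at: tmp,
    includingPropertiesForKeys: nil
)

// Hypothetical: a small value type wrapping the same functionality
// behind a more Swift-native surface. Invented for illustration only.
struct Directory {
    var url: URL

    func contents() throws -> [URL] {
        try FileManager.default.contentsOfDirectory(
            at: url,
            includingPropertiesForKeys: nil
        )
    }
}

for entry in try Directory(url: tmp).contents() {
    print(entry.lastPathComponent)
}
```

Either way, the point stands: whichever module vends the `Directory`-like type becomes the interface everyone else must agree on, which is why it matters where such types live and how openly they evolve.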


Sorry all - this is a bit long...

I think the central issue is that we want to front-load, as much as possible, all the OhCrapIDidn'tThinkOfThat problems that require the proposer (and community) to work through thorny issues. We also need the proposer and the community at large to be able to identify, earlier, the issues that the core team thinks need more work. What draws reviews out is when legitimately important issues trickle in little by little over a long period, or when the proposer and community don't realize that something is going to be an issue for the core team.

So the operative task is getting the people capable of identifying the OhCrapIDidn'tThinkOfThat issues into the thread as early as possible, and making sure the core team gives feedback early to shape the direction of the conversation.

A couple of spitball thoughts:

What about a formal review cadence? Like reviews always drop on the first week of the month? That way infrequent but insightful posters can know the time when their presence could add the most value to the community.

And perhaps the review manager could be more proactive about soliciting feedback. Perhaps spend a few minutes looking up who has ever commented on a similar proposal and mentioning them so they get a notification.

To that end, since it is the proposer's job to do due diligence before proposing, perhaps we could require them to list the threads they've read through for background. Then the review manager can simply pull the posters from those threads for notification come review time.

Finally, while I get the core team not wanting to poison the process by weighing in too early, I think it would be useful to at the very least get a Top 5 Issues list before the review begins so that posters know which part of the proposal it’s most important to kick the tires on.


Moderator note: this post was originally the first in a separate thread; with Dave's permission, I've merged it and my response into this thread.

When SE-0270 was accepted, the review manager invited a discussion about the review process:

From the full text of the acceptance announcement, it's clear that the core team is dissatisfied with something about how the review went. I interpret what I read there as follows—which may be inaccurate; please feel free to correct me:

  • The core team thinks the author's response to counter-proposals could have been more complete.
  • The core team is concerned that when reviews get very involved, potential reviewers tune out.
  • The core team is concerned about the effort it takes from authors to get a proposal accepted.

I'd like to point out first that this was not actually a drawn-out review. There were just 15 posts over the nine days between the announcement of the review starting and its resolution. If the process of getting the proposal accepted was exhausting, it's because the proposal actually got three back-to-back reviews in quick succession. Given that review feedback resulted in substantial revisions each time, it seems clear to me that the proposal should have gone back to the pitch phase for more collaborative design work; instead, we returned to find yet another review had started, with a new revision of a large proposal.

Having worked with most of the members of the core team for many years, I know and respect them, and draw no conclusion about why the proposal was handled this way. That said, to someone less connected it could easily appear that the core team had decided the proposal needed to pass, and that if there were objections, it was just going to keep running reviews until everyone—including core team members—got so sick of the arguing that they decided to “just accept it already.” Therefore, I think handling feedback this way is bad PR, discouraging to reviewers, and so tiring for everyone that it can't help but lead to worse results for the language.

In the end, I didn't say anything about counter-proposals because I don't think they're at the core of a problem—I just put that in the title because it's what we were invited to discuss—and therefore I don't know if any of what I've written actually addresses the core team's concerns. But that's my take on what could have gone better, as a reviewer.


First off, thank you for your response.

I do feel that counter-proposals are at the core of the question. This thread was not meant to be about SE-0270 specifically, but it's an illuminating example. Over the course of SE-0270, we received a lot of feedback from the community. Most of that feedback was over what we would normally consider minor aspects of the proposal: method names and whether to include a few secondary APIs. Your feedback was the major outlier, because while you did make some comments of that kind, you also made a number of much deeper objections and suggested an alternative design that was quite different in nature.

The Core Team is satisfied with how the minor feedback was handled. The author responded by making (in our view) minor revisions to the proposal, and the community seemed generally satisfied with those changes. In the end, there were some disagreements about how to apply the naming guidelines, and the Core Team simply made a decision on those issues. While the overall review did drag out a bit, that was largely due to the US holiday schedule — well, that and the review-manager switch, which we should've foreseen the need for.

Our concern really is about our handling of counter-proposals like yours. In this case, while the proposal author and the Core Team did spend some time discussing it, the community didn't end up providing much feedback about it. In general, the Core Team wants to prevent the evolution process from being overly biased towards the first proposal to make it into review, and ensuring that counter-proposals are adequately "briefed" and discussed during review is an important part of that. That's why we're interested in ideas for how to better call attention to major new ideas that come up during review. That may be as simple as the review manager explicitly directing people's attention to them.


Characterizing the disagreements present at the end as being about how to apply the naming guidelines trivializes something that is, IMO, rather serious, besides being inaccurate: there was no disagreement about applying guidelines in the last review. The first version of the proposal demonstrated (in my view) a lack of clarity about the abstraction being proposed, and the disagreements at the end demonstrated (to me) that this lack of clarity had persisted through two subsequent revisions. It seems to me that the most likely reason the issue is being characterized in this trivial way is that the core team was simply tired of dealing with it.

That is easily explained, in my opinion. Most people's tolerance for disagreement is extremely limited. Because the proposal was sent immediately back into review without confirming that the concerns that spurred the counter-proposals had actually been addressed, there was a high risk that the disagreements behind my counter-proposals would persist, which in fact they did. By the time we got to the second round of review and disagreement was still evident, most people had tuned out.

I should also add that I made counter-proposals in an effort to be constructive rather than simply critical. But even if I had just made substantive criticisms with no proposals, it would have been far better for the core team to wait for a consensus position to form in a second pitch phase. In other words, I still don't think the presence of counter-proposals is the issue.

IMO it's unrealistic to expect to observe broad interest in one person's deeply considered objections, especially if those objections become a repeated source of disagreement over three closely-spaced reviews. The better course would have been to encourage consensus building among those who were interested, outside the pressured context of a formal review.

The Rust community has also discussed this issue, so their observations might be helpful (blog, discussion thread).

I like the idea of breaking up the proposal into several steps each with clearly defined goals. I want to draw particular attention to one of the points mentioned there:

Steady state: At some point, the discussion reaches a “steady state”. This implies a kind of consensus – not necessarily a consensus about what to do, but a consensus on the pros and cons of the feature and the various alternatives. Note that reaching a steady state does not imply that no new comments are being posted. It just implies that the content of those comments is not new.

I feel like evolution proposals sometimes spend too little time talking about the rest of the solution space (alternatives) and the downsides. There are lots of axes to consider, such as tooling support (formatters, linters, syntax highlighting, debuggers, etc.), quality of diagnostics, potential compiler performance cliffs, backwards deployment, and so on. Often these get brought up in the discussion, but the resulting conclusion ("yes, we will take a hit on X but we accept that tradeoff" or "no, we don't want to give up X, which is why we do this") doesn't necessarily make its way back into the proposal document.

Taking time out to discuss this also means that a proposal already incorporates its counter-proposals in a way, because it talks about the different points in the design space and why it prefers one over the other.

At the same time, I feel like the linear nature of the forum is not really well-suited to having multiple overlapping discussions. It is fine if there are 2-3 discussions but with a large proposal, it can quickly balloon into many slightly different conversations and it isn't really clear who is in agreement with whom. Using GitHub issues might be one possible solution. Having "shepherds" (either the review manager or someone else) actively summarize the state of the discussion in between might also be helpful. The different stages kinda' force that to happen as there needs to be some summarizing at the end of each stage.

At the same time, this process might be a bit too much for certain proposals which are small in scope (certainly, what exactly falls under small is subjective... for example, I think SE-0276: Multi-pattern catch clauses qualifies as small whereas a type system feature like variadic generics would not qualify as small). For those, we can continue to have the more lightweight process today.


This is a very interesting document to read. There are as many steps proposed after implementation as there are before, and even end-user explainers and docs are drafted before several stages of evaluation. Moreover, the dynamic between the community and the team is imagined as one where each has a role to play in an ongoing dialogue; commitment to bringing a feature into the language is a staged process, not one where the community simply talks and talks until the core team brings down a verdict. Bringing such a structured process to Swift Evolution could foster a more vibrant community.

People who evaluate proposals come from many different perspectives; for instance, some are speaking to the design of the feature in question, while others are speaking to how the feature will affect the design of their own work. We ask all comers to evaluate the proposed design of certain features, and lately it's become clear that some people misunderstand what feature is being proposed, or even why—which is hardly surprising when docs and explainers appropriate for a diverse audience don't necessarily even exist at review time.

A multi-step process would allow people to better choose what aspects of the process they'd like to be involved in. This will differ from topic to topic even for one person. For instance, I am certainly interested in SwiftPM's maturation, but I am much more capable of speaking to the design of new numerics APIs than of SwiftPM features. Therefore, I would want to be involved more heavily in the earlier design stages of numerics-related proposals but only the refinement stages of SwiftPM-related proposals. I would be more likely to help write explainers for the former, and more likely to want to read explainers for the latter in order to be able to give useful feedback on the user experience.

I feel like fundamental issues have arisen when participants are trying to help design a given feature or have fundamental things to say about the design, and the proposal itself is at the final stages of evaluation or has even gone into extra innings. In the workflow sketched out in the linked RFC, designing is something like step 2, and final approval based on refining the user experience is something like step 10. It is little wonder that the process becomes frustrating for all involved when some are on step 2 and others are trying to complete step 10.

Finally, I feel like the core team needs to think seriously about what stages of the process they actually want to elicit community involvement for, which has clearly differed from proposal to proposal. These expectations need to be set out explicitly.
