Yeah, that's fair. It would be silly to insist that the proposal only be accepted if the implementation will be bug-free, after all.
The scenario I'm thinking of, which I don't think is currently well handled, is this: you're using a library that provides a macro, your use of the macro isn't expanding to what you expect, and you want to step through the expansion in a debugger to figure out what you're doing wrong. The proposed workflow of invoking the macro executable on just the macro to expand and debugging that is a very natural way to go about developing the macro, but it seems quite clunky to set up for a user of the macro.
The ideal experience here is what DrRacket does, where debugging macro expansion is identical to debugging ordinary code. That is largely made possible by the fact that in Racket macro expansion is just one of the phases of execution rather than a "compile time" thing, but it's theoretically possible that Xcode could do something sufficiently clever to fake this. If Xcode isn't planning to do something clever, then what's the best we can do? One idea would be to attach by name to the macro executable and then hit build in Xcode. This probably works well if you have exactly one use of the macro, but doesn't if you want to debug the second one. Maybe there could be a way to tell Swift to perform expansion on a specific use of a macro first?
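Something along these lines might be the closest approximation today (a sketch: `MyMacroPlugin` is a stand-in for whatever the macro executable ends up being named by the build):

```
(lldb) process attach --name MyMacroPlugin --waitfor
```

Because `--waitfor` grabs the next process launched with that name, this runs into exactly the "works for the first use, but not the second" problem described above.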
Hmm, this seems to add a lot of new syntax. Perhaps we could reuse some idioms we already have, like importing macros from modules instead of having to do separate declarations for them. And I think they would be better suited as functions or callable structs. Just my two cents.
@Douglas_Gregor I would appreciate it if you could respond to the questions in my previous post before the review deadline. I would especially like to know whether the problems I encountered were bugs or features. Then we will be able to discuss the implications of that. Thank you.
And there is one more concern I want to mention: Up until recently, if you wanted to use SwiftSyntax in a library (e.g. for code generation in a swift package plugin), you had to fulfill one of these requirements:
1. Depend on the exact SwiftSyntax version matching the toolchain of the library user. This is not feasible because different users use different versions of Xcode, etc.
2. Supply a dylib for every toolchain-platform combination with the library.
Is this still a requirement? The SwiftSyntax changelog seems to suggest it isn't, but I didn't have time to check. If it is still a requirement, it will be very problematic for libraries vending macros.
I can certainly see how it would be valuable. Opaque result types as they exist today for functions/properties/subscripts have a notion of identity that is tied to the function/property/subscript returning them, where the identity of the opaque type is determined by that function/property/subscript and any generic arguments to it. That notion of identity doesn't work for macros, because macros get the source code of the arguments and everything that you can access via the macro expansion context, so there's no way to establish identity. That's the logic I followed in banning opaque result types for macros.
However, we could say that every use of a macro that has an opaque result type produces a completely unique opaque type, similar to what happens when we open an existential. From a compiler perspective, we know the opaque result type after macro expansion.
That's all a very long-winded way to say that I think we can lift the restriction, and allow opaque result types for macros. @Joe_Groff does the above seem plausible to you?
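For concreteness, lifting the restriction would permit something like the following sketch (the protocol, macro name, and #externalMacro module/type are made up, and the declaration spelling follows the proposal):

```swift
protocol Shape { func area() -> Double }

// Hypothetical: not allowed under the current rules.
macro unitCircle() -> some Shape =
  #externalMacro(module: "GeometryMacroPlugin", type: "UnitCircleMacro")

let a = #unitCircle()   // one opaque type, anchored to this particular use
let b = #unitCircle()   // a distinct opaque type, as when opening an existential
```

After expansion, the compiler knows the concrete underlying type for each use, so the usual opaque-type machinery applies.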
I think this is my preferred solution, which we've talked about at various times as being a "join" of the types of each of the returns.
The compiler model here does have some of this information, but it's not being surfaced well to users. When a macro is expanded, the compiler creates an internal buffer containing the source code produced by the macro implementation. If something goes wrong when type-checking that code, the compiler will print an error message pointing into that buffer, with a follow-up note showing the location where the macro expansion was triggered. The result looks something like this (extracted from a compiler test case):
macro:addBlocker:/swift/test/Macros/macro_expand.swift:81:7-81:27:1:4: error: binary operator '-' cannot be applied to two 'OnlyAdds' operands
oa - oa
~~ ^ ~~
/swift/test/Macros/macro_expand.swift:81:7: note: in expansion of macro 'addBlocker' here
_ = #addBlocker(oa + oa)
^~~~~~~~~~~~~~~~~~~~
The addBlocker macro is my silly example which replaces + with -, so the oa - oa code is the result of macro expansion of #addBlocker(oa + oa). That weird macro:addBlocker: file name is the name of the internal buffer, showing that the macro-expanded source code is there in the compiler, but only one line of it is shown. We can do better here, for example by leveraging the nifty formatter in swift-syntax to show more of the macro expansion buffer in the compiler's output. (This idea just came to me; I haven't tried it yet.)
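(As an aside, the whole transformation behind that silly macro is tiny. Here's a rough sketch of what an addBlocker-style implementation could look like against the prototype's ExpressionMacro protocol; the module names and exact signatures below are approximations and have shifted during the prototype.)

```swift
import SwiftSyntax
import SwiftSyntaxBuilder   // for building the result via string interpolation
import SwiftSyntaxMacros    // module name approximate in the prototype

struct MissingArgumentError: Error, CustomStringConvertible {
  var description: String { "#addBlocker requires an argument" }
}

public struct AddBlocker: ExpressionMacro {
  public static func expansion(
    of node: MacroExpansionExprSyntax,
    in context: inout MacroExpansionContext
  ) throws -> ExprSyntax {
    guard let argument = node.argumentList.first?.expression else {
      throw MissingArgumentError()
    }
    // Flip every `+` in the argument's source text to `-`. A textual swap
    // keeps the sketch short; a real implementation would rewrite the
    // operator tokens with a SyntaxRewriter instead.
    let flipped = String(argument.description.map { $0 == "+" ? "-" : $0 })
    return "\(raw: flipped)"
  }
}
```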
If you're using Xcode, you'll have to look into the build log to see the compiler output. Clang and Swift use a different serialized format for diagnostics when talking to IDEs, so all of this information is lost right now. I have an RFC on the LLVM Discourse to extend that serialized format to include macro-expansion buffers. Swift's compiler already implements it, but IDEs would need to pick up the new APIs to provide a good experience here.
That looks like a bug in the compiler implementation, rather than a limitation of the design.
It's a limitation of the current implementation, thanks!
Thank you for diving deep into result builders-via-macros! It's great to see what can be achieved with macros, what is still out of reach, and where the pain points are.
Sorry 'bout the delay. Answers above.
Nope! With the advent of the new Swift parser, the swift-syntax package is all-Swift and fully standalone.
Sorry I missed your questions before, and you bring up a good point: given that we don't expose any other source location information, why these? Perhaps it would be better to remove them entirely and extend MacroExpansionContext later, with an eye toward a more complete approach to source-location information for source code that macros work with.
[EDIT: Yes, I'd like to remove moduleName and fileName. Your comment about the initializer made me realize that we should also switch MacroExpansionContext to a class-bound protocol so there can be separate macro expansion context implementations for, e.g., testing vs. compilation. I put up a pull request with these changes.]
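To illustrate the direction (a sketch only; the actual requirements are whatever the pull request lands on):

```swift
// Class-bound, so the compiler and test harnesses can supply different
// context implementations behind the same interface.
public protocol MacroExpansionContext: AnyObject {
  /// Produce a name guaranteed not to collide with names in the user's code.
  func createUniqueLocalName() -> String
}

// A trivial context for unit tests (hypothetical).
public final class TestMacroExpansionContext: MacroExpansionContext {
  private var counter = 0
  public init() {}
  public func createUniqueLocalName() -> String {
    defer { counter += 1 }
    return "__macro_local_\(counter)"
  }
}
```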
I don't think "not realistically usably by third-parties" is a fair characterization. SwiftLint is actively migrating over to SwiftSyntax, and a number of folks here in this thread have managed to build nontrivial macros with it despite the limitations of the prototype. Yes, we can improve tutorials (some of which ha happened during this review), and as more examples and docs are written over time it'll get easier to get started.
There are several kinds of "stepped through in a debugger" that apply to macros. The first is stepping through the syntactic transformation as it occurs, to debug the workings of the macro itself. The macro being a normal Swift program means this is just the normal debugger. Since we're doing syntactic transforms, stepping through the transform involves putting the Swift code you want to start with in a string literal that initializes a SourceFileSyntax:
let sf: SourceFileSyntax =
"""
my code that uses macros
"
and then calling sf.expand(macros: [MyMacroType.self], in: testContext) and checking whether you get the source code out. The simple test in the example repository is all it takes, and you can single-step through the transformation.
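For instance, a unit test along these lines is all you need to set a breakpoint inside the macro type and single-step (treat the context initializer and the exact expand signature as approximations of the prototype's API; AddBlocker is the example macro from earlier in the thread):

```swift
import SwiftSyntax
import XCTest

final class AddBlockerTests: XCTestCase {
  func testExpansion() {
    let sf: SourceFileSyntax =
      """
      _ = #addBlocker(oa + oa)
      """
    // Assumed: the prototype's context can be built from a module/file name.
    let testContext = MacroExpansionContext(moduleName: "TestModule", fileName: "test.swift")
    let expanded = sf.expand(macros: [AddBlocker.self], in: testContext)
    XCTAssertTrue(expanded.description.contains("oa - oa"))
  }
}
```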
If you want to do it "online", as part of the compiler running, you can break at the start of the macro program. Debuggers can do that (by following the process launch), and one could make it a little easier with extra tooling.
This is actually a huge benefit of this "macros are normal Swift programs" approach, because all the existing tools apply. In contrast, if we had some kind of macro interpreter in the compiler---say, a more declarative macro scheme, or an interpreted sublanguage---then you would need special debugging tools.
A second thing "debugging macros" can mean is understanding what's failing to type check in the code that resulted from the expansion. Perhaps the expansion is correct, but your inputs were wrong in some way, and you need to make sure you see the result of the expansion to understand the diagnostics. This is less "step through in a debugger", yet still important. I gave a long-winded answer about how the system is designed to track all of the information needed to see through the layers of macro expansion.
Finally, "debugging macros" could mean debugging the code produced by macro expansion. This means ensuring that the results of macro expansion persist in some form that can be used for debugging, and that one can (say) step into a macro expansion to walk into the expanded code. This is absolutely achievable in the implementation.
I vehemently disagree with this statement. The model has been designed to be debuggable. But Xiaodi is correct in drawing the line between "can be implemented well" and "is guaranteed to be implemented well". The former is relevant to the design, the latter is not.
I have a question about the intended scope of the use case for macros. Is it just supposed to replace ad-hoc code generation, or is the vision for broader use in “everyday” Swift code? Code generation has always felt like a hack, so macros make sense there as a way to migrate such generation to an “official” mechanism in the language. However, the rampant abuse of macros in C and other C-family languages (including Objective-C) makes me worry about the readability and maintainability of post-macro-introduction Swift code, since these macros won’t be hygienic; the key issue is the possibility of unexpected side effects and identifier conflicts that aren’t visible in the raw source code. Great tooling can mitigate these issues by offering inline expansions and whatnot, but as other people have mentioned, these Swift Evolution proposals can only consider whether good tooling could be built, not whether it actually will be built. Plus, Swift should be a great language even without specific tooling, such as on new platforms and in new environments, lest it repeat the mistakes of the Java ecosystem.
No problem. My main worry was that my issues with the current implementation were by design. Based on your answers, this seems not to be the case.
That sounds great!
This would be good. However, the macro implementation would still need to put the code inside a closure, which may have other repercussions.
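To make that concrete, here's roughly the shape such an expansion would take (the builder and inputs are made up for illustration):

```swift
// A toy builder, standing in for whatever result builder the macro emulates.
enum SumBuilder {
  static func buildExpression(_ value: Int) -> Int { value }
  static func buildBlock(_ parts: Int...) -> Int { parts.reduce(0, +) }
}

// The builder calls get wrapped in an immediately-invoked closure so the
// whole expansion stays a single expression.
let result = { () -> Int in
  let v0 = SumBuilder.buildExpression(1)
  let v1 = SumBuilder.buildExpression(2)
  return SumBuilder.buildBlock(v0, v1)
}()
// result == 3. Note that `return` inside the closure no longer refers to the
// enclosing function, which is one of the repercussions of the wrapping.
```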
That's a slight improvement. However, it may not always be obvious which generated code was caused by which piece of original code. And the compiler does not have this knowledge.
I had the idea that the macro could add #sourceLocation() annotations to the expanded code. This way, errors would point to the correct code location. However, the error message itself may still be confusing. For that to work, the macro would need access to the path of the source file. I am not sure if the fileName property of MacroExpansionContext or #file could be used to make that work.
Whatever is decided, it should definitely be possible for the macro to use #sourceLocation().
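For example, expanded code could carry directives like these so that diagnostics on generated statements point back at the user's file (the path and line number are placeholders):

```swift
#sourceLocation(file: "Sources/App/Models.swift", line: 42)
let answer = 41 + 1   // an error on this line would be reported at Models.swift:42
#sourceLocation()     // reset to the expansion buffer's own location
```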
Great!
With my questions answered, I still stand by this sentiment.
I also think so. While it is not easy to work with in its current (and past) state, you can definitely use it to build cool and reliable products. I have built a library with it that automatically generates API code.
Details for anyone interested
The library is called SwiftyBridges. It can be used by Vapor servers to automatically generate API code for server and client. The library uses SwiftSyntax to scan struct declarations and generates an API endpoint for every public method.
I am currently in the process of completely overhauling the library. The new version uses SwiftSyntax to also scan for imports, protocol conformances and custom annotations in comments.
I believe that for the macro you describe, you would need a statement macro and not an expression macro.
It's not possible now, because as you noted, we cannot put a return into an expression like that. This intersects a bit with the discussion of the if/switch expression proposal, and the ideas around early returns there. For example, if do became an expression and allowed returns, then that could be used in expression macros. Personally, I'm not sure this is something we'd ever want to allow, because I don't like the idea of hiding control flow within expressions.
Right. I mentioned this a bit in my review of if/switch expressions, where I note that adding multi-statement do expressions would be really nice for macros.
We could add something like this, although I was thinking it would be an operation on the MacroExpansionContext that gives a source location for the syntax node it's passed. It's also conceivable that the compiler could infer the relationship in many cases. When your macro is splicing in syntax nodes from its arguments, swift-syntax could maintain identity for those syntax nodes (or a mapping). Perhaps we could use that to establish the link between the code you wrote and the code that ended up in the instantiation. I still don't know how to present that to the user in a general way.
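A hypothetical shape for that operation (not part of the proposal; SourceLocation and Syntax are the existing swift-syntax types):

```swift
import SwiftSyntax

public protocol MacroExpansionContext {
  // ...existing requirements...

  /// Hypothetical: if `node` (or the node it was spliced from) originated in
  /// the macro's arguments, return its location in the user's source file.
  func originalLocation(of node: Syntax) -> SourceLocation?
}
```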
this is good to hear, although i'd feel less apprehensive about it if LLDB already had a GUI (a portable GUI, maybe similar to the kind that you can make with Dear ImGui).
Like, if LLDB were already in the kind of business where it depends on HarfBuzz and renders glyphs[*], then i would expect that it would have no problem toggling an exploded view of an invocation of a macro (or toggling the watch-window display of variables whose declarations were generated by that invocation).
in other words, the lowest quality-of-implementation would be reasonably good.
but at this time we don't have that gui, so i have doubts about debugging QoI on linux & windows (when stepping through code that contains a macro invocation).
[*] and i don't know why it's not in that business, because there's only so much you can do by sending a stream of UTF-8 units to a terminal emulator
I'd say that LLDB already has several "GUI" frontends---VS Code, Xcode, Emacs, and so on. That's where you want integration for debugger features like this. The macro expansion buffer can already be encoded in a DWARF 5 extension so this is something that can be generally supported.
I believe that the proposal's approach here---type-checked inputs, constraining the grammar of the outputs, affordances for creating unique names, etc.---mitigates nearly all of these concerns despite not meeting the strict definition of "hygienic". Do you consider these mitigations insufficient?
[apologies up-front; the following may seem a little tangential, but I'm going to bring it back to macros in Swift.]
Well... there's a problem with all of those frontends.
Part of the problem is that, in debugging mode, they also allow you to modify text, which is something I basically never want to do while stepping through code in a debug session. An accidental edit can lead to an undesired rebuild, which means i have to be constantly vigilant about, say, not hitting the space bar (harmless in vim's normal mode). That vigilance consumes some of the attention i need for debugging, for understanding whatever aspect of the program i'm stepping through. (Though if a debugger were to add synchronization with an editor, kind of like in skim.app, that would be cool.)
Another part is that, because they are primarily dedicated to all the other things that IDEs/editors do, the keybinding space is mostly occupied and thus unavailable for debugger actions. (e.g. i can't hit 'n' to next-step; i have to do a weird key combo.)
basically, all of those frontends added a debugging UI that is secondary to the main UI, and that makes the debugging experience cumbersome, distracting, limited (because non-debugging controls also consume visual space), & unnecessarily time-consuming.
What I really want is something like RemedyBG, but baked into LLDB, so that if I have LLDB, then I also have a good UI that's dedicated exclusively to debugging. (Side note: if I had that, I wouldn't need CMake's generators for Xcode or Visual Studio anymore. The absence of a GUI in LLDB is driving demand for CMake.)
All of this may seem separate from macros as a language feature, but when a non-emacs user is trying to track down the root cause of a showstopping problem with a swift program running on linux at 5pm on their kid's birthday, if they don't have a quick & obvious way to see what a macro invocation produced—and if its output contains an important clue about the bug—then they're not going to see a clear delineation between the language and the tooling, and they shouldn't have to.
I feel like you're missing the point. LLDB can absolutely be extended to show the macro expansion buffer. This is why I pointed out that there's a DWARF 5 extension that allows us to encode macro expansion buffers into debug info:
Now, this requires work. LLVM doesn't currently implement this extension, so we'd need to do that, and then have the Swift compiler take advantage of it. Then LLDB would need to integrate that information when it's found in the DWARF. At that point, one could single-step into the result of a macro expansion, put a breakpoint in there, etc.
In the interim, one can probably fake a lot of this by dumping temporary files that contain macro-expansion buffer contents and pointing debug info at those. This has a whole heap of downsides, but it gets some of the experience quicker, and with less engineering work.
Either way, you'll make it to the kid's birthday party.
I did understand that DWARF does its part to support this, and I trust that LLDB will add support for reading these expansion buffers, and that its terminal interface will allow people to step into the exploded form of a macro invocation. It's just that I'm having difficulty seeing how the terminal interface can present this in a way that isn't worse than what it does now (which, as a consequence of the fact that it's a terminal interface, is pretty bad to begin with).
I guess LLDB could emit, to a temporary file, an exploded version of an entire source file that uses macros. Then when it "enters an expansion" from the collapsed file (the one the user wrote), it can set the IP's src location to one inside the explosion file. When you step beyond the explosion (or "step out", which in this context hopefully means "step to just beyond the invocation"), it would switch back.
This may be ok as long as the user doesn't want a view that's only partially exploded (in the case of an expansion that contains nested expansions, to skip over some of the nested expansions, but not all of them, and not the outermost expansion), which at some point they will.
So i keep coming back to this assertion: that a terminal interface is not a user interface. It's not comparable to the experience of using a GUI dedicated to debugging.
Although a user interface could be added to LLDB (at which point it should be easy to add a button or something that makes LLDB toggle between exploded & collapsed views of an invocation of a macro when you click on it with a mouse cursor), it's been something like 15 years since its introduction, and it hasn't happened yet, so I have to assume that it still won't have a UI when people start debugging code that contains uses of macros.
I think we can safely say that any argument with this premise is firmly outside the scope of this proposal review, possibly even the Swift open-source project in its entirety.
Well, no other scope was created to talk about the user's overall experience of trying to read macro-generated code (which will include their attempt to read macro-generated code in a debug session, which, for at least some inputs, will hinge on the debugger's UI or lack thereof), so I thought it might as well be mentioned here.