Sorry I missed your questions before, and you bring up a good point: given that we don't expose any other source location information, why these? Perhaps it would be better to remove them entirely and extend MacroExpansionContext later, with an eye toward a more complete approach to source-location information for source code that macros work with.
[EDIT: Yes, I'd like to remove moduleName and fileName. Your comment about the initializer made me realize that we should also switch MacroExpansionContext to a class-bound protocol so there can be separate macro expansion context implementations for, e.g., testing vs. compilation. I put up a pull request with these changes.]
I don't think "not realistically usable by third parties" is a fair characterization. SwiftLint is actively migrating over to SwiftSyntax, and a number of folks here in this thread have managed to build nontrivial macros with it despite the limitations of the prototype. Yes, we can improve tutorials (some of which has happened during this review), and as more examples and docs are written over time it'll get easier to get started.
There are several kinds of "stepped through in a debugger" that apply to macros. The first is stepping through the syntactic transformation as it occurs, to debug the workings of the macro itself. Because the macro is a normal Swift program, this just means using the normal debugger. Since we're doing syntactic transforms, stepping through the transform involves putting the Swift code you want to start with in a string literal that initializes a SourceFileSyntax:
```swift
let sf: SourceFileSyntax =
  """
  // my code that uses macros
  """
```
and then calling `sf.expand(macros: [MyMacroType.self], in: testContext)` and checking whether you get the expected source code out. The simple test in the example repository is all it takes, and you can single-step through the transformation.
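Putting those pieces together, a unit test for a macro in the prototype might look roughly like this. It is a sketch: `StringifyMacro`, `testContext`, and the exact `expand(macros:in:)` spelling come from the prototype repository and may change before anything ships:

```swift
import SwiftSyntax
import XCTest

final class StringifyMacroTests: XCTestCase {
  func testExpansion() {
    // The code we want to transform, written as a string literal.
    let sf: SourceFileSyntax =
      """
      let (result, code) = #stringify(x + y)
      """

    // Expand the macros and compare against the expected source.
    // Put a breakpoint inside StringifyMacro's expansion method to
    // single-step through the transformation itself.
    let expanded = sf.expand(macros: [StringifyMacro.self], in: testContext)
    XCTAssertEqual(
      expanded.description,
      """
      let (result, code) = (x + y, "x + y")
      """
    )
  }
}
```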
If you want to do it "online", as part of the compiler running, you can break at the start of the macro program. Debuggers can do that (by following the process launch), and one could make it a little easier with extra tooling.
This is actually a huge benefit of this "macros are normal Swift programs" approach, because all the existing tools apply. In contrast, if we had some kind of macro interpreter in the compiler---say, a more declarative macro scheme, or an interpreted sublanguage---then you would need special debugging tools.
A second thing "debugging macros" can mean is understanding what's failing to type check in the code that resulted from the expansion. Perhaps the expansion is correct, but your inputs were wrong in some way, and you need to make sure you see the result of the expansion to understand the diagnostics. This is less "step through in a debugger", yet still important. I gave a long-winded answer about how the system is designed to track all of the information needed to see through the layers of macro expansion.
Finally, "debugging macros" could mean debugging the code produced by macro expansion. This means ensuring that the results of macro expansion persist in some form that can be used for debugging, and that one can (say) step into a macro expansion to walk into the expanded code. This is absolutely achievable in the implementation.
I vehemently disagree with this statement. The model has been designed to be debuggable. But Xiaodi is correct in drawing the line between "can be implemented well" and "is guaranteed to be implemented well". The former is relevant to the design, the latter is not.
I have a question about the intended scope of the use case for macros. Is it just supposed to replace ad-hoc code generation, or is the vision for broader use in “everyday” Swift code? Code generation has always felt like a hack, so macros make sense there: they migrate such generation to an “official” mechanism in the language. However, the rampant abuse of macros in C and other C-family languages (including Objective-C) makes me worry about the readability and maintainability of post-macro-introduction Swift code, since these macros won’t be hygienic; the key issue is the possibility of unexpected side effects and identifier conflicts that aren’t visible in the raw source code. Great tooling can mitigate these issues by offering inline expansions and whatnot, but as other people have mentioned, these Swift Evolution proposals can only consider whether good tooling could be built, not whether it actually will be built. Plus, Swift should be a great language even without specific tooling, such as on new platforms and in new environments, lest it repeat the mistakes of the Java ecosystem.
No problem. My main worry was that my issues with the current implementation were by design. Based on your answers, this seems not to be the case.
That sounds great!
This would be good. However, the macro implementation would still need to put the code inside a closure, which may have other repercussions.
That's a slight improvement. However, it may not always be obvious which generated code was caused by which piece of original code. And the compiler does not have this knowledge.
I had the idea that the macro could add #sourceLocation() annotations to the expanded code. This way, errors would point to the correct code location. However, the error message itself may still be confusing. For that to work, the macro would need access to the path of the source file. I am not sure if the fileName property of MacroExpansionContext or #file could be used to make that work.
Whatever is decided, it should definitely be possible for the macro to use #sourceLocation().
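To illustrate the idea (a sketch of possible expansion output, not something the proposal specifies): `#sourceLocation` already exists in Swift, so an expansion could in principle bracket generated lines like this, assuming the macro can learn the original file path and line:

```swift
// Hypothetical expansion output. "Sources/App/main.swift" and line 42
// stand in for the location of the original macro use, which the macro
// would need to obtain from its expansion context.
#sourceLocation(file: "Sources/App/main.swift", line: 42)
let (result, code) = (x + y, "x + y")
#sourceLocation() // reset to the actual location in the expansion buffer
```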
With my questions answered, I still stand by this sentiment.
I also think so. While it is not easy to work with in its current (and past) state, you can definitely use it to build cool and reliable products. I have built a library with it that automatically generates API code.
Details for anyone interested
The library is called SwiftyBridges. It can be used by Vapor servers to automatically generate API code for server and client. The library uses SwiftSyntax to scan struct declarations and generates an API endpoint for every public method.
I am currently in the process of completely overhauling the library. The new version uses SwiftSyntax to also scan for imports, protocol conformances and custom annotations in comments.
I believe that for the macro you describe, you would need a statement macro and not an expression macro.
It's not possible now, because as you noted, we cannot put a return into an expression like that. This intersects a bit with the discussion of the if/switch expression proposal, and the ideas around early returns there. For example, if do became an expression and allowed returns, then that could be used in expression macros. Personally, I'm not sure this is something we'd ever want to allow, because I don't like the idea of hiding control flow within expressions.
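To make the limitation concrete (a sketch; `#twice` is a made-up macro name): an expression macro can only wrap statements in an immediately-applied closure, so any `return` it produces exits that closure rather than the enclosing function:

```swift
// A sketch of what an expression macro use like `let y = #twice(x)`
// could expand to. The closure makes statements legal in expression
// position, but it changes what `return` means:
let x = 21
let y = { () -> Int in
  let tmp = x        // arbitrary statements are allowed inside...
  return tmp * 2     // ...but this `return` only exits the closure,
}()                  // never the function containing the macro use.
```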
We could add something like this, although I was thinking it would be an operation on the MacroExpansionContext that gives a source location for the syntax node it's passed. It's also conceivable that the compiler could infer the relationship in many cases. When your macro is splicing in syntax nodes from its arguments, swift-syntax could maintain identity for those syntax nodes (or a mapping). Perhaps we could use that to establish the link between the code you wrote and the code that ended up in the instantiation. I still don't know how to present that in a general way to the user.
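Such a context operation might look roughly like the following; `location(of:)` and its return type are hypothetical names for illustration, not part of the current proposal:

```swift
import SwiftSyntax

// Hypothetical addition to the macro expansion context protocol.
protocol MacroExpansionContext: AnyObject {
  /// Return the location in the original source file of a syntax node
  /// that was passed to the macro (or derived from its arguments),
  /// or nil if the node was newly synthesized by the macro.
  func location(of node: Syntax) -> SourceLocation?
}
```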
This is good to hear, although I'd feel less apprehensive about it if LLDB already had a GUI (a portable GUI, perhaps similar to the kind that you can make with Dear ImGui).
Like, if LLDB were already in the kind of business where it depends on HarfBuzz and renders glyphs[*], then I would expect that it would have no problem toggling an exploded view of an invocation of a macro (or toggling the watch-window display of variables whose declarations were generated by that invocation).
In other words, the lowest quality of implementation would be reasonably good.
But at this time we don't have that GUI, so I have doubts about debugging QoI on Linux & Windows (when stepping through code that contains a macro invocation).
[*] And I don't know why it's not in that business, because there's only so much you can do by sending a stream of UTF-8 units to a terminal emulator.
I'd say that LLDB already has several "GUI" frontends---VS Code, Xcode, Emacs, and so on. That's where you want integration for debugger features like this. The macro expansion buffer can already be encoded in a DWARF 5 extension so this is something that can be generally supported.
I believe that the proposal's approach here---type-checked inputs, constraining the grammar of the outputs, affordances for creating unique names, etc.---mitigates nearly all of these concerns despite not meeting the strict definition of "hygienic". Do you consider these mitigations insufficient?
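As one concrete illustration of the unique-name affordance (the method name below is a placeholder; the prototype and any final API may spell it differently):

```swift
// Inside a hypothetical macro implementation. `context.createUniqueName()`
// stands in for the context's unique-name operation, which yields an
// identifier guaranteed not to collide with anything the user wrote.
let scratch = context.createUniqueName()
return """
  { () -> Int in
    let \(scratch) = makeValue()
    return \(scratch)
  }()
  """
```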
[apologies up-front; the following may seem a little tangential, but I'm going to bring it back to macros in Swift.]
Well... there's a problem with all of those frontends.
Part of the problem is that, in debugging mode, they also allow you to modify text, which is something I basically never want to do while stepping through code in a debug session. An accidental edit can lead to an undesired rebuild, which means I have to be constantly vigilant about, say, not accidentally hitting the space bar (which is harmless in vim's normal mode). That vigilance consumes some of the attention I needed for debugging, for understanding some aspect of the program I'm stepping through. (Though if a debugger were to add synchronization with an editor, kind of like in Skim.app, that would be cool.)
Another part is that, because they are primarily dedicated to all the other things that IDEs/editors do, the keybinding space is mostly occupied and thus unavailable for debugger actions. (E.g., I can't hit 'n' to step to the next line; I have to use a weird key combo.)
Basically, all of those frontends added a debugging UI that is secondary to the main UI, and that makes the debugging experience cumbersome, distracting, limited (because non-debugging controls also consume visual space), and unnecessarily time-consuming.
What I really want is something like RemedyBG, but baked into LLDB, so that if I have LLDB, then I also have a good UI that's dedicated exclusively to debugging. (Side note: if I had that, I wouldn't need CMake's generators for Xcode or Visual Studio anymore. The absence of a GUI in LLDB is driving demand for CMake.)
All of this may seem separate from macros as a language feature. But when a non-Emacs user is trying to track down the root cause of a showstopping problem with a Swift program running on Linux at 5pm on their kid's birthday, if they don't have a quick & obvious way to see what a macro invocation produced, and its output contains an important clue about the bug, then they're not going to see a clear delineation between the language and the tooling, and they shouldn't have to.
I feel like you're missing the point. LLDB can absolutely be extended to show the macro expansion buffer. This is why I pointed out that there's a DWARF 5 extension that allows us to encode macro expansion buffers into debug info:
Now, this requires work. LLVM doesn't currently implement this extension, so we'd need to do that, and then have the Swift compiler take advantage of it. Then LLDB would need to integrate that information when it's found in the DWARF. At that point, one could single-step into the result of a macro expansion, put a breakpoint in there, etc.
In the interim, one can probably fake a lot of this by dumping temporary files that contain macro-expansion buffer contents and pointing debug info at those. This has a whole heap of downsides, but it gets some of the experience quicker, and with less engineering work.
Either way, you'll make it to the kid's birthday party.
I did understand that DWARF does its part to support this, and I trust that LLDB will add support for reading these expansion buffers, and that its terminal interface will allow people to step into the exploded form of a macro invocation. It's just that I'm having difficulty seeing how the terminal interface can present this in a way that isn't worse than what it does now (which, as a consequence of the fact that it's a terminal interface, is pretty bad to begin with).
I guess LLDB could emit, to a temporary file, an exploded version of an entire source file that uses macros. Then when it "enters an expansion" from the collapsed file (the one the user wrote), it can set the instruction pointer's source location to one inside the explosion file. When you step beyond the explosion (or "step out", which in this context hopefully means "step to just beyond the invocation"), it would switch back.
This may be ok as long as the user doesn't want a view that's only partially exploded (in the case of an expansion that contains nested expansions, to skip over some of the nested expansions, but not all of them, and not the outermost expansion), which at some point they will.
So I keep coming back to this assertion: a terminal interface is not a user interface. It's not comparable to the experience of using a GUI dedicated to debugging.
Although a user interface could be added to LLDB (at which point it should be easy to add a button or something that makes LLDB toggle between exploded & collapsed views of an invocation of a macro when you click on it with a mouse cursor), it's been something like 15 years since its introduction, and it hasn't happened yet, so I have to assume that it still won't have a UI when people start debugging code that contains uses of macros.
Well, no other scope was created to talk about the user's overall experience of trying to read macro-generated code (which will include their attempt to read macro-generated code in a debug session, which, for at least some inputs, will hinge on the debugger's UI or lack thereof), so I thought it might as well be mentioned here.
The partially-expanded approach is the one we're building toward. It retains all of the information needed to see what happened with macro expansion, and one can build a fully-expanded view on top of it. We already have this integrated in the diagnostics infrastructure, and I've shown the path to debugger integration.
A terminal interface is a user interface. It may not be your preferred one, but many folks do prefer to work at the terminal, and often that's all you get because you're ssh'd into some server somewhere. Many folks prefer to work in an IDE. Some folks prefer to use bespoke tools for specific tasks. That's all good. The role of the language is to make it possible to make those tools good, by having a model that admits good tooling. The role of the compiler and language-focused services like SourceKit is to provide the information needed to build that tooling. Then it's up to vendors to actually make that tooling, and users can pick the tooling that works best for them.
Your entire thrust seems to be that, without a bespoke graphical tool for debugging macro expansions, it's impossible to accept the language design. I reject that premise completely. If we can't meet folks where they are, with their preferred tools, we have a much bigger problem. So the LLDBs and IDEs of the world need to have access to the information they need about macro expansions, to build good tooling, and maybe some day someone builds the debugger GUI you want. But there is no way that will ever make sense as a prerequisite to language design.
In terms of Conway's law: I do think that, within the overall LLVM system, there ought to be at least one subsystem that presents user interfaces like the ones I'm thinking of (ones that (1) are ported to all supported host platforms that support modern visual displays and (2) are vendor-neutral), but the sub-organization that would do the work of creating such a subsystem does not exist at this time. (If it did, I would have posted to their forum instead of this one.)
(I didn't think of it in these terms until last night though, hence the messy initial post of mine from a few days ago.)
This forum, in contrast, is dedicated to issues that are more obviously related to the design of the subsystem that performs translation from Swift source code to machine code. My comments in previous posts are therefore out of scope here. So, with regard to responses to UI-related comments like this:
...I will not respond on this forum (even though i really want to).
...except for one thing:
Glad to see this!
Everything else that's in-scope LGTM, and I don't mean to hold you up any further. Please proceed!
(also: good work!)
Macro debugging experience is important. We can read code and send pull requests for Swift, LLDB, and SwiftSyntax, but not for Xcode, and the Swift core team is one of the few groups that can talk directly to the Xcode team. I agree that tool-related topics are outside the scope of the language feature review, but since we have no visibility into Xcode's plans, I would like to know what kind of macro support is intended for it.
For example, it would be nice to be able to see the source code after expanding a macro with a feature like the current Jump to Generated Interface.