I would also add that I think @xedin's first PR about magic literals should definitely be accepted - unlike with Type declarations, I don't see another way of getting the names of functions and properties.
However, I have one question: when should we run the Attribute.allInstances(of:) function? I assume that we should re-run it every time a .dylib/.so/.dll is loaded or unloaded. I can imagine some C __attribute__ that we could use, but I think this should somehow be part of the design - possibly with an API that lets us iterate loaded modules and their related metadata instances.
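For concreteness, the kind of hook I have in mind might look something like this. This is entirely hypothetical - none of these names exist today, and the registration function is a stub so the sketch compiles:

```swift
// Entirely hypothetical sketch: a runtime hook for image load/unload
// events, so caches built from Attribute.allInstances(of:)-style queries
// can be refreshed. ImageEvent and onImageEvent are invented names.

enum ImageEvent {
    case loaded(path: String)
    case unloaded(path: String)
}

// Stub standing in for a registration API the runtime might expose.
// A real design would call `handler` from the runtime's dlopen/dlclose
// bookkeeping; this stub just fires one fake event synchronously.
func onImageEvent(_ handler: @escaping (ImageEvent) -> Void) {
    handler(.loaded(path: "/usr/lib/libExample.dylib"))
}

var cacheInvalidations = 0
onImageEvent { event in
    // Here is where we'd re-run the reflection queries and rebuild caches.
    cacheInvalidations += 1
}
print(cacheInvalidations) // prints 1
```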
It’d be great if the authors could add the Godot bindings example to the motivation, especially if it uses metadata for functions, as I’ve had trouble imagining such use cases.
@xedin @hborla do you envision a way to integrate custom reflection metadata with property wrappers? I have a project where I use a CKField property wrapper to abstract a CloudKit model, and I create dummy instances to retrieve a field key:
(Model.init()[keyPath: fieldKeyPath] as! any CKFieldProtocol).key
Reflection metadata seems like the right tool to retrieve this key, but I can't simply get rid of the property wrapper in this case.
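To make the workaround concrete, here is a minimal, self-contained sketch of the pattern described above (CKFieldProtocol, CKField, and Model are simplified stand-ins for the real CloudKit-backed types):

```swift
// Simplified stand-in for the real CloudKit field abstraction.
protocol CKFieldProtocol {
    var key: String { get }
}

@propertyWrapper
struct CKField<Value>: CKFieldProtocol {
    let key: String
    var wrappedValue: Value

    init(wrappedValue: Value, key: String) {
        self.key = key
        self.wrappedValue = wrappedValue
    }
}

struct Model {
    @CKField(key: "title") var title: String = ""

    // The synthesized backing storage (_title) is private, so a key path
    // to it has to be exposed from inside the type.
    static let titleField: PartialKeyPath<Model> = \Model._title
}

// A dummy instance is built purely to reach the wrapper and read its key.
let key = (Model()[keyPath: Model.titleField] as! any CKFieldProtocol).key
print(key) // prints "title"
```

This is exactly the kind of eager dummy-instance construction that an init(attachedTo:)-style metadata query could avoid.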
I'm glad to see that there's a "considered alternative" section for using reflection types in the init(attachedTo:) signature, but I'm hoping we don't have to give up on that just yet. Something about the non-uniformity of the various init overloads seems ... suboptimal to me. Especially because I imagine it'll be pretty common for people to want to introspect details of the "attachee" (do we have a name for this?), and the reflection types are a very natural way to communicate that.
For example, a web framework might want to do something like this:
// Redundant name.
// It would be nice if the name of the function could be accessed directly
@get(name: "index")
func index() -> HTTPResponse { ... }
We considered using types from the Reflection module to represent declarations which have reflection attributes. For example, Reflection’s Field could be used as the type of the first parameter in init(attachedTo:) when a reflection attribute is attached to a property declaration.
IMO this seems like a useful case study that should influence the design of the reflection library. Perhaps it's useful to have an AnyField vs Field<T> distinction, like with AnyKeyPath/KeyPath<Root, Value>.
All of these cases are supported - if the attribute is associated with a class/struct/enum, #function gets the name of that type; if it's a property, the name of the property; if it's a function/method, the name of the method.
If we had access to a FunctionMirror, ClassMirror, (or whatever the "reflection types" end up being called), we could get at a name: String property without needing a magic literal.
Though I'm curious how much overhead would be involved in instantiating those mirror objects.
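Here's a hypothetical sketch of what I mean - no such reflection type exists today, and FunctionMirror/Route are invented names - showing how the earlier @get example could drop the redundant name argument:

```swift
// Invented stand-in for a future reflection type describing a function
// declaration; a real one would carry much more than the name.
struct FunctionMirror {
    let name: String
}

struct Route {
    let name: String

    // A reflection attribute's init(attachedTo:) could read the name
    // straight off the mirror instead of taking a magic literal or a
    // redundant string argument at the use site.
    init(attachedTo function: FunctionMirror) {
        self.name = function.name
    }
}

let route = Route(attachedTo: FunctionMirror(name: "index"))
print(route.name) // prints "index"
```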
Speaking of overhead, what's the memory and runtime cost of this system? Reflection and runtime manipulation in every language I've used has been dog slow compared to implementing things statically. Since this proposal is a general feature, I'd like some idea of what the general impact will be.
However, the trouble with a custom attribute that is both a property wrapper and custom reflection metadata is separating the argument lists between init(wrappedValue:) or other property wrapper initializer, and init(attachedTo:) / buildMetadata(attachedTo:). There isn't really a good answer here; having one argument list represent two different call argument lists would be very misleading because a single written expression would be evaluated twice in different contexts, it's unclear whether the types are allowed to be inferred differently across those two calls, etc. Separating the argument list isn't great either, because we'd probably have to pick some arbitrary rules. Property wrappers and custom reflection metadata attributes can have an arbitrary number of initializer arguments, so we would need to set some specific rule like "only the wrappedValue: argument belongs to the property wrapper initializer and the rest belong to init(attachedTo:)".
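To illustrate the argument-list problem concretely, here's a sketch (Logged and its labels are invented for this example; only the property-wrapper half actually exists today):

```swift
// An ordinary property wrapper with an extra initializer argument.
@propertyWrapper
struct Logged<Value> {
    var wrappedValue: Value
    let category: String

    init(wrappedValue: Value, category: String) {
        self.wrappedValue = wrappedValue
        self.category = category
    }
}

struct Model {
    // Today this single written argument list feeds only the wrapper
    // initializer: Logged(wrappedValue: 3, category: "network").
    @Logged(category: "network") var retries: Int = 3
}

// If Logged were *also* a custom reflection metadata attribute, the
// compiler would have to split that one written argument list across
// two different calls, e.g.:
//   Logged(wrappedValue: 3, category: "network")              // wrapper
//   Logged(attachedTo: \Model.retries, category: "network")   // metadata
// evaluating the expression "network" twice, in different contexts.
let retries = Model().retries
print(retries) // prints 3
```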
I anticipate the memory cost of using custom reflection metadata will be much less than using property wrappers as metadata, because property wrappers have to store the metadata in every instance of the property wrapper backing storage. This feature solves that problem by not storing the metadata in the instance, and only initializing the metadata instance when requested. Introducing finer grained reflection queries, such as "give me all reflection metadata attached to this specific key-path", will take greater advantage of this laziness.
Emitting reflection metadata has a code-size cost, because they exist as records in the reflection metadata section of the binary. This custom reflection metadata is opt-in via attribute specifically to avoid increasing code-size for all code; pursuing a more general "iterate over all types in the program" or emitting dedicated conformance reflection metadata for all types for more efficient discovery would have a much higher performance impact than this feature as proposed. Moving toward an opt-in reflection model to improve performance of code that doesn't need it is also the motivation for SE-0376: Opt-In Reflection Metadata.
Ultimately, my opinion is that accepting a small increase in code size is worth avoiding both higher memory usage and eager metadata initialization.
I agree that this proposal is a good motivator for considering some of the Alternatives Considered in the Reflection proposal. It would be very nice to have the same representation of declarations to be used in init(attachedTo:), in the other Reflection APIs, and possibly other places in the language/ecosystem (e.g. in semantic libraries built on top of SwiftSyntax, perhaps?)
For that, we would need a much richer representation of the type system in the Reflection API. My inclination is that the protocol/concrete type approach might work, something like:
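Here's a hedged sketch of the shape I have in mind - every name below is an assumption, not the Reflection proposal's actual API - mirroring the AnyKeyPath/KeyPath<Root, Value> split:

```swift
// Type-erased view of a stored-property declaration (invented name).
protocol FieldProtocol {
    var name: String { get }
    var keyPath: AnyKeyPath { get }
}

// Strongly typed counterpart, analogous to KeyPath<Root, Value>
// sitting next to AnyKeyPath.
struct Field<Root, Value>: FieldProtocol {
    let name: String
    let typedKeyPath: KeyPath<Root, Value>
    var keyPath: AnyKeyPath { typedKeyPath }
}

// init(attachedTo:) could then be overloaded per declaration kind:
//   init(attachedTo field: Field<Root, Value>, ...)
//   init(attachedTo function: /* some Function mirror type */, ...)

struct User { var email: String = "" }
let field = Field(name: "email", typedKeyPath: \User.email)
print(field.name) // prints "email"
```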
For me, the biggest question is: how much of this will be subsumed by macros?
Take the testing example in the proposal - why does the list of tests, or the metadata about a specific test function, need to be discoverable at runtime? Why can this not be generated at compile-time?
I can't think of many compelling cases that couldn't be handled at compile-time. In the worst case, you could use the compile-time generation capabilities to create your runtime metadata.
Take test discovery. I haven't been following the declaration-macro proposal, but IIRC we want it to be able to generate protocol conformances, so it'll need some ability to inspect members of a type. So (just sketching it out here), imagine we had something like:
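Since I can't write a real macro inline here, this is a hand-written stand-in for what a hypothetical #TestCase macro might *expand to* (all names are illustrative):

```swift
struct MyTests {
    func testLogin() { /* ... */ }
    func testLogout() { /* ... */ }

    // Hand-written stand-in for the macro's generated output: the list
    // of tests, their names, and closures to invoke them.
    static let __allTests: [(name: String, run: (MyTests) -> Void)] = [
        ("testLogin", { $0.testLogin() }),
        ("testLogout", { $0.testLogout() }),
    ]
}

// A LinuxMain-style runner then just iterates the generated list.
var ran: [String] = []
for test in MyTests.__allTests {
    ran.append(test.name)
    test.run(MyTests())
}
print(ran) // prints ["testLogin", "testLogout"]
```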
And that #TestCase macro basically generates a list of tests, and their names, and a closure to invoke them. Basically it would generate something like LinuxMain. How is the thing being proposed better than that?
Given that work on macros is progressing in parallel with this, I wonder if we shouldn't give that feature time to bloom, and then revisit this idea with a greater understanding of where the gaps are.
As I understand it, the runtime discoverability is the big difference here. I don't think that just by "declaring" something (i.e. using a declaration macro) you can "trigger" an action at runtime - for example, adding an item referencing the type (somehow) to a static array. AFAIK there is no way of "iterating" the metadata right now (from inside the application).
Why not though? We're introducing compiler plugins which have access to elements of the program (as source code) and can generate more source code. It's really powerful. How far can we push it?
For instance, if we need to visit all of the types in a module annotated with #TestCase and create a big list of all test-cases in the module, why couldn't we invent some kind of module-level macro to do that at compile time?
And if, when creating an application, we need to visit all of those types across multiple modules and add them all to some big application-level list, why couldn't we invent an app-level macro to do that?
What exactly are the challenges that necessitate a runtime solution? That's what I'm interested in. There are some obvious cases which can only be decided at runtime (e.g. dlopen), but maybe we can think of more targeted solutions for them.