SE-0385: Custom Reflection Metadata

It’d be great if the authors could add the Godot bindings example to the motivation, especially if it uses metadata for functions, as I've had trouble imagining such use cases.

In general, I'd really appreciate a functional example that shows a useful attribute and how it would be used.


@xedin @hborla do you envision a way to integrate custom reflection metadata with property wrappers? I have a project where I use a CKField property wrapper to abstract a CloudKit model, and I create dummy instances to retrieve a field key:

(Model.init()[keyPath: fieldKeyPath] as! any CKFieldProtocol).key

Reflection metadata seems like the right tool to retrieve this key, but I can't simply get rid of the property wrapper in this case.
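For comparison, here's a sketch of what that lookup might look like under this proposal, with a hypothetical CKFieldKey reflection attribute standing in for the property wrapper. The @runtimeMetadata spelling and init(attachedTo:) signature follow the proposal; CKFieldKey and its key argument are made up for illustration, and none of this compiles today:

```swift
// Hypothetical reflection attribute carrying a CloudKit field key.
@runtimeMetadata
struct CKFieldKey {
    let key: String
    // Invoked with the key path of the property the attribute is attached to.
    init<Root, Value>(attachedTo: KeyPath<Root, Value>, key: String) {
        self.key = key
    }
}

struct Model {
    @CKFieldKey(key: "title") var title: String = ""
}
```

The key could then be fetched through a reflection query, without ever constructing a dummy Model instance.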

I'm glad to see that there's an "Alternatives Considered" section for using reflection types in the init(attachedTo:) signature, but I'm hoping we don't have to give up on that just yet. Something about the non-uniformity of the various init overloads seems ... suboptimal to me. Especially because I imagine it'll be pretty common for people to want to introspect details of the "attachee" (do we have a name for this?), and the reflection types are a very natural way to communicate that.

For example, a web framework might want to do something like this:

// Redundant name.
// It would be nice if the name of the function could be accessed directly
@get(name: "index")
func index() -> HTTPResponse { ... }

We considered using types from the Reflection module to represent declarations which have reflection attributes. For example, Reflection’s Field could be used as the type of the first parameter in init(attachedTo:) when a reflection attribute is attached to a property declaration.

IMO this seems like a useful case study that should influence the design of the reflection library. Perhaps it's useful to have an AnyField vs Field<T> distinction, like with AnyKeyPath/KeyPath<Root, Value>.
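To sketch what I mean (names and members entirely hypothetical):

```swift
// Type-erased: what init(attachedTo:) could take when the attribute
// is attached to a property whose types aren't statically known.
struct AnyField {
    let name: String
    let type: Any.Type
}

// Typed counterpart for when the declaration's types are known,
// by analogy with AnyKeyPath vs. KeyPath<Root, Value>.
struct Field<Root, Value> {
    let name: String
    let keyPath: KeyPath<Root, Value>
}
```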

#function could be used with @get; this is something I have in the first PR I mentioned.


Ah, I missed this. How about names of classes, structs and such?

All of these cases are supported: if the attribute is associated with a class/struct/enum, #function gets the name of that type; if it's a property, the name of the property; if it's a function/method, the name of the method.
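As a sketch of how that composes with an attribute (assuming the proposal's @runtimeMetadata spelling, and reusing the hypothetical @get and HTTPResponse from upthread; not compilable today):

```swift
@runtimeMetadata
struct get {
    let name: String
    // As I understand the behavior described above, #function in a default
    // argument is evaluated at the attachment site, so it picks up the
    // name of the attached declaration.
    init<T>(attachedTo function: T, name: String = #function) {
        self.name = name
    }
}

@get
func index() -> HTTPResponse { ... }  // the stored name would be "index"
```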

That's uhh... odd, isn't it?

Kind of yes, but we don't have a better magic literal for this purpose.

If we had access to a FunctionMirror, ClassMirror, (or whatever the "reflection types" end up being called), we could get at a name: String property without needing a magic literal.

Though I'm curious how much overhead would be involved in instantiating those mirror objects.

Speaking of overhead, what's the memory and runtime cost of this system? Reflection and runtime manipulation in every language I've used has been dog slow compared to implementing things statically. Since this proposal is a general feature, I'd like some idea of what the general impact will be.


We have thought about this, and it's mentioned in the Alternatives Considered section:

However, the trouble with a custom attribute that is both a property wrapper and custom reflection metadata is separating the argument lists between init(wrappedValue:) or other property wrapper initializer, and init(attachedTo:) / buildMetadata(attachedTo:). There isn't really a good answer here; having one argument list represent two different call argument lists would be very misleading because a single written expression would be evaluated twice in different contexts, it's unclear whether the types are allowed to be inferred differently across those two calls, etc. Separating the argument list isn't great either, because we'd probably have to pick some arbitrary rules. Property wrappers and custom reflection metadata attributes can have an arbitrary number of initializer arguments, so we would need to set some specific rule like "only the wrappedValue: argument belongs to the property wrapper initializer and the rest belong to init(attachedTo:)".


I anticipate the memory cost of using custom reflection metadata will be much less than using property wrappers as metadata, because property wrappers have to store the metadata in every instance of the property wrapper backing storage. This feature solves that problem by not storing the metadata in the instance, and only initializing the metadata instance when requested. Introducing finer grained reflection queries, such as "give me all reflection metadata attached to this specific key-path", will take greater advantage of this laziness.

Emitting reflection metadata has a code-size cost, because they exist as records in the reflection metadata section of the binary. This custom reflection metadata is opt-in via attribute specifically to avoid increasing code-size for all code; pursuing a more general "iterate over all types in the program" or emitting dedicated conformance reflection metadata for all types for more efficient discovery would have a much higher performance impact than this feature as proposed. Moving toward an opt-in reflection model to improve performance of code that doesn't need it is also the motivation for SE-0376: Opt-In Reflection Metadata.

Ultimately, my opinion is that accepting a small increase in code size is worth it to avoid both higher memory usage and eager metadata initialization.


I agree that this proposal is a good motivator for considering some of the Alternatives Considered in the Reflection proposal. It would be very nice for the same representation of declarations to be used in init(attachedTo:), in the other Reflection APIs, and possibly other places in the language/ecosystem (e.g. in semantic libraries built on top of SwiftSyntax, perhaps?)

For that, we would need a much richer representation of the type system in the Reflection API. My inclination is that the protocol/concrete type approach might work, something like:

protocol Type<InterfaceType> {
  associatedtype InterfaceType
}

struct ConcreteType<InterfaceType> { ... }

which would allow using both a type-erased any Type, or a constrained ConcreteType<Bool>.
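Filling in the usage that sketch implies (still entirely hypothetical API):

```swift
// Type-erased: accepts a reflection of any declaration's type.
func describe(_ type: any Type) { ... }

// Constrained: only accepts a reflection whose interface type is Bool,
// analogous to passing a KeyPath<Root, Bool> rather than AnyKeyPath.
func describeBool(_ type: ConcreteType<Bool>) { ... }
```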


For me, the biggest question is: how much of this will be subsumed by macros?

Take the testing example in the proposal - why does the list of tests, or the metadata about a specific test function, need to be discoverable at runtime? Why can this not be generated at compile-time?

I can't think of many compelling cases that couldn't be handled at compile-time. In the worst case, you could use the compile-time generation capabilities to create your runtime metadata.

Take test discovery. I haven't been following the declaration macro proposal, but IIRC we want it to be able to generate protocol conformances, so it'll need to have some ability to inspect members of a type. So (just sketching it out here), imagine we had something like:

class MyTests {

  func testSomething() { ... }

  #TestCase(expectedFailure: true)
  func testSomethingElse() { ... }

  // ...etc
}

And that #TestCase macro basically generates a list of tests, and their names, and a closure to invoke them. Basically it would generate something like LinuxMain. How is the thing being proposed better than that?
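To make that concrete, the expansion I'm imagining is roughly this (entirely hypothetical; the member names are made up):

```swift
// Hypothetical output of a #TestCase-style declaration macro,
// in the spirit of the old generated LinuxMain: a static list of
// test names, their metadata, and closures to invoke them.
extension MyTests {
    static let __allTests: [(name: String, expectedFailure: Bool, run: (MyTests) -> Void)] = [
        ("testSomething", false, { $0.testSomething() }),
        ("testSomethingElse", true, { $0.testSomethingElse() }),
    ]
}
```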

Given that work on macros is progressing in parallel with this, I wonder if we shouldn't give that feature time to bloom, and then revisit this idea with a greater understanding of where the gaps are.


As I understand it, the runtime discoverability is the big difference here. I don't think that just by "declaring" something (i.e. using a declaration macro) you can "trigger" an action at runtime - for example, adding an item referencing the type (somehow) to a static array. AFAIK there is no way of "iterating" the metadata right now (from inside the application).

Why not though? We're introducing compiler plugins which have access to elements of the program (as source code) and can generate more source code. It's really powerful. How far can we push it?

For instance, if we need to visit all of the types in a module annotated with #TestCase and create a big list of all test-cases in the module, why couldn't we invent some kind of module-level macro to do that at compile time?

And if, when creating an application, we need to visit all of those types across multiple modules and add them all to some big application-level list, why couldn't we invent an app-level macro to do that?

What exactly are the challenges that necessitate a runtime solution? That's what I'm interested in. There are some obvious cases which can only be decided at runtime (e.g. dlopen), but maybe we can think of more targeted solutions for them.


Agree! Integrating these features into a cohesive overall narrative makes a lot of sense IMO.

That also gives a very natural answer to the question "how do I look up the attributes of a single thing?" Well, you ask its mirror, of course!

let someClassMirror = ClassMirror(SomeClass.self)
someClassMirror[SomeAttribute.self] // returns a `SomeAttribute?`

I'm really touching the edge of my current understanding, but afaik the issue with dynamic libraries goes beyond the dlopen API.

Afaik (and please correct me if I'm wrong), even dynamic libraries "resolved" by the link editor may not be present in memory at all times. Some dynamic libraries may even be optional. Since I was getting more familiar with glibc, I'm aware that you can list loaded .so files and their paths, which in theory could be used in such a project-wide macro. My point is that with macros this is going to get very complicated very quickly. I'm also skeptical whether we would want to introduce an "application-level macro" that would take effect from within a dependency.

With this I agree. While I'm excited for this feature, I think that "just being able to iterate the metadata" is not enough.

I also agree that a better example demonstrating the necessity of this proposal (i.e. something that cannot be done at build time) would be great for the proposal. But since I'm biased in favour of this feature in the first place, it doesn't bother me.


I'm curious about the "order of operations" between attached macros, property wrappers, and these custom reflection metadata attributes. What gets evaluated when in a declaration like this? Can we apply them in any order, or do some kinds of attributes need to be closer to the declaration than others?

struct Test {
    var name: String
}

Also, the proposal doesn't mention any ABI impact of these attributes. What's possible from a library-evolution standpoint? Can we add or remove attributes on existing types without breaking ABI? It seems like perhaps yes, and then the runtime reflection APIs will only reflect what's actually loaded; is that what we should expect?