Does declaring RawRepresentable have compiler optimization benefits?

Does the compiler do any extra optimization for types that explicitly declare conformance to RawRepresentable, which it doesn't do for types that have the same shape (a rawValue property and matching initializer) but never declare the conformance?

I'm currently making a binding library, and essentially every type conforms to RawRepresentable, but for most of them I wanted init(_ rawValue: RawValue) rather than init(rawValue: RawValue), so that call sites read more like a cast.
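
For concreteness, a rough sketch of the pattern I mean (the ColumnName type here is made up):

```swift
// The type conforms to RawRepresentable, but also exposes an unlabeled
// initializer that just forwards to the required one, so call sites
// read like a cast.
struct ColumnName: RawRepresentable {
    var rawValue: String

    // Required by RawRepresentable.
    init?(rawValue: String) {
        self.rawValue = rawValue
    }

    // Convenience spelling; forwards to the required initializer.
    init?(_ rawValue: String) {
        self.init(rawValue: rawValue)
    }
}

let name = ColumnName("created_at")           // reads like a cast
let same = ColumnName(rawValue: "created_at") // equivalent, but noisier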

I'd like as much of the binding to be optimized away as possible, so I don't mind slightly uglier code if it makes an actual difference. Anyone know?

You can use godbolt.org to answer some of these questions at the assembly level. Compiler/runtime engineers might be able to provide an implementation perspective, but if you compare the two initializers in godbolt you can see they produce the same assembly, presumably because the call to init?(rawValue: String) from init?(_ rawValue: String) optimizes down to just the called initializer. If you're using a custom RawRepresentable conformance, though, you'll want to take a look yourself.
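
For example, something along these lines is what you'd paste into godbolt and build with -O (the type and function names are just placeholders); both entry points should lower to essentially the same code:

```swift
// Hypothetical comparison: both functions should produce essentially the
// same assembly, because the unlabeled initializer is a trivial forwarder
// that gets inlined away under optimization.
struct Key: RawRepresentable {
    var rawValue: String
    init?(rawValue: String) { self.rawValue = rawValue }
    init?(_ rawValue: String) { self.init(rawValue: rawValue) }
}

func viaLabeled(_ s: String) -> Key? {
    Key(rawValue: s)
}

func viaUnlabeled(_ s: String) -> Key? {
    Key(s)
}
```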


Oh nice! I've seen links to this before but never checked it out; it looks perfect.

Yeah, I have lots of simple structs that only have a rawValue for storage, so in theory the struct shouldn't even need to exist in an optimized build. Just want to make sure declaring RawRepresentable will produce the same code.
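
The kind of type I mean is roughly this (names are hypothetical):

```swift
// Storage-only wrapper: its only stored property is rawValue, so in an
// optimized build a round trip through it should compile down to code
// that never materializes the wrapper at all.
struct Meters: RawRepresentable {
    var rawValue: Double
    init(rawValue: Double) { self.rawValue = rawValue }
    init(_ rawValue: Double) { self.init(rawValue: rawValue) }
}

func roundTrip(_ x: Double) -> Double {
    Meters(x).rawValue   // expected to optimize to just returning x
}
```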
Guess I'll find out! Thank you 🙂
