Input parameters of `T.Type`

At first glance, the following two signatures are equally powerful, so the metatype parameter seems redundant:

```swift
func f<T: X>(type: T.Type) -> T
func f<T: X>() -> T
```
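For reference, the call sites differ like this (assuming some concrete `Foo: X`):

```swift
let a = f(type: Foo.self)   // first form: the type is stated at the call site
let b: Foo = f()            // second form: the type is inferred from context
```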

But sticking to the first one seems to be a pattern in the standard library.

What is the reason for that?

My guess is that this enables creating instances of a subtype from non-generic code:

```swift
let type: Base.Type = Derived.self
...
loadObjects(ofClass: type) { (objects: [Base]) in ... }
```

But that does not sound like a very common use case.
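For concreteness, here is a minimal self-contained sketch of that pattern; `makeInstances` is a made-up function, not a real API:

```swift
class Base {
    required init() {}
}
class Derived: Base {}

// Hypothetical generic function with a metatype parameter.
func makeInstances<T: Base>(ofClass type: T.Type, count: Int) -> [T] {
    (0..<count).map { _ in type.init() }
}

// Non-generic code can pick the subtype at runtime:
let type: Base.Type = Derived.self
let objects = makeInstances(ofClass: type, count: 2)   // T == Base here
print(objects.map { "\(Swift.type(of: $0))" })         // ["Derived", "Derived"]

// With `func f<T: Base>() -> T` there is no way to express this:
// `let objects: [Base] = f()` would always fix T == Base.
```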

Another thing I can think of is that it gives a more natural signature: since the type `T` is an input, it belongs among the inputs.

But those are just my guesses. I'm curious about the actual reasoning behind this.

The first idiom allows the caller to specify the type explicitly, whereas the second one will infer it from context. The former is often easier to work with, especially if `f()` is overloaded further. Recall that in Swift, you can explicitly specify generic arguments on generic types, but not on function calls.
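A concrete illustration using the real `JSONDecoder.decode(_:from:)`, which follows the first idiom (the commented-out alternatives are hypothetical):

```swift
import Foundation

struct User: Decodable { let name: String }
let data = Data(#"{"name":"Ada"}"#.utf8)

// The metatype parameter lets the caller state the type explicitly:
let user = try! JSONDecoder().decode(User.self, from: data)

// A parameterless variant would have to rely on inference alone,
// because Swift has no `decode<User>(from: data)` call syntax:
// let user: User = try! decoder.decode(from: data)    // hypothetical
// let user = try! decoder.decode(from: data) as User  // hypothetical
```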

You can also make the two equivalent by using `func f<T: X>(type: T.Type = T.self) -> T`. This allows the function to support both inference and explicit types. We use that in Alamofire to simplify generic response closures. With explicit types we can write `responseDecodable(of: Type.self) { response in }` rather than relying on inference: `responseDecodable { (response: AFDataResponse<Type>) in }`.
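A minimal sketch of that defaulted-metatype pattern (`fetch` is a made-up name, not Alamofire's API):

```swift
import Foundation

// `type` defaults to T.self, so callers may pass it or let it be inferred.
func fetch<T: Decodable>(_ type: T.Type = T.self, from data: Data) throws -> T {
    try JSONDecoder().decode(T.self, from: data)
}

struct User: Decodable { let name: String }
let data = Data(#"{"name":"Ada"}"#.utf8)

let a = try! fetch(User.self, from: data)   // explicit metatype
let b: User = try! fetch(from: data)        // inferred from the annotation
```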

In the case of `unsafeBitCast` and pointer initialization, it's about requiring that the user be explicit when performing a potentially unsafe operation, rather than relying on type inference.
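For instance, the standard library's `unsafeBitCast(_:to:)`:

```swift
// The `to:` metatype makes the destination type explicit at the call site:
let bits = unsafeBitCast(1.0, to: UInt64.self)
print(String(bits, radix: 16))   // 3ff0000000000000

// There is no inferred spelling: the parameter has no default,
// so you cannot write `unsafeBitCast(1.0) as UInt64`.
```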

I'm not as sure about the Codable and `UIDropSession` usage, but to me it looks like the experience of calling those methods is better if you supply the type. Without it you'd need to write e.g. `encoder.container() as KeyedEncodingContainer<MyKeyType>`.
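For example, the real `Encoder.container(keyedBy:)` takes the key type as an argument:

```swift
import Foundation

struct Point: Encodable {
    var x: Double, y: Double

    enum CodingKeys: String, CodingKey { case x, y }

    func encode(to encoder: Encoder) throws {
        // The metatype argument selects the key type at the call site...
        var container = encoder.container(keyedBy: CodingKeys.self)
        // ...instead of forcing the spelling mentioned above:
        // var container = encoder.container() as KeyedEncodingContainer<CodingKeys>
        try container.encode(x, forKey: .x)
        try container.encode(y, forKey: .y)
    }
}
```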

Personally I believe most of the explicit types in the Codable APIs are a mistake, as the inference should always work, and it was noted in the review. At the least, defaulting them to the inferred type would be nice.

Even if inference fails, `f() as SomeType` should still be as informative as `f(SomeType.self)`.


On a more curious case: `View.preference(key:value:)`.

The fact that `Key` can be inferred, but only `Key.Value` appears in the function signature, really rubs me the wrong way. It looks incredibly fragile.
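For reference, the declared signature (roughly, per Apple's documentation) defaults the metatype, so omitting `key:` would require inferring `K` from `K.Value` alone; `TitleKey` below is a made-up example key:

```swift
import SwiftUI

// Roughly the declared signature:
//   func preference<K: PreferenceKey>(key: K.Type = K.self, value: K.Value) -> some View

struct TitleKey: PreferenceKey {
    static var defaultValue = ""
    static func reduce(value: inout String, nextValue: () -> String) {
        value = nextValue()
    }
}

struct Content: View {
    var body: some View {
        // In practice the key is passed explicitly:
        Text("Hello").preference(key: TitleKey.self, value: "Greeting")
    }
}
```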