Some inconsistency in Generic Type Parameter declaration

struct Temp<T: Decodable>: Decodable {
    let t: T?
}

extension KeyedDecodingContainer {
    func decode<T>(_: Temp<T>.Type, forKey key: Key) -> Temp<T> {
        Temp(t: nil)
    }
}

This compiles.
But why didn't the compiler force me to declare the type parameter T of the decode function with the required constraint:

func decode<T: Decodable>...

In many seemingly similar situations we have to specify this explicitly, for example in a generic typealias:

typealias Test<T: Decodable> = Temp<T>
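To illustrate the asymmetry, a minimal sketch of what the compiler accepts and rejects here (the names are the ones from the post; the rejected line is shown commented out):

```swift
struct Temp<T: Decodable>: Decodable {
    let t: T?
}

// A generic typealias does NOT infer the constraint from Temp:
// typealias Test<T> = Temp<T>
// error: type 'T' does not conform to protocol 'Decodable'

// Spelling the constraint out makes it compile:
typealias Test<T: Decodable> = Temp<T>
```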

Short answer in jest: It doesn't force you to because it doesn't need that constraint specified. :stuck_out_tongue:

Real answer: When calling decode you have to specify a concrete Temp type that already satisfies the constraint; that's basically what the first parameter is for. You won't be able to name such a type without also supplying a T that Temp accepts, i.e. some type that adopts Decodable.
So the compiler can already ensure you don't specify a "wrong" type T.

Or in other words: the constraint doesn't actually "come from" the decode method, but rather from Temp, and there it is already visible.
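A small sketch of what that looks like at the call site (using a free function instead of the container extension, and a hypothetical NotDecodable type, to keep it self-contained):

```swift
struct Temp<T: Decodable>: Decodable {
    let t: T?
}

// No explicit T: Decodable here; the requirement is picked up from Temp<T>.
func decode<T>(_: Temp<T>.Type) -> Temp<T> {
    Temp(t: nil)
}

struct NotDecodable {}

let a = decode(Temp<Int>.self) // OK: Int is Decodable
// let b = decode(Temp<NotDecodable>.self)
// error: 'Temp' requires that 'NotDecodable' conform to 'Decodable'
```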


Yep. Try this with xcrun swiftc tmp.swift -debug-generic-signatures:

protocol Decodable {}
struct Temp<T: Decodable> {}

func decode1<T>(_: Temp<T>) {}
func decode2<T: Decodable>(_: Temp<T>) {}

You will get

tmp.(file).decode1@tmp.swift:5:6
Generic signature: <T where T : Decodable>
Canonical generic signature: <τ_0_0 where τ_0_0 : Decodable>

tmp.(file).decode2@tmp.swift:6:6
Generic signature: <T where T : Decodable>
Canonical generic signature: <τ_0_0 where τ_0_0 : Decodable>

Since nobody addressed this part, I think it would be nice if type constraints could be inferred for type aliases as well.


To be honest, I would prefer the compiler to enforce writing type constraints explicitly everywhere :smiley: But yeah, any consistency in approach would be great :slightly_smiling_face:


We have to be careful when adding new inference sources to avoid cycles, but the underlying type of a type alias is probably fine to consider.

The actual change is very simple. We collect the list of types from which to infer requirements here: swift/lib/Sema/TypeCheckGeneric.cpp at main · apple/swift · GitHub

You can see for functions and subscripts, we look at parameter and result types. A good starter project for someone would be to split this off into its own function, tidy it up a bit (the nested conditionals are excessive) and add something like this in the right spot,

if (auto *typeAliasDecl = dyn_cast<TypeAliasDecl>(foo))
  inferenceSources.push_back(typeAliasDecl->getStructuralType().getPointer());

I don’t know if this merits a proposal or not. (Edit: I’d also be happy with a pitch to phase requirement inference out entirely with a future language mode too :joy:).

Yes please! I've always been bothered by this spooky-requirements-from-a-distance behavior in Swift; it's really only a benefit for writing code while being a detriment to future reading of it, the latter of which we should be optimizing for.


It allows for a couple of neat tricks, which may or may not carry their weight. The first is it lets you simulate something that comes up occasionally, “constraint type aliases”. If I declare a generic type alias with some arbitrary list of requirements,

typealias G<T> = T where T: P, T.A: Q, …

Then something like this

func f<C: Collection>(_: C) -> G<C.Element>

returns C.Element, but also imposes those extra requirements on it.
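A self-contained sketch of the trick, filling in placeholder protocols P and Q (with an associated type A on P so that T.A is well-formed):

```swift
protocol Q {}
protocol P { associatedtype A }

// A "constraint type alias": the underlying type is just T,
// but the where clause carries extra requirements.
typealias G<T> = T where T: P, T.A: Q

// f's signature picks up the requirements from G via inference:
// <C where C: Collection, C.Element: P, C.Element.A: Q>
func f<C: Collection>(_ c: C) -> G<C.Element> {
    c.first! // sketch only; traps on an empty collection
}
```

Callers can then only pass collections whose Element satisfies P and whose Element.A satisfies Q, even though none of that is written in f's own where clause.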

The other thing is it would be nice if you could do that with an opaque return type as well, since otherwise there’s no way to impose arbitrary requirements on one:

func f() -> G<some Equatable>

But opaque return types are resolved too late for this to work right now.


The following change would be feasible: we build the generic signature without inference, and try to completely resolve all inference sources against that signature without emitting diagnostics. If any failed to resolve, we build the signature again, but this time with inference sources. Then depending on flags, you either,

  1. Proceed with the new signature silently or with a warning,
  2. Compare the new requirements against the old, and diagnose an error with a fixit to add the inferred requirements to the where clause

Now (2) becomes a generally useful feature because it means you can omit the requirements while writing your code and have your IDE fill in the where clause.

The actual logic behind requirement inference is very simple and straightforward, and we need it in SIL function type lowering already. But we can certainly tweak how and when it surfaces in the language.


I can appreciate the convenience in cases like that, even though it's hard to reconcile it with my desire to read the declaration of f and know what its valid inputs are by just looking at its generic parameters and constraints that are written in the <...> and in a where clause. It's also a bit discomfiting that changing the definition of G could change the requirements of f without touching f at all.

I'd like it if there was some way to explicitly state that I want to impose the requirements from G onto the input of that function, but unfortunately any syntax I can think of off the top of my head feels like it would just be inelegantly repeating information, since G<C.Element> is already right there.

I like this (especially (2)). I don't know if using a flag risks turning this into a "dialect" of Swift; maybe the access levels of the declarations involved here should be factored in as additional signal:

  • If a declaration is open or public (or package?), users should have to write out the requirements explicitly, because writing an API for others to use means you should be more aware of what you're defining.
  • If a declaration uses any APIs imported from other modules, you should have to write out the requirements because if that module changes under you, it might change your requirements without you realizing it.
  • Otherwise, if the API being defined and all of its inference sources are internal or lower, maybe it's fine to infer silently then.

Yeah. I was imagining a hypothetical -swift-version bump, more than a separate language mode just for this. Another option is an unconditional warning with a fixit.


It also has the distinction of being the only place in the language where a type alias has a semantic effect. You can have two function declarations that only differ in type alias spelling, but they are assigned different where clauses.
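A sketch of that distinction, assuming a placeholder protocol P and a hypothetical alias name Constrained:

```swift
protocol P {}
typealias Constrained<T> = T where T: P

// Both parameters resolve to the same type T, but the spelling
// of the parameter changes the inferred where clause:
func plain<T>(_: T) {}                 // signature: <T>
func viaAlias<T>(_: Constrained<T>) {} // signature: <T where T : P>

struct S {} // does not conform to P
// plain(S())    // OK
// viaAlias(S()) // error: 'S' does not conform to 'P'
```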
