Empty initializer protocol

In my code, I ended up having a need for a protocol for classes/structs that have empty initializers, because it was useful in certain generic functions.

protocol DefaultInit
{
  init()
}

extension String: DefaultInit {}
extension Date: DefaultInit {}
extension Int: DefaultInit {}
extension Bool: DefaultInit {}

It seems like a general-purpose enough thing that maybe it should be in the standard library. C# has a similar thing, a sort of pseudo-protocol (or interface, since this is C#) called new() that matches any class with an empty constructor, so that you don’t have to explicitly declare conformance. So in that case it’s actually a language feature. Thoughts?

Mind sharing your use case(s)? I know that RangeReplaceableCollection has the default initialiser requirement, but that would only apply to String out of the examples you provided.

Basically it’s a situation where I need to convert optionals into concrete values, and using a default value like 0 or an empty string, or a struct filled out with its own defaults, is acceptable if the optional is nil.

More specifically, I’m putting together a form for the user to fill out, working from data where some fields have initial values and some don’t, but I have to put something in the UI to start with.
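A minimal sketch of that pattern (the helper name here is hypothetical, not from my actual code):

```swift
protocol DefaultInit {
    init()
}

extension String: DefaultInit {}
extension Int: DefaultInit {}

// Hypothetical helper: produce a concrete value for the UI,
// falling back to the type's empty initializer when the field is nil.
func initialValue<T: DefaultInit>(_ stored: T?) -> T {
    return stored ?? T()
}

initialValue(Int?.none)       // 0
initialValue("Jo" as String?) // "Jo"
```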

My C# use case was for a factory class where factory methods were stored in a dictionary, and new() was the only constraint I could use because C# interfaces can’t include constructors.

Here's the old thread about this; the consensus seemed to be that it was not important enough to include.

For a workaround @allevato had a nice solution. If I may point you to it:

This has been proposed many, many times, all the way back to when the Swift Evolution mailing lists were set up.

The general consensus is that such a protocol is merely a syntactic protocol, not a semantic one, and semantics are what protocols should strive to carry. The default value for most types isn’t clear or unique: 0 is a good default when counting or taking a sum, but not when taking a product. The empty string might mean something in some use cases, but it isn’t a good default across the language either. Swift’s only default value is nil, which explicitly indicates the absence of a value, and the explicit opt-in is to make a type optional and then leave out the initialiser expression. Swift is all about being clear.

There’s one protocol in the stdlib that does require a zero-parameter initialiser, and that is RangeReplaceableCollection. It does have a semantic meaning though: creating an empty collection. Since this is defined on a collection protocol, “empty collection” makes sense. We can’t extend this “make an empty collection” operation to Int, however.

In almost all cases, a default-value requirement can be implemented in much better ways. For example, a closure that fills in nils with a default value, or a property or parameter containing the default value. Every instance that needs a default value for some type can have a different value this way. For example, this is how Array.init(repeating:count:) can create an array of a fixed size without having to know what “default value” to use, unlike what a Java or C++ new[] initialiser expression would do.
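A minimal sketch of that alternative (the function name is mine, just for illustration):

```swift
// Pass the default explicitly instead of baking it into a conformance;
// each call site can choose a different value.
func filled<T>(_ values: [T?], default defaultValue: T) -> [T] {
    return values.map { $0 ?? defaultValue }
}

filled([1, nil, 3], default: 0)  // [1, 0, 3]
filled([1, nil, 3], default: 1)  // [1, 1, 3]

// Array.init(repeating:count:) works the same way: the caller supplies
// the value, so the element type needs no notion of a "default".
Array(repeating: 0, count: 3)    // [0, 0, 0]
```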

BinaryInteger also specifies a default init(). It’s specified to initialize to 0. I believe it’s defined in an extension though, so it’s not a customization point.

I don’t understand this. It seems clear to me that “can be default initialized without parameters” is a semantic concept. That’s why C# has special syntax for their generics to support that use case.

That may be true, but the request here isn’t to make every type support this. The request is for the concept to exist and be supported by those types that do have a clear default value, of which there are plenty. That seems both reasonable and useful to me.

Fortunately Swift at least has the ability to make this work ad hoc using extensions, but I definitely see the value in making it work out of the box for the appropriate types in the standard library, as well as standardizing the protocol that others can use to officially support the concept of default initializable.

I like this idea.

Could you name a few, and also give some examples of algorithms that would use it?


Tying any type to this protocol will mean that type has the same default value in any context.

For example Int, which doesn’t have the same ‘default’ value in every situation. In some situations you may want a default value of 0 for summation, but at other times you would want 1 for multiplicative operations.

For an optional, do you want init() to return nil, or, via conditional conformance, the default of the Wrapped type?

So while there might be semantics tied to a default init, they usually only hold in a certain context.

Int is a poor example, because it already has an empty initializer, and it always initializes to 0.

But it’s in the context of a BinaryInteger which has its own semantics. I can write generic algorithms on BinaryInteger and have guarantees about what a default BinaryInteger is. On the other hand DefaultInit provides no information about the type.
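For instance, a sketch of such a generic algorithm; the guarantee that init() produces zero comes from BinaryInteger’s documented semantics:

```swift
extension Sequence where Element: BinaryInteger {
    // Element() is documented to be zero for any BinaryInteger,
    // which is exactly the additive identity a sum needs.
    func total() -> Element {
        return reduce(Element(), +)
    }
}

[1, 2, 3].total()  // 6
```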

So if we created a DefaultInit protocol like so:

protocol DefaultInit {
  init()
}

then we make some generic function using it:

extension Sequence where Element: DefaultInit {
  func reduce(_ f: (Element, Element) -> Element) -> Element {
    return reduce(Element.init(), f)
  }
} 

now we have to conform Int to DefaultInit to use it:

extension Int : DefaultInit {} // (1)

// or you could also conform differently (note: for Int specifically this
// redeclaration won't compile, since Int already declares init();
// it's shown only to illustrate the hazard)

extension Int : DefaultInit { // (2)
  init() {
    self = 1
  }
}

finally lets use it:

[1, 2, 3].reduce(+) // 6 (1)
[1, 2, 3].reduce(+) // 7 (2)
[1, 2, 3].reduce(*) // 0 (1)
[1, 2, 3].reduce(*) // 6 (2)

As you can see, the main problem is actually writing a generic algorithm that works as expected. This is because DefaultInit provides no semantic guarantees.

To have some guarantees about the semantics of default values for an Int, regarding multiplication and addition, one could write the following two protocols.

protocol MultiplicativeIdentifiable {
  /// Creates an instance i such that i * x == x, for any instance x of the same type.
  init()
}

protocol AdditiveIdentifiable {
  /// Creates an instance i such that i + x == x, for any instance x of the same type.
  init()
}

We would then write algorithms against one of these protocols.
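For example, a sketch against the additive one (Int conforms trivially, since its existing init() already produces zero):

```swift
protocol AdditiveIdentifiable {
    /// Creates an instance i such that i + x == x, for any x of the same type.
    init()
}

extension Int: AdditiveIdentifiable {}

extension Sequence where Element: AdditiveIdentifiable & Numeric {
    // Safe here: the protocol's documented semantics make Element()
    // the additive identity, not just "some default value".
    func sum() -> Element {
        return reduce(Element(), +)
    }
}

[1, 2, 3].sum()  // 6
```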

I don’t find this a compelling argument. To the API consumer, it doesn’t matter why Int() initializes to zero, it only matters that it does. For any type which would adopt the proposed protocol, the adopter would be choosing the default. It only has to be a reasonable choice, not the only choice.

Today, in your scenarios, the app developer already has to provide an explicit non-zero value for integers when zero doesn’t make sense as a starting point. Nothing would change.

This is not to say that I am arguing for the proposal. On that, I don’t really have a strong opinion either way.

[edit]

Another consideration is that Int (and related types) already have a well-defined behavior, and since zero would still make as much sense as a default if this proposal were adopted, it is the only logical choice, to prevent breaking large amounts of existing code.

But protocols aren’t meant to be just a bag of syntax. They have semantics, so why something behaves in a certain way is vital to designing an algorithm.

Do you have any examples of adopters that would use this without needing to know what a default is?

The only trivial example I can think of is an Array<T: DefaultInit>.init(count: Int) that would produce an array filled with default T's, but I don’t think this is enough to warrant inclusion in the Standard Library. Also, I think being explicit about what is being initialised makes code more readable.
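That trivial example would look something like this (a sketch, not a concrete proposal):

```swift
protocol DefaultInit {
    init()
}

extension Int: DefaultInit {}

extension Array where Element: DefaultInit {
    /// Hypothetical convenience: an array of `count` default-initialized elements.
    init(count: Int) {
        self.init(repeating: Element(), count: count)
    }
}

[Int](count: 3)  // [0, 0, 0]
```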

I didn't mean to imply that protocols are mere collections of syntax. I am saying that your argument (that there are multiple possible default values for numeric types, therefore we should choose none) does not sway me. Any type with a default initializer has to have a developer-chosen default value. That there may be a number of sane selections does not mean that the selected value is not a default.

In short, I don't see the proposal as implying that the object created with a default initializer will have the "one true" default for that type. Rather, it is saying that a conforming type will have an initializer that returns an object with a documented default.

I wasn’t trying to say that a type can have multiple default values, but rather that a type can only have a meaningful default within a certain context. Hence certain protocols like BinaryInteger and RangeReplaceableCollection can have it as a requirement, because a default has meaning in what they, the protocols, model.

Indeed, that’s another protocol besides RRC. However, the semantic meaning here is “create a zero-valued binary integer”. That semantic has meaning in this context; it’s not a default-value initialiser. Zero integers are used in bitwise arithmetic. It probably wouldn’t make sense for a decimal number protocol/type.

In my opinion, instead of using a protocol for default initializers you should define your protocol around a specific well-defined use case and use a static property. For instance:

protocol HavingDefaultValueForSomeUseCase {
  static var defaultValueForSomeUseCase: Self { get }
}

extension String: HavingDefaultValueForSomeUseCase {
  static let defaultValueForSomeUseCase: String = ""
}
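To show how that reads at the use site, here’s a self-contained sketch (repeating the protocol from above; the `resolve` name is hypothetical):

```swift
protocol HavingDefaultValueForSomeUseCase {
    static var defaultValueForSomeUseCase: Self { get }
}

extension String: HavingDefaultValueForSomeUseCase {
    static let defaultValueForSomeUseCase: String = ""
}

// The algorithm names the use case instead of relying on a generic init().
func resolve<T: HavingDefaultValueForSomeUseCase>(_ value: T?) -> T {
    return value ?? T.defaultValueForSomeUseCase
}

resolve(String?.none)  // ""
resolve("x")           // "x"
```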
It sounds to me like your argument is more against the very idea of a default initializer. As others have already pointed out, if a type has a default initializer then someone has already decided what the “default value” (or the “default default value”, if you will) for that type should be. It doesn’t have to be the appropriate value for every situation to be useful. Often when this comes up you just need the object to be created, and you may not even care what the value is.

A simple example might be a generic factory, which is common in dependency injection frameworks. Another example might be some generic collection or algorithm that wants to allocate a pool of objects ahead of time without caring about the values of the pre-allocated objects (maybe it just wants the space to be allocated early).
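A toy version of such a factory, assuming a DefaultInit-style constraint (all names here are hypothetical):

```swift
protocol DefaultInit {
    init()
}

final class Widget: DefaultInit {
    init() {}
}

// Toy registry: stores zero-argument makers keyed by type,
// roughly the way a dependency-injection container might.
final class Factory {
    private var makers: [ObjectIdentifier: () -> Any] = [:]

    func register<T: DefaultInit>(_ type: T.Type) {
        // The DefaultInit constraint is what lets us capture T() here
        // without knowing anything else about the type.
        makers[ObjectIdentifier(type)] = { T() }
    }

    func make<T>(_ type: T.Type) -> T? {
        return makers[ObjectIdentifier(type)]?() as? T
    }
}

let factory = Factory()
factory.register(Widget.self)
factory.make(Widget.self)  // a fresh Widget
```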

Again, this concept is the same as C#’s new() constraint, which was useful enough to warrant a whole language feature. I don’t think it’s fair to say there’s no use case for it when there’s clear precedent that it is in fact useful. Just because you haven’t needed it doesn’t mean others haven’t or won’t.

I’ll quote the comment I referred to up-thread inline here:

Not true. I’ve needed it and done it before (twice). In fact, I even called it DefaultInit in my code. I can’t remember what it was for now though. But as the comment above mentions, there are better ways of solving these problems in Swift.

I agree that it’s possible to solve this today without any new feature. I still see value in making an official way to do it so that we don’t all have to reinvent the same approach.

FWIW, I think it’s more important in C# because C# has full runtime reflection. It’s possible to write generic code in C# that doesn’t know at compile time which types it will encounter, because a type could be discovered at runtime. That’s why this approach is common with dependency injection.

If Swift gets reflection in the future then this may become a necessity rather than just a nice to have.