Need a generic parameter to not be a class

I think that a generic parameter can be marked to conform to AnyObject to require it to be a class, i.e. a reference type. What if I need the type to be a value type? Do I put a precondition(!(T is AnyObject)) in every initializer or something?

What are you trying to solve? I don't understand the question.

I'm making an immutable copy of an instance of the generic parameter type within the initializer. This works fine if the type is a value type, but not a reference type. (For reference types, the immutable datum is a pointer to the still-mutable object.) I wonder if I can do anything besides adding a warning in the initializer's documentation not to mutate the instance through other references while self is active (since it'll cache some data).
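
Here's roughly the shape of what I'm doing, with hypothetical names, in case that helps. The wrapper stores a copy of the element and caches something derived from it:

struct Cached<Element> {
    let element: Element               // meant to be an immutable copy
    let cachedDescription: String      // derived data, computed once

    init(_ element: Element) {
        self.element = element
        self.cachedDescription = String(describing: element)
    }
}

With a value type, the stored copy can't change out from under the cache. With a class, element is just another reference to the same mutable object, so the cache can silently go stale.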

Can you elaborate on why it's necessary for you to have a value-type-only constraint? Reference semantics are well understood, but for the inclusion of an AnyValue type it was always a requirement to provide strong and reasonable examples.

Without a value-type-only constraint, the instance can be changed, and therefore any cached data invalidated, without my knowledge.

You couldn't really enforce value semantics even if there was a way to disallow a class type. Consider this struct:

struct FakeValue {
    var ref: AnyObject
}

It's a struct where the content is a reference to an object. And Optional is an enum, but is Optional<AnyObject> a value or a reference? And then you have some immutable classes (like NSString) that pretty much have value semantics.

What you could do is invent a protocol for things that have value semantics and take care to only add conformances to that protocol for types that do have value semantics.
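
Something like this, as a rough sketch (ValueSemantics is a made-up name; the compiler can't verify any of these conformances, they're promises by whoever writes them):

protocol ValueSemantics {}

extension Int: ValueSemantics {}
extension String: ValueSemantics {}
extension Array: ValueSemantics where Element: ValueSemantics {}

struct Wrapper<T: ValueSemantics> {
    let value: T
}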

It's a collection wrapping another one.

I guess it'll just be a warning note in the documentation, then.

Please don't forget that on Apple platforms you can cast everything to AnyObject, due to the Objective-C runtime.

Why not define a protocol that doesn't surface the mutable state and use that as the constraint on the generic type?

Even if you're given a view-only interface to access some data, there's no guarantee the data under your view-only interface will not be modified by something else. The view-only getter might return a value at one point, and another value some time later. What they want here is a contract that the underlying value will not change; this is not enforceable in the language.
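
A contrived sketch of the problem (made-up types): even if the constraint only exposes a getter, the object behind it can still be mutated through another reference, so anything you cached is already out of date.

protocol ReadOnlyValue {
    var value: Int { get }
}

final class Counter: ReadOnlyValue {
    private(set) var value = 0
    func increment() { value += 1 }
}

func snapshot<T: ReadOnlyValue>(_ source: T) -> Int {
    source.value // a one-time read; nothing stops source from changing afterwards
}

let counter = Counter()
let cached = snapshot(counter)
counter.increment()
print(cached == counter.value) // false: the cached value is stale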

This thread piqued my curiosity: I searched the documentation and the source code of Dictionary for any note concerning mutating keys. I could not find any, but maybe I'm a bad searcher, or I was just unable to infer the answer I was looking for from the documented guarantees. What happens when you mutate a key in a dictionary?

The values of a constant Dictionary (or e.g. Array) are immutable, which is also true if the values happen to be references, but that doesn't say anything about the things that those immutable values (the references) point to, so the referenced objects can mutate:

import Foundation

func test() {
    let mutableString = NSMutableString(string: "foo")
    let a = [mutableString]
    print(String(a[0])) // foo
    mutableString.append("bar")
    print(String(a[0])) // foobar
}
test()

But:

import Foundation

func test() {
    let mutableStringUsedAsKey = NSMutableString(string: "foo")
    let d = [mutableStringUsedAsKey : 123]
    print(d[mutableStringUsedAsKey] ?? "n/a") // 123
    mutableStringUsedAsKey.append("bar")
    print(d[mutableStringUsedAsKey] ?? "n/a") // randomly n/a or 123 each run
}
test()

I guess mutating the NSMutableString caused its "value" (what it compares equal to and its hash value) to change.

EDIT: Hmm, I just noticed that running the last example a number of times will show that the second print will print either 123 or n/a seemingly at random, can anyone explain that?

Well, how about implementing copy-on-write semantics?
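
For example, something along these lines (a minimal sketch; Box and CopyOnWrite are made-up names, not standard library types):

final class Box<T> {
    var value: T
    init(_ value: T) { self.value = value }
}

struct CopyOnWrite<T> {
    private var box: Box<T>

    init(_ value: T) { box = Box(value) }

    var value: T {
        get { box.value }
        set {
            // Only copy the shared storage when someone else also holds a reference to it.
            if !isKnownUniquelyReferenced(&box) {
                box = Box(box.value)
            }
            box.value = newValue
        }
    }
}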

Dictionary is implemented using a hash table. So when you insert a new key, it takes the hash modulo the size of the table and puts the entry at that location in the table. When you retrieve it back, it takes the hash-modulo-size-of-the-table of the search key and looks at that location. If search key == key at that location you have a match; otherwise it'll treat it as a hash collision and check the next location that key could be in the table.

If you mutate the key, the dictionary doesn't automatically relocate the entry to a new location in the table to reflect its changed hash. So unless the hash-modulo-size-of-the-table is the same, you'll get nil. As for why it sometimes works and sometimes doesn't, that's because each time you restart the program hashes are constructed from a different base value (to prevent hash collision attacks).
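
You can see the hash changing directly (a tiny illustration, nothing dictionary-specific):

import Foundation

let key = NSMutableString(string: "foo")
let before = key.hashValue
key.append("bar")
print(before == key.hashValue) // false: the hash now reflects "foobar", so the stored bucket no longer matches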

Ah, thanks, I guess we're getting off topic here, but: Does it make sense for eg NSMutableString to conform to Hashable?

I don't think Hashable implies any guarantees about the type being immutable or behaving as a value type.

You certainly could make a protocol for types that have value semantics, but it's up to you to make sure only the right types conform to it as this is not something that is verifiable by the language.

Another option is to create a mutable/immutable pair of classes (ala NS(Mutable)String).
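
Roughly like this (made-up types, just to show the split): the immutable base only exposes reads, the mutable subclass adds mutation, and callers take an explicit snapshot when they need one.

class ImmutablePoints {
    fileprivate var storage: [Int]
    init(_ points: [Int]) { storage = points }
    var points: [Int] { storage }
    func copy() -> ImmutablePoints { ImmutablePoints(storage) }
}

final class MutablePoints: ImmutablePoints {
    func append(_ point: Int) { storage.append(point) }
}

let mutable = MutablePoints([1, 2])
let frozen = mutable.copy()   // snapshot; later mutations don't affect it
mutable.append(3)
print(frozen.points)          // [1, 2]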

But I think the problem lies more with trying to limit a parameter for a generic method, rather than actually trying to create an immutable version of a class.

If this really is about limiting mutability within a method, then I feel, with the current state of Swift, a generic parameter is possibly not the best way to go.

IIRC you can't (if the Keys view type is not mutable) and shouldn't, for multiple reasons:

  • current dictionary type provides no guarantees about order
  • key and hash collisions
  • you'll probably need to rehash the dictionary

Therefore we only have mapValues but not mapKeys.

In Obj-C Foundation, NSDictionary copies its keys and retains its values, so keys must conform to NSCopying.

We don't have a protocol to describe copyability in Swift. I think it would be a great addition. Trivially-copyable value types and those which support COW could simply return self. The compiler could certainly synthesise that.
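
A sketch of what that might look like (DeepCopyable is a made-up name; today you'd have to write and maintain the conformances yourself, the compiler doesn't synthesise anything like this):

protocol DeepCopyable {
    func deepCopy() -> Self
}

extension Int: DeepCopyable {
    func deepCopy() -> Int { self }                  // trivially copyable value type
}

extension Array: DeepCopyable where Element: DeepCopyable {
    func deepCopy() -> [Element] { map { $0.deepCopy() } }
}

final class Node: DeepCopyable {
    var value: Int
    init(_ value: Int) { self.value = value }
    func deepCopy() -> Node { Node(value) }          // reference type must allocate a fresh object
}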

I believe Rust uses Clone(able) to describe this behavior, which I think is a better name than "Copy", as it lets us use Copyable strictly for the ownership-manifesto copy behavior.

Especially as it seems reasonable to me for Swift to support both Copyable but not Cloneable types, e.g. structs with AnyObject contents, and Cloneable but not Copyable types, e.g. move-only classes with value contents.