Countable types

What types in Swift represent members of countable sets?

As far as I can tell, only BinaryInteger-conforming types and Unicode.Scalar do.

Are there other "countable" types?

EDIT 1: I'm actually not sure if Unicode.Scalar is countable.

EDIT 2: Ignore the finite memory size for each type.

It sounds like a trick question to me. Is this the definition you mean?

A countable set is either a finite set or a countably infinite set. Whether finite or infinite, the elements of a countable set can always be counted one at a time and, although the counting may never finish, every element of the set is associated with a unique natural number.

By that definition, any struct with contiguous memory qualifies, and that is most of them. The underlying memory can be enumerated lexicographically (0, 1, 10, 11, 100, 101, 110, 111, ...—each one effectively a natural number in binary). If you then attempt to load each memory pattern as the type and verify that it produces a valid instance that has not already been accounted for, you will eventually enumerate every instance of that type. But since for most types you would have to write the loading and validation code yourself, I am not very confident that that is really what you are trying to ask.
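To make that concrete, here is a sketch of the idea using a made-up one-byte enum (`Direction` is hypothetical, invented for this illustration). It enumerates raw values rather than performing actual memory loads, but the shape of the enumeration is the same: the raw-value initializer plays the role of the validation step, rejecting byte patterns that do not correspond to a valid instance.

```swift
// A sketch of the bit-pattern enumeration, using a made-up one-byte enum.
enum Direction: UInt8 {
    case north, south, east, west
}

var instances: [Direction] = []
for pattern in UInt8.min ... UInt8.max {             // every possible byte
    if let instance = Direction(rawValue: pattern) { // 252 of 256 patterns fail
        instances.append(instance)
    }
}
// instances now holds all four members of Direction.
```

For most types you would have to write the loading-and-validation step yourself, which is the tedious part.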

Even an entire application is just a sequence of binary data, so all possible applications could theoretically be enumerated the same way. It is just that attempting to develop that way would be incredibly inefficient. :wink:


Yes, it is trivially countable:

var scalars: [Unicode.Scalar] = []
for integer in 0 ... UInt32.max {
  if let scalar = Unicode.Scalar(integer) { // Some are not valid.
    scalars.append(scalar)
  }
}
// “scalars” is now a list of all Unicode scalars.
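In case anyone wants the number, here is a quick sanity check on the size of that list. The failable initializer rejects the 2,048 surrogate code points and everything above 0x10FFFF, so the loop ends with exactly 1,112,064 scalars:

```swift
// Sanity check: code points 0x0...0x10FFFF, minus the 2,048 surrogates
// 0xD800...0xDFFF, which Unicode.Scalar's failable initializer rejects.
let codePoints = 0x11_0000          // 0x0 ... 0x10FFFF
let surrogates = 0xE000 - 0xD800    // 0xD800 ... 0xDFFF
print(codePoints - surrogates)      // 1112064
```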

Are there other "countable" types?

Actually, every type. That's because you never have an uncountably infinite amount of memory in a computer, so even if a type is a representation of something "uncountable" like the real numbers behind floating-point values, its instances are still countable, strictly speaking.

2 Likes

I think we can ignore the memory size limit. For example, Double is restricted to 64 bits, but each Double instance represents a member of the set of real numbers. Because the reals are uncountably infinite, that disqualifies Double.

Although, at the same time, all floating-point types have finite precision, so their values all lie in the set of rational numbers, which is countable.

1 Like

I guess Unicode.Scalar is just a strange case then. It's countable, but not Strideable, because some values are invalid.

If you want to "ignore" the memory restriction — although that is the reason why you can't turn something into an uncountable set within computers — here are some more straightforward examples:

  • integers are countable, as you noted,
  • all enumerations are countable, being finitely populated types,
  • likewise, all tuples and arrays of integers or enumerations are countable — even if you assume them to be infinitely large,
  • all strings are countable because they are a countable array of characters from a countable alphabet,
  • all possible sum and product types composed of those above are countable (again, tuples, but also structs and classes, because they have a countable amount of fields)

— if you extrapolate, you'll see that the majority of types represent a countable set of values, really. So the only ones that aren't are those that have a Float or a Double within them (like a struct or a tuple/SIMD type or an array with such value(s)).
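The claim that products of countable sets stay countable can be made concrete with the classic Cantor pairing function, which maps every pair of natural numbers to a unique natural number (a sketch, not anything from the standard library):

```swift
// Cantor's pairing function: a bijection between pairs of natural
// numbers and natural numbers, showing that N x N is countable.
func pair(_ x: Int, _ y: Int) -> Int {
    (x + y) * (x + y + 1) / 2 + y
}

// Walking the anti-diagonals enumerates every pair exactly once:
// (0,0) -> 0, (1,0) -> 1, (0,1) -> 2, (2,0) -> 3, (1,1) -> 4, ...
```

Iterating the same trick gives you triples, quadruples, and so on, which is why struct and tuple types over countable fields stay countable.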

Unicode.Scalar's invalid values are not invalid because of it not being "countable" (which it is), but because Unicode is not fully populated — there are some scalars that simply don't have a definition (e.g. see here). This doesn't make Unicode.Scalar uncountable in the mathematical sense.

I'm rusty with discrete maths, so likely I'm wrong here:

I thought the set of all strings (if a string can have infinite length) would be like a superset of the set of all infinite binary sequences, which is uncountable?

"Infinite" does not automatically imply "uncountable". The set of natural numbers is infinite, but still countable — by its own definition. An infinite binary sequence would also be countable, since you can count the digits.

It's when you enter a two-dimensional territory (like with an infinite array of infinite arrays) where you start encountering uncountability.

Minor technical correction: Scalars which don’t have a definition yet are valid (for forward compatibility), and the initializer will succeed. The invalid integers are those which will never have a corresponding scalar, because their byte patterns are reserved in order to enable the various Unicode transfer formats. As such, the iteration I presented above will overwhelmingly produce undefined scalars, which you can freely combine into strings however you wish; the result will just be meaningless.

4 Likes

I was thinking kind of in terms of an infinite array of infinite strings (which are infinite arrays of finite characters), and doing the uncountability proof with the diagonal thing:

Infinite string 1: "alkdkjafs..."
Infinite string 2: "ovajkdfnk..." 
Infinite string 3: "Pwq,sadbk..."
Infinite string 4: "qerefmsdu..."
Infinite string 5: ">KHJendwn..."
...

Diagonal string: not "a" + not "v" + not "q" + not "e" + not "e" + ...

Yep, that one is uncountable by the diagonal thing (Cantor's proof). The key here is not the range of characters anymore, but rather the fact that you run out of natural numbers before you can map them onto every possible value of the infinite array of infinite strings.

But you don't run out of them if the array of infinite strings is itself finite! Maths is funny.

Still, coming back to the original question, I hope you can see that even if you assume infinite memory, you still need to be inventive to meet an uncountable value set for a Swift type. And if you don't assume that — that is, if you take the physical memory's constraints into account — every type will happen to be finite (and thus countable). So it's not only BinaryInteger — it's virtually everything.

I can't agree that Double has a size limit. Its numerical representation is highly dependent on the number of bits used for each of the mantissa and exponent. Giving it a few more bits does nothing. If someone were to make a variable-size IEEE-754 type, then sure, that jumps into uncountably-infinite territory, but that is definitely not Double.

Float and Double are countable. They represent only a subset of the real numbers. Float has fewer than 2^32 members, and Double fewer than 2^64. You can just enumerate 0 through 2^32 − 1, convert each bit pattern into a Float, and ignore the duplicated values.
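A sketch of that enumeration, run over four hand-picked bit patterns rather than the full range (the full `0...UInt32.max` loop works the same way, just slowly):

```swift
// Enumerating Float by bit pattern, deduplicating equal values
// and skipping NaN payloads.
let patterns: [UInt32] = [0x0000_0000,  // +0.0
                          0x8000_0000,  // -0.0, compares equal to +0.0
                          0x3F80_0000,  // 1.0
                          0x7F80_0001]  // a NaN payload
var floats: Set<Float> = []
for pattern in patterns {
    let value = Float(bitPattern: pattern)
    if !value.isNaN {          // NaN never equals itself, so filter it out
        floats.insert(value)
    }
}
// floats now contains 0.0 and 1.0; the Set collapsed the two zeros.
```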

The same also applies to the standard integer types, e.g., Int* and UInt*. Not only are they countable (duh), but they're also strictly finite: there are exactly 2^32 elements in Int32, by definition, and likewise for the other fixed-width Int* and UInt* types. The tricky one is Int, since technically a system could back it with a BigInt, making it infinite, but it would still be countable nonetheless.

Essentially, anything with an upper bound on the size of its in-memory representation is finitely countable. A Dictionary with countable Key and Value types is also countable.


It's a headache to think about cardinality in normal programming, and it's not that useful anyway. That something is countable doesn't make the enumeration easy to compute, which is why CaseIterable is separate from plain enum.

Further, we don't deal with the edges too often. Unless there's a strict memory constraint, you just increase the Int* size if you hit the integer boundary. That's why you can generally treat Int (or rather Int* family) as practically infinite, but that's not from a rigorous mathematical proof.

4 Likes

Btw, Float and Double are even more constrained than being a subset of real numbers - they are a subset of rational numbers. Even if you extended IEEE to variable size, it would still be countable because rational numbers are countable.
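You can see the rationality directly: every finite Double is exactly an integer significand times a power of two, and even Double.pi is just the closest such rational to π. A sketch for normal (non-subnormal) values, using the standard FloatingPoint API:

```swift
// Every finite Double is (integer significand) x 2^power — an exact
// rational number. Double.pi is only the rational nearest to pi.
let x = Double.pi
let numerator = x.significandBitPattern | (1 << 52)  // restore the implicit leading bit
let power = x.exponent - 52                          // x == numerator x 2^power
let reconstructed = Double(sign: .plus,
                           exponent: power,
                           significand: Double(numerator))
print(x == reconstructed)  // true
```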


Assuming infinite memory, and pointers that can point to anywhere inside it, you can construct something like this to represent irrational numbers. Of course, you wouldn't be able to construct or print them in finite time.

class BigInt {
    class Digits {
        let value: String
        let rest: Digits?   // nil terminates the chain, so a BigInt has finitely many digits
        init(value: String, rest: Digits?) {
            self.value = value
            self.rest = rest
        }
    }
    let digits: Digits
    init(digits: Digits) { self.digits = digits }
}
struct Irrational {
    class Digits {
        let value: String
        let rest: Digits    // non-optional: the chain of fractional digits never ends
        init(value: String, rest: Digits) {
            self.value = value
            self.rest = rest
        }
    }
    let integerPart: BigInt
    let fractionalPart: Digits
}
1 Like

No, this is not how we have chosen to define floating-point types or what's countable. We had this discussion back when Swift Evolution discussed the numeric types and the countable ranges.

Floating-point types model the reals, which are uncountable, not the set of representable values or the rationals. The modeling is imperfect, but that is irrelevant for the purposes of this determination; every model is necessarily imperfect and creating a definition which would apply to everything would then serve no purpose.

The question, rather, to answer is this: is what you're modeling an uncountably infinite set of values or not?

2 Likes

Sure, I don’t mind either way. Both are finite data structures modeling uncountable mathematical objects.

I don’t think a question like this serves much purpose anyway. We tend to treat Int as infinite and Double as continuous for most usage despite them not being strictly so. We also treat Int as finite and Double as quantized at times as well, so :woman_shrugging:.

2 Likes

This is exactly what I wanted to ask, but perhaps phrased poorly. Thanks for helping me clarify it.

In other words, I want to look past what a type can physically represent, and into what it intends to. For example, as several people have noted in this thread, a Double instance has a finite size in memory, so the values are all rational, and so the set of all possible instances is countable. However, the existence of Double.pi indicates that it intends to model both rational and irrational numbers, i.e. the reals, which are uncountable.

Great discussion. Yes, [String] is uncountable per Cantor’s diagonal argument. That said, what about the (Void) → Void type? Arbitrary computations are not equatable due to the halting problem. We conclude that (Void) → Void is impossibly uncountable. :rofl:

Edit: Correction below.

Well, a finite sequence of bits (aka, program) is countable. So, you just aren’t counting it right. :crazy_face:

An infinite sequence of bits is countable as well. But you are right. I should have written [(Void) → Void] instead of (Void) → Void. With that, we have a (potentially infinite) sequence of (potentially infinite) programs, uncountable per Cantor’s diagonal argument.

[T] may be unbounded, but any given instance is finite. Also, a description of a program is (or can be represented as) a finite bitstring (as @Lantua notes), so [() -> ()] really models all finite sequences of finite bit strings, which is perfectly countable.
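If you want the counting to be explicit, here is one injection from finite sequences of finite bit strings into the natural numbers (a sketch with a made-up `encode` function, ignoring Int overflow for large inputs): write each string’s bits as base-3 digits 1 and 2, terminate each string with the digit 0, and prepend a sentinel digit so leading zeros can’t collide.

```swift
// An injection from [finite bit string] into the naturals.
// Distinct inputs always produce distinct numbers, because the
// base-3 digit stream can be decoded back unambiguously.
func encode(_ programs: [[Bool]]) -> Int {
    var n = 1                         // leading sentinel digit
    for program in programs {
        for bit in program {
            n = n * 3 + (bit ? 2 : 1) // bits become base-3 digits 1 and 2
        }
        n = n * 3                     // digit 0 terminates each string
    }
    return n
}
```

Since every finite sequence of finite programs lands on a distinct natural number, the whole set is countable, as noted above.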
