It sounds like a trick question to me. Is this your definition?

A countable set is either a finite set or a countably infinite set. Whether finite or infinite, the elements of a countable set can always be counted one at a time and, although the counting may never finish, every element of the set is associated with a unique natural number.

By that definition, any struct with contiguous memory qualifies, and that is most of them. The underlying memory can be enumerated lexicographically (0, 1, 10, 11, 100, 101, 110, 111, ...), each pattern effectively a natural number in binary. If you then attempt to load each memory pattern as the type and verify that it produces a valid instance that has not already been accounted for, you will eventually enumerate every instance of that type. But since for most types you would have to write the loading and validation code yourself, I am not very confident that that is really what you are trying to ask.
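As a concrete sketch of that bit-pattern enumeration (the Pair struct here is a hypothetical stand-in; any type for which every bit pattern is a valid instance works the same way, and no validation step is needed):

```swift
// A hypothetical 2-byte struct; its instances are exactly the 2^16 bit patterns.
struct Pair {
    var a: UInt8
    var b: UInt8
}

// Enumerate every bit pattern of the same size and reinterpret it as the type.
// unsafeBitCast requires the sizes to match: UInt16 and Pair are both 2 bytes.
var instances: [Pair] = []
for pattern in UInt16.min ... UInt16.max {
    instances.append(unsafeBitCast(pattern, to: Pair.self))
}
// "instances" now contains every possible Pair, each indexed by a natural number.
```

For types where some patterns are invalid (pointers, enums with spare bits, and so on), this is where the hand-written loading and validation step would have to go.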

Even an entire application is just a sequence of binary data, so all possible applications could theoretically be enumerated the same way. It is just that attempting to develop that way would be incredibly inefficient.

Yes, it is trivially countable:

var scalars: [Unicode.Scalar] = []
for integer in 0 ... UInt32.max {
    if let scalar = Unicode.Scalar(integer) { // Some bit patterns are not valid scalars.
        scalars.append(scalar)
    }
}
// "scalars" is now a list of all Unicode scalars.

I think we can ignore the memory size limit. For example, Double is restricted to 64 bits, but each Double instance represents a member of the set of real numbers. Because the reals are uncountably infinite, that would disqualify Double.

Although, at the same time, all floating-point types have finite precision, so their values all lie in the set of rational numbers, which is countable.
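To make the rationality concrete, here is a sketch (standard library only) that recovers the exact rational form of a normal Double as an integer times a power of two; subnormals and non-finite values would need extra handling:

```swift
// Every finite Double is (an integer) × (a power of two), hence rational.
// For a normal Double: value = (2^52 + significandBitPattern) × 2^(exponent − 52).
let d = 0.1
let numerator = (1 << 52) + Int(d.significandBitPattern)
let powerOfTwo = d.exponent - 52

// Rebuild d exactly from the rational form; scaling by a power of two is exact,
// and the numerator is below 2^53, so no rounding occurs anywhere.
let reconstructed = Double(numerator)
    * Double(sign: .plus, exponent: powerOfTwo, significand: 1.0)
```

So 0.1 is stored as the exact rational 7205759403792794 / 2^56, which is close to, but not equal to, one tenth.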

I'm rusty with discrete maths, so likely I'm wrong here:

I thought the set of all strings (if a string can have infinite length) would be a superset of the set of infinite binary sequences, which is uncountable?

Minor technical correction: Scalars which don't have a definition yet are valid (for forward compatibility), and the initializer will succeed. The invalid integers are those which will never have a corresponding scalar, because their byte patterns are reserved in order to enable the various Unicode transfer formats. As such, the iteration I presented above will overwhelmingly produce undefined scalars, which you can freely combine into strings however you wish; the result will just be meaningless.

I was thinking kind of in terms of an infinite array of infinite strings (which are infinite arrays of finite characters), and doing the uncountability proof with the diagonal thing:

Infinite string 1: "alkdkjafs..."
Infinite string 2: "ovajkdfnk..."
Infinite string 3: "Pwq,sadbk..."
Infinite string 4: "qerefmsdu..."
Infinite string 5: ">KHJendwn..."
...
Diagonal string: not "a" + not "v" + not "q" + not "e" + not "e" + ...
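That diagonal construction can be sketched in code. The character choice here ("a" versus "b") is an arbitrary stand-in, since any character that differs from the original works:

```swift
// Given a list of strings, build a string whose i-th character differs from
// the i-th character of the i-th string. The result cannot equal any string
// in the list, which is the heart of Cantor's diagonal argument.
func diagonal(_ strings: [String]) -> String {
    var result = ""
    for (i, s) in strings.enumerated() {
        let chars = Array(s)
        // Pick any character different from chars[i] (strings assumed long enough).
        result.append(chars[i] == "a" ? "b" : "a")
    }
    return result
}

let sample = ["alkdkjafs", "ovajkdfnk", "Pwq,sadbk", "qerefmsdu"]
let diag = diagonal(sample)
// diag differs from sample[i] at position i, so diag appears nowhere in sample.
```

With infinitely many infinite strings, the same construction produces a string missing from any claimed enumeration, which is what makes the set uncountable.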

I can't agree that Double has a size limit. Its numerical representation is highly dependent on the number of bits used for each of the mantissa and exponent; giving it a few more bits does nothing. If someone were to make a variable-size IEEE 754, then sure, that jumps into an uncountably infinite type, but that is definitely not Double.

Float and Double are countable. They represent only a subset of the real numbers. Float has fewer than 2^32 members, and Double fewer than 2^64. You can just enumerate the bit patterns 0 through 2^32 - 1 and convert each into a Float, ignoring the duplicated values.
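A sketch of that walk over a small window of bit patterns (the full enumeration is the same loop over 0 ... UInt32.max, just slower):

```swift
// Every Float is one of at most 2^32 bit patterns, so the type is finite.
// Float(bitPattern:) reinterprets a UInt32 as a Float without conversion.
var seen: Set<Float> = []
for pattern: UInt32 in 0 ..< 1_000 {
    let f = Float(bitPattern: pattern)
    if !f.isNaN {        // NaNs compare unequal even to themselves; skip them
        seen.insert(f)
    }
}
// "seen" holds the distinct non-NaN Float values among the first 1_000 patterns.
```

The set takes care of the "duplicated values" part; over the full range it would also collapse things like +0.0 and -0.0, which compare equal.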

The same also applies to the standard integer types, e.g., Int* and UInt*. Not only are they countable (duh), but they're also strictly finite. There are only 2^32 elements in Int32 by definition; the same goes for the other Int* and UInt* types. The tricky one is Int, since technically some systems could back it with a BigInt, making it unbounded, but it would still be countable nonetheless.

Essentially anything with an upper bound on the size of its in-memory representation is finitely countable. A Dictionary with countable Key and Value is also countable.
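The standard tool behind "countable × countable is countable" is a pairing function; here is a minimal sketch of Cantor's pairing function (a Dictionary additionally needs an encoding for finite sets of pairs, but the pairing step is the core idea):

```swift
// Cantor pairing: a bijection between ℕ × ℕ and ℕ. If Key and Value can each
// be enumerated, every (key, value) pair gets a unique natural number, so the
// pair type is countable too.
func cantorPair(_ k1: Int, _ k2: Int) -> Int {
    (k1 + k2) * (k1 + k2 + 1) / 2 + k2
}
```

For example, cantorPair walks the diagonals of the grid: (0, 0) maps to 0, (1, 0) to 1, (0, 1) to 2, and so on, with no collisions.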

It's a headache to think about cardinality in normal programming, and it's not that useful anyway. That something is countable doesn't make the enumeration easy to compute. Hence why CaseIterable is separate from regular enums.
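A minimal illustration: CaseIterable hands you the enumeration only for types where producing it is actually practical:

```swift
// A finite enum opts into enumeration explicitly via CaseIterable.
enum Direction: CaseIterable {
    case north, south, east, west
}

// The compiler synthesizes allCases for simple enums like this one.
let all = Direction.allCases
// Nothing comparable exists for, say, Double, even though Double is also
// finite, because walking 2^64 values is hardly ever what you want.
```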

Further, we don't deal with the edges too often. Unless there's a strict memory constraint, you just increase the Int* size if you hit the integer boundary. That's why you can generally treat Int (or rather the Int* family) as practically infinite, but that's not backed by a rigorous mathematical proof.

Btw, Float and Double are even more constrained than being a subset of the real numbers: they are a subset of the rational numbers. Even if you extended IEEE 754 to variable size, it would still be countable, because the rational numbers are countable.
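A sketch of the classic countability argument for the rationals, walking the diagonals p + q = 2, 3, 4, ... and skipping non-reduced fractions so each positive rational appears exactly once:

```swift
// Enumerate positive rationals p/q diagonal by diagonal. Every positive
// rational shows up exactly once, which is why the rationals are countable.
func firstRationals(_ n: Int) -> [(p: Int, q: Int)] {
    func gcd(_ a: Int, _ b: Int) -> Int { b == 0 ? a : gcd(b, a % b) }
    var result: [(p: Int, q: Int)] = []
    var sum = 2                              // current diagonal: p + q == sum
    while result.count < n {
        for p in 1 ..< sum where gcd(p, sum - p) == 1 {
            result.append((p, sum - p))      // p/q is already in lowest terms
            if result.count == n { break }
        }
        sum += 1
    }
    return result
}

// The first few: 1/1, 1/2, 2/1, 1/3, 3/1, ...
let firstFive = firstRationals(5)
```

A variable-size float format would embed into this enumeration, so it stays countable no matter how many bits you allow.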

Assuming infinite memory, and pointers that can point anywhere inside it, you can construct something like this to represent irrational numbers. Of course, you wouldn't be able to construct or print them in finite time.

class BigInt {
    class Digits {
        let value: String
        let rest: Digits?   // nil terminates the chain, so the digits are finite
    }
    let digits: Digits
}

struct Irrational {
    class Digits {
        let value: String
        let rest: Digits    // non-optional: the chain of digits never ends
    }
    let integerPart: BigInt
    let fractionalPart: Digits
}

No, this is not how we have chosen to define floating-point types or what's countable. We had this discussion back when Swift Evolution discussed the numeric types and the countable ranges.

Floating-point types model the reals, which are uncountable, not the set of representable values or the rationals. The modeling is imperfect, but that is irrelevant for the purposes of this determination; every model is necessarily imperfect and creating a definition which would apply to everything would then serve no purpose.

The question to answer, rather, is this: is what you're modeling an uncountably infinite set of values or not?

Sure, I don't mind either way. Both are finite data structures modeling uncountable mathematical objects.

I don't think a question like this serves much purpose anyway. We tend to treat Int as infinite and Double as continuous for most usage, despite them not being strictly so. We also treat Int as finite and Double as quantized at times as well.

This is exactly what I wanted to ask, but perhaps phrased poorly. Thanks for helping me clarify it.

In other words, I want to look past what a type can physically represent, and into what it intends to. For example, as several people have noted in this thread, a Double instance has a finite size in memory, so the values are all rational, and so the set of all possible instances is countable. However, the existence of Double.pi indicates that it intends to model both rational and irrational numbers, a set which is uncountable.

Great discussion. Yes, [String] is uncountable per Cantor's diagonal argument. That said, what about the (Void) -> Void type? Arbitrary computations are not equatable due to the halting problem. We conclude that (Void) -> Void is impossibly uncountable.

An infinite sequence of bits is countable as well. But you are right. I should have written [(Void) -> Void] instead of (Void) -> Void. With that, we have a (potentially infinite) sequence of (potentially infinite) programs, uncountable per Cantor's diagonal argument.

[T] may be unbounded, but any given instance is finite. Also, a description of a program is (or can be represented as) a finite bitstring (as @Lantua notes), so [() -> ()] really models all finite sequences of finite bit strings, which is perfectly countable.

So, the cardinality of [Int] or String is at least that of Double. If Double is already uncountable, then [Int] and String are uncountable too.

There is a lot of confusion, because some people say Foo when they mean an instance of Foo, and some people say Foo when they mean the set of all possible instances of Foo.