I can't agree that `Double` has a size limit. Its numerical representation depends entirely on the number of bits allotted to the mantissa and the exponent, and giving it a few more bits changes nothing about that. If someone were to build a variable-size IEEE-754, then sure, that becomes an infinite (though still countable, since every value is a finite bit string) type, but that is definitely **not** `Double`.

`Float` and `Double` are countable. They represent only a subset of the real numbers: `Float` has fewer than 2^32 members, and `Double` fewer than 2^64. You can simply enumerate 0 through 2^32 − 1 and reinterpret each bit pattern as a `Float`, ignoring the duplicated values.
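That enumeration can be sketched in Swift with `Float(bitPattern:)`, which reinterprets a `UInt32` as an IEEE-754 single-precision value (a minimal illustration, not a practical loop over all 2^32 patterns):

```swift
// Decode the first few 32-bit patterns as Floats.
for bits in UInt32(0)...4 {
    let value = Float(bitPattern: bits)
    print(bits, value)  // 0 is +0.0; 1...4 are tiny subnormals
}

// The full enumeration would walk UInt32.min...UInt32.max, visiting
// every possible Float (all NaN payloads included) exactly once.
assert(Float(bitPattern: 0x3F80_0000) == 1.0)
```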

The same also applies to the standard integer types, e.g., `Int*` and `UInt*`. Not only are they countable (duh), but they're also strictly finite: there are exactly 2^32 elements in `Int32` by definition, and likewise for the rest of the `Int*` and `UInt*` family. The tricky one is `Int`, since *technically* some system could back it with a `BigInt`, making it infinite, but it would still be countable nonetheless.
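As a quick sanity check on the finiteness claim, the span of `Int32` can be counted directly (a throwaway sketch):

```swift
// Number of distinct Int32 values: max − min + 1, computed in Int64
// so the arithmetic itself can't overflow.
let span = Int64(Int32.max) - Int64(Int32.min) + 1
assert(span == 1 << 32)  // exactly 2^32 = 4_294_967_296 values
```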

Essentially, anything with an upper bound on the size of its in-memory representation is *finitely* countable. A `Dictionary` with countable `Key` and `Value` types is also countable.

It's a headache to think about cardinality in normal programming, and it's not that useful anyway: the fact that something is countable doesn't make the enumeration easy to compute. That's why `CaseIterable` is kept separate from the regular `enum` machinery.
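The separation shows up directly in the language: `allCases` only exists when you opt in, even though the compiler can synthesize it for simple enums. A minimal example (the `Direction` enum is made up for illustration):

```swift
// CaseIterable is opt-in: conforming gets a synthesized `allCases`
// for enums without associated values. A plain enum has no such list.
enum Direction: CaseIterable {
    case north, south, east, west
}

assert(Direction.allCases.count == 4)
assert(Direction.allCases.first == .north)
```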

Further, we don't deal with the edges too often. Unless there's a strict memory constraint, you just bump up the `Int*` size when you hit the integer boundary. That's why you can generally treat `Int` (or rather the `Int*` family) as practically infinite, but that's a practical convention, not a rigorous mathematical proof.
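The "bump up the size" move can be sketched with the standard overflow-reporting API (a minimal illustration of detecting the boundary and widening past it):

```swift
// Int32 can't represent 2^31, so the addition reports overflow...
let (_, overflowed) = Int32.max.addingReportingOverflow(1)
assert(overflowed)

// ...but widening to Int64 absorbs the same value with room to spare.
let widened = Int64(Int32.max) + 1
assert(widened == 2_147_483_648)
```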