Why does Swift allow you to initialise an `Int` with a `UInt64`?

Swift allows you to initialise an Int with a UInt64 value that may be too large to fit in an Int, resulting in a runtime crash. Isn't this exactly the kind of runtime error that Swift's strict type system is supposed to eliminate? Why is this the case?

The only reason I can think of is that the size of Int is system dependent so the compiler might not know how many bits an Int will have at runtime. But IMO it should at least force you to use an explicit cast in these situations.


Using an initializer that takes a single unlabeled argument is the Swift way of (safe) explicit casting.


Using an initializer that takes a single unlabeled argument is the Swift way of (safe) explicit casting.

Well, it's not "safe" if it can crash at runtime. Other initialisers for Int that can fail (e.g., the one taking a String as an argument) are failable and return nil if the conversion cannot be made.


Swift uses the word "safe" to mean "memory safe". Crashing at runtime does not make something un-memory-safe.

Note that Int has a lot of initializers, including Int(exactly:), which returns an optional, as well as Int(clamping:), Int(truncatingIfNeeded:), and Int(bitPattern:) among others. You can choose the behaviour you mean. For the default, unlabelled initializer, it has been decided that unrepresentable values are a kind of invalid input where there is no reasonable default behaviour for all scenarios, and so it should trigger a runtime error.
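For illustration, here is how those labelled initializers differ for a value that doesn't fit (results assume a 64-bit platform, where Int has the same width as UInt64):

```swift
let big: UInt64 = .max  // 18446744073709551615, too large for Int

Int(exactly: big)             // nil: failable, no trap
Int(clamping: big)            // Int.max: saturates at the representable bound
Int(truncatingIfNeeded: big)  // -1: keeps the low bits, reinterpreted as signed
Int(bitPattern: UInt(big))    // -1: reinterprets the raw bit pattern
// Int(big)                   // the unlabelled initializer would trap here
```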


Why is a UInt64 that is too large to be converted to an Int any different to trying to do Int("abc")? Both are unrepresentable values and a kind of invalid input. They should both behave in the same way, which is that the initialiser should be failable and return nil in the case where the input is invalid.


That's a perfectly reasonable viewpoint, but you're describing a different language.


With all due respect, that doesn't answer my question. Why is Int(_ s: String) failable and Int(_ u: UInt64) not? What is the justification for the difference?


Not only is it “not unsafe,” it’s affirmatively the means by which Swift delivers its promised safety: a program that does not run cannot corrupt memory. This is a pattern adopted pervasively throughout Swift.


he's got a point indeed.

i'd say the two initializers historically were either done by the same person at different times or by different people, that's how it usually happens. probably too late to change, but you can always introduce your own initializer or a set of initializers that behave consistently:

Int?(optional: UInt64) Int?(optional: String)
Int(throwing: UInt64) throws Int(throwing: String) throws
Int(crashes: UInt64) Int(crashes: String)
Int(truncates: UInt64) Int(truncates: String)
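A minimal sketch of what such an extension might look like. The labels optional: and throwing: are the hypothetical names from the list above, not standard library API, and the failable variants just delegate to the existing Int(exactly:) and Int(String) initializers:

```swift
struct IntegerOverflowError: Error {}

extension Int {
    // Failable variants: nil on overflow or unparseable input.
    init?(optional value: UInt64) {
        guard let v = Int(exactly: value) else { return nil }
        self = v
    }
    init?(optional string: String) {
        guard let v = Int(string) else { return nil }
        self = v
    }

    // Throwing variants: same checks, expressed as thrown errors.
    init(throwing value: UInt64) throws {
        guard let v = Int(exactly: value) else { throw IntegerOverflowError() }
        self = v
    }
    init(throwing string: String) throws {
        guard let v = Int(string) else { throw IntegerOverflowError() }
        self = v
    }
}
```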


now i know an easy way to make a bug free app!!!


It is practical for the caller to check the preconditions required in the integer case (UInt64 to Int) beforehand. Therefore, Swift gives you the flexibility to choose how you want to do that and only checks your work.

It is not practical for the caller to check that conversion from String will succeed without substantially implementing the conversion algorithm itself. Therefore, Swift tells you whether it succeeded or not after the fact. Because it is more broadly useful only to distinguish success from failure in this case rather than different failures from each other, the initializer is failable (returns an optional result) rather than throwing.
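As a sketch of the two styles described above (the value 12345 is just a placeholder):

```swift
let u: UInt64 = 12345

// Integer case: the caller can check representability up front...
if u <= UInt64(Int.max) {
    let i = Int(u)  // guaranteed not to trap
    print(i)
}

// ...or pick a labelled initializer that does the check for you:
let checked = Int(exactly: u)  // Optional(12345)

// String case: checking parseability up front would mean re-implementing
// the parser, so the library reports success or failure afterwards:
let parsed = Int("123")  // Optional(123)
```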

See the document describing the error handling rationale in the Swift repository for more on these ideas. You will see this reasoning applied broadly throughout the standard library design.


API design is a delicate balance of many competing factors - ergonomics, correctness, performance, maybe even language limitations, etc. A language's standard library is even more delicate, because literally every single person who uses the language uses it.

In some sense, it's a similar issue to Array's subscript. There are plenty of threads where people wish it would return an optional, rather than crash in the (hopefully) rare case that you try to, say, access element 100 from a 10-element Array.

Ultimately, the standard library designers considered that it would be incredibly tedious if they tried to ensure that code never crashed by making everything optional. It would propagate through the language and make even the simplest of applications an absolute nightmare to develop or maintain. And what is actually the tangible benefit? The code might still contain all kinds of subtle mistakes or miscalculations. You're always responsible for the correctness of your code.

Which is to say - it is, certainly, a reasonable request, I'm sure the standard library authors considered it, but from 2 designs that are both reasonable in their own way, they chose this one. Where possible, Swift tries to diagnose issues statically - but "possible" doesn't just mean "technically possible". It has to be pragmatic and not penalise the ergonomics of correct code, too.


array subscript is a good example that also jumped to my mind. if array subscript was returning an implicitly unwrapped optional then 99% of code would look like today or like "array[index]!" at worst - not too bad looking - with the same effect we have now in either case. minority of code would check for nil and do something different in case of nil. every time i have to introduce my own version of a non crashing array subscript (to handle that remaining 1%) i blame that API decision to "crash by default". OTOH there would have been more "!" than we have now. trade-offs.
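The "own version of a non-crashing array subscript" mentioned above is commonly sketched as an extension like this (the safe: label is a convention, not standard library API):

```swift
extension Collection {
    // Returns nil instead of trapping when the index is out of bounds.
    subscript(safe index: Index) -> Element? {
        indices.contains(index) ? self[index] : nil
    }
}

let a = [10, 20, 30]
a[safe: 1]   // Optional(20)
a[safe: 99]  // nil
```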


The justification is that Swift integer types model the integers, ℤ.

They do not model “integers modulo n”.

• • •

The fact that a fixed-width integer value can only physically represent a subset of the integers is immaterial. The type itself models the whole set.

Thus, an integer which overflows the type is still valid in the model. It just cannot be represented in the limited space available.

• • •

The decision to crash on overflow follows from that design choice. An out-of-range integer is still an integer, and it is still valid in the model. Thus, being unable to represent it is a hard error.

This is why operations like + trap on overflow rather than wrap: it represents addition in the integers, not in the integers modulo n.

Similarly, converting from a larger integer type to a smaller one is not a conversion at all in the model, since both types model the integers. If the destination type cannot fit the value, that is an overflow error.

• • •

In contrast, String can hold values which do not represent integers at all, thus the conversion produces an optional.

The one apparent inconsistency is that strings which do hold an integer, but its value overflows the destination type, produce nil rather than trapping:

Int8("500")     // nil

This can be explained as “attempt to convert the string to a representable value” rather than “attempt to convert the string to an integer, then store the integer”.
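The labelled Int8(exactly:) initializer gives the same "not representable" outcome for an out-of-range integer source, which makes the parallel visible:

```swift
Int8("500")         // nil: parseable, but not representable in Int8
Int8("abc")         // nil: not an integer at all
Int8(exactly: 500)  // nil: same unrepresentable-value outcome, via the labelled initializer
```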


I guess if you really wanted to nitpick the API surface, the initialiser Int?(_ string:) maybe should have been given a label like Int(from:). I tend to give failable inits labels for most of my own types as it flows better when reading, and I feel like most of the Swift standard library follows that convention too (from memory, haven't checked).

But regardless, this is an intellectual discussion more than anything as I highly doubt Swift would suffer that churn at this point in its life. The status quo is 'fine'.

by this logic UInt.init(Int) shall be optional as "UInt models ℕ₀"....

hope you don't mean Int8("500") shall trap while Int8("five hundred") return nil - that would be quite inconsistent :-)

what xwu wrote above is good enough justification to me. and in places i strongly disagree with the language - (not the topic being discussed but in general) - i just "modify the language locally" to my personal preferences (by using custom extensions).

This comes up often in a few different forms. Here's a similar one: myDictionary[myString] returns an optional, but myArray[myInt] does not.

There is no technical reason why dictionaries could not also follow the requirement to check your dictionary contains a key before looking up the value with it, the same as array requires you to check the index is within bounds. Similarly, array element lookup could return an optional like dictionaries do. So why are they different?

The big difference is that it is common – you could even say nearly always the case – that the integer you're using as an index is provably within bounds without it needing checking (trivial example: for i in a.indices { a[i] += 1 }). In these cases, the only possible cause of a trap is programmer error, not input error. On the other hand, it's common – not always the case but certainly more often than not – for dictionary keys to be expected to be absent for some inputs. So we handle those differently, and people should use the force-unwrap to indicate that a missing key would be a logic error.

Int(_ s: String) vs Int(_ u: UInt64) follow the same pattern. Strings are expected to often not be integers and so it's more convenient/appropriate to combine checking and converting, and to bang it when you know for certain it'll never fail. Whereas the expectation is that it's pretty rare for a UInt64 conversion to be expected to fail, more likely if it did it'd be due to some logic or systemic error that would justify a trap, and so if/when that's a possibility you're expected to check it explicitly.
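The dictionary/array contrast described above looks like this in practice:

```swift
let scores = ["alice": 10]
let values = [10, 20, 30]

scores["bob"]     // nil: a missing key is an expected outcome, so lookup is optional
scores["alice"]!  // 10: force-unwrap when absence would be a logic error

for i in values.indices {
    _ = values[i]  // index is provably in bounds: no optional, no check needed
}
```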

p.s. I'm not looking to argue if the unsigned-to-signed conversion trapping is normally a logic error or not... just offering this explanation for why the difference


Thanks for all the replies, especially those of @xwu and @Ben_Cohen, which helped me to understand why the API is the way it is.

I was merely curious, not suggesting that it should be changed.


This behaviour also mirrors the trapping behaviour of common arithmetic operators. E.g.:

let a: UInt8 = 255
let b = a + 1 // trap!

While one could make the addition operator return an optional (which wouldn't even be possible for the += operator), it would make any arithmetic operations amazingly unwieldy.

Initialising a signed Int with an unsigned Int which causes overflow is a very similar operation.
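For completeness, the standard library does offer non-trapping arithmetic when wrapping or explicit overflow handling is what you mean, mirroring the labelled integer initializers:

```swift
let a: UInt8 = 255
// a + 1                    // traps at runtime
let wrapped = a &+ 1        // 0: wrapping addition, modulo 2^8
let (sum, overflow) = a.addingReportingOverflow(1)
// sum == 0, overflow == true
```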


Theoretically one should retrieve Array indices from the Array instance itself; that is, Array is a Collection.
