Something that’s still an annoyance to me in Swift is conversion between integer types of different sizes and signedness: many mismatches result in errors that require either refactoring to one common type, or boilerplate to convert the values. The end result seems to be that most people just use a plain Int for everything, including cases where negative values aren’t required or are even invalid (which then requires code to check for them, or to handle any faults).
This can be highlighted with the following simple example:
var a: Int64 = 12345
let b: Int16 = 123
a += b  // error: binary operator '+=' cannot be applied to operands of type 'Int64' and 'Int16'
The last line currently produces an error because the two types differ, which is really just an annoyance: there is clearly ample room for the addition to succeed, yet I’m required to write a += Int64(b) instead, which seems unnecessary.
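For reference, here is what the language requires today. The explicit conversion is always safe in this direction, since every Int16 value fits in an Int64:

```swift
var a: Int64 = 12345
let b: Int16 = 123

// a += b       // does not compile: mismatched integer types
a += Int64(b)   // explicit widening conversion required today

print(a)        // 12468
```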
I’d like to propose that Swift implicitly convert values to an integer type of larger size (so in this case there would be no error, since an Int64 can represent every Int16 value). In the opposite direction an error would still occur, but it could be suppressed with the overflow operator, allowing concise narrowing casts, for example:
var a: Int16 = 123
let b: Int64 = 12345
a &+= b  // proposed: truncate b into Int16's range, with overflow explicitly permitted
Lastly there’s the case of unsigned vs. signed integers, where similar rules should apply: a UInt32 can be safely converted to an Int64, but converting it to an Int32 would require the overflow operator (as Int32 can’t represent the full range of positive values that a UInt32 can). The trickier case is converting signed to unsigned types, since a negative value can’t be represented at any size; I’d say allowing the developer to opt into overflow behaviour in these cases is fine too.
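For completeness, recent Swift (4 and later, with the SE-0104 integer protocols) already ships explicit initializers covering each of these narrowing policies; what this proposal asks for is essentially an implicit form of them:

```swift
let big: Int64 = 70000   // doesn't fit in Int16
let neg: Int32 = -5      // can't be represented unsigned

// Optional-returning: nil if the value doesn't fit.
let y = Int16(exactly: big)             // nil
let u = UInt32(exactly: neg)            // nil (negative value)

// Truncating: keep the low bits, like the proposed &-style cast.
let z = Int16(truncatingIfNeeded: big)  // 4464 (70000 mod 2^16)
let v = UInt32(truncatingIfNeeded: neg) // 4294967291 (two's-complement bit pattern)

// Clamping: saturate to the target type's range.
let w = Int16(clamping: big)            // 32767
```

(The unlabelled form, e.g. Int16(big), is the checked conversion and traps at runtime when the value doesn’t fit.)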
Essentially what I want is a better system for converting between integer types. My preference is to use the type that most efficiently stores the range of values I require, particularly in arrays, but I keep finding myself adding extra conversion code around these values, which slows down development and makes for messier-looking code, even though most of the time I’m just taking a smaller type and manipulating it within a larger one.
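As a stopgap, the widening half of this could be approximated today with a generic operator overload. This is my own sketch, not an existing standard-library feature, and it still traps (rather than erroring at compile time) if the right-hand value doesn’t fit:

```swift
extension FixedWidthInteger {
    // Heterogeneous +=: widen (or check) rhs into Self, then add.
    // Safe whenever Self can represent every value of Other; traps otherwise.
    static func += <Other: BinaryInteger>(lhs: inout Self, rhs: Other) {
        lhs += Self(rhs)
    }
}

var a: Int64 = 12345
let b: Int16 = 123
a += b       // now compiles: Int16 is widened to Int64
print(a)     // 12468
```

The downside of doing this in library code is that it can’t distinguish always-safe widenings from possibly-trapping narrowings at compile time, which is exactly why I think the rule belongs in the language.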
Anyway, I’m wondering what others think. I know it may seem fairly minor, but I tend to work with numbers a lot, and always having to think about what each variable needs to be converted to or from slows me down.