Because that's the documented syntax and semantics of the operators, which are requirements defined in Swift's numeric protocols.
Their signature is (Self, Self) -> Self, meaning that the result--if one is returned--must be a valid instance of the same type as the operands. (And, yes, implicit promotion has been explicitly rejected as an option for Swift.)
It is documented that overflow (which is to say, returning a number* of type Self other than the actual result of the arithmetic operation) is disallowed.
There are three and only three ways of expressing such disallowance in Swift: throwing (which the signature does not permit), returning nil (which the signature does not permit), and never returning. I suppose you could have the implementation go into an infinite loop instead of trapping in order never to return, but I don't see why you would.
* I suppose you could design a type that has representations for infinity and NaN instead, but that's its own bag of worms.
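For what it's worth, the standard library already exposes each of the alternatives explicitly through dedicated APIs, so the trapping `+` is a deliberate default rather than the only option (shown on Int8 for brevity):

```swift
let x = Int8.max                         // 127
// let y = x + 1                         // would trap at runtime: arithmetic overflow

let wrapped = x &+ 1                     // overflow operator: wraps to -128
let (partial, didOverflow) = x.addingReportingOverflow(1)
// partial == -128, didOverflow == true
let clamped = Int8(clamping: 200)        // saturates at Int8.max (127)

print(wrapped, partial, didOverflow, clamped)
```

All of these are existing FixedWidthInteger APIs; none of them change the signature of `+` itself.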
The proposed meaning of a constrained type is that it doesn't create a new type, so as far as the operator is concerned it's (Int, Int) -> Int. It's the assignment of the result where Height comes back into the picture. It's that assignment which I'm proposing be throwable (iff it's not determined at compile time that it cannot throw).
Mutating operators, if truly functionally equivalent to syntactic sugar (i.e. a = a + b == a += b), would naturally follow suit. I don't recall off-hand, however - does Swift require explicit implementation of e.g. +=, or is it implied by an implementation of +?
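On that question: in current Swift it's implied - AdditiveArithmetic declares += as a requirement but supplies a default implementation in terms of +, so conforming types only need to implement +. A rough sketch of the mechanism (the protocol and type names here are made up for illustration, not the actual stdlib code):

```swift
// A protocol can require `+` and supply `+=` for free via a
// protocol extension, the way AdditiveArithmetic does.
protocol Addable {
    static func + (lhs: Self, rhs: Self) -> Self
}

extension Addable {
    // Default mutating form, defined in terms of the non-mutating one.
    static func += (lhs: inout Self, rhs: Self) {
        lhs = lhs + rhs
    }
}

struct Meters: Addable {
    var value: Int
    static func + (lhs: Meters, rhs: Meters) -> Meters {
        Meters(value: lhs.value + rhs.value)
    }
}

var m = Meters(value: 1)
m += Meters(value: 2)   // uses the default `+=` from the extension
print(m.value)          // 3
```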
I agree entirely that it should be caught and handled gracefully. I phrased my prior post badly - sorry about that.
What I meant to convey is that trapping is not the only conceivable approach. When I referred to it as a design flaw I did so with the caveat that we don't know how to do better. Given what we know how to do, it's a regrettable wrinkle but the result of a net-positive compromise: Swift explicitly requires try around throwable code, as opposed to just allowing any code to throw. I think this is the right choice overall (and it particularly makes sense, historically, given Swift's "exceptions" are really derived more from NSError than C++-style exceptions). But, if it didn't, then having operators throw on overflow would be the obviously superior solution to overflow (in isolation), IMO.
Huh? Of course it creates a new type. One of the fundamental definitions of a type is the set of values a variable can have. Swift is a statically typed language; the whole point is to have the compiler enforce these things (and hence the title of this thread).
If this is how you feel, why are you using Swift? I mean this completely genuinely, not in a snarky sense. If you really believe those are better semantics, you can use any of the innumerable languages that either wrap or make overflow UB. Making such conditions fatal errors is really fundamental to the philosophy of Swift.
See my earlier posts as to why the approach where it does create a new type seems intractable, or at least to have all these concerns and many more. There's no physical or logical law that says it has to create a new type - see also my prior examples and explanation of how this is essentially equivalent to a macro; syntactic sugar, as it were, for constraints.
I don't think it's a foregone conclusion that Swift has to be unwaveringly compile-time safe. I don't think that's the intent or spirit of the language at all, and it certainly isn't how it is to date - witness the very discussion point about trapping on overflow (which is a runtime failure in lieu of preventing the issue at compile time). Consider also the recently added dynamic properties / methods, the existence of the try keyword at all, etc.
If Swift is to fulfill its stated purpose of being a language with wide dynamic range - from OS kernels to shell scripts - it needs to gracefully & dynamically adjust between the expectations of those environments. Allowing the author to gracefully defer validation from compile time to runtime, at their discretion, is a good means to that end.
A counter-proposal was that the compiler should refuse to compile code which doesn't explicitly prove that the constraint is preserved. I want that too - it's not mutually exclusive with also providing a way to "satisfy" the compiler's demands by demonstrating to it that you'll handle it at runtime instead. That relates back to a prior wish I expressed (albeit possibly in a different thread) that, as a general feature, Swift allow try et al to be omitted if it's clear to the compiler that the specific use of the throwable thingy won't actually throw (e.g. because the compiler can see into the method implementation and see that none of the throws are reachable given the possible arguments from the call site).
Poking holes in type safety is absolutely a nonstarter for Swift. What you're describing is an entirely different language, with error handling and type system choices that completely contradict those that have been made for this language.
Forgive my inexperience with how the compiler and related code functions. Is it unreasonable to implement the syntax I proposed as sugar on top of the @compilerEvaluable work being done?
My desired use case is for enforcing constraints on the state of an object.
// Given some type like this
class Dependency {
    enum State {
        case unbuilt
        case built(result: Bool, objects: Foo)
    }
    var state: State
}

// Some shorthand syntax for constraining values
typealias BuiltDependency = Dependency where self.state == .built, self.state.result == true

// Maybe this where clause is better expressed as a function.
// I'd like to say this function can only be called on a dependency
// that has been built with result == true.
func download(dependency: BuiltDependency) {
    someFunction(foo: dependency.state.objects)
}

func someFunction(foo: Foo) { }
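For comparison, here's a hedged sketch of how roughly the same guarantee can be expressed today with a wrapper type and a failable initializer (the `Foo` stub and the `BuiltDependency` wrapper are illustrative, not proposed API):

```swift
struct Foo {}

class Dependency {
    enum State {
        case unbuilt
        case built(result: Bool, objects: Foo)
    }
    var state: State = .unbuilt
}

// A wrapper that can only be constructed from a successfully
// built dependency, so holding one is proof of the constraint.
struct BuiltDependency {
    let objects: Foo
    init?(_ dependency: Dependency) {
        guard case .built(result: true, objects: let objects) = dependency.state else {
            return nil
        }
        self.objects = objects
    }
}

func someFunction(foo: Foo) { }

// Callable only with a dependency already proven to be built.
func download(dependency: BuiltDependency) {
    someFunction(foo: dependency.objects)
}

let dep = Dependency()
dep.state = .built(result: true, objects: Foo())
if let built = BuiltDependency(dep) {
    download(dependency: built)
}
```

The cost, as discussed later in the thread, is the boilerplate of defining and threading the wrapper through everywhere.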
Philosophy aside, Swift does trap when there's an irrecoverable logic error, and throws when there's a recoverable (possibly environmental) one.
At times it does make sense to recover from a failure to restrict a variable (to, say, Height), just as often as it makes sense to simply trap. The restriction isn't limited to logic errors, but also input-related ones. I'd say that having a one-size-fits-all typealias won't be a good fit.
I'd rather have a low-cost (potentially free) struct with a @constantEvaluable initializer:
struct Height {
    var value: Int

    @constantEvaluable
    init(_ newValue: Int) {
        precondition(newValue >= 0 && newValue <= 20)
        value = newValue
    }
}

struct ThrowingHeight {
    var value: Int

    @constantEvaluable
    init(_ newValue: Int) throws {
        if newValue < 0 || newValue > 20 {
            throw ...
        }
        value = newValue
    }
}
extension Int {
    init(_ height: Height) { self.init(height.value) }
    init(_ height: ThrowingHeight) { self.init(height.value) }
}
Some of @wadetregaskis' concerns also seem legit, especially the one about Height + Int.
Many restricted calculations won't be properly invertible (w.r.t. most operations). Whether or not + traps, it doesn't even make sense to add two Heights together, and many restricted types would be similar. So it'd make sense that Height wouldn't have many of the operations Int enjoys. Rather, there are at least two ways to tackle this:
Reimplement any operations that would make sense on Height.
Forgo checked types altogether, and recheck (subject to compiler optimization) whenever a value mutates:
let newValue = Height(Int(oldHeight) + diff)
or, if we allow Height to be an Int with compile-time checking:
let newValue = oldHeight + diff // newValue is now Int
let checkedValue: Height = oldHeight + diff // checkedValue is now Height
Right, that's the "wrapper" approach I mentioned [way] earlier, which is absolutely doable today - no changes required. I can't speak for the thread creator's intent, but my interpretation of it - and certainly what I'm looking for - is that we're trying to figure out if there's a better way. Better meaning much less verbose and/or with much less manual labour, in a nutshell.
Re. Height + Int, another possibility is that the compiler automatically makes the result Height iff it (as known at compile time) meets Height's constraints, else devolves it to Int. I think I dismissed this earlier, but upon revisiting it, maybe it is viable.
That would allow the following to work naturally:
var a: Height = 8
var b = a + 1 // b is implicitly a Height - could also be explicitly typed that way.
Similarly:
var a: Height = 8
var b = a + 3 // b is implicitly an Int.
Only if you then explicitly typed b as Height, or otherwise used b where a Height is expected, would the compiler complain. And you'd have to rectify it suitably (e.g. by adding verbiage around the usage to convince the compiler that b really is a valid Height, or using try to defer the question to runtime, etc).
Of course, this is all just thinking from a code writer's perspective. How the compiler knows that it should do this magic is of course an important question. At a quick thought, it seems that as long as Height were an actual subtype in some sense, and the operators were appropriately defined to return Self rather than Int explicitly, then the compiler would know to prefer Height for Self (but know it can fall back to the "parent" type, Int, if needed). Methods that genuinely intend to return Int would presumably be defined to do so, not Self. They'd continue to unconditionally return Int when used on Height.
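The "return Self rather than Int" part can be sketched with today's protocol extensions, even though the subtype relationship itself remains hypothetical (all names below are made up for illustration):

```swift
// A Self-returning operation preserves the concrete type of its
// operand, which is the behavior being proposed for Height + Int.
protocol SelfAddable {
    var raw: Int { get }
    init(raw: Int)
}

extension SelfAddable {
    static func + (lhs: Self, rhs: Int) -> Self {
        Self(raw: lhs.raw + rhs)
    }
}

struct Height: SelfAddable {
    var raw: Int
    init(raw: Int) { self.raw = raw }
}

let h = Height(raw: 8) + 1
print(type(of: h), h.raw)   // the result is still Height, not Int
```

What's missing today, and what the proposal would add, is the compile-time decision to fall back to the "parent" type when the constraint can't be proven.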
I don't think the wrapper is any less verbose; only that, if we're to support both kinds of failure handling, it's probably one of the few ways to do so.
That wouldn't be very Swifty: many things the compiler can do automatically have to be asked for by the user, e.g. Codable conformance. It'd be source-breaking as well if the compiler could suddenly infer more, which wouldn't be very good, especially when this should be transparent to programmers.
The examples in this thread lead me to think that you would like Height to inherit the full set of Int operations automatically; that's a very dangerous source of bugs.
var a: Height = ...
var b: Height = ...
var c = a + b
makes perfect sense, but c = a*b either should be disallowed, or should produce some other type (HeightSquared). a / b should be dimensionless (Int?). Does Height.bitWidth make sense? What about a &+ b or any of the innumerable other operations defined on Int?
This is very dangerous, and would be the source of countless bugs. Far better to define:
struct Height {
    @Clamping(0...10) var value: Int
}
and expose only the operations that make semantic sense for this new type. Yes, this currently involves a bunch of boilerplate. We should consider features that reduce the burden of that boilerplate, but encouraging blindly importing all the operations of a type, rather than selecting the appropriate ones, is probably a step too far.
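For reference, a minimal sketch of a `Clamping` property wrapper like the one assumed above (modeled on the well-known example from the property-wrappers proposal; not standard library API):

```swift
@propertyWrapper
struct Clamping<Value: Comparable> {
    private var storage: Value
    private let range: ClosedRange<Value>

    init(wrappedValue: Value, _ range: ClosedRange<Value>) {
        self.range = range
        // Clamp the initial value into range as well.
        self.storage = min(max(wrappedValue, range.lowerBound), range.upperBound)
    }

    var wrappedValue: Value {
        get { storage }
        set { storage = min(max(newValue, range.lowerBound), range.upperBound) }
    }
}

struct Height {
    @Clamping(0...10) var value: Int = 0
}

var h = Height()
h.value = 15
print(h.value)   // out-of-range writes are clamped to 10
```

Note this silently clamps rather than trapping or throwing, which is a third failure-handling policy on top of the two discussed above.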
If we look at constrained types as a way to reduce condition-checking boilerplate, we should at least agree that the user shouldn't have to predict what the compiler can and can't constant-evaluate. So the design should be able to handle the case where nothing is constant-evaluated.
To that note, I'd suggest that we use as? and as! to do the checking. These operators do provide a means to signal when a conversion fails.
// Compiler will compile-time evaluate these conditions with best-effort policy
typealias Height = Int where self >= 0 && self < 20
let checked = 19 as! Height
// This is the same as
//
// let temp = 19, checked: Int
// if temp >= 0 && temp < 20 {
// checked = temp
// } else {
// fatalError()
// }
// Mutation invalidates the constraint
let mutatedUncheck = checked + 2 // Is of type Int
// Force it back to Height; emits a WARNING if the compiler can constant-fold this case, traps at runtime if not
let mutatedChecked = (checked + 2) as! Height
// Try to convert it to Height, get nil if constraint isn't met
let mutatedOptional = (checked + 2) as? Height
And in-place mutation would recheck the constraint (subject to compile-time optimization):
var mutatingChecked = 12 as! Height
mutatingChecked += 3 // Still Height
mutatingChecked += 6 // Still Height, emit warning if constant-folded, trap if not
At first read I like the way using as looks. If we went this route, it would be nice to be able to use as without ?/!: the compiler would allow it if it could check the constraint at compile time, and give a compile error otherwise. You would still be able to attempt the conversion at runtime with as? and as!.
I intentionally left out as. As I mentioned in the post:
The current compiler may be able to constant-fold and constraint-check an expression (and suggest dropping the ! from as!), but that doesn't mean it'll still be able to moving forward, or even under less-optimized compilation options.
When I omit a type and let the compiler infer it, I end up in a similar situation: if the compiler succeeds then my code is succinct, and if the compiler can't figure out the type I get an error, and then I can choose to tell it the type, or refactor, or maybe I just need to fix my code. I would not be surprised if type inference were a poor analogy from a compiler-technology perspective, but the user experience feels similar enough.