It's an interesting way to word it.
Yes, a compiler couldn't tell you. But then - why are we even asking a compiler? Compilers turn source code into executable code; they don't validate URLs or check domain names! ...do they?
The benefit of having a compiler do this (via compile-time evaluation) is that the results can be statically guaranteed. And it's essentially just an optimisation: we simulated the result, so we can constant-fold a bunch of logic away.
But if we can't do that - because the library is part of an ABI-stable SDK, or the standards are not stable enough for that kind of guarantee, or the library is too complex, or whatever other reason - why are we still asking the compiler?
I said before that I think linting is the way to go here: build-time input validation delivered by packages but not evaluated by the compiler. Libraries would tag functions/initializers (@lintable?), and the compiler would gather all the inputs it manages to constant-fold and pass them to a package plugin. The plugin checks them, the IDE can show a little indicator at the call-site to show that the tool is happy with the value, and if something can't pass the build-time tool, it might even be worth failing the build.
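To make that concrete, here's a rough sketch of the kind of check such a plugin might run on each literal the compiler hands it. Everything about the handoff is hypothetical - the @lintable attribute and the way folded inputs reach the plugin don't exist - but the validation logic itself is ordinary Swift using Foundation's URLComponents:

```swift
import Foundation

// Hypothetical plugin-side check. Assume the plugin receives the string
// literals the compiler managed to constant-fold at tagged call-sites.
func lintURLLiteral(_ literal: String) -> String? {
    guard let components = URLComponents(string: literal) else {
        return "'\(literal)' is not a parseable URL"
    }
    guard components.scheme != nil else {
        return "'\(literal)' has no scheme - did you mean https://\(literal)?"
    }
    return nil  // passes: the IDE could show its indicator here
}

// Simulating a plugin run over two folded call-site inputs.
for input in ["https://example.com/api/v1", "example.com/api/v1"] {
    if let diagnostic = lintURLLiteral(input) {
        print("lint failure: \(diagnostic)")  // could even fail the build
    }
}
```

The nice property is that the library, not the compiler, owns the rules - tightening the check to a stricter grammar would just be a package update.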
That would actually be build-time validation, in a way that can be delivered in a realistic timeframe (right?), and that scales to ABI-stable SDKs, unstable standards, and other complex libraries. It wouldn't change anything in the type system - you'd still deal in optionals like today - but you'd get that extra level of checking at build time with no configuration needed.
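For instance (a sketch, assuming the hypothetical lint pass has already vetted the literal), call-site code wouldn't change at all:

```swift
import Foundation

// Nothing new at the call-site: URL(string:) still returns an Optional,
// and you unwrap it exactly as today. The build-time check has already
// validated the literal, so the failure path is effectively dead code
// for constant inputs.
guard let endpoint = URL(string: "https://example.com/v1/items") else {
    fatalError("unreachable for a literal the build-time check accepted")
}
print(endpoint.host ?? "")
```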
Even if it only applied to strings, I think a system like that could carry us a long way.