I don't think that we want Int.bitWidth but rather #sizeof(pointer) or something else. What happens when someone tries to do something like CLong.bitWidth? Well, LLP64 says that should be 32 and not 64.
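To make the distinction concrete, here's a small sketch (assuming a typical 64-bit platform; the LLP64 remark describes 64-bit Windows):

```swift
// On common 64-bit platforms, Int is pointer-sized:
print(Int.bitWidth)                              // 64
print(MemoryLayout<UnsafeRawPointer>.size * 8)   // 64
// Under LLP64 (64-bit Windows), CLong is only 32 bits wide,
// so CLong.bitWidth reports 32 even though pointers are 64 bits.
```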
I don't think it'd be practical to allow compile-time evaluation in #if blocks at all. That'd require a pretty deep entanglement of layers in the compiler, since #if can affect what declarations and even what imported modules are available, in turn affecting what name lookup is able to find in #if compile-time expressions. You can make it work with enough rules, but all the rulesets I know of for this kind of thing inevitably lead to confusion.
I don't see why this would be necessary. Just as the outermost @compilerEvaluable function will ascertain that any functions it may call are @compilerEvaluable (or otherwise fail to compile), it can also decide to export their code in the process.
Hey, any updates on this? I would love to see regex validation on StaticString during compilation.
Is there any update on this? I see that both OSLog and atomics use special attributes to ensure that certain parameters are compile-time constants, where that is defined as:
// A "compile-time constant" is either a literal (including
// string/integer/float/boolean/string-interpolation literal) or a call to a
// "constant_evaluable" function (or property) with compile-time constant
// arguments. A closure expression is also considered a compile-time constant
// (it is a constant of a function type).
So some of it appears to be working... maybe?
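For illustration, the underscored spelling the standard library uses internally looks roughly like this (an internal, unsupported attribute, not a language feature; the function itself is my own example, not stdlib code):

```swift
// Internal, unsupported attribute used by OSLog and friends; a function
// marked this way can be folded by the compiler's constant evaluator
// when its arguments are compile-time constants.
@_semantics("constant_evaluable")
public func _shiftedFlag(_ n: Int) -> Int {
    return 1 << n
}
```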
EDIT:
As for use-cases, I'm actually interested in the possibility of overloading generic functions when some of their parameters are compile-time constants (and hence specialisable). For example, the current implementation of String.init<C, Encoding>(decoding: C, as: Encoding.Type) does a number of checks against its generic type:
- First, it checks to see if Encoding is UTF8.self, with various fast paths if it is.
- If not, it goes to String._fromCodeUnits, which is non-inlinable but has @_specialize attributes (i.e. runtime dispatching) for some common collections and encodings (although not for slices of those collections).
- The non-specialised version of _fromCodeUnits again checks Encoding.self, and does some more dispatching based on that.
I'm wondering if we could maybe add an overload of this function where the Encoding.Type parameter is known to be a compile-time constant (i.e. UTF8.self, as opposed to an unspecialised generic or erased type). That means we could do more comprehensive checks to find the fastest path, safe in the knowledge that those checks will be specialised away.
In general you may want to take different paths through an algorithm if some of your branches can be guaranteed constant-folded away.
Ideally, you wouldn't need overloads per se; you would be able to express your conditional logic as regular if statements, and the compiler would constant-fold regular arguments marked as constant-evaluable to get the same effect as specialization via overloading would.
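A sketch of that shape with today's syntax, where `asUTF8` stands in for a hypothetical constant-evaluable parameter (no such marker exists yet; the function and its decoding paths are my own illustration):

```swift
// `asUTF8` is an ordinary Bool here; under the proposal it could be
// marked constant-evaluable, letting the compiler fold the branch away
// whenever the call site passes a literal.
func decode(_ bytes: [UInt8], asUTF8: Bool) -> String {
    if asUTF8 {
        // Fast path: would be specialised in when the flag is a constant.
        return String(decoding: bytes, as: UTF8.self)
    }
    // General path (sketched here as ASCII decoding).
    return String(decoding: bytes, as: Unicode.ASCII.self)
}
```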
Maybe the existing @available annotation could be used.
Something like:
@available(compileTime: true)
public func foo(x: Int) -> Int {
return bar(x: x + 1)
}
Example:
protocol Protocol1 {}
struct Struct1: Protocol1 {}
struct Struct2: Protocol1 {}

// This is possible:
func existentialOperation(_ protocolValue: inout Protocol1) {
    protocolValue = Struct2()
    print(protocolValue is Struct2)
}

// This is not:
/*
func existentialOperation2<T: Protocol1>(_ protocolValue: inout T) {
    // Is this possible? For generics it shouldn't be:
    let newProtocolValue = Struct2()
    protocolValue = newProtocolValue
    print(protocolValue is Struct2)
}
*/

func dispatch(_ protocolValue: Protocol1) {
    print("Protocol")
}
func dispatch(_ structValue: Struct1) {
    print("Struct1")
}
func dispatch(_ structValue: Struct2) {
    print("Struct2")
}

func dispatch2(_ structValue: Protocol1) {
    print("Protocol")
}
func dispatch2(_ structValue: Struct1) {
    print("Struct1")
}
func dispatch2(_ structValue: Struct2) {
    print("Struct2")
}

func testDispatch(_ protocolValue: Protocol1) {
    dispatch(protocolValue)
}
func testDispatch2<T: Protocol1>(_ protocolValue: T) {
    dispatch2(protocolValue)
}

var s: Protocol1 = Struct1()
existentialOperation(&s)

let p: Protocol1 = Struct1()
testDispatch(p)
testDispatch(Struct1())
testDispatch(Struct2())
//testDispatch2(p)
testDispatch2(Struct1())
testDispatch2(Struct2())
dispatch2(Struct1())
dispatch2(Struct2())
Look at existentialOperation for the existential case and existentialOperation2 for the generic case; the latter won't work as expected.
I investigated the dispatching behavior of the generic and the existential case: they behave the same, which is not correct for the generic case. For example, testDispatch2 and dispatch2 should behave the same, but they don't.
I'm not sure how that contradicts my statement. A function can take an inout Protocol and change the dynamic type of the argument's contained value, but the resulting modified existential value still has the same semantics as the new underlying value, and it still uniquely owns the new value.
Sorry for the long break, but I forgot to mention these concerns at the time.
Originally, you stated that generics and existentials are isomorphic, but they aren't: you can't change the generic type T of a value, but you can change the inner type T of an existential (changing the implementation type of the contained value).
Your intention was to replace generic protocol bounds with existentials, but that wouldn't work completely; e.g. the List passed into and the List passed out of a function may not share the same element type, which still requires generic protocol bounds.
From a dynamic context, yes; from a static context, no, as you don't know what is behind it and can't access all the fields of the underlying value.
This is something I don't understand, because I had to pass s by reference and not by value, so s will be shared between the caller and the callee, right?
They are only isomorphic as pure input arguments; func foo(x: P, y: P) has the same domain as func foo<T: P, U: P>(x: T, y: U), language limitations on opening existentials notwithstanding. Indeed, existential outputs or inouts are not equivalent to generic returns or inouts. Sorry if I failed to be specific about that.
I'm not sure what you mean by "my intention". Every existential value has to carry around an independent type variable to represent its dynamic type; internally, the compiler uses OpenedArchetypeTypes to represent these. Ultimately, an existential value is merely a tuple of (T: Protocol.Type, value: T).
If you receive a new existential value, whether from the return value of a function, the result of an inout operation, or a load from shared mutable state, you do need a new opened archetype to represent the new dynamic type. From the perspective of compile-time evaluation, though, dealing with those dynamic types should be the same as dealing with a generic type argument for most purposes.
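The (dynamic type, value) pairing can be observed from ordinary code via type(of:), which reads the type component the existential carries (protocol and struct names here are my own):

```swift
protocol Shape {}
struct Circle: Shape {}
struct Square: Shape {}

var s: Shape = Circle()
print(type(of: s))   // Circle — the stored dynamic type
s = Square()
print(type(of: s))   // Square — a new dynamic type for the new value
```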
inout arguments take unique ownership of the argument value for the duration of the call, and hand it back to the caller when the call ends. They may be implemented as pass-by-reference, but the constraints on inout arguments make them equivalent to a value-result convention; they could also be treated as:
func foo(x: inout P)
func foo(x: __owned P) -> P
The existential erasure itself does not introduce reference semantics; the existential value before the call uniquely owns its value, as does the new existential value after the call.

inout arguments take unique ownership of the argument value for the duration of the call, and hand it back to the caller when the call ends. They may be implemented as pass-by-reference, but the constraints on inout arguments make them equivalent to a value-result convention; they could also be treated as:
Thanks, that wasn't clear to me.

@compilerEvaluable becomes at its limit a synonym for pure (no side effects) + @inlinable (clients are allowed to hardcode the behavior of the current implementation of this function).
And what about non-determinism at compile time? For example, reading a cfg file from the outside at compile time and producing an efficiently generated program embodying the rules specified in the cfg.
If we want to support reading files from compile-time-evaluable code, then ad-hoc file IO is probably not a good model for it; there needs to be some way for build systems to see the dependency, so that they can know to rebuild when the files being read are changed. That way it also doesn't need to be modeled as a non-deterministic operation; each compilation sees the file contents at the time the compilation began.

so that they can know to rebuild when the files being read are changed.
But you don't know which Swift files need a rebuild after updating the cfg.
Can you even make a file immutable? IMO, it belongs to the outside world and is therefore non-deterministic for us; you could change the file after code compilation but before compile-time evaluation.
Okay, maybe by treating the cfg as a resource file which gets injected into the binary.
A better take would be to model the cfg with result/function builders, which is pure Swift, and store it in a Swift file; then the compiler knows what to recompile.
But what about true non-determinism, like downloading a cfg from the web?
Has any progress been made on this proposal? I was recently creating an enum whose values were already defined as macros in a C header. These appear to be expanded too late for the Swift compiler, making them unusable as case values. These were all bit flags, so I decided to capitulate and write the values as a series of bit shifts (e.g. 1 << 2 for 0x4, etc.). In both cases I get the same error:
error: raw value for enum case must be a literal
Are these optimized out later during compilation but not reduced when constructing the AST?
The idea of user definable compile time functions is great, but it seems like there is low hanging fruit around arithmetic operators that could be accomplished without any additions to the language.
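A minimal reproduction of what's being described (hypothetical names; the shift expression folds to a constant in ordinary expression position but is still rejected as a raw value):

```swift
// The shift is trivially foldable in a normal declaration...
let mask: Int32 = 1 << 2   // 4

// ...but the same expression is rejected as an enum raw value:
// enum Flags: Int32 {
//     case read = 1 << 2   // error: raw value for enum case must be a literal
// }
```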

These were all bit flags, so I decided to capitulate and write the values as a series of bit shifts (e.g. 1 << 2 for 0x4, etc.). In both cases I get the same error:
Off-topic, but when you're defining low-level bit flags like this, you almost always want either an integer or a struct type, not an enum. Making these static vars on a struct makes the problem go away, at the cost of some other boilerplate:
struct MyBitfield {
    var _rawValue: Int32
    static var someFlag: Self { Self(_rawValue: 1 << 4) }
    static var someOtherFlag: Self { Self(_rawValue: 1 << 5) }
}
See Swift System for some discussion about why this is preferred, and also lots of examples of using this pattern.
Other times, you can cut down on the boilerplate by simply using an integer type instead, at the cost of some type safety. It really depends on how you're using it. enum, unfortunately, is rarely the right solution for this use case (especially in binary-stable contexts, but also sometimes in other contexts).
Other times, you can cut down on the boilerplate by simply using an integer type instead, at the cost of some type safety.
Type safety is exactly what I want. OptionSets provide the expressiveness I'm looking for, but Java's EnumSet maps more closely to how I think about set operations on enums (granted, Java enums don't come with additional runtime data and can get away with ordinals as bit positions).
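For reference, the OptionSet pattern being alluded to looks like this (type and flag names are my own):

```swift
// Type-safe bit flags via OptionSet; the shifts live in ordinary
// expressions, not enum raw values, so the literal restriction
// never comes into play.
struct Permissions: OptionSet {
    let rawValue: Int32
    static let read    = Permissions(rawValue: 1 << 0)
    static let write   = Permissions(rawValue: 1 << 1)
    static let execute = Permissions(rawValue: 1 << 2)
}

let p: Permissions = [.read, .write]
print(p.contains(.read))      // true
print(p.contains(.execute))   // false
```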
Ultimately though, my question is about compile-time evaluation of literal expressions.

Ultimately though, my question is about compile-time evaluation of literal expressions.
Right, but you don't actually need any fancy new feature for that here; either of the approaches I suggested gets you what you want as a side effect of normal optimization (and note that the struct version has the same type safety that an enum would).