Why is Bool implemented as a struct instead of as an enum?

I just noticed that the Bool type is implemented in the standard library as a struct. Intuitively it would make sense for it to be implemented as an enum. Is there some particular functionality that enums do not provide that Bool needs? Thanks.


My guess is that by using a struct, it's easier to store the underlying LLVM primitive that is used to actually represent the boolean, which in turn means the optimizer has an easier time dealing with it. Since a struct with a single member generally has the same size (and possibly layout) as that member's type, the optimizer can effectively peer directly at the LLVM primitive without the special-casing that a Bool represented by an enum might need.
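You can observe the size equivalence from plain Swift. This is just an illustrative sketch: `Wrapper` is an invented stand-in, and `UInt8` substitutes for the `Builtin.Int1` the real Bool wraps, since Builtin types aren't accessible from user code:

```swift
// A single-field struct, analogous to how the standard library's Bool
// wraps a Builtin.Int1. UInt8 stands in for the builtin type here.
struct Wrapper {
    var value: UInt8
}

// The wrapper has the same size and stride as its only stored property,
// so the optimizer can treat it as the underlying primitive.
assert(MemoryLayout<Wrapper>.size == MemoryLayout<UInt8>.size)
assert(MemoryLayout<Wrapper>.stride == MemoryLayout<UInt8>.stride)

// The real Bool is likewise a single byte wrapping an i1.
assert(MemoryLayout<Bool>.size == 1)
```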


Ah. Ok, makes sense. I was wondering if it had something to do with optimization. Thank you!

Can confirm. These days the compiler is good enough that we probably could use an enum, but back in the pre-Swift-1 days it really did lead to worse code. It does still bug me that something that's so obviously enum-like isn't actually implemented as an enum, but I don't think there are any practical issues with it being a struct.


Just think of all the true vs .true debates we've avoided!


Worse than that, even—early Swift enum case naming conventions would have had us using .True and .False!


Truly terrifying!

They weren't avoided; we just had them before Swift was open source.


Also at this point, changing Bool from a struct to an enum would massively break ABI because of mangling (of course you could hack around it with some special "mangle this enum as a struct" attribute...)


Bool has a special substitution, so that specifically wouldn't be a problem. (There are doubtless other things that would break.)


Having Bool as an enum, combined with the pitch where enum cases would get synthesized `is`-prefixed properties, would be cool: `.false` would have `isFalse`. I'd guess that we could still have boolean literals on an enum Bool, which would avoid using the cases directly. Not saying we should do this, just thinking out loud. :slight_smile:
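As a sketch of what that might look like (everything here is hypothetical: `EnumBool` is an invented type, and the `is`-prefixed properties are written by hand rather than compiler-synthesized):

```swift
// Hypothetical enum-based Bool. Backticks escape the `true`/`false`
// keywords so they can be used as case names.
enum EnumBool: ExpressibleByBooleanLiteral {
    case `false`
    case `true`

    // Boolean literals let you write `let b: EnumBool = true`
    // without touching the cases directly.
    init(booleanLiteral value: Bool) {
        self = value ? .`true` : .`false`
    }

    // Hand-written versions of the `is`-prefixed properties the
    // synthesis pitch describes.
    var isTrue: Bool {
        if case .`true` = self { return true }
        return false
    }
    var isFalse: Bool { !isTrue }
}

let flag: EnumBool = true
assert(flag.isTrue)
assert(!flag.isFalse)
```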


Last I saw, the only way to test an enum was a switch_enum{_addr} instruction in SIL. That was the original issue that caused us to switch to an i1, because it is completely unreasonable to try to match CFG diamonds back into logic. It turns a ton of simple problems into crazy CFG jump threading problems; it is the wrong abstraction.

This could be solved by introducing a "get enum tag" instruction of some sort that turns an enum value into an encoding of its tag value. This would be useful for many other things as well.
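At the source level, the asymmetry looks roughly like this (a hedged sketch; `Direction`, `isUp`, and `bothUp` are invented example names). Testing an enum goes through a switch, which historically lowered to a switch_enum branch per case, while Bool logic operates on the underlying i1 value directly:

```swift
enum Direction { case up, down }

// Testing an enum value requires a switch, which (historically)
// lowered to switch_enum in SIL: one branch per case, i.e. a CFG
// diamond the optimizer has to match back into logic.
func isUp(_ d: Direction) -> Bool {
    switch d {
    case .up:   return true
    case .down: return false
    }
}

// Bool is an i1 underneath, so boolean logic can be optimized as
// simple operations on that value rather than CFG pattern matching.
func bothUp(_ a: Bool, _ b: Bool) -> Bool {
    return a && b
}

assert(isUp(.up))
assert(!isUp(.down))
assert(bothUp(true, true))
assert(!bothUp(true, false))
```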


Right, at the time we didn’t have select_enum. By the time we did, we had more important problems to solve.