This is why I was asking what you imagine the use cases for this feature would be. This is a very narrow, specialised attribute that I don't imagine will be used outside of the standard library and a few specialised modules (e.g. matrix and other maths libraries, and TensorFlow for Swift, which the proposal author is working on). You seem to think it will be much more common than that, so I'm wondering why you think so.
Because you will try to use the result of f() in a context that requires compile-time evaluation to work (e.g. it's part of a compile-time assertion or conditional statement, or it provides the length of a fixed-size array). Why would you otherwise care whether it's evaluated at compile time or run time, if the semantics are the same? This is where I think the point of contention is. It seems you are thinking mostly of compiler optimisation, which is semi-related but can be (and already is) done without this feature and doesn't require an evolution proposal. My understanding is that it would help you more to think of a future "compile-time error when I try to multiply two matrices that don't have compatible dimensions" than "this program runs faster because some of it is evaluated at compile time".
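To make the distinction concrete, here is a sketch of what that might look like. This uses the `@compilerEvaluable` attribute and `#assert` syntax from the proposal under discussion, plus a purely hypothetical dimension-parameterised `Matrix` type; none of this is accepted Swift, and the exact spelling could change.

```swift
// Hypothetical syntax from the constant-expressions proposal;
// not currently valid Swift.

@compilerEvaluable
func factorial(_ n: Int) -> Int {
    return n <= 1 ? 1 : n * factorial(n - 1)
}

// The result can feed a compile-time assertion: if the condition
// is false, the *build* fails instead of the program trapping at
// run time.
#assert(factorial(5) == 120)

// The motivating future direction: dimension-checked matrices,
// where multiplying incompatible shapes is a compile-time error.
// (Matrix<Rows, Cols> is illustrative only.)
// let c = Matrix<2, 3>() * Matrix<4, 5>()  // would fail to compile
```

The point is that the semantic payoff is rejecting invalid programs at compile time, not making valid programs faster; the optimiser can already constant-fold without any annotation.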
Resilience is obviously a focus for Swift, which isn't true of many languages. And most people can ignore @inlineable and similar because they're not writing a library that is performance-critical. They can similarly ignore @compilerEvaluable, at least as I understand it.