This isn't safe either. As far as I know, Swift doesn't make promises about the byte representation of anything, including existential boxes.
One way I can see this failing: suppose you do something like this:
```swift
protocol A {}
protocol B {}
struct C: A, B {}

let value: any A = C()
let bad = unsafeBitCast(value, to: (any B).self)
```
Then the box still holds a pointer to C's witness table for A, but any code that uses `bad` will treat that pointer as a witness table for B. You might get lucky if A happened to be a subprotocol of B, but even if it works today, it could break with a change of OS version, minor language version, compiler settings, etc., because it's coupling to undocumented behavior.
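If the goal is to pull an `any B` out of something you only know as an `any A`, the supported tool is a dynamic cast, which checks the conformance at runtime instead of reinterpreting the box's bytes. A minimal sketch reusing A, B, and C from above (`someA` is just a fresh name):

```swift
let someA: any A = C()

// `as?` asks the runtime whether the underlying value actually conforms to B,
// instead of reinterpreting the bytes of the existential box.
if let b = someA as? any B {
    print("got a B:", b)
}
```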
Definitely. First, the documentation specifically says not to use it for casting between reference types, regardless of any type relationship. Second, this is bitcasting between two value types. You're guessing not only at the internal byte representation of Array, but also at the calling convention of its member functions, since this effectively casts members like `subscript(position: Int) -> BaseClass` to `subscript(position: Int) -> DerivedClass`.
Even if you're lucky and the bytes all line up (today, with your compiler version, compiler settings, OS, etc.), isn't this breaking reference counting? You're trying to avoid an O(n) operation, right? That would mean skipping a retain on every element. If you skip it, there's still going to be an O(n) pass of releases over every element both when derived2 is destroyed and when derived is destroyed. Those objects will have a retain count of 1, so they get deallocated when derived2 is destroyed, while derived still thinks it owns them. Kaboom.
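To make that concrete, here's roughly what I imagine the code in question looks like; the class names are placeholders and I'm only guessing at the exact shape from the variable names:

```swift
class BaseClass {}
final class DerivedClass: BaseClass {}

let derived: [DerivedClass] = [DerivedClass(), DerivedClass()]

// Reinterprets the Array value's bits as [BaseClass] instead of going
// through the language's supported upcast (`derived as [BaseClass]`).
let derived2 = unsafeBitCast(derived, to: [BaseClass].self)
```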
unsafeBitCast has a very restricted set of legitimate uses. Basically, it's for casting a value to raw bytes so they can flow through code that doesn't understand them and reach some other code that casts the bytes back to the exact same type. And that's only safe with pure values that are trivially copyable: structs whose members are structs whose members are structs... all the way down to primitives, with no references anywhere.
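As a sketch of the kind of round trip I mean, with a trivial struct that contains nothing but plain values (the Pixel type here is made up):

```swift
// A trivially-copyable struct: no references anywhere, just raw bytes.
struct Pixel {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

let pixel = Pixel(r: 255, g: 128, b: 0, a: 255)

// Matching sizes are a hard requirement for unsafeBitCast; it traps otherwise.
precondition(MemoryLayout<Pixel>.size == MemoryLayout<UInt32>.size)

// Cast to opaque bytes, hand them to code that doesn't know about Pixel...
let raw = unsafeBitCast(pixel, to: UInt32.self)

// ...and, on the other side, cast the same bytes back to the same type.
let restored = unsafeBitCast(raw, to: Pixel.self)
```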
That might be because you broke the rule I stated and turned on optimizations. If your entire code is just declaring these variables and then not doing anything with them, the compiler can probably tell that it can optimize out the extra releases and, correspondingly, the extra retains. Maybe if you do it in a function where derived is an incoming parameter and derived2 is stored in a variable that escapes, you'll see the O(n) operation.
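Something like this is the shape I mean (Storage and the names are made up): derived comes in as a parameter, and the converted array is stored somewhere it outlives the call.

```swift
class Base {}
final class Derived: Base {}

// Somewhere for the converted array to live beyond the call, so the work
// of the conversion can't simply be optimized out of existence.
final class Storage {
    var items: [Base] = []
}

let storage = Storage()

func keep(_ derived: [Derived]) {
    // The upcast has to produce a real [Base] value here, because it
    // escapes through `storage` and outlives the call.
    let derived2: [Base] = derived
    storage.items = derived2
}
```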
Trying to guess what optimized code is doing is a fool's errand, and the fact that anyone is trying is itself suspect: the compiler probably already produced good code before you started writing in an unusual style to make it "more optimized." Optimization usually comes down to reworking the visible logic (not the under-the-hood implementation details), e.g. realizing a nested loop is doing redundant calculations, not trying to beat the language or standard library at common tasks. Arrays in Swift are probably aggressively optimized in ways that wouldn't even make sense to anyone who isn't a compiler, machine code, and computer architecture expert.
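A sketch of what I mean by reworking the visible logic (the names and the task are made up): the win comes from noticing that a computation doesn't belong inside the loop, not from fighting the compiler.

```swift
// Made-up example: scaling every element of every row by the grand total.
// Before: the invariant grand total is recomputed for every row.
func normalizeSlow(_ rows: [[Double]]) -> [[Double]] {
    return rows.map { row in
        let grandTotal = rows.flatMap { $0 }.reduce(0, +)   // redundant work
        return row.map { $0 / grandTotal }
    }
}

// After: the same visible logic, with the redundant computation hoisted out.
func normalizeFast(_ rows: [[Double]]) -> [[Double]] {
    let grandTotal = rows.flatMap { $0 }.reduce(0, +)       // computed once
    return rows.map { row in row.map { $0 / grandTotal } }
}
```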
That we're not even asking the right questions is evident from the fact that this conversation is about Big O behavior. For the vast majority of code (which isn't creating containers or running loops with hundreds of millions of elements or iterations), that matters far less than cache misses and failed branch predictions. Performance on modern computer architectures, with their multiple layers of memory cache, superscalar CPUs, and so on, is not easy to reason about at all.
Here is a good short video on the subject.
Worrying about Big O, especially theoretical Big O (what you're taught in data structures courses), is not only usually irrelevant but often actively misleading.
That raises the question: can any of you give a single example from your careers where Swift doing an implicit O(n) upcast of an array caused a problem at all, let alone one that was expensive to discover or solve?
If not, isn't it completely irrational to demand that Swift make the typical application code we spend nearly 100% of our time writing considerably more difficult to write, just to avoid what I can only guess is an entirely theoretical problem that none of us has ever actually encountered?
I don't even see why the implicit conversion would be concerning even if it did turn out to be a performance bottleneck that requires reworking. You can't discover performance bottlenecks through analysis; you have to find them empirically, by measurement. So you'll find the bottleneck whether the conversion is implicit or not, and making it explicit doesn't fix it. You'd have to rework your code to not need the conversion at all, which is the part I never see anyone actually think through in these performance thought experiments: what is the alternative? Why were you converting arrays to begin with, and how are you going to solve your problem without doing that?
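To put a point on "making it explicit doesn't fix anything" (Base and Derived are placeholders): these two lines perform exactly the same conversion, so the only thing the explicit spelling adds is noise.

```swift
class Base {}
final class Derived: Base {}

let items: [Derived] = [Derived(), Derived()]

// Both lines do the same upcast; the only difference is whether you had
// to spell it out.
let implicitCopy: [Base] = items
let explicitCopy = items as [Base]
```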
I end up giving this rant every time I see a "this is a performance footgun" conversation. The real footgun is thinking you can predict the performance of real code through this kind of analysis. And you've probably already spent more time wondering about it than you ever would have spent dealing with it, because the cost is unnoticeable anyway.
I have experience optimizing a 3D graphics engine (it ran fine on iOS, then on Android it couldn't maintain 10 fps). It was C++. There were a few places where containers were being copied instead of passed by reference, so passing by reference was an important optimization, and in a few places it mattered to reserve enough capacity before building up arrays. But most of the work was identifying the expensive OpenGL calls and figuring out how to restructure the rendering process to make them less often (i.e. saving and reusing things, which was much harder to code correctly, which is why I hadn't done it that way to begin with).

Swift avoids the "oops, I passed by value" problem precisely by hiding those implementation details under abstractions (you don't even get to see pointers; the compiler decides all of that and can therefore do it better than you can). The meat of the optimization was the logic of my code, and it didn't involve Big O at all. If one thing is O(n) and another is O(1), and n = 10, but the O(1) thing has a constant cost factor 1000x that of the O(n) thing, the O(n) thing wins by a factor of 100. The irony of Big O is that most of the time, the O isn't that Big.
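For what it's worth, the "reserve enough capacity" part carries over directly to Swift; a minimal sketch:

```swift
// Reserving capacity up front avoids repeated reallocation while the array
// grows. The element type and count here are just for illustration.
var vertices: [Float] = []
vertices.reserveCapacity(100_000)
for i in 0..<100_000 {
    vertices.append(Float(i))
}
```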
The next rule is to test your actual program, not a snippet written solely for the purpose of testing its performance; that tells you nothing about the performance of real code. If my guess (the best I can do) is right, you concluded that sharing reference-counted objects between arrays is O(1) only because it was in an unrealistic example where the sharing was completely unnecessary and could be optimized entirely out of the program. If you're copying that array and both copies escape, it's simply impossible for it not to go through and retain every element. That's actually a great lesson in "stop trying to outsmart the compiler": you want to make that conversion explicit, but the compiler optimized it away in a way that isn't available to the programmer... unless you want to argue that automatic reference counting is a performance footgun and Swift needs to go back to manual reference counting.