This is plainly false.
But I understand where the confusion might stem from. In Swift (as in many modern languages), arrays conform to a series of protocols/interfaces that are more general than arrays themselves.
Originally (as in early computer history), an array was a contiguous section of memory, allocated to store several values back-to-back in e.g. RAM. The memory had to be allocated and filled with values. But conceptually, an array was always just a "list" or "collection" of "things".
But there are also other kinds of "lists of things" that aren't contiguously allocated spans of memory, yet still encapsulate the same idea. Modern computer languages abstract this notion into a family of related protocols/interfaces. Swift has Sequence and Collection. A sequence is anything that can be iterated through. A collection is anything that you can look into and extract a value from, given an index (in fact, Collection refines Sequence, so every collection is also a sequence). Arrays are both collections and sequences.
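To make that concrete, here's a minimal sketch of a custom type conforming to Sequence (the name Countdown is just for illustration) that stores no elements at all and produces its values on demand:

```swift
// A hypothetical Sequence with no backing storage: it computes
// each value as it is asked for, one at a time.
struct Countdown: Sequence {
    let start: Int

    func makeIterator() -> AnyIterator<Int> {
        var current = start
        return AnyIterator {
            // Returning nil signals the end of the sequence.
            guard current > 0 else { return nil }
            defer { current -= 1 }
            return current
        }
    }
}

for n in Countdown(start: 3) {
    print(n) // 3, 2, 1
}
```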
In Swift, when you write for ... in ..., you can iterate over anything that is a Sequence, not just arrays.
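For example, all of these are Sequences, and all of them work with for ... in:

```swift
for x in [10, 20, 30] { print(x) }       // Array<Int>
for c in "hello" { print(c) }            // String is a Sequence of Characters
for (key, value) in ["a": 1, "b": 2] {   // Dictionary is a Sequence of key-value pairs
    print(key, value)
}
```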
A Range<Int> is just a pair of values, the lower and upper bound. Even though it occupies only those two numbers in memory, it conceptually represents the entire range of values from the lower bound up to the upper bound. Therefore Range conforms to Sequence and is eligible for iteration, just as if you had used an Array instead.
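You can see both sides of this at once: the range stores nothing but its two bounds, yet it iterates just like a list:

```swift
let range = 0..<1_000_000        // Range<Int>
print(range.lowerBound)          // 0
print(range.upperBound)          // 1000000

for i in range.prefix(3) {       // iterates like any other Sequence
    print(i)                     // 0, 1, 2
}
```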
But if you iterate over a range of millions of numbers, there is no need to allocate a span of memory millions of elements long, fill each slot with an increasing number, and then read each memory location one by one. Instead, you can just create a counter, start from the lower bound, and keep increasing it until you reach the upper bound. In that way, ranges are much more memory-efficient than arrays.
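Conceptually (this is a sketch of the idea, not the standard library's actual implementation), iterating a range boils down to a counter loop, and the range itself is only two Ints wide:

```swift
let range = 0..<1_000_000

// Conceptually equivalent to iterating the range:
// no million-element buffer is ever allocated.
var i = range.lowerBound
while i < range.upperBound {
    // ... use i ...
    i += 1
}

// The range itself is just two Ints:
print(MemoryLayout<Range<Int>>.size)   // 16 on a 64-bit platform
// Array(0..<1_000_000), by contrast, allocates heap storage
// for a million Ints (about 8 MB).
```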
But in many cases Range<Int> looks just like an Array<Int>, because they both get all the functionality of sequences and collections for free.
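For instance, the same Collection APIs work on both, which is why they're often interchangeable in practice:

```swift
let range = 1..<6                    // Range<Int>
let array = [1, 2, 3, 4, 5]          // Array<Int>

print(range.count)                   // 5
print(array.count)                   // 5
print(range.map { $0 * $0 })         // [1, 4, 9, 16, 25]
print(array.map { $0 * $0 })         // [1, 4, 9, 16, 25]
print(range.contains(3))             // true
print(array.contains(3))             // true
```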