I'd like to assign a typealias for a generic 2D array (I don't want to create a new type like struct MatrixT<T> {}). So I do the following:
typealias MatrixT<T> = [[T]]
But when I started writing an extension for it, I found that the compiler apparently doesn't understand that MatrixT is a 2D array. It recognizes the type of self as [Element]:
extension MatrixT {
    var columnsCount: Int {
        let copy = self   // the compiler infers this as: let copy: [Element] = self
        let row = self[0] // the compiler infers this as: let row: Element
        return 0
    }
}
Outside of the extension, however, the compiler does understand that an element of MatrixT is an array:
func testCreation() {
    let matrix: MatrixT = [[0]]
    let firstRow: [Int] = matrix[0] // correct
    let columnsCount = firstRow.count
}
Why can't I refer to the type MatrixT in the extension as a 2D array ([[T]])?
MatrixT is an array of arrays of T. If you write an extension of MatrixT, you're writing an extension of that outer array, which means that its elements are arrays of T, not just T.
You can print Element.self or type(of: self[0]) to check that.
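For example, a quick check (a minimal sketch; the debugTypes helper is hypothetical, only here for illustration):

typealias MatrixT<T> = [[T]]

extension MatrixT {
    // Hypothetical helper, only here to inspect the types at runtime.
    func debugTypes() {
        print(Element.self)      // the static row type
        print(type(of: self[0])) // the same thing, observed at runtime
    }
}

let m: MatrixT<Int> = [[0]]
m.debugTypes() // prints "Array<Int>" twice: Element really is [T]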
You can. [[T]] is just a different way of writing [Element] in the context of an extension of MatrixT.
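So the columnsCount from the question can be written in terms of Element. A minimal sketch, assuming the generic-typealias extension binds Element to [T] as described above, and assuming a non-ragged matrix:

typealias MatrixT<T> = [[T]]

extension MatrixT {
    var columnsCount: Int {
        // self is [Element], and each Element is a row ([T]),
        // so the column count is the length of the first row.
        // Returns 0 for an empty matrix.
        return isEmpty ? 0 : self[0].count
    }
}

let matrix: MatrixT<Int> = [[1, 2, 3], [4, 5, 6]]
print(matrix.columnsCount) // 3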
Note that, while you can do this, it's a pretty bad abstraction for matrices, since it doesn't enforce the constraint that all the rows have the same length. I would advise against it, unless you have a specific use case in mind where that's not an issue.
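For instance, this type-checks even though it isn't a valid matrix:

typealias MatrixT<T> = [[T]]

// Compiles fine: nothing stops the rows from having different lengths.
let ragged: MatrixT<Int> = [[1, 2, 3], [4]]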
I would abstract the nested array in a wrapper type, generic on the element type, restricting the element type to Numeric or similar, where that type's initializer takes a row and column count. You probably also want an alternate initializer that takes a sequence or closure argument to supply the initial values.
Behind a wrapper type, you can control the allocations and keep the user from making the nested array ragged. Your subscript would still take two coordinates. Since your allocations are hidden, you can flatten the storage to a single [T] instead of the fractured allocations of [[T]], and remap the two public coordinates to a single internal index. String does a similar optimization; it flattens its data to a single block of UTF-8 code units.
You may want to provide two view types, one for a collection of row vectors and another for a collection of column vectors.
Personally, my Matrix type has an unconstrained Element, and then various constrained extensions to provide numeric-specific behaviors. Something like:
struct Matrix<Element> {
    private(set) var rowCount: Int
    private(set) var columnCount: Int
    private(set) var elements: [Element]

    init(rows: Int, columns: Int, elements: [Element]) { /*...*/ }
    subscript(_ row: Int, _ column: Int) -> Element { /*...*/ }
}

extension Matrix where Element: AdditiveArithmetic { /*...*/ }
extension Matrix where Element: Numeric { /*...*/ }
extension Matrix where Element: FloatingPoint { /*...*/ }
// etc.
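A minimal sketch of how the elided bodies might look, assuming row-major flattened storage (index = row * columnCount + column). The closure initializer, the row view, and the + operator are illustrative additions, not part of the skeleton above:

struct Matrix<Element> {
    private(set) var rowCount: Int
    private(set) var columnCount: Int
    private(set) var elements: [Element]

    init(rows: Int, columns: Int, elements: [Element]) {
        precondition(elements.count == rows * columns, "element count must equal rows * columns")
        self.rowCount = rows
        self.columnCount = columns
        self.elements = elements
    }

    // Alternate initializer, as suggested above: compute initial values from a closure.
    init(rows: Int, columns: Int, element: (Int, Int) -> Element) {
        self.rowCount = rows
        self.columnCount = columns
        self.elements = (0..<rows).flatMap { row in
            (0..<columns).map { column in element(row, column) }
        }
    }

    subscript(_ row: Int, _ column: Int) -> Element {
        get {
            precondition(row >= 0 && row < rowCount && column >= 0 && column < columnCount)
            return elements[row * columnCount + column]
        }
        set {
            precondition(row >= 0 && row < rowCount && column >= 0 && column < columnCount)
            elements[row * columnCount + column] = newValue
        }
    }

    // One possible row-vector view: row i as a slice of the flat storage.
    func row(_ i: Int) -> ArraySlice<Element> {
        return elements[(i * columnCount)..<((i + 1) * columnCount)]
    }
}

// Example of a numeric-specific behavior in a constrained extension: elementwise addition.
extension Matrix where Element: AdditiveArithmetic {
    static func + (lhs: Matrix, rhs: Matrix) -> Matrix {
        precondition(lhs.rowCount == rhs.rowCount && lhs.columnCount == rhs.columnCount)
        return Matrix(rows: lhs.rowCount,
                      columns: lhs.columnCount,
                      elements: zip(lhs.elements, rhs.elements).map { $0 + $1 })
    }
}

Usage looks like this; note that the ragged case is now unrepresentable:

var m = Matrix(rows: 2, columns: 3) { row, column in row * 3 + column }
m[1, 2] += 10          // touches storage index 1 * 3 + 2 = 5
print(Array(m.row(1))) // [3, 4, 15]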