Generic type resolution using "type(of:)" options

Hi,

I would like to know if it is possible to specify an object's generic type using type(of:). For example:

struct myNum<T: Numeric> {
    var x: T
}

let a = 10
let b = myNum<type(of: a)>(x: 10) // how to get this working? this line throws a compile error

Is it possible to infer the type using the above syntax?

The exact error is:

error: adjacent operators are in non-associative precedence group 'ComparisonPrecedence'

Try this:

struct myNum<T: Numeric> {
    var x: T

    init(typeOf _: T, x: T) {
        self.x = x
    }
}

let a1 = 10
let a2 = 10.0
let b1 = myNum(typeOf: a1, x: 10)
let b2 = myNum(typeOf: a2, x: 10)
print("type(of: b1) == \(type(of: b1)), type(of: b2) == \(type(of: b2))")

prints:
type(of: b1) == myNum<Int>, type(of: b2) == myNum<Double>


Thanks for the response. The idea is not to add a new init. I want to know whether the type(of:) API available in the Swift language can be used to infer the type of the object without any additional code.

myNum<type(of: a)>(x: 10): the compiler knows the type of "a" is Int, so I expected it to be able to infer myNum<Int> from the above syntax. But the compiler interprets "<" as the less-than operator, which triggers the error.

So the question is how to get past this error: is there any trick to explicitly tell the compiler to treat <type(of: a)> as a generic argument instead of reading it as the less-than and greater-than operators?

type(of: a) is not quite Int as a type. It's the value Int.self, an instance of the (meta)type Int.Type.

You can only use type(of: a) where a variable is expected, not where a type is expected (which includes a generic parameter).
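To make the value-versus-type distinction concrete, here is a small sketch (the `describe` helper is illustrative, not from the thread):

```swift
let a = 10
let t = type(of: a)   // a value: Int.self, whose type is the metatype Int.Type
print(t)              // prints "Int"

// t can go anywhere a *value* of type Int.Type is expected:
func describe(_ type: Int.Type) {
    print("got the metatype \(type)")
}
describe(t)

// But it cannot appear in a generic argument list like myNum<t> or
// myNum<type(of: a)>: generic arguments must be types written out at
// compile time, not runtime values.
```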

Thanks for the response. I completely get your point. But the question is whether there are any workarounds to get it working. Even myNum<a.Type>(x: 10) doesn't work, for the same reason.

Here is the actual problem I'm trying to solve.

I want to test some generic piece of code against many permutations of values. Here is the code which is currently doing the job:

func testMaxRowColumnLimitMatrix() {
    Matrix<Int>(Int.max, Int.max)
    Matrix<UInt>(Int.max, Int.max)
    Matrix<Double>(Int.max, Int.max)
    Matrix<Float>(Int.max, Int.max)

    Matrix<Int>(Int.max, Int.min)
    Matrix<UInt>(Int.min, Int.max)
    Matrix<Double>(Int.min, Int.max)
    Matrix<Float>(Int.max, Int.min)
    // ... etc.
}

As you can see, it is a lot of code with repeated items. All I wanted was the permutations of Int.max and Int.min values tested against the different Numeric data types.

To avoid such code duplication, I am trying to write something like the following:

func testMinMaxValuesLimitMatrix() {
    let maxMin = [Int.max, Int.min]
    let type: [Any] = [Int(1), Float(1.0), UInt(1), Double(1.0)]

    for t in type {
        for (i, m) in maxMin.enumerated() {
            Matrix<(type(of: t))>(maxMin[i], m) // desired syntax; doesn't compile
        }
    }
}

I want to avoid repeating the same code. Is there any better alternative for solving such problems?

AFAICT, no, Swift doesn't support this kind of type iteration. The closest I could think of is to put the repeated part in a function:

func foo<Element: Numeric>(_: Element.Type) {
  Matrix<Element>(Int.max, Int.min)
}

foo(Int.self)
foo(UInt.self)
...
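On newer compilers there is also a way to loop over the types rather than calling the helper once per type. This is a sketch assuming Swift 5.7+, which relies on SE-0352 (implicitly opened existentials); `zeroDescription` and the type list are illustrative names, not code from the thread:

```swift
// A generic helper: T is a concrete Numeric type when this runs.
func zeroDescription<T: Numeric>(of type: T.Type) -> String {
    "\(type): \(T.zero)"   // T.zero comes from AdditiveArithmetic
}

// An array of existential metatypes that we can actually iterate.
let numericTypes: [any Numeric.Type] = [Int.self, UInt.self, Double.self, Float.self]

for t in numericTypes {
    // Passing the existential metatype to a generic parameter opens it,
    // binding T to the concrete type stored in t.
    print(zeroDescription(of: t))
}
```

Inside the loop body you could call `Matrix<T>(Int.max, Int.min)` and so on from within the generic helper, which is exactly the shape of the `foo` workaround above, just driven by an array.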

Assuming you're still referring to the Matrix struct defined in this thread, you can use the "repeating" initializer to avoid specifying the Element type:

func testMinMaxValuesLimitMatrix() {
    let maxMin = [Int.max, Int.min]
    let elements = [Int(0), Float(0), UInt(0), Double(0)]

    for element in elements {
        for m1 in maxMin {
            for m2 in maxMin {
                Matrix(m1, m2, repeating: element)
            }
        }
    }
}

Edit: My bad. A Swift array cannot preserve mixed element types: an [Any] array erases the static types, so the compiler can't recover them for the generic parameter. If tuples were iterable, you could have used them in a similar way, with elements being a tuple of 4 values. They may become iterable in the future, but for now you need to write it line by line:

func testMinMaxValuesLimitMatrix() {
    let maxMin = [Int.max, Int.min]

    for m1 in maxMin {
        for m2 in maxMin {
            Matrix<Int>(m1, m2)
            Matrix<Float>(m1, m2)
            Matrix<UInt>(m1, m2)
            Matrix<Double>(m1, m2)
        }
    }
}
Off topic: Are you really going to initialize matrices with Int.min or Int.max numbers of columns/rows?

Int.min is a negative integer, and there will probably be some unexpected behaviors with a negative number of columns or rows.
Int.max, on the other hand, is 9,223,372,036,854,775,807 on a 64-bit platform. If you try to initialize an Int.max by Int.max matrix of Doubles, you'll need 5,444,517,870,735,015,414,233,402,098 terabytes of memory (just for one iteration).


Thanks for the suggestion. I have already done something similar and wanted to take it one step further, hence this thread.

func nullMatrix<T:Numeric>(_ m: Matrix<T>) {
    XCTAssertEqual(m.rows, 0)
    XCTAssertEqual(m.columns, 0)
    XCTAssertEqual(m.size, m.rows*m.columns)
    XCTAssertEqual(m.shape.rows, 0)
    XCTAssertEqual(m.shape.columns, 0)
}

...
func testMaxRowColumnLimitMatrix() {
    nullMatrix(Matrix<Int>(Int.max, Int.max))
    nullMatrix(Matrix<UInt>(Int.max, Int.max))
    nullMatrix(Matrix<Double>(Int.max, Int.max))
    nullMatrix(Matrix<Float>(Int.max, Int.max))
}

You can find more details here.

Thanks for this suggestion. Yes, it is related to the same post I made earlier. I think I have to settle for this option for now and use it. When the language permits such tricks, I will try to redo it.

This is a very valid point. No, I don't intend to create such a huge array. But I would like my library to exit gracefully if someone tries such operations, so I am trying to fence the limits of the library as part of the unit tests.

To avoid negative-indexing issues I originally made the row and column indexes UInt, but based on a review comment by @Lantua in this thread I changed them to Int, so now I need to handle negative indexes as well, which is fine. I am trying to mimic the NumPy library, where negative indexes are valid, so having Int instead of UInt is fine for me.

Creating matrices with negative rows/columns is a logic failure. I'd personally recommend that you just use precondition. The programmer needs to rewrite the code if that happens, not handle it at runtime.
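A minimal sketch of that suggestion (this is not the thread's actual Matrix type, just an illustrative stand-in with the same initializer shape):

```swift
struct Matrix<T: Numeric> {
    let rows: Int
    let columns: Int
    var storage: [T]

    init(_ rows: Int, _ columns: Int, repeating value: T = .zero) {
        // Negative dimensions are a programmer error, not a recoverable
        // runtime condition, so trap immediately with a clear message.
        precondition(rows >= 0 && columns >= 0,
                     "Matrix dimensions must be non-negative")
        self.rows = rows
        self.columns = columns
        self.storage = Array(repeating: value, count: rows * columns)
    }
}

let m = Matrix<Int>(2, 3)   // fine
// Matrix<Int>(-1, 3)       // traps with the precondition message
```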


Fun fact: there's a ~= operator that you may find useful when checking ranges. You can generally write validRange ~= checkingValue. ~= is the special operator the compiler uses to interpret switch-case matching, but it can be used directly as well.
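For example (the range and value here are just illustrative):

```swift
let validRange = 0..<100
let row = 42

// Direct use of the pattern-match operator; equivalent to
// validRange.contains(row).
if validRange ~= row {
    print("row \(row) is valid")
}

// The same operator is what switch uses under the hood to match
// a value against a range pattern:
switch row {
case validRange: print("valid")
default: print("out of range")
}
```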


Thanks for all the links, they are very helpful! :slight_smile: