I was doing some experimentation and noticed that type inference leads to noticeably faster compilation than explicit type annotation.
For example, A.swift is:
import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let label0 = UILabel()
        label0.font = .systemFont(ofSize: 16, weight: .bold)
        label0.textColor = .red
        label0.textAlignment = .center
        label0.lineBreakMode = .byWordWrapping
    }
}
and B.swift is:
import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let label0 = UILabel()
        label0.font = UIFont.systemFont(ofSize: 16, weight: UIFont.Weight.bold)
        label0.textColor = UIColor.red
        label0.textAlignment = NSTextAlignment.center
        label0.lineBreakMode = NSLineBreakMode.byWordWrapping
    }
}
In the actual test, each viewDidLoad contains 1000 labels with the same properties set on each, because with a single label there wasn't a measurable difference in compilation time. Averaging 20 runs of swiftc -c, the inferred file A.swift took 451.9 ms, whereas the explicit file B.swift took 623.7 ms.
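The timing loop I used was roughly equivalent to this sketch (simplified; the real invocation needs the iOS SDK flags, which are omitted here, and COMPILE/RUNS are just placeholder names):

```shell
# Time RUNS compilations of each file and print the average in milliseconds.
COMPILE="${COMPILE:-swiftc -c}"   # compile command (placeholder; SDK flags omitted)
RUNS="${RUNS:-20}"

for f in A.swift B.swift; do
    total=0
    i=0
    while [ "$i" -lt "$RUNS" ]; do
        start=$(date +%s%N)              # wall-clock time in nanoseconds (GNU date)
        $COMPILE "$f" >/dev/null 2>&1    # one compilation, output discarded
        end=$(date +%s%N)
        total=$(( total + (end - start) / 1000000 ))   # accumulate ms for this run
        i=$(( i + 1 ))
    done
    echo "$f: $(( total / RUNS )) ms average"
done
```

Each file is compiled in isolation, so only the per-file type-checking cost differs between the two runs.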
I expected the explicit version to compile faster, since there would be no inference overhead. Can anyone explain why the opposite is happening? Was my initial assumption wrong?