Thank you for your detailed reply!
Just a quick question: Which of the 3 categories would the following two issues (from my above list) be?
- SR-7023 - Missed optimization opportunity, which is not missed when the code is wrapped in a nested func
- SR-6983 - Unexpected drop in performance of an (identity) closure the second time it is used
I'm not at all sure, but I'd guess they'd both be category 3 (reduced case where the performance problem manifests itself), and if so I should add them as PRs to the Swift Benchmark Suite.
Looking at the way those benchmarks are constructed (the test is specified in a func, which seems to then be stored in a property and later called from within a specific context), I fail to see how these two particular performance problems could or should be turned into such benchmarks. They are about unexpected differences in performance depending on context: whether some code is wrapped in a nested func or not, whether a closure has been passed as an argument once or twice before it is heavily used, etc. A similar example would be an issue that manifests itself only at global scope and not otherwise; I can't see how such an issue could be added as a benchmark under the current design of the Benchmark Suite.
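For reference, this is roughly the shape I mean. A minimal sketch of a suite benchmark as I understand it (names like `MyBench` and `run_MyBench` are placeholders I made up; `BenchmarkInfo`, `CheckResults`, and the `TestsUtils` module come from the suite itself, so this isn't runnable on its own):

```swift
import TestsUtils

// Registration: the run function is stored in a property of BenchmarkInfo,
// and the harness later calls it from within its own measurement context.
public let MyBench = BenchmarkInfo(
  name: "MyBench",
  runFunction: run_MyBench,
  tags: [.validation])

@inline(never)
public func run_MyBench(_ N: Int) {
  // The workload goes here. The harness chooses N and times this call,
  // so the surrounding context (global vs. local scope, how many times a
  // closure has been passed around before, etc.) is fixed by the harness,
  // not by the benchmark author.
  var sum = 0
  for i in 1...N {
    sum &+= i
  }
  CheckResults(sum != 0)
}
```

As far as I can see, everything measured has to fit inside that `run_` function, which is exactly why I don't see where a context-dependent reproduction would go.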
In other words: a lot of the issues I'm seeing manifest themselves only in very specific contexts, and the Swift Benchmark Suite seems to impose a particular context on its benchmarks, i.e. it is not possible to reproduce an arbitrary context in those benchmarks.
I'm probably just missing something obvious. It would certainly be obvious if I could just see SR-7023 and SR-6983 turned into benchmarks for the Swift Benchmark Suite. So if someone would feel like having a go at that, I'd really appreciate it!