Thanks for calling this out, because in trying to express the disconnect between the formal CS definitions of the terms we're using and their looser colloquial usage, I myself wasn't clear enough about what I meant. I agree with your characterization of real-world usage here.
I'm providing the following not to be smarmy, but to be more precise about what I meant. Here is the formal definition of O-notation that I learned, from *Introduction to Algorithms* (Fourth Edition) by Cormen et al. (typesetting theirs):
> For a given function g(n), we denote by O(g(n)) the set of functions
>
> O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n₀ }
What I was hoping to express was that in formal analysis, we are not concerned about units. O(g(n)) defines a set of functions, and a formal set cannot have units; similarly, we use O(g(n)) to relate functions f(n) and g(n), and if you consider functions to have units, then:
- They'd better have the same units in order to be comparable, and if so,
- Their units are constant, and can therefore be factored into the c in the definition above (a toy instance follows just below).
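
To make that second bullet concrete, here is a toy instance of my own (not from CLRS): suppose f(n) is a running time measured in nanoseconds, while g(n) is a bare count.

$$
f(n) = 3n\ \text{ns}, \qquad g(n) = n, \qquad c = 3\ \text{ns}, \qquad n_0 = 1 \;\Longrightarrow\; 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0.
$$

Re-measuring f in microseconds only rescales the constant to c = 0.003 µs; it has no effect on whether f(n) ∈ O(g(n)).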
In the real world: absolutely, if you don't provide context for the calculations you're performing, the analysis isn't helpful. But what I'm trying to express and focus on is that in practice, in exactly that real-world kind of analysis, we aren't rigorous; everyone describes Big-O notation a little differently. At the same time, a formal runtime analysis of these algorithms would be so abstruse as to be largely useless.
i.e., I'm just trying to say that there's a bit of a catch-22 to Big-O notation: to be somewhat more easily understood, we take liberties with notation, units, context, etc., which leaves room for ambiguity and misunderstanding; being more precise with the notation makes the analysis less accessible to anyone without a background in theoretical analysis.
Which is why, if we're really interested in updating comments/documentation to be more exact, instead of sticking to Big-O notation as proposed above,
then I think we could do even better. I've worked with plenty of talented engineers who don't have a theoretical or formal CS background, and who never learned Big-O notation. Why not cut to the chase?
// `key` is used to perform a Dictionary lookup, which requires hashing every byte of
// the string. If `key` has many Unicode code points, this can be prohibitively
// expensive.
func bar(_ key: String) { ... }
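
And if a concrete illustration helps, here's a rough, self-contained sketch of the behavior that comment is warning about (my own toy example, not part of the proposed change; `lookup`, `table`, the keys, and the timing helper are all made up): lookup time tracks the key's length because the whole key gets hashed.

```swift
// Hypothetical sketch: Dictionary lookup hashes the entire key, so lookups with
// very long keys pay a cost proportional to the key's length.
import Dispatch

let shortKey = "id"
let longKey = String(repeating: "x", count: 1_000_000)
let table: [String: Int] = [shortKey: 1, longKey: 2]

func lookup(_ key: String) -> Int? {
    // Hashing `key` touches every byte of its UTF-8 representation before the
    // table can be probed, so this line dominates for very long keys.
    return table[key]
}

func measure(_ label: String, _ body: () -> Void) {
    let start = DispatchTime.now().uptimeNanoseconds
    body()
    let end = DispatchTime.now().uptimeNanoseconds
    print("\(label): \(end - start) ns")
}

measure("short key") { _ = lookup(shortKey) }
measure("long key")  { _ = lookup(longKey) }
```

A single timed run like this is obviously noisy, but it's usually enough to show the gap the comment describes, without anyone needing to have learned Big-O notation first.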