I am spinning off the question of Dictionary lookup complexity here, as the "is String's subscript O(n)..." thread is getting too long already and should stay focused on String's subscript.
In this topic I suggest focusing only on the best-case Dictionary lookup time (no hash collisions).
The precious gem of a post by @lorentey notwithstanding, I'd like to focus on the algorithmic complexity itself, disregarding issues related to Obj-C bridging, the potentially unbounded time of taking a lock or spinning indefinitely in an atomic compare-exchange loop, virtual-memory unpredictability, and the existence of other processes. As in a typical physics assignment ("we are in a vacuum, there is no friction, and all objects are point-sized"), let's assume there is no virtual memory, no memory caches, and no other threads or processes (granted, under certain conditions the things we've just disregarded are the major contributors to the running time of the operation in question).
With those preconditions in mind, I'd say that claiming dictionary lookup is "O(1)" is wrong, or incomplete at best, unless the units are specified, like so or similar: "O(1) key hash and equality operations".
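To make "O(1) key hash and equality operations" concrete, here is a small sketch (the `CountingKey` wrapper and the counters are my own hypothetical names, not anything from the standard library) that counts how many hash and `==` calls a single lookup performs. The point is that the lookup does a constant number of *such operations*, while each operation may itself cost O(keySize):

```swift
// Sketch: count the hash and equality operations that one Dictionary
// lookup performs. CountingKey is a hypothetical wrapper for illustration.

var hashCount = 0
var eqCount = 0

struct CountingKey: Hashable {
    let value: String
    func hash(into hasher: inout Hasher) {
        hashCount += 1                    // each call hashes the whole string
        hasher.combine(value)
    }
    static func == (lhs: CountingKey, rhs: CountingKey) -> Bool {
        eqCount += 1                      // each call compares whole strings
        return lhs.value == rhs.value
    }
}

var dict = [CountingKey: Int]()
for i in 0..<10_000 {
    dict[CountingKey(value: "key-\(i)")] = i
}

// Reset counters, then do one lookup.
hashCount = 0
eqCount = 0
let hit = dict[CountingKey(value: "key-1234")]
print(hit as Any, hashCount, eqCount)     // a handful of operations, not 10_000
```

In the no-collision best case the lookup hashes the probe key once and does one equality check against the stored key, regardless of how many entries the dictionary holds. That constant is what "O(1)" refers to here.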
Without those units specified, one would naïvely assume "seconds" or "CPU clocks", which is not the case here. For example, with String keys the time complexity of a dictionary access is O(keySize), since String hash and equality calculations are known to be O(stringSizeInUnicodePoints). I don't think it is obvious enough, or fair, to hand-wave it with just "O(1)" and let the reader figure out the missing details.
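A crude micro-benchmark sketch of that last claim (the `hashTime` helper is mine, timing via `Date` is deliberately unsophisticated, and absolute numbers will vary by machine): hashing a `String` has to visit its contents, so the cost grows with the key's length, and with it the wall-clock cost of a dictionary lookup keyed on that string.

```swift
import Foundation

// Rough timing sketch: hash the same string many times and measure
// the elapsed time. hashTime is a hypothetical helper for illustration.
func hashTime(_ s: String, iterations: Int = 100) -> TimeInterval {
    let start = Date()
    var sink = 0
    for _ in 0..<iterations {
        var hasher = Hasher()
        hasher.combine(s)
        sink ^= hasher.finalize()   // keep the result live so work isn't elided
    }
    _ = sink
    return Date().timeIntervalSince(start)
}

let shortKey = String(repeating: "a", count: 10)
let longKey  = String(repeating: "a", count: 1_000_000)
let tShort = hashTime(shortKey)
let tLong  = hashTime(longKey)
print(tShort, tLong)   // the long key costs far more per hash
```

Since every lookup starts by hashing the probe key, a `[String: Value]` access with a megabyte-long key simply cannot run in the same time as one with a ten-character key, "O(1)" or not.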