I have learned how a hash map is implemented: as an array of linked lists, when separate chaining is used. I know that when a key-value pair (key, val) is inserted, the hash map inserts the pair into the linked list at index hash(key) % array.size in the underlying array.
So the insertion process consists of:
- computing the hash function,
- a modulo operation (the hash modulo the array size),
- an array access, and
- allocating a linked-list node and inserting it into the linked list.
However, isn't this insertion process O(1), assuming the hash is O(1)? Everything above is O(1) except possibly the linked-list insertion, and couldn't we always insert the new pair at the head of the list, which is O(1)? If insertion is always O(1), why don't hash tables use this strategy?

They largely do. The catch is that a plain map must first check whether the key already exists, which means walking the chain; while std::unordered_multimap in C++ never has to do this check, there is still the risk of having to re-hash on insertion as the capacity of the hash map is exceeded. If you never re-hashed, inserts could be strictly O(1), but only at the cost of losing average-case O(1) for lookups in return.