... One more concern: if you repeatedly remove the *first* item, the hash table will start looking lopsided. Since we don't resize the hash table on deletes, maybe picking an item at random (but not using an expensive random generator!) would be better.
Which is the reason SETL doesn't specify *which* set item is removed: if you always start looking at "the front" of a dict that's being consumed, the dict fills with turds (deleted-entry markers) without shrinking, you skip over them again and again, and consuming the entire dict takes quadratic time.
Unfortunately, while a random start point is almost always quicker than that in practice, the expected time to consume the whole dict remains quadratic.
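A toy simulation makes the quadratic cost concrete. This assumes a fixed-size table that is always scanned from slot 0, with deleted slots left behind as turds; the function name and setup are illustrative, not any real dict implementation:

```python
def probes_to_consume(n):
    """Count probe steps to empty an n-slot table by front-scanning."""
    slots = [True] * n          # True = live item, False = turd
    total = 0
    for _ in range(n):
        i = 0
        while not slots[i]:     # re-skip the same turds on every pop
            i += 1
            total += 1
        slots[i] = False        # delete, but the table never shrinks
        total += 1              # the probe that found the live item
    return total
```

The k-th pop costs k probes, so consuming n items costs n*(n+1)/2 probe steps in total, i.e. quadratic in n.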
The clearest way around that is to keep a per-dict search finger recording where the last search left off: start each new search from its current value, and report failure only if the search wraps all the way around. This is linear time in non-pathological cases (a pathological case being one in which it isn't linear time <wink>).
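The finger trick can be sketched as a small open-addressed table. Everything here is hypothetical illustration (the class and slot markers are made up, and real dict internals store hashes and values too), but it shows why consumption becomes linear: the finger never re-skips a turd it has already passed:

```python
_UNUSED = object()   # slot never used
_DUMMY = object()    # deleted-entry marker, a "turd"

class FingerTable:
    """Fixed-size open-addressed set of keys with a pop finger (toy sketch)."""

    def __init__(self, size=16):
        self.slots = [_UNUSED] * size
        self.finger = 0                      # where the last pop left off

    def add(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not _UNUSED and self.slots[i] is not _DUMMY:
            i = (i + 1) % len(self.slots)    # simple linear probing
        self.slots[i] = key

    def popitem(self):
        n = len(self.slots)
        for offset in range(n):              # at most one full sweep
            i = (self.finger + offset) % n
            if self.slots[i] is not _UNUSED and self.slots[i] is not _DUMMY:
                key = self.slots[i]
                self.slots[i] = _DUMMY       # leave a turd behind
                self.finger = i              # resume here next time
                return key
        raise KeyError('popitem(): table is empty')   # wrapped around: failure
```

Across a whole sequence of pops the finger advances monotonically (modulo wraparound), so each slot is examined a bounded number of times and emptying the table is linear, unless new insertions keep landing just behind the finger.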