Interesting; I was just reading about the rounding rule in the IEEE standard last Friday.
What numpy's "around" function does is called "round half to even" (a tie is rounded so that the last retained digit is even), instead of "round half up". According to "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (do a Google search), the reason to prefer round to even is exemplified by the following result from Reiser and Knuth:
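A quick way to see the rule in action: Python 3's built-in round() uses the same round-half-to-even tie-breaking as numpy's around(), so exact .5 ties go to whichever neighbor ends in an even digit rather than always up.

```python
# Round-half-to-even ("banker's rounding"): exact .5 ties go to the
# even neighbor, not always upward.
assert round(0.5) == 0   # tie between 0 and 1 -> 0 is even
assert round(1.5) == 2   # tie between 1 and 2 -> 2 is even
assert round(2.5) == 2   # tie between 2 and 3 -> 2 is even, NOT 3
assert round(3.5) == 4   # tie between 3 and 4 -> 4 is even
```

numpy.around gives the same answers on these inputs (np.around(2.5) is 2.0, not 3.0).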
Let x and y be floating-point numbers, and define x0 = x, x1 = (x0 - y) + y, ..., xn = (x(n-1) - y) + y. If + and - are exactly rounded using round to even, then either xn = x for all n, or xn = x1 for all n >= 1.
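Since IEEE-754 hardware rounds to nearest with ties to even by default, the theorem's dichotomy can be checked directly in doubles. A small sketch (the values x = 0.1, y = 1e17 are my own choice, picked so that x is entirely lost in x - y, which lands us in the "xn = x1 for all n >= 1" branch):

```python
# IEEE-754 doubles round to nearest, ties to even, so the Reiser-Knuth
# result applies: the sequence is either constant at x or constant at x1.
x, y = 0.1, 1e17      # y is huge relative to x
x1 = (x - y) + y      # 0.1 - 1e17 rounds to -1e17, so x1 == 0.0
xn = x1
for _ in range(1000):
    xn = (xn - y) + y  # iterate x_n = (x_{n-1} - y) + y
assert xn == x1        # stuck at x1 forever -- no drift
```

Here x1 != x, but the sequence is pinned at x1 from then on, exactly as the theorem promises.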
If you use "round up", the sequence xn can keep increasing slowly "forever", and as the paper says:
"...This example suggests that when using the round up rule, computations can gradually drift upward, whereas when using round to even the theorem says this cannot happen."