I proposed to use nanoseconds because UNIX has 1 ns resolution in its most recent API (struct timespec), and Windows has 100 ns resolution. Using picoseconds would confuse users into expecting sub-nanosecond resolution, which no OS currently supports. Moreover, nanoseconds as integers already landed in os.stat and os.utime.
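
As a minimal sketch (POSIX-only, error handling reduced to the bare minimum), this is how the 1 ns resolution of struct timespec maps onto a single 64-bit integer count of nanoseconds, the same representation discussed below:

/* Read the clock at 1 ns resolution with clock_gettime() and
 * flatten the result into one integer count of nanoseconds. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    /* tv_sec is seconds, tv_nsec is the 0..999'999'999 remainder. */
    int64_t ns = (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
    printf("%lld ns since the epoch\n", (long long)ns);
    return 0;
}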
Last but not least, I already struggle in pytime.c to prevent integer overflow at 1 ns resolution. It would quickly become much more complex if there were no native C integer type with a range large enough to make 1 picosecond resolution usable. I really like using int64_t for _PyTime_t: it is well supported and very easy to use (ex: "t = t2 - t1"). A 64-bit signed integer of nanoseconds covers deltas since 1970 until after the year 2200 (about 292 years), whereas the same integer counting picoseconds would overflow after roughly 106 days.
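
A small self-contained sketch of that range argument (the typedef mirrors CPython's internal _PyTime_t; the rest is illustrative arithmetic, not CPython API):

/* Compare the usable range of a signed 64-bit integer at
 * nanosecond vs picosecond resolution. */
#include <stdio.h>
#include <stdint.h>

typedef int64_t _PyTime_t;  /* as in CPython's pytime.c */

int main(void)
{
    const double SEC_PER_YEAR = 365.25 * 24 * 3600;

    /* 1 ns resolution: INT64_MAX nanoseconds expressed in years. */
    double ns_years = (double)INT64_MAX / 1e9 / SEC_PER_YEAR;
    /* 1 ps resolution: INT64_MAX picoseconds expressed in days. */
    double ps_days = (double)INT64_MAX / 1e12 / 86400.0;

    printf("int64_t at 1 ns: ~%.0f years (1970 + 292 is past 2200)\n",
           ns_years);
    printf("int64_t at 1 ps: ~%.0f days only\n", ps_days);

    /* Plain subtraction just works on the integer type. */
    _PyTime_t t1 = 0, t2 = 1500000000, t = t2 - t1;
    printf("t = t2 - t1 = %lld ns\n", (long long)t);
    return 0;
}

This prints roughly 292 years for nanoseconds and 106 days for picoseconds, which is why picoseconds would force multi-word arithmetic everywhere.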