[Python-Dev] Python Benchmarks

Fredrik Lundh fredrik at pythonware.com
Sat Jun 3 15:02:27 CEST 2006

Martin v. Löwis wrote:

> Sure: when a thread doesn't consume its entire quantum, accounting
> becomes difficult. Still, if the scheduler reads the current time
> when scheduling, it measures the time consumed.

yeah, but the point is that it *doesn't* read the current time: all the 
system does is note that "alright, we've reached the end of another 
jiffy, and this thread was running at that point.  now, was it running 
in user space or in kernel space when we interrupted it?".  here's the 
relevant code, from kernel/timer.c and kernel/sched.c:

     #define jiffies_to_cputime(__hz) (__hz)

     void update_process_times(int user_tick)
     {
         struct task_struct *p = current;
         int cpu = smp_processor_id();

         if (user_tick)
             account_user_time(p, jiffies_to_cputime(1));
         else
             account_system_time(p, HARDIRQ_OFFSET, jiffies_to_cputime(1));
         if (rcu_pending(cpu))
             rcu_check_callbacks(cpu, user_tick);
     }

     void account_user_time(struct task_struct *p, cputime_t cputime)
     {
         struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
         cputime64_t tmp;

         p->utime = cputime_add(p->utime, cputime);

         tmp = cputime_to_cputime64(cputime);
         if (TASK_NICE(p) > 0)
             cpustat->nice = cputime64_add(cpustat->nice, tmp);
         else
             cpustat->user = cputime64_add(cpustat->user, tmp);
     }

(update_process_times is called by the hardware timer interrupt handler, 
once per jiffy, HZ times per second.  task_struct contains information 
about a single thread, cpu_usage_stat is global stats for a CPU)
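with a modern Python you can watch this tick-based reporting from user 
space; here's a minimal sketch (assuming a POSIX system where os.times() 
is backed by the tick counter described above):

```python
import os

# USER_HZ: the tick rate the kernel uses when reporting CPU time to
# user space (typically 100, i.e. 10 ms per tick)
tick = 1.0 / os.sysconf("SC_CLK_TCK")

def observed_utime_steps(n=3):
    """busy-loop and record the first n distinct jumps in reported user time"""
    steps = []
    last = os.times().user
    while len(steps) < n:
        now = os.times().user
        if now != last:
            steps.append(now - last)
            last = now
    return steps

if __name__ == "__main__":
    # on a tick-accounted kernel, every step is a multiple of 1/CLK_TCK
    print("tick:", tick, "observed steps:", observed_utime_steps())
```

the reported user time never moves by less than one tick, which is 
exactly the granularity problem being discussed here.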

for the benchmarks, the problem is of course not that the benchmarking 
thread gives up too early; it's when other processes give up early, and 
the benchmark process is next in line.  in that case, the benchmark 
won't use a whole jiffy, but it's still charged for a full jiffy 
interval by the interrupt handler (in my sleep test, *other processes* 
got charged for the time the program spent running that inner loop).
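here's a small sketch of that effect (a hypothetical reconstruction of 
the sleep test, not the original program): the process does short bursts 
of real work but sleeps before the next tick fires, so on a tick-sampled 
kernel the charged user time can end up far below the CPU time actually 
consumed:

```python
import os
import time

def burst_then_sleep(duration=0.6, burst=0.0005):
    """do sub-jiffy bursts of work, sleeping in between; return
    (elapsed wall time, user time the kernel charged us for)"""
    wall0 = time.perf_counter()
    cpu0 = os.times().user
    while time.perf_counter() - wall0 < duration:
        t0 = time.perf_counter()
        while time.perf_counter() - t0 < burst:
            pass                      # real CPU work, much shorter than a jiffy
        time.sleep(0.005)             # give up the CPU before the tick arrives
    return time.perf_counter() - wall0, os.times().user - cpu0

if __name__ == "__main__":
    wall, charged = burst_then_sleep()
    print("wall: %.3fs  charged user time: %.3fs" % (wall, charged))
```

(on kernels with finer-grained accounting, the charged time will simply 
track the actual CPU usage instead.)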

a modern computer can do *lots of stuff* in a single jiffy interval 
(whether it's 15 ms, 10 ms, 4 ms, or 1 ms), and even more in a single 
scheduler quantum (=a number of jiffies).
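to put a rough number on that claim, here's a sketch that counts how 
many trivial loop iterations fit in a 1 ms interval (pure illustration; 
the count obviously depends on the machine and on interpreter overhead):

```python
import time

def iterations_per_interval(interval=0.001):
    """count simple loop iterations completed in one wall-clock interval"""
    n = 0
    deadline = time.perf_counter() + interval
    while time.perf_counter() < deadline:
        n += 1
    return n

if __name__ == "__main__":
    # even in interpreted Python, this is typically many thousands of
    # iterations per millisecond on current hardware
    print(iterations_per_interval())
```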

> You mean, "unless something changed very recently" *on Linux*, right?

on any system involved in this discussion.  they all worked the same 
way, last time I checked ;-)

> Or when did you last read the sources of Windows XP?

afaik, all Windows versions based on the current NT kernel (up to and 
including XP) use tick-based sampling.  I don't know about Vista; given 
the platform requirements for Vista, it's perfectly possible that 
they've switched to TSC-based accounting.

> It would still be measuring if the scheduler reads the latest value
> of some system clock, although that would be much less accurate than
> reading the TSC.

hopefully, this is the last time I will have to repeat this, but on both 
Windows and Linux, the "system clock" used for process timing is a jiffy 
counter.
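for what it's worth, current Python versions let you check which clock 
backs each timer before trusting a benchmark (a sketch using the time 
module's get_clock_info API, added well after this discussion):

```python
import time

# compare the advertised resolution of the process-time clock (which may
# be tick-based, as discussed above) with the wall-clock performance counter
for name in ("process_time", "perf_counter"):
    info = time.get_clock_info(name)
    print("%-12s resolution=%g  implementation=%s"
          % (name, info.resolution, info.implementation))
```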

