realtime design

Johannes Nix jnix at
Tue Oct 22 09:10:57 CEST 2002

Chris Liechti <cliechti at> writes:

> i would say that a normal OS (Windows, Linux, others) is suitable for real 
> time down to the range to 1s timesteps. longer times beeing easy. there's 
> of course no guarantee that even that can be met. there much longer delays 
> (e.g. accessing the floppy under windows or deleting a large file under 
> linux ext2, spinning up a HD from low power when accessed)

I am doing real-time audio processing with Python. The OS is Linux
2.4.17 with Andrew Mortons low-latency patches. The audio driver is
ALSA-0.9. Sound hardware are 32-Bit RME sound cards with optical
connections and external AD/DA converter, about 120 dB dynamic range
and up to 8 channels. The driver binding is written in C, most of the
other processing (noise supression and hearing aid algorithms) is
written in Python using the Numeric module. The worst-case latencies
are in the range of 25-50 milliseconds. "Latency" here means not
interrupt latency but the time lag between the input signal and the
output signal. Part of this lag comes from the RME cards' very small
buffers with fixed fragment sizes, which make an extra I/O thread with
application-side buffering necessary. I strongly suspect that lower
latencies in the 2-4 millisecond range are reachable on the same
system with modest effort. The main obstacle is that standard FFT
processing needs about 600 samples of buffering.
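
To make the buffering argument concrete, here is a back-of-the-envelope
latency calculation. The sample rate and fragment size are assumptions
(the post does not give them); only the ~600-sample FFT buffer figure
comes from the text above.

```python
# Hypothetical figures: 48 kHz sample rate and 64-sample fragments are
# assumptions; the ~600-sample FFT buffer is the figure quoted above.
SAMPLE_RATE = 48_000    # samples per second (assumed)
FFT_BUFFER = 600        # samples the FFT analysis must accumulate
FRAGMENT = 64           # samples per hardware fragment (assumed)

def latency_ms(samples, rate=SAMPLE_RATE):
    """Convert a buffer length in samples to milliseconds of delay."""
    return 1000.0 * samples / rate

# One input and one output buffer of the FFT size, plus a hardware
# fragment on each side, dominate the input-to-output lag:
round_trip = latency_ms(2 * FFT_BUFFER + 2 * FRAGMENT)
print(f"FFT buffering alone: {latency_ms(FFT_BUFFER):.1f} ms")
print(f"Round-trip estimate: {round_trip:.1f} ms")
```

With these assumed numbers the estimate lands in the same 25-50 ms
ballpark as the measured worst case.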

Linux-2.4 with low-latency patches provides worst case latencies well
below 0.5 milliseconds. It's possible that one has to use SCSI disks
because not all ATA disks can be tuned by hdparm.

CPU hardware was a dual-processor UP2000 with 750 MHz Alpha CPUs, but
we have now switched to AMD and Pentium III CPUs at 1800 MHz; in some
cases these are still a bit slower. In terms of computing power the
system is largely equivalent to a VME-bus-based Pentek system with a
Sun SPARC host workstation and five Texas Instruments TMS320C40 DSP
processors (40 and 50 MHz), but it is perhaps twenty times faster to
program and comes at a fifteenth of the cost. I've reduced about
10,000 lines of C to 1,000 lines of Python. For example, a standard
spectral subtraction algorithm needs about 120 lines for the core part
(not counting window analysis / overlap-add synthesis), including a
socket-based interface for changing parameters at runtime.
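
For readers unfamiliar with the algorithm, the core of spectral
subtraction fits in a few lines of array code, which is why the Python
version is so short. This is an illustrative sketch, not the original
code; it uses NumPy (the successor to the Numeric module mentioned
above), and the function name, spectral floor, and frame size are all
my own assumptions.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.01):
    """Magnitude spectral subtraction for one frame (illustrative
    sketch; parameter names and the spectral floor are assumptions)."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    phase = np.angle(spec)
    # Subtract the noise magnitude estimate, keeping a small floor to
    # avoid negative magnitudes ("musical noise" suppression).
    clean = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))

# Usage: estimate the noise magnitude from a known-noise frame, then
# process a noisy signal frame.
rng = np.random.default_rng(0)
noise = rng.standard_normal(512)
noise_mag = np.abs(np.fft.rfft(noise))
out = spectral_subtract(noise + np.sin(np.arange(512) * 0.2), noise_mag)
```

The real system would additionally apply an analysis window and
overlap-add synthesis across frames, which the text above explicitly
excludes from the 120-line count.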

Because of the Global Interpreter Lock (GIL), parallel processing in
Python is somewhat complicated. The solution I found was to fork the
Python process and exchange data through shared memory (an "shm"
module exists for this).

> the fact that it uses Python or an other languages does not matter all that 
> much. some make it easier, some harder. of course, Java with it's gc in the 
> background makes predictions how long something takes harder, especialy for 
> short times, that's a problem in Jython. CPython is much easier to 
> understand with it's reference counting.

I think reference counting also helps to get quite deterministic
timing behaviour.
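
The determinism claim is easy to demonstrate: in CPython an object is
freed the instant its reference count reaches zero, so finalizers run
at a predictable point in the program rather than at some arbitrary
later time chosen by a tracing collector. A minimal sketch:

```python
# In CPython, __del__ runs exactly when the last reference disappears,
# not at an unpredictable GC pause as in Java/Jython.
events = []

class Tracked:
    def __del__(self):
        events.append("freed")

obj = Tracked()
events.append("before del")
del obj                      # refcount drops to zero: __del__ runs here
events.append("after del")
print(events)                # ['before del', 'freed', 'after del']
```

(This holds for CPython's reference counting; objects caught in
reference cycles are the exception and fall back to the cycle
collector.)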

I'd say that Python with Numeric and psyco is an excellent language
for many real-time purposes once the GIL is replaced by per-object
locks. I was already thinking about a PEP.

Once there is an easy-to-use, powerful package for scientific
graphics, Python will also be a very good engineering language with
everything it needs to replace many applications of Matlab (the de
facto standard). I guess you can do everything you need with the
Gnuplot, Dislin and OpenDX bindings, but it is still too complicated
for many people.
