[Numpy-discussion] fromstring() is slow, no really!
scopatz at gmail.com
Sun May 13 19:28:30 EDT 2012
This week, while doing some optimization, I found that np.fromstring()
is significantly slower than many alternatives out there. This function
basically does two things: (1) it splits the string and (2) it converts the
data to the desired type.
There isn't much we can do about the conversion/casting, so what I
mean is that the *string splitting implementation is slow*.
To simplify the discussion, I will just talk about converting strings to 1d float64 arrays.
I have also issued pull request #279  to numpy with some sample code.
Timings can be seen in the ipython notebook here.
It turns out that using str.split() and np.array() together is 20 - 35% faster,
which was counter-intuitive to me. That is to say:
rawdata = s.split()
data = np.array(rawdata, dtype=float)
is faster than
data = np.fromstring(s, sep=" ", dtype=float)
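The two approaches can be checked for equivalence directly (a minimal sketch using the same data shape as the timings below; absolute timings will of course vary by machine):

```python
import numpy as np

s = "100.0 " * 100000  # whitespace-separated float data

# Approach 1: np.fromstring() splits and parses in one C-level call
a = np.fromstring(s, sep=" ", dtype=float)

# Approach 2: Python-level split, then array construction
b = np.array(s.split(), dtype=float)

assert np.array_equal(a, b)
```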
The next thing to try, naturally, was Cython. This did not change the
timings much for these two strategies. However, being in Cython
allows us to call atof() directly. My implementation is based on a
thread on this topic. However, in the example from that thread, the string was
hard coded, contained only one data value, and did not need to be split.
Thus they saw a dramatic 10x speed boost. To deal with the more
realistic case, I first just continued to use str.split(). This took about 35%
less time than np.fromstring().
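The split-then-atof idea can be sketched from plain Python via ctypes rather than Cython (a hypothetical stand-in for the actual Cython implementation; it assumes a Unix-like system where the C standard library can be located by name):

```python
import ctypes
import ctypes.util
import numpy as np

# Load the C standard library and declare atof()'s signature.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.atof.restype = ctypes.c_double
libc.atof.argtypes = [ctypes.c_char_p]

def split_atof(s):
    # Split at the Python level, then convert each token with C's atof().
    tokens = s.split()
    out = np.empty(len(tokens), dtype=np.float64)
    for i, tok in enumerate(tokens):
        out[i] = libc.atof(tok.encode("ascii"))
    return out
```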
Finally, using the strtok() function from the C standard library to call
atof() as we tokenize the string further reduces the runtime by 50 - 60%
relative to the baseline np.fromstring() time.
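The pre-allocate-and-slice strategy behind the tokenizing version can be sketched in pure Python (illustrative only; the real implementation tokenizes with C strtok() and converts with atof()):

```python
import numpy as np

def token_parse(s, sep=" "):
    # Upper bound on the token count: one value per separator, plus one.
    max_count = s.count(sep) + 1
    out = np.empty(max_count, dtype=np.float64)  # single up-front allocation
    n = 0
    for tok in s.split(sep):
        if tok:  # skip empty tokens produced by repeated separators
            out[n] = float(tok)
            n += 1
    # Slice down to the filled portion; no resizes happened during the loop.
    return out[:n]
```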
In : import fromstr
In : s = "100.0 " * 100000
In : timeit fromstr.fromstring(s)
10 loops, best of 3: 20.7 ms per loop
In : timeit fromstr.split_and_array(s)
100 loops, best of 3: 16.1 ms per loop
In : timeit fromstr.split_atof(s)
100 loops, best of 3: 13.5 ms per loop
In : timeit fromstr.token_atof(s)
100 loops, best of 3: 8.35 ms per loop
Numpy's fromstring() function may be found here. This code is a bit
hard to follow, but it uses the array_from_text() function. On the
other hand, str.split() uses a macro function SPLIT_ADD(). The difference
between these, I believe, is that str.split() over-allocates the size of the
list more aggressively than array_from_text() does. This leads to fewer
resizes and thus fewer memory copies.
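The payoff of aggressive over-allocation is easy to observe from Python itself (this shows CPython's generic list growth, not the SPLIT_ADD macro specifically; exact growth points vary by interpreter version):

```python
import sys

# Record the lengths at which a growing list actually reallocates,
# by watching its reported size change.
lst = []
resizes = []
last_size = sys.getsizeof(lst)
for i in range(1000):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:
        resizes.append(len(lst))
        last_size = size

# Geometric over-allocation means far fewer reallocations (and hence
# memory copies) than one per append.
print(len(resizes), "resizes for", len(lst), "appends")
```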
This would also explain why the tokenize implementation is the fastest:
it pre-allocates the maximum possible array size and then slices it down.
No resizes are present in this function, though it requires more memory up front.
np.fromstring() is slow because of the mechanism it chooses to split strings.
This is likely due to how many resize operations it must perform. While it
need not be the *fastest* thing out there, it should probably be at least as
fast as Python string splitting.
No pull request 'fixing' this issue was provided because I wanted to see
what people thought, and whether / which option is worth pursuing.