
On 9/8/11 1:43 PM, Christopher Jordan-Squire wrote:
I just ran a quick test on my machine of this idea. With
dt = np.dtype([('x', np.float32), ('y', np.int32), ('z', np.float64)])
temp = np.empty((), dtype=dt)
temp2 = np.zeros(1, dtype=dt)
In [96]: def f():
    ...:     l = [0]*3
    ...:     l[0] = 2.54
    ...:     l[1] = 4
    ...:     l[2] = 2.3645
    ...:     j = tuple(l)
    ...:     temp2[0] = j
vs
In [97]: def g():
    ...:     temp['x'] = 2.54
    ...:     temp['y'] = 4
    ...:     temp['z'] = 2.3645
    ...:     temp2[0] = temp
    ...:
The timing results were 2.73 us for f and 3.43 us for g. So: a good idea, but it doesn't appear to be faster. (Though the difference wasn't nearly as dramatic as I expected, based on Pauli's comment.)
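For anyone who wants to re-run this outside IPython, here is a self-contained version of the same benchmark (the repeat count and output formatting are just my choices):

import numpy as np
import timeit

dt = np.dtype([('x', np.float32), ('y', np.int32), ('z', np.float64)])
temp = np.empty((), dtype=dt)
temp2 = np.zeros(1, dtype=dt)

def f():
    # build a plain tuple and assign it to the array element in one step
    l = [0] * 3
    l[0] = 2.54
    l[1] = 4
    l[2] = 2.3645
    temp2[0] = tuple(l)

def g():
    # fill a 0-d scratch array field by field, then copy the whole record
    temp['x'] = 2.54
    temp['y'] = 4
    temp['z'] = 2.3645
    temp2[0] = temp

n = 100000
# total time / call count, reported as microseconds per call
print("f: %.2f us per call" % (timeit.timeit(f, number=n) / n * 1e6))
print("g: %.2f us per call" % (timeit.timeit(g, number=n) / n * 1e6))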
My guess is that lines like:

    temp['x'] = 2.54

are slower (each one requires a dict lookup for the field name, plus a conversion from a Python type to a "raw" type), and

    temp2[0] = temp

is faster, as the dtypes already match and no conversion is required.
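One quick way to test that hypothesis is to time the two operations in isolation; a rough sketch (the function names and repeat count are mine):

import numpy as np
import timeit

dt = np.dtype([('x', np.float32), ('y', np.int32), ('z', np.float64)])
temp = np.empty((), dtype=dt)
temp2 = np.zeros(1, dtype=dt)

def set_field():
    # assignment by field name: dict lookup plus Python-to-C conversion
    temp['x'] = 2.54

def copy_record():
    # whole-record assignment between matching dtypes: a straight memory copy
    temp2[0] = temp

n = 100000
print("field assignment: %.3f us" % (timeit.timeit(set_field, number=n) / n * 1e6))
print("record copy:      %.3f us" % (timeit.timeit(copy_record, number=n) / n * 1e6))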
Which means that if you had a larger struct dtype, the per-field approach would be even slower, so it's clearly not the way to go for performance.
It would be nice to have a higher-performing struct dtype scalar -- since it is ordered, it might be nice to be able to index it with either the field name or a numeric index.
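For what it's worth, the struct scalar you get back from indexing a structured array does seem to accept an integer index as well as the field name, at least in the NumPy builds I've tried -- a quick sketch:

import numpy as np

dt = np.dtype([('x', np.float32), ('y', np.int32), ('z', np.float64)])
a = np.zeros(1, dtype=dt)
rec = a[0]          # a numpy.void (struct) scalar

print(rec['x'])     # index by field name
print(rec[0])       # index by field position -- same field, since the dtype is ordered

Whether the positional lookup is actually any faster than the name lookup would need timing.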
-Chris