[Numpy-discussion] speed of numpy.ndarray compared to Numeric.array
Thomas.EMMEL at 3ds.com
Mon Jan 10 03:09:17 EST 2011
> Did you try larger arrays/tuples? I would guess that makes a significant difference.
No, I didn't, because these values are coordinates in 3D (x, y, z).
In fact I work with a list/array/tuple of 100,000 to 1M coordinates or more.
What I need to do is calculate the distance from each of these coordinates
to a given point and filter for the nearest one.
The brute force method would look like this:

    from math import sqrt
    from operator import itemgetter

    def vec2Norm(pt1, pt2):
        # squared Euclidean distance between two 3D points
        xDis = pt1[0] - pt2[0]
        yDis = pt1[1] - pt2[1]
        zDis = pt1[2] - pt2[2]
        return xDis * xDis + yDis * yDis + zDis * zDis

    def bruteForceSearch(points, point):
        minpt = min(((vec2Norm(pt, point), pt, i)
                     for i, pt in enumerate(points)), key=itemgetter(0))
        return sqrt(minpt[0]), minpt[1], minpt[2]
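Since the thread is about NumPy speed: for arrays of this size, a vectorized search avoids the Python-level loop entirely. A minimal sketch (the function name `nearest_point` and the (N, 3) array layout are my assumptions for illustration, not code from the thread):

```python
import numpy as np

def nearest_point(points, point):
    # points: (N, 3) float array; point: (3,) float array (assumed layout)
    diff = points - point              # broadcasted differences, shape (N, 3)
    dist2 = (diff * diff).sum(axis=1)  # squared distances, no Python loop
    i = int(np.argmin(dist2))          # index of the nearest coordinate
    return float(np.sqrt(dist2[i])), points[i], i

pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 1.0, 1.0],
                [3.0, 3.0, 3.0]])
d, pt, i = nearest_point(pts, np.array([1.1, 1.0, 1.0]))
```

For repeated queries against the same point set, a spatial index such as `scipy.spatial.cKDTree` would scale better than either brute-force variant.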
I have a more clever method, but it still spends a lot of time in the vec2Norm function.
If you like I can attach a running example.
> Don't know how much of an impact it would have, but those timeit statements
> for array creation include the import process, which are going to be
> different for each module and are probably not indicative of the speed of
> array creation.
No, timeit counts the time for the statement in the first argument only;
the import in the setup argument isn't included in the measured time.
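For what it's worth, this is easy to check: timeit runs the setup string once, outside the timed loop, so the import cost never appears in the result. A small sketch (the statement and repeat count are arbitrary choices for illustration):

```python
import timeit

# the setup string (including the import) runs once and is not timed;
# only the statement itself is measured, here 10000 times
t = timeit.timeit("np.arange(1000)",
                  setup="import numpy as np",
                  number=10000)
print(t)  # total seconds for 10000 executions of the statement alone
```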