speed up a numpy code with huge array
Alexzive
zasaconsulting at gmail.com
Wed May 26 07:43:24 EDT 2010
Thank you all for the tips.
I'll try them soon.
I also noticed another bottleneck: Python spends a lot of time accessing
some array data stored in the odb files (the line marked as the
bottleneck below), even before the algorithm itself starts:
###
EPS_nodes = range(len(frames))
for f in frames:
    sum = 0
    UN = F[f].fieldOutputs['U'].getSubset(region=TOP).values   # <--- bottleneck
    EPS_nodes[f] = UN[10].data[Scomp-1]/L3
###
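If the Abaqus version in use exposes bulkDataBlocks on field outputs, the expensive .values call (which builds one Python FieldValue object per node in the subset) can sometimes be replaced by a single NumPy array access per frame. A rough sketch, untested against an actual odb and keeping the same F, frames, TOP, Scomp and L3 names as above:

###
import numpy as np

EPS_nodes = np.empty(len(frames))        # preallocate instead of range(...)
for f in frames:
    field = F[f].fieldOutputs['U'].getSubset(region=TOP)
    # bulkDataBlocks hands back the raw field data as NumPy arrays in one call;
    # this assumes the subset fits in a single block
    data = field.bulkDataBlocks[0].data  # shape: (nValues, nComponents)
    # index 10 assumes the same value ordering as UN[10] in the loop above
    EPS_nodes[f] = data[10, Scomp-1] / L3
###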
Unfortunately I don't have time to learn Cython right now. Using
dictionaries sounds promising.
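As for the original volume loop (quoted below): instead of looping over the ~1 million tetrahedra in Python, the volumes can be computed for all elements at once with NumPy indexing. A minimal sketch, assuming NN is an (n_nodes, 3) coordinate array and EL an (n_elements, 4) connectivity array with 0-based node indices (Abaqus node labels are 1-based, so an offset may be needed):

###
import numpy as np

def tet_volumes(NN, EL):
    # corner coordinates of every tetrahedron, each array of shape (n_elements, 3)
    p0 = NN[EL[:, 0]]
    p1 = NN[EL[:, 1]]
    p2 = NN[EL[:, 2]]
    p3 = NN[EL[:, 3]]
    # V = |(p1-p0) . ((p2-p0) x (p3-p0))| / 6, evaluated for all elements at once
    return np.abs(np.sum((p1 - p0) * np.cross(p2 - p0, p3 - p0), axis=1)) / 6.0

# V = tet_volumes(np.asarray(NN, dtype=float), np.asarray(EL, dtype=int))
###

That removes the per-element Python overhead, which is usually where most of the time in such a loop goes.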
Thanks!
Alex
On May 26, 8:14 am, Stefan Behnel <stefan... at behnel.de> wrote:
> Alexzive, 25.05.2010 21:05:
>
> > Is there a way to improve the performance of the attached code? It
> > takes about 5 h on a dual-core (using only one core) when len(V) is
> > ~1 million. V is an array which is supposed to store all the volumes of
> > tetrahedral elements of a grid whose coordinates are stored in NN
> > (accessed through the list of tetrahedral elements --> EL).
>
> Consider using Cython for your algorithm. It has direct support for NumPy
> arrays and translates to fast C code.
>
> Stefan