Hi, I use Python for some fairly heavy scientific computations (at least for
a single processor) and would like to run it in parallel.
I've seen some material online about Parallel Python and mpiPy, but I don't
know much about them. Is a Python-specific package needed to run Python in
parallel, or are the general tools (e.g., MPI/POE) just more difficult to
work with? And which one would you recommend?
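For a quick sense of what parallel Python code can look like without any third-party package, here is a minimal sketch using the standard-library multiprocessing module (a single-machine alternative to the tools mentioned above; the work function and chunk boundaries are purely illustrative):

```python
# A standard-library sketch of single-machine parallelism; the work
# (a sum of squares) and the chunk boundaries are illustrative only.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(x * x for x in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250), (250, 500), (500, 750), (750, 1000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # sum of squares for 0..999
```

Parallel Python and the MPI bindings generalize this split-the-work pattern across machines; multiprocessing only uses the cores of a single box.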
Thanks for your time, I appreciate it.
Hello list -
I have a function that normally accepts an array as input, but sometimes a
scalar is passed instead.
I figured the easiest way to make sure the input is an array is to make it
one.
But if I make a float an array, it has 0 dimensions, and I still cannot do
array manipulation on it.
>>> a = 3
>>> a = array(a)
>>> a[0]
Traceback (most recent call last):
  File "<pyshell#121>", line 1, in ?
IndexError: 0-d arrays can't be indexed
What would be the best (and easiest; this is for an intro class I am
teaching) way to convert a to an array (recall, most of the time a is
already an array)?
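One common answer here (a sketch; the function name f is just for illustration) is numpy.atleast_1d, which passes arrays of dimension >= 1 through unchanged and wraps scalars (and 0-d arrays) in a 1-d array:

```python
import numpy as np

def f(x):
    # np.atleast_1d leaves arrays of dimension >= 1 alone and turns
    # scalars (and 0-d arrays) into 1-d arrays, so indexing is safe.
    x = np.atleast_1d(x)
    return x[0]

print(f(3))                   # works on a plain scalar
print(f(np.array([4, 5])))    # and on an existing array
```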
Thanks for your help, Mark
I've just committed a revision of ticket #425 to speed up clipping in
the scalar case. I also altered the PyArray_Conjugate function (called
by the conjugate method) to use the ufunc for complex data.
These were some relatively large changes to the source code (all behind
the scenes, with no interface changes) --- enough to make me want to see
some more testing.
I would appreciate it if people could test out the new clip function
and conjugate method to make sure they are working well. All tests
pass, but there are some things we are not testing for. I still need to
add the clip tests from ticket #425 --- unless somebody beats me to it.
We need to test the output argument of both of those functions as well
as test for unaligned, byteswapped, etc. inputs.
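As a starting point, a test along these lines (a sketch, not the actual ticket #425 tests) exercises the output argument and a byteswapped input for both functions:

```python
import numpy as np

a = np.linspace(-2, 2, 9)

# The out= argument of clip
out = np.empty_like(a)
a.clip(-1, 1, out=out)
assert out.min() == -1.0 and out.max() == 1.0

# A byteswapped (non-native byte order) input should give the same result
b = a.astype(a.dtype.newbyteorder())
assert np.array_equal(b.clip(-1, 1), a.clip(-1, 1))

# The conjugate ufunc with an output argument on complex data
z = np.array([1 + 2j, 3 - 4j])
zout = np.empty_like(z)
np.conjugate(z, out=zout)
assert np.array_equal(zout, np.array([1 - 2j, 3 + 4j]))
```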
Ticket #474 discusses the problem that getting a field from a 0-d array
automatically produces a scalar (which then cannot be set).
This means that recarray code must often special-case
the 0-d possibility.
rarr.x[...] = blah
doesn't work for 0-d arrays because rarr.x is a scalar.
It makes some sense to make field selection for 0-d arrays return 0-d
arrays as consistent with the changes that were made prior to the 1.0
release to allow persistence of 0-d arrays.
However, changing field selection to return 0-d arrays does change
behavior. A 0-d array is not a scalar (for example, a 0-d array is not
hashable, and a 0-d string array does not inherit from the Python
string type). Thus, just making the change may not be advisable.
It is easy to account for and fix any errors that might arise. But
since we are in a major release, I need some advice: is this a
"bug fix", or a feature enhancement that must wait for 1.1?
Any stakeholders in the current behavior of arrays with records?
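For reference, a minimal sketch of what the proposed behavior looks like (this is how current NumPy ended up behaving: field selection on a 0-d structured array returns a 0-d array view, so the assignment goes through):

```python
import numpy as np

# A 0-d structured array: a single record with fields x and y.
r = np.array((1, 2.0), dtype=[('x', np.int64), ('y', np.float64)])
assert r.ndim == 0

# With field selection returning a 0-d array (a view into r's buffer),
# in-place assignment through the field works:
r['x'][...] = 5
assert int(r['x']) == 5
```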
Sorry if the message will arrive in duplicate I had some problem with
posting in the mailing list
I've installed on my machine, in the following order,
matplotlib 0.87,
with no problem.
I've also installed the same packages at home and on two other
computers, and everything went fine.
Then I was asked to install this configuration on some classroom
machines and on another computer, and I keep getting this error:
The import of the numpy version of the _transforms module,
_ns_transforms, failed. This is either because numpy was
unavailable when matplotlib was compiled, because a dependency of
_ns_transforms could not be satisfied, or because the build flag for
this module was turned off in setup.py. If it appears that
_ns_transforms was not built, make sure you have a working copy of
numpy and then re-install matplotlib. Otherwise, the following
traceback gives more details:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
from pylab import *
File "C:\Python25\Lib\site-packages\pylab.py", line 1, in <module>
from matplotlib.pylab import *
File "C:\Python25\Lib\site-packages\matplotlib\pylab.py", line 201, in
from axes import Axes, PolarAxes
File "C:\Python25\Lib\site-packages\matplotlib\axes.py", line 14, in
from artist import Artist, setp
File "C:\Python25\Lib\site-packages\matplotlib\artist.py", line 4, in
from transforms import identity_transform
File "C:\Python25\Lib\site-packages\matplotlib\transforms.py", line
223, in <module>
from _transforms import Value, Point, Interval, Bbox, Affine
File "C:\Python25\Lib\site-packages\matplotlib\_transforms.py", line
17, in <module>
from matplotlib._ns_transforms import *
ImportError: DLL load failed: Impossibile trovare il modulo specificato
(The specified module could not be found)
but I can assure you that if I check the numpy installation before
installing matplotlib, everything seems fine.
All the computers run Windows XP Home Edition 2002 SP2;
the only difference is the amount of RAM (more than 256 MB on the
computers where everything works), but it seems strange to me that the
RAM would be the cause (I've also installed on another, older computer,
and ...).
Any ideas?
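One way to narrow this down on the failing machines is a small import check (a sketch; the module list is just what the traceback implicates):

```python
def check_imports(names):
    """Try importing each module; map its name to None on success
    or to the ImportError message on failure."""
    results = {}
    for name in names:
        try:
            __import__(name)
            results[name] = None
        except ImportError as err:
            results[name] = str(err)
    return results

# Check numpy itself, then the compiled module the traceback blames.
for mod, err in check_imports(
        ["numpy", "matplotlib._ns_transforms"]).items():
    print(mod, "->", "OK" if err is None else err)
```

If numpy imports cleanly but the compiled module does not, the problem is in the matplotlib build or a missing DLL dependency rather than in numpy.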
I imagine that perhaps this issue I'm seeing is only an issue because I
don't thoroughly understand the buffer issues associated with numpy
arrays, but here it is anyway:
In :a1 = numpy.zeros( (2,2) )
In :a1[0,:] = 1
In :a1
Out:
array([[ 1.,  1.],
       [ 0.,  0.]])
In :a2 = a1.transpose()
That is, when getting the .data from an array that was C_CONTIGUOUS
but has since been .transpose()d, the .data buffer does not reflect that
operation, right?
So, would that imply that a .copy() should be done first on any array
whose .data you want to access?
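Essentially yes: .transpose() returns a view with different strides over the same buffer, so the raw bytes are unchanged. A sketch of the situation and the copy-based fix:

```python
import numpy as np

a1 = np.zeros((2, 2))
a1[0, :] = 1
a2 = a1.transpose()              # a view: no bytes are moved
assert np.shares_memory(a1, a2)  # same underlying buffer
assert not a2.flags['C_CONTIGUOUS']

# A copy materializes the transposed layout into a fresh,
# C-contiguous buffer:
a3 = np.ascontiguousarray(a2)    # equivalent to a2.copy() here
assert a3.flags['C_CONTIGUOUS']
assert a3.tobytes() == a2.copy().tobytes()
```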
Travis and others,
In the course of trying to understand memory leaks in matplotlib I have
been trying to understand a bit about the garbage collector. If I
understand correctly, any container that can hold references to
other containers could lead to a reference cycle; if the container
supports the gc mechanism, then the gc can at least find the cycles. If
the containers do not have __del__ methods, then the gc can also break
the cycles and reclaim the memory. (This also seems to imply that
__del__ methods should be avoided if at all possible, and I don't
understand the implications and applications of this.)
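To make the cycle story concrete, here is a small sketch of the collector reclaiming a cycle (class Node is hypothetical):

```python
import gc

class Node:
    """A trivial container that can participate in a cycle."""
    def __init__(self):
        self.ref = None

gc.collect()                # start from a clean slate
a, b = Node(), Node()
a.ref, b.ref = b, a         # a -> b -> a: a reference cycle
del a, b                    # refcounts never reach zero on their own

n = gc.collect()            # the cycle detector finds and frees them
print("unreachable objects collected:", n)
```

In older Pythons, giving Node a __del__ method would have parked the cycle in gc.garbage instead of freeing it, which is one reason __del__ is discouraged.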
I notice that numpy.ndarray does not support the gc, correct? And since
an ndarray can hold other containers, it could lead to uncollectable
cycles.
Did you decide not to include gc support because it is not actually
needed or useful? If so, what am I missing?
I don't think the lack of gc support in numpy has anything to do with
the present leak problem in mpl, so I am asking about numpy partly out
of curiosity, and partly in the hope that your answer will help me
understand exactly when one really needs to worry about gc support in
extension code--mpl has quite a bit of extension code, and relies on
much outside extension code as well. The one little bit of extension
code I wrote for numpy, the wrapper part of the contour routine, does
not support the gc--and if this is a mistake, I want to know about it.
(I am beginning to suspect that it should support the gc, although it is
not part of our most basic problem at the moment.)
Googling has not turned up much information beyond the standard python
docs about the gc, extension code, and memory leaks in python.
Thanks for whatever insight and advice you can provide.
I am relatively new to Python/NumPy, switching over from Matlab, and
while porting some of my Matlab code for practice I ran into the
following.
Assume we have a 2D matrix such that
a = array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
If I want the second row, I can simply take
c = a[1]
However, I would like to do a similar operation on the columns of the
2D Array. In matlab I could simply do
c = a(:,2) to get the values array([2,5,8])
In NumPy this does not seem to be a valid operation. I understand that
since NumPy arrays are more akin to C pointers this cannot be done as
easily, so my question is: what is the best way to obtain the column
information I want, the "NumPy" way? I could simply take a transpose
and get the information that way, but this seems wasteful.
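In fact NumPy supports this directly: the slice a[:, 1] selects the second column, and it is a view, so no transpose or copy is needed:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

row = a[1]       # second row: array([4, 5, 6])
col = a[:, 1]    # second column: array([2, 5, 8]), a view, no copy
print(row, col)
```

The colon in a[:, 1] plays the same role as Matlab's a(:,2); only the 0-based indexing differs.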