I have a list of matrices W_k that I'd like to multiply with a list of vectors h_k. Or, another way of looking at it: writing all W_k into a 3-d array and all h_k into a 2-d matrix/array, I'd like to compute the matrix R as
R_ik = sum_j W_ijk h_jk.
Is there a fast way of doing that in numpy?
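One approach: np.einsum expresses exactly this contraction over j. The names (W, h, R) and the shapes below are illustrative assumptions, not anything given in the post; a sketch:

```python
import numpy as np

# Hypothetical sizes: stacking n = 5 matrices of shape (4, 3) as W[i, j, k],
# with i the row index, j the column index, and k selecting the matrix;
# h[j, k] stacks the vectors column-wise, matching R_ik = sum_j W_ijk h_jk.
rng = np.random.default_rng(0)
W = rng.random((4, 3, 5))
h = rng.random((3, 5))

# einsum performs the sum over the shared index j directly:
R = np.einsum('ijk,jk->ik', W, h)

# Equivalent explicit loop over k, for comparison:
R_loop = np.stack([W[:, :, k] @ h[:, k] for k in range(5)], axis=1)
assert np.allclose(R, R_loop)
```

The einsum version avoids the Python-level loop over k, which is usually the main cost for many small matrices.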
Let's say I have the arrays:
a = array((1, 2, 3, 4, 5))
indices = array((1, 1, 1, 1))
and I perform the operation:
a[indices] += 1
The result is
array([1, 3, 3, 4, 5])
In other words, the duplicates in indices are ignored (only one increment is applied). If I wanted the duplicates not to be ignored, resulting in:
array([1, 6, 3, 4, 5])
how would I go about this?
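One way, assuming a NumPy recent enough to have the ufunc `.at()` methods (1.8+): np.add.at performs an unbuffered in-place add, so every occurrence of a repeated index contributes; np.bincount computes the same per-index totals. A sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
indices = np.array([1, 1, 1, 1])

# Unbuffered in-place add: repeated indices accumulate instead of being
# collapsed into a single write, so a[1] is incremented four times.
np.add.at(a, indices, 1)
# a is now [1, 6, 3, 4, 5]

# bincount gives the same per-index totals in one vectorized pass:
b = np.array([1, 2, 3, 4, 5])
b += np.bincount(indices, minlength=len(b))
```

The bincount route is often faster for large inputs, but only works for non-negative integer indices and a scalar (or per-index weighted) increment.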
The example above is somewhat trivial; what follows is exactly what I am trying to do:
faceforces = pressure *
self.verts[self.faces[:,0]] += faceforces
self.verts[self.faces[:,1]] += faceforces
self.verts[self.faces[:,2]] += faceforces
vectors = self.verts[self.constraints[:,1]] -
lengths = sqrt(sum(square(vectors), axis=1))
correction = 0.5 * (vectors.T * (1 - (self.restlengths / lengths))).T
self.verts[self.constraints[:,0]] += correction
self.verts[self.constraints[:,1]] -= correction
self.normals[self.faces[:,0]] += facenormals
self.normals[self.faces[:,1]] += facenormals
self.normals[self.faces[:,2]] += facenormals
lengths = sqrt(sum(square(self.normals), axis=1))
self.normals = (self.normals.T / lengths).T
I've been getting some very buggy results because duplicates are ignored in my indexed in-place add/subtract operations.
First the disclaimer: This is my first numpy experience, so I have next to
no idea what I'm doing.
I've muddled through and managed to put together some code for my current problem, and now that I have it going, I'd like to hear any comments people may have on both my solution and other ways of approaching the problem. I have two goals here: I'd like to make the process run faster, and I'd like to broaden my understanding of numpy, as I can see from my brief use of it that it is a remarkably powerful tool.
Now to the problem at hand. I find this difficult to explain but will try as
best I can.
The best word I have for the process is decimation. The input and output are
both 3 dimensional arrays of uint8's. The output is half the size of the
input along each dimension. Each cell [x,y,z] in the output corresponds to
the 2x2x2 block [2*x:2*x+2, 2*y:2*y+2, 2*z:2*z+2] in the input. The tricky
bit is in how the correspondence works. If all the cells in the input block
have the same value then the cell in the output block will also have that
value. Otherwise the output cell will have the value MIXED.
Here is my current solution; from my limited testing it seems to produce the result I'm after.
import numpy

in_x, in_y, in_z = data_in.shape
out_x = in_x // 2
out_y = in_y // 2
out_z = in_z // 2
out_shape = out_x, out_y, out_z
out_size = numpy.prod(out_shape)
# figure out which 2x2x2 chunks are homogeneous
reshaped_array = data_in.reshape(out_x, 2, out_y, 2, out_z,
    2).transpose(0, 2, 4, 1, 3, 5).reshape(out_x, out_y, out_z, 8)
min_array = numpy.amin(reshaped_array, axis=3)
max_array = numpy.amax(reshaped_array, axis=3)
equal_array = numpy.equal(min_array, max_array)
# select the actual value for the homogeneous chunks and MIXED for the rest
decimated_array = data_in[::2, ::2, ::2]
mixed_array = numpy.tile(MIXED, out_size).reshape(out_shape)
data_out = numpy.where(equal_array, decimated_array, mixed_array)
For the curious, this will be used to build a voxel octree for a 3-d graphics application. The final setup will be more complicated; this is the minimum that will let me get up and running.
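To make the recipe above concrete, here is the same algorithm wrapped in a function, with a hypothetical MIXED sentinel (the post does not give its value) and a tiny self-check:

```python
import numpy as np

MIXED = 255  # hypothetical sentinel; any value outside the data's range works


def decimate(data_in):
    """Halve each dimension; a cell is MIXED unless its 2x2x2 block is uniform."""
    out_x, out_y, out_z = (s // 2 for s in data_in.shape)
    # Gather each 2x2x2 block into a trailing axis of length 8.
    blocks = (data_in[:2 * out_x, :2 * out_y, :2 * out_z]
              .reshape(out_x, 2, out_y, 2, out_z, 2)
              .transpose(0, 2, 4, 1, 3, 5)
              .reshape(out_x, out_y, out_z, 8))
    # A block is uniform exactly when its min equals its max.
    uniform = blocks.min(axis=3) == blocks.max(axis=3)
    return np.where(uniform, data_in[::2, ::2, ::2], MIXED).astype(data_in.dtype)


# A uniform block keeps its value; a non-uniform block becomes MIXED:
d = np.zeros((2, 2, 2), dtype=np.uint8)
print(decimate(d))   # the single block is all zeros -> [[[0]]]
d[0, 0, 0] = 7
print(decimate(d))   # the block is now mixed -> [[[255]]]
```

The min == max test sidesteps comparing all 8 cells pairwise, which is what makes the whole thing a handful of vectorized passes.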
P.S. Congrats on numpy, it is a very impressive tool. I've only scratched the surface and it's already impressed me several times over.
We're having trouble using numpydoc-formatted docstrings for one of our own projects. It seems that the "Other Parameters" section is not getting included in the output.
A grep across Numpy's own source code (git master) reveals that this
kind of section is used in only one place -- the docstring for
"recarray". Curiously, the "Other Parameters" section is displayed in
the documentation editor here:
But not in the generated Numpy docs online here:
Is this a bug?
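For reference, a minimal docstring using the section in question, in the shape the numpydoc standard describes (the function itself is a made-up example):

```python
def example(x, y=None):
    """Demonstrate the numpydoc "Other Parameters" section.

    Parameters
    ----------
    x : int
        Main input.

    Other Parameters
    ----------------
    y : int, optional
        Rarely used input, documented separately from the main parameters.
    """
    return x if y is None else x + y
```

If a docstring in this shape still drops the section from the rendered output, that points at the rendering pipeline rather than the docstring.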
Science Software Branch
Space Telescope Science Institute
Baltimore, Maryland, USA
I need to convert numbers read from a text file to floating point. The
numbers are in the format 1.538D-06 (written by a FORTRAN application)
and have varying amounts of whitespace between them from line to line.
The function fromstring() deals with the whitespace just fine, but 'dtype=float' doesn't correctly convert the data: it sees everything up to the 'D' and ignores the rest (I assume it expects an 'e' to indicate the exponent). I was able to get around this by using re.sub() to change the 'D' to 'e' in the string before calling fromstring(), but I was wondering if Python has any way to read this data directly as float? Google didn't find an answer.
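A small sketch of the replace-then-parse route (the sample numbers are made up); the same 'D'-to-'e' translation can also be applied per field via the converters argument of loadtxt/genfromtxt:

```python
import numpy as np

line = "  1.538D-06   2.0D+00  -3.5D-01"

# Python's float() only understands 'e'/'E' exponents, so translate the
# Fortran 'D' before parsing; split() absorbs the variable whitespace.
values = np.array([float(tok.replace('D', 'e')) for tok in line.split()])
# values -> array([ 1.538e-06,  2.0e+00, -3.5e-01])
```

For whole files, a single text.replace('D', 'e') on the raw contents before handing them to fromstring() is typically cheaper than a per-token loop or a regex.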
I am not sure if this is the correct place for such questions, but here goes.
I am curious about datarray, but I haven't been able to get it to work. The module datarray does not appear to have a class DataArray (or DatArray), so I am confused about how I am supposed to use it. Can anyone advise?
My other question is whether datarray will be able to handle multiple data
types in the same object; i.e. does it have the functionality of recarrays
and R data.frames?
I tried to savetxt an array with ndim = 3, but I get an error:
In : savetxt('test.dat',a)
in savetxt(fname, X, fmt, delimiter)
785 for row in X:
--> 786 fh.write(format % tuple(row) + '\n')
788 import re
TypeError: float argument required, not numpy.ndarray
I don't see this restriction anywhere in the documentation of savetxt. If it is a restriction, it should probably be caught with an assertion.
Numpy version 1.4.0.
I see why ndim > 2 may cause problems, as there is no way for 'loadtxt' to figure out the shape of the array, but I think savetxt should throw an assertion and the documentation should be updated.
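As a workaround, a 3-d array can be written one 2-d slice at a time to the same file handle, with the shape stashed in a comment line that loadtxt skips. This is a sketch, not an official recipe; the filename is arbitrary:

```python
import os
import tempfile

import numpy as np

a = np.arange(24).reshape(2, 3, 4)

fname = os.path.join(tempfile.mkdtemp(), 'test.dat')
with open(fname, 'w') as fh:
    # Record the shape so the reader can undo the flattening; '#' lines
    # are treated as comments by loadtxt and ignored.
    fh.write('# shape: {}\n'.format(a.shape))
    for sl in a:              # each sl is a 2-d slice, which savetxt accepts
        np.savetxt(fh, sl)

# Reading back: loadtxt returns a 2-d array, reshape restores the third axis.
b = np.loadtxt(fname).reshape(a.shape)
assert (a == b).all()
```

The same idea generalizes to any ndim, at the cost of having to know (or parse) the original shape when loading.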
In numpy 1.5.0, I got the following for mean of an empty sequence (or array):
In : mean([])
Warning: invalid value encountered in double_scalars
Is this behaviour expected ?
Also, would it be possible to have more explicit warning messages indicating that the problem is related to numpy?
It seems these messages cause quite a bit of confusion for users. The following would be more helpful:
Warning: invalid value encountered in numpy double_scalars
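For anyone reproducing this: the warning comes from the 0/0 in the mean of zero elements, which yields nan, and it can be captured programmatically; a small sketch (checked against recent NumPy, where the exact message text differs slightly from 1.5.0):

```python
import warnings

import numpy as np

# Record rather than print the warning so we can inspect it afterwards.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = np.mean([])

# result is nan, and at least one RuntimeWarning was emitted along the way.
print(result)
print([w.category.__name__ for w in caught])
```

Capturing the warning like this is also how a library on top of numpy could rewrap it with a message that names numpy explicitly, as suggested above.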