Is there a reason not to add an argument to fromiter() that specifies the final shape of the n-d array?

Reading this discussion, I realized that there are several places in my code where I create 2-D arrays like this:

    arr = N.array([d.data() for d in list_of_data_containers])

where d.data() returns a buffer object. I would guess that this paradigm causes a lot of memory copying. The more efficient solution, I think, would be to preallocate the array and then assign each row in a loop, but the one-liner is so much clearer that I've kept it as is in the code.

So, what if I could do something like:

    arr = N.fromiter((d.data() for d in list_of_data_containers), shape=(x, y))

with the contract that fromiter() raises an exception if any of the d.data() is not of size y, or if there are more than x elements in list_of_data_containers? Just a thought for discussion. A rough sketch of the preallocation pattern I mean follows.
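Here is a minimal, untested sketch of that preallocation pattern. The Container class and the sample data are stand-ins I made up for illustration; the point is the N.empty() plus row-assignment loop and the size check:

    import numpy as N

    class Container(object):
        """Stand-in for a buffer-returning data container."""
        def __init__(self, values):
            self._buf = N.asarray(values, dtype=N.float64).tobytes()
        def data(self):
            return self._buf

    list_of_data_containers = [Container([1.0, 2.0, 3.0]),
                               Container([4.0, 5.0, 6.0])]
    x, y = len(list_of_data_containers), 3

    # Preallocate once, then copy each row into place.
    arr = N.empty((x, y), dtype=N.float64)
    for i, d in enumerate(list_of_data_containers):
        row = N.frombuffer(d.data(), dtype=N.float64)
        if row.size != y:
            raise ValueError("container %d has %d elements, expected %d"
                             % (i, row.size, y))
        arr[i] = row  # one copy into the preallocated block

barry

On 8/16/07, Robert Kern <robert.kern@gmail.com> wrote: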
Geoffrey Zhu wrote:
Hi All,
I want to construct a numpy array based on Python objects. In the code below, opts is a list of tuples.
For example,
opts = [('C', 100, 3, 'A'), ('K', 200, 5.4, 'B')]
If I use a generator like the following:
K = numpy.array(o[2]/1000.0 for o in opts)
It does not work: instead of the values, I get a 0-d object array wrapping the generator itself.
I have to use:
numpy.array([o[2]/1000.0 for o in opts])
Is this behavior intended?
Yes. With arbitrary generators, there is no good way to do the kind of mind-reading that numpy.array() usually does with sequences; it would have to unroll the whole generator anyway. fromiter() works for this, but it is restricted to 1-D arrays, for which the mind-reading is much easier to implement.
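For example, a quick sketch using fromiter()'s dtype and count arguments (count is optional; supplying it lets numpy allocate the whole result up front instead of growing it as elements arrive):

    import numpy

    opts = [('C', 100, 3, 'A'), ('K', 200, 5.4, 'B')]

    # fromiter() consumes the generator eagerly into a 1-D array.
    # Note the extra parentheses around the generator expression,
    # which are required when other arguments follow it.
    K = numpy.fromiter((o[2] / 1000.0 for o in opts), dtype=float,
                       count=len(opts))
    print(K)  # -> [ 0.003   0.0054]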
-- Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion