The registration for EuroScipy <http://www.euroscipy.org//conference/euroscipy2010> is finally open.
To register, go to the website <http://www.euroscipy.org//conference/euroscipy2010>,
create an account, and you will see a *‘register to the conference’* button
on the left. Follow it to a page which presents a *‘shopping cart’*. Simply
submitting this information registers you for the conference, and on the left
of the website the button will now display *‘You are registered for the
conference’*.
The registration fee is 50 euros for the conference, and 50 euros for the
tutorial. Right now there is no payment system: you will be contacted later
(within a week) with payment instructions.
We apologize for setting this up so late; we realize it may have come as an
inconvenience to people.
*Do not wait to register: the number of people we can host is limited.*
An exciting program
Tutorials: from beginners to experts
We have two tutorial tracks:
- *Introductory tutorial* <http://www.euroscipy.org/track/871>: to get
you up to speed on scientific programming with Python.
- *Advanced tutorial* <http://www.euroscipy.org/track/872>: experts
sharing their knowledge on specific techniques and libraries.
We are very fortunate to have a top-notch set of presenters.
Scientific track: doing new science in Python
Although abstract submission is not yet closed, the current submissions
already point to a rich set of talks. In addition to the contributed talks,
we have:
- *Keynote speakers* <http://www.euroscipy.org/conference/euroscipy2010>:
Hans Petter Langtangen and Konrad Hinsen, two major players in scientific
computing in Python.
- *Lightning talks* <http://www.euroscipy.org/talk/937>: one hour will be
open for people to come up and present an interesting project in a flash.
Publishing papers
We are talking with the editors of a major scientific computing journal, and
the odds are quite high that we will be able to publish a special issue on
scientific computing in Python based on the proceedings of the conference.
The papers will undergo peer review, independently of the conference, to
ensure the high quality of the final publication.
Call for papers
Abstract submission is still open, though not for long. We are soliciting
contributions on scientific libraries and tools developed with Python and on
scientific or engineering achievements using Python. These include
applications, teaching, future development directions, and current research.
See the call for papers <http://www.euroscipy.org/card/euroscipy2010_call_for_papers>.
*We are very much looking forward to passionate discussions about Python in
science in Paris.*
*Nicolas Chauvat and Gaël Varoquaux*

I have an application that involves managing sets of floats. I can use
Python's built-in set type, but a data structure that is optimized for
fixed-size objects that can be compared without hashing should be more
efficient than a more general set construct. Is something like this
available?
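For context, one structure of the kind being described here, a sketch rather than an existing NumPy type, is a sorted array queried by binary search, so that membership tests use comparisons instead of hashing:

```python
import numpy as np

# A sorted float array as a set-like structure: membership by binary
# search (np.searchsorted) instead of hashing.  This is an illustrative
# sketch, not a ready-made library type; the sample values are made up.
values = np.unique(np.array([3.5, 1.25, 2.0, 3.5]))  # sorted, deduplicated

def contains(sorted_arr, x):
    i = np.searchsorted(sorted_arr, x)
    return i < len(sorted_arr) and sorted_arr[i] == x

print(contains(values, 2.0))   # True
print(contains(values, 2.5))   # False
```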
--
View this message in context: http://old.nabble.com/efficient-way-to-manage-a-set-of-floats--tp28518014p2…
Sent from the Numpy-discussion mailing list archive at Nabble.com.

Is it possible to downcast an array in-place?
For example:
x = np.random.random(10) # Placeholder for "real" data
x -= x.min()
x /= x.ptp() / 255
x = x.astype(np.uint8)  # <-- returns a copy
First off, a bit of background to the question... At the moment, I'm trying
to downcast a large (>10GB) array of uint16's to uint8's. I have enough RAM
to fit everything into memory, but I'd really prefer to use as few copies as
possible....
In the particular case of a C-ordered uint16 array to uint8 on a
little-endian system, I can do this:
# "x" is the big 3D array of uint16's
x -= x.min()
x /= x.ptp() / 255
x = x.view(np.uint8)[:, :, ::2]
That works, but a) produces a non-contiguous array, and b) is awfully
case-specific.
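For reference, the byte-view trick can be written as a small self-contained sketch; the array contents are made up, and the byte offset is chosen from the machine's endianness:

```python
import sys
import numpy as np

# Sketch of the uint16 -> uint8 byte-view trick on a tiny made-up array.
# After scaling into 0-255, each uint16 element's value fits in one byte,
# so viewing as uint8 and taking every other byte reuses the same buffer.
x = np.array([0, 1000, 65535], dtype=np.uint16)
x -= x.min()
x //= np.ptp(x) // 255                 # integer form of the scaling above
offset = 0 if sys.byteorder == 'little' else 1   # pick the low byte
y = x.view(np.uint8)[offset::2]

print(y)                      # the downscaled values, no data copy made
print(np.shares_memory(x, y)) # True: y is a view into x's buffer
```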
Is there a way to do something similar to astype(), but have it
"cannibalize" the memory of the original array? (e.g. the "out" argument in
a ufunc?)
Hopefully my question makes some sense to someone other than myself...
Thanks!
-Joe

The docstring for np.pareto says:
This is a simplified version of the Generalized Pareto distribution
(available in SciPy), with the scale set to one and the location set to
zero. Most authors default the location to one.
and also:
The probability density for the Pareto distribution is
.. math:: p(x) = \frac{am^a}{x^{a+1}}
where :math:`a` is the shape and :math:`m` the location
These two statements seem to be in contradiction. I think what was
meant is that m is the scale, rather than the location. For if m were
equal to zero, as the first portion of the docstring states, then the
entire pdf would be zero for all shapes a>0.
----
Also, I'm not quite understanding how the stated pdf is actually the
same as the pdf for the generalized pareto with the scale=1 and
location=0. By the wikipedia definition of the generalized Pareto
distribution, if we take \sigma=1 (scale equal to one) and \mu=0
(location equal to zero), then we get:
(1 + a x)^(-1/a - 1)
which is normalized over $x \in (0, \infty)$. If we compare this to
the distribution stated in the docstring (with m=1)
a x^{-a-1}
we see that it is normalized over $x \in (1, \infty)$. And indeed,
the distribution requires x > scale = 1.
If we integrate the generalized Pareto (with scale=1, location=0) over
$x \in (1, \infty)$ then we have to re-normalize. So should the
docstring say:
This is a simplified version of the Generalized Pareto distribution
(available in Scipy), with the scale set to one, the location set to zero,
and the distribution re-normalized over the range (1, \infty). Most
authors default the location to one.
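The normalization argument can be checked in closed form; a small sketch (the shape value a = 3 is an arbitrary choice, not taken from the docstring):

```python
# Quick closed-form check of the argument above, with a = 3 chosen
# arbitrarily for illustration.
a = 3.0

# Classical Pareto pdf a*x**(-a-1) (m = 1) has antiderivative -x**(-a),
# so its mass over (1, inf) is exactly 1: already normalized there.
classical_mass = 0.0 - (-(1.0 ** -a))
print(classical_mass)  # 1.0

# Generalized Pareto pdf with scale=1, location=0 is (1+a*x)**(-1/a-1),
# with antiderivative -(1+a*x)**(-1/a); its mass over (1, inf) is
# (1+a)**(-1/a), strictly less than 1, so restricting it to (1, inf)
# does require re-normalizing, as suggested.
gpd_mass = (1.0 + a) ** (-1.0 / a)
print(gpd_mass < 1.0)  # True
```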

I've been working with pyfits, which uses numpy chararrays. I've discovered the
hard way that chararrays silently remove trailing whitespace:
>>> a = np.array(['a '])
>>> b = a.view(np.chararray)
>>> a[0]
'a '
>>> b[0]
'a'
Note the string values stored in memory are unchanged. This behaviour caused a
bug in a program I've been writing, and seems like a bad idea in general. Is it
intentional?
Neil

I have three lists of floats of equal length: upper_bound, lower_bound and x.
I would like to check whether lower_bound[i]<= x[i] <= upper_bound[i] for
all i in range(len(x))
Which is the best way to do this?
Thanks.
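For illustration, one vectorized way to do this check with NumPy (a sketch; the list contents are made up, the names match the question):

```python
import numpy as np

# Elementwise bounds check lower_bound[i] <= x[i] <= upper_bound[i]
# done with array comparisons instead of a Python loop.
lower_bound = [0.0, 1.0, 2.0]
upper_bound = [1.0, 2.0, 3.0]
x = [0.5, 1.5, 2.5]

lb, ub, xa = map(np.asarray, (lower_bound, upper_bound, x))
ok = (lb <= xa) & (xa <= ub)   # boolean array, one entry per i
print(ok)                      # [ True  True  True]
print(ok.all())                # True only if every inequality holds
```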
--
View this message in context: http://old.nabble.com/check-for-inequalities-on-a-list-tp28517353p28517353.…
Sent from the Numpy-discussion mailing list archive at Nabble.com.

Hello,
I have the following arrays read as masked arrays.
I[10]: basic.data['Air_Temp'].mask
O[10]: array([ True, False, False, ..., False, False, False], dtype=bool)
I[12]: basic.data['Press_Alt'].mask
O[12]: False
I[13]: len basic.data['Air_Temp']
-----> len(basic.data['Air_Temp'])
O[13]: 1758
The first item, data['Air_Temp'], has only its first element masked, which
results in the mask attribute being a bool array of the same length as the
data. On the other hand, data['Press_Alt'] has no masked elements, yielding a
scalar 'False'. Is this documented behavior, or intentionally designed this
way? This is the only case out of 20 that breaks my code, as follows: :)
IndexError Traceback (most recent call last)
130 for k in range(len(shorter)):
131 if (serialh.data['dccnTempSF'][k] != 0) \
--> 132 and (basic.data['Air_Temp'].mask[k+diff] == False):
133 dccnConAmb[k] = serialc.data['dccnConc'][k] * \
134 physical.data['STATIC_PR'][k+diff] * \
IndexError: invalid index to scalar variable.
Since mask is a scalar in this case, there is nothing to loop over, and the
loop terminates with an IndexError.
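For what it's worth, the scalar mask is numpy.ma's compact representation of "nothing masked" (np.ma.nomask, i.e. False), and np.ma.getmaskarray always returns a full-length boolean array instead, which would avoid the IndexError. A sketch with made-up data:

```python
import numpy as np

# An array with no masked elements reports a scalar mask (nomask)...
unmasked = np.ma.masked_array([1.0, 2.0, 3.0])
print(unmasked.mask)                   # False (scalar nomask)

# ...but getmaskarray expands it to a full boolean array, so that
# per-element indexing like mask[k+diff] is always safe.
mask = np.ma.getmaskarray(unmasked)
print(mask)                            # [False False False]
print(mask[2])                         # False -- indexing now works
```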
--
Gökhan

Hello,
Consider my masked arrays:
I[28]: type basic.data['Air_Temp']
-----> type(basic.data['Air_Temp'])
O[28]: numpy.ma.core.MaskedArray
I[29]: basic.data['Air_Temp']
O[29]:
masked_array(data = [-- -- -- ..., -- -- --],
mask = [ True True True ..., True True True],
fill_value = 999999.9999)
I[17]: basic.data['Air_Temp'].data = np.ones(len(basic.data['Air_Temp']))*30
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
AttributeError: can't set attribute
Why does this assignment fail? I want to set each element of the original
basic.data['Air_Temp'].data to another value (because the main instrument
was not turned on that day, and I am using a secondary Air Temperature
measurement for another calculation). However, it fails, although
single-element assignment works:
I[13]: basic.data['Air_Temp'].data[0] = 30
Shouldn't this work like it does for regular NumPy arrays?
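One workaround, sketched below with made-up data: the .data attribute itself cannot be rebound, but the values it exposes can be overwritten in place with slice assignment.

```python
import numpy as np

# .data returns a view of the underlying ndarray, so writing through it
# with [:] updates every element in place, even though rebinding the
# attribute ("arr.data = ...") raises AttributeError.
air_temp = np.ma.masked_array([1.0, 2.0, 3.0], mask=[True, True, True])
air_temp.data[:] = np.ones(len(air_temp)) * 30
print(air_temp.data)   # [30. 30. 30.]
```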
Thanks.
--
Gökhan

Hi Stefan,
The windows buildbot throws the error
=====================================================================
FAIL: test_creation_overflow (test_datetime.TestDateTime)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\buildbot\numpy\b11\numpy-install25\Lib\site-packages\numpy\core\tests\test_datetime.py",
line 68, in test_creation_overflow
err_msg='Datetime conversion error for unit %s' % unit)
File "..\numpy-install25\Lib\site-packages\numpy\testing\utils.py",
line 313, in assert_equal
AssertionError:
Items are not equal: Datetime conversion error for unit ms
ACTUAL: 567052800
DESIRED: 322689600000
Because Windows longs are always 32-bit, I think this is a good
indication that somewhere a long is being used instead of an intp.
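A quick arithmetic check supports this diagnosis: the ACTUAL value above is exactly the DESIRED value reduced modulo 2**32, which is what truncation to a 32-bit long would produce.

```python
# The failing test's DESIRED value truncated to 32 bits reproduces its
# ACTUAL value exactly, consistent with a 32-bit long overflow.
desired = 322689600000
actual = desired % 2**32
print(actual)  # 567052800
```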

Hello Everybody,
Sorry if this is a trivial question. I'm trying to find out if there is a way to save an object array to an ASCII file. numpy.savetxt() in NumPy v1.3.0 doesn't seem to work:
>>> import numpy as np
>>> obj_arr = np.zeros((2,), dtype=np.object)
>>> obj_arr[0] = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> obj_arr[1] = np.array([[10,11], [12,13]])
>>> obj_arr
array([[[1 2 3]
[4 5 6]
[7 8 9]], [[10 11]
[12 13]]], dtype=object)
>>> np.savetxt('obj_array.dat', obj_arr)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/io.py", line 636, in savetxt
fh.write(format % tuple(row) + '\n')
TypeError: float argument required
>>>
scipy.io.savemat() supports saving object arrays in Matlab format, but I could not find any documentation on an ASCII file format for object arrays.
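Since np.savetxt expects a regular (at most 2-D) numeric array, one workaround, sketched below and not an official object-array format, is to write each sub-array of the object array in turn, separated by blank lines:

```python
import io
import numpy as np

# Same object array as in the session above, written block by block.
obj_arr = np.zeros((2,), dtype=object)
obj_arr[0] = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
obj_arr[1] = np.array([[10, 11], [12, 13]])

buf = io.StringIO()              # stands in for open('obj_array.dat', 'w')
for sub in obj_arr:
    np.savetxt(buf, sub, fmt='%d')
    buf.write('\n')              # blank line between the ragged blocks
print(buf.getvalue())
```

Reading it back requires splitting on the blank lines, since the blocks have different shapes.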
Thanks in advance,
Masha
--------------------
liukis(a)usc.edu