You may be interested in stochastic programming and optimization with
the free Python module FuncDesigner.
We have written a Stochastic addon for FuncDesigner, but (at least for
several years) it will be commercial (currently it is free only for some
small-scale problems and for noncommercial research / educational
purposes). However, we will try to keep our prices several times lower
than our competitors'. We will also provide some discounts, including
region-based ones, and the first 15 customers will also get a
For further information, documentation, some examples, etc., read more at
I've built the BLAS, LAPACK, and ATLAS libraries, as shown below.
$ ls ~/lib/atlas/lib/
libatlas.a libcblas.a libf77blas.a liblapack.a libptcblas.a libptf77blas.a
The library locations are specified in the site.cfg file, as shown below.
library_dirs = /home/username/lib/atlas/lib
include_dirs = /home/username/lib/atlas/include
libraries = libf77blas, libcblas, libatlas
libraries = liblapack, libf77blas, libcblas, libatlas
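(One quick diagnostic, my suggestion rather than part of the original
post: you can check whether numpy's build machinery actually picks these
ATLAS libraries up from site.cfg by querying numpy.distutils directly,
run from the numpy source directory:

$ python -c "from numpy.distutils.system_info import get_info; print get_info('atlas')"

If this prints an empty dict, the build is silently falling back to
whatever BLAS/LAPACK it finds elsewhere.)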
I've tried to build numpy (version 1.6.2) by
$ python setup.py build --fcompiler=gnu
However, I got the following error message:
error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared
-L/home/username/lib/ -L/usr/lib64 -Lbuild/temp.linux-x86_64-2.6
-lodepack -llinpack_lite -lmach -lblas -lpython2.6 -lg2c -o
build/lib.linux-x86_64-2.6/scipy/integrate/vode.so" failed with exit
I've searched the internet for possible solutions all day but haven't made
any progress so far. Does anyone have any idea how to fix this? Thanks!
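(One thing that may be worth trying, a guess based on the link line above
rather than a confirmed fix: the failing command uses /usr/bin/g77 and
-lg2c, i.e. the old g77 compiler was picked up. numpy.distutils also
understands gfortran, under the name gnu95:

$ python setup.py build --fcompiler=gnu95

Note also that the vode.so in the error message belongs to scipy rather
than numpy, so the same flag would apply when building scipy.)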
f2py, by default, seems to prefer g77 (no longer maintained, deprecated,
speedy, no Fortran 90 or Fortran 95 support) over gfortran (maintained,
slower, supports Fortran 90 and Fortran 95).
This causes problems when we try to compile Fortran 90 extensions using
f2py on platforms where both g77 and gfortran are installed, unless we
manually override the compiler choice. It is a very minor edit to the
fcompiler/__init__.py file to prefer gfortran over g77 on OS X, and I can
think of almost no reason not to do so, since the Accelerate framework
(OS X's tuned LAPACK/BLAS) appears to be ABI compatible with gfortran.
I am not sure what the situation is on the distributions that numpy is
trying to support, but my feeling is that g77 should not be preferred
when gfortran is available.
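(For reference, the compiler can already be selected per invocation
without patching numpy.distutils; for example, to build an extension with
gfortran rather than g77:

$ f2py -c --fcompiler=gnu95 -m example example.f90

'gnu95' is numpy.distutils's name for gfortran, and 'gnu' its name for
g77.)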
I'm new to python and I'd like to learn about numpy / scipy / matplotlib, but I'm having trouble getting started.
I'm following the instructions here: http://www.scipy.org/Getting_Started
First I installed the latest version of Python from python.org by downloading the dmg file, since I read that it doesn't work with Apple's installer, and then installed numpy / scipy / matplotlib by downloading the relevant dmg files.
I then downloaded ipython, ran "easy_install readline" and then ran "python setup.py install".
Then I started ipython with "ipython -pylab" as per the instructions, but I get multiple error messages:
$ ipython --pylab
libedit detected - readline will not be well behaved, including but not limited to:
* crashes on tab completion
* incorrect history navigation
* corrupting long-lines
* failure to wrap or indent lines properly
It is highly recommended that you install readline, which is easy_installable:
    easy_install readline
Note that `pip install readline` generally DOES NOT WORK, because
it installs to site-packages, which come *after* lib-dynload in sys.path,
where readline is located. It must be `easy_install readline`, or to a custom
location on your PYTHONPATH (even --user comes after lib-dynload).
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
Type "copyright", "credits" or "license" for more information.
IPython 0.13 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
[TerminalIPythonApp] GUI event loop or pylab initialization failed
ImportError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/pylabtools.pyc in find_gui_and_backend(gui)
--> 196 import matplotlib
198 if gui and gui != 'auto':
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/__init__.py in <module>()
131 import sys, os, tempfile
--> 133 from matplotlib.rcsetup import (defaultParams,
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/rcsetup.py in <module>()
17 import warnings
18 from matplotlib.fontconfig_pattern import parse_fontconfig_pattern
---> 19 from matplotlib.colors import is_color_like
21 #interactive_bk = ['gtk', 'gtkagg', 'gtkcairo', 'fltkagg', 'qtagg', 'qt4agg',
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/colors.py in <module>()
51 import re
---> 52 import numpy as np
53 from numpy import ma
54 import matplotlib.cbook as cbook
ImportError: No module named numpy
It seems the installation of numpy and readline didn't work, and there are problems with matplotlib, even though I think I followed all the instructions carefully.
I can't figure out what I did wrong. Can anybody help?
I'm running Mac OS X 10.6.
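(One quick check worth running, my suggestion rather than part of the
linked instructions: make sure the ipython you launched is actually using
the python.org interpreter that the numpy dmg installed into, since OS X
also ships its own Python. Inside ipython:

import sys
print sys.executable  # should point into /Library/Frameworks/Python.framework/...
print sys.prefix      # the tree in which site-packages is searched

If sys.executable points at /usr/bin/python, i.e. Apple's build, the
dmg-installed numpy will be invisible to it.)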
The documentation of PyArray_FILLWBYTE at
http://docs.scipy.org/doc/numpy/reference/c-api.array.html says this:
PyArray_FILLWBYTE(PyObject* obj, int val)
Fill the array pointed to by obj —which must be a (subclass of)
bigndarray—with the contents of val (evaluated as a byte).
In the code, what it does is call memset:
#define PyArray_FILLWBYTE(obj, val) memset(PyArray_DATA(obj), val, \
                                           PyArray_NBYTES(obj))
This makes it ignore the strides completely!
So the easy fix would be to update the doc; the real fix is to test for
contiguity before calling memset and, if the array is not contiguous, to
call something else appropriate.
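(To see why the strides matter, here is a small illustration, my example
rather than one from the original report, of an array for which a raw
memset over the data buffer gives the wrong result:

import numpy as np

a = np.arange(16, dtype=np.uint8).reshape(4, 4)
v = a[:, ::2]                    # strided view: every other column
print v.flags['C_CONTIGUOUS']    # False, so PyArray_FILLWBYTE is unsafe on v
v[...] = 0                       # a stride-aware fill touches only the view

A memset over v's data pointer for PyArray_NBYTES(v) bytes would instead
fill the first 8 bytes of `a`, clobbering columns the view skips while
leaving most of the view untouched.)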
Does anyone know if f2py supports allocatable arrays allocated inside
Fortran subroutines? The old f2py docs seem to indicate that the
allocatable array must be created with numpy and dropped into the module.
Here's more background to explain...
I have a Fortran subroutine that returns allocatable positions and
velocities arrays. I wish I could get rid of the allocatable part, but you
don't know how many particles it will create until the subroutine does some
work (it checks if each particle it perturbs ends up in the domain).
subroutine ics(..., num_particles, particle_mass, positions, velocities)
  use data_types, only : dp
  ... inputs ...
  integer, intent(out) :: num_particles
  real (kind=dp), intent(out) :: particle_mass
  real (kind=dp), intent(out), dimension(:, :), allocatable :: positions, velocities
I tested this with a Fortran driver program and it looks good, but when I
try it with f2py, it fails to compile. It throws the error "Error: Actual
argument for 'positions' must be ALLOCATABLE at (1)". I figure this has
something to do with the auto-generated "*-f2pywrappers2.f90" file, but the
build deletes that file.
If anyone knows an f2py-friendly way to rework this, I would be happy to
try. I'm also fine with using ctypes if it can handle this case.
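(One rework that is commonly suggested for f2py, sketched here without
having been tested against this exact routine: move the allocatable
arrays into a Fortran module and let the subroutine allocate and fill
them there, since f2py exposes module-level allocatable arrays as
attributes. The Python side would then look roughly like this, with all
names hypothetical:

import ics_ext                 # hypothetical f2py-built extension module

ics_ext.ics(...)               # '...' stands for the elided inputs above
positions = ics_ext.results.positions    # 'results' = hypothetical Fortran module
velocities = ics_ext.results.velocities

Alternatively, pre-allocate the arrays in Python to a known upper bound on
the particle count, pass them in as intent(inout), and use num_particles
to say how many rows are valid.)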
> Your actual memory usage may not have increased as much as you think,
> since memmap objects don't necessarily take much memory -- it sounds
> like you're leaking virtual memory, but your resident set size
> shouldn't go up as much.
As I understand it, memmap objects retain the contents of the memmap in memory after it has been read the first time (in a lazy manner). Thus, when reading a slice of a 24GB file, only that part resides in memory. Our system reads a slice of a memmap, calculates something (say, the sum), and then deletes the memmap. It then loops over consecutive slices this way, retaining a low memory usage. Consider the following code:
import numpy as np

res = []
vecLen = 3095677412
for i in xrange(vecLen / 10**8 + 1):
    x = i * 10**8
    y = min((i + 1) * 10**8, vecLen)
    # sum one slice of the memmap, keeping only the result
    # (the file name and dtype here are illustrative)
    data = np.memmap('nucleotides.float64', dtype=np.float64, mode='r')
    res.append(data[x:y].sum())
    del data

The memory usage of this code on a 24GB file (one value for each nucleotide in the human DNA!) is 23g of resident memory after the loop has finished (not 24g, for some reason).
Running the same code on 1.5.1rc1 gives a resident memory of 23m after the loop.
> That said, this is clearly a bug, and it's even worse than you mention
> -- *all* operations on memmap arrays are holding onto references to
> the original mmap object, regardless of whether they share any memory:
>>>> a = np.memmap("/etc/passwd", np.uint8, "r")
> # arithmetic
>>>> (a + 10)._mmap is a._mmap
> True
> # fancy indexing (doesn't return a view!)
>>>> a[[1, 2, 3]]._mmap is a._mmap
> True
> # reductions
>>>> a.sum()._mmap is a._mmap
> True
> Really, only slicing should be returning a np.memmap object at all.
> Unfortunately, it is currently impossible to create an ndarray
> subclass that returns base-class ndarrays from any operations --
> __array_finalize__() has no way to do this. And this is the third
> ndarray subclass in a row that I've looked at that wanted to be able
> to do this, so I guess maybe it's something we should implement...
> In the short term, the numpy-upstream fix is to change
> numpy.core.memmap:memmap.__array_finalize__ so that it only copies
> over the ._mmap attribute of its parent if np.may_share_memory(self,
> parent) is True. Patches gratefully accepted ;-)
Great! Any idea on whether such a patch may be included in 1.7?
> In the short term, you have a few options for hacky workarounds. You
> could monkeypatch the above fix into the memmap class. You could
> manually assign None to the _mmap attribute of offending arrays (being
> careful only to do this to arrays where you know it is safe!). And for
> reduction operations like sum() in particular, what you have right now
> is not actually a scalar object -- it is a 0-dimensional array that
> holds a single scalar. You can pull this scalar out by calling .item()
> on the array, and then throw away the array itself -- the scalar won't
> have any _mmap attribute.
> def scalarify(scalar_or_0d_array):
> if isinstance(scalar_or_0d_array, np.ndarray):
> return scalar_or_0d_array.item()
> return scalar_or_0d_array
> # works on both numpy 1.5 and numpy 1.6:
> total = scalarify(a.sum())
Thank you for this! However, such a solution would have to be scattered throughout the code (probably in over 100 places), and I would rather not do that. I guess the above-mentioned patch would be the best solution. I do not have experience with the numpy core code, so I am also eagerly awaiting such a patch!
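(For anyone else hitting this before a release carries the fix, a rough
monkeypatch along the lines suggested above might look like this; an
untested sketch, use with care:

import numpy as np

_orig_finalize = np.memmap.__array_finalize__

def _patched_finalize(self, obj):
    # keep the parent's mmap reference only when memory is actually shared
    _orig_finalize(self, obj)
    if getattr(self, '_mmap', None) is not None and obj is not None:
        if not np.may_share_memory(self, obj):
            self._mmap = None

np.memmap.__array_finalize__ = _patched_finalize
)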
Does anyone have any experience building a 32-bit version of numpy on a
64-bit Linux machine? I'm trying to build a Python stack that I can use
to work with a (closed-source) 32-bit library.
Much messing around with environment variables and linker flags has got
me some of the way, perhaps, but not far enough to give me confidence
that I'm going down the right track.
Advice would be appreciated!
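(In case it helps as a starting point, the usual recipe, offered as an
untested sketch that assumes a 32-bit Python interpreter and the 32-bit
gcc/gfortran multilib packages are already in place, is to force -m32
everywhere and build under a 32-bit personality:

$ export CC="gcc -m32" CXX="g++ -m32"
$ export F77="gfortran -m32" F90="gfortran -m32"
$ export LDFLAGS="-m32"
$ linux32 python setup.py build

where the python on the last line must itself be a 32-bit build, since
extension modules have to match the interpreter's word size.)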
In recent work with a colleague, the need came up for a multivariate
hypergeometric sampler; I had a look at the numpy code and saw that we
have the bivariate version, but not the multivariate one.
I had a look at the code in scipy.stats.distributions, and it doesn't
look too difficult to add a proper multivariate hypergeometric by
extending the bivariate code, with one important caveat: the hard part
is the implementation of the actual discrete hypergeometric sampler,
which lives inside numpy/random/mtrand/distributions.c.
That code is hand-written C, and it only works for the bivariate case
right now. It doesn't look terribly difficult to extend, but it will
certainly take a bit of care and testing to ensure all edge cases are
handled correctly.
Does anyone happen to have such an implementation lying around, in a form
that would be easy to merge, to add this capability to numpy?
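(For what it's worth, here is a pure-Python sketch of the standard
conditional construction, my illustration of the approach rather than the
C code that would eventually be merged: sample each color in turn against
the pool of the remaining colors, using the existing bivariate sampler.

import numpy as np

def multivariate_hypergeometric(colors, nsample):
    # colors: number of items of each color in the urn
    # nsample: number of draws, without replacement
    remaining = int(np.sum(colors))
    sample = np.zeros(len(colors), dtype=int)
    for i, ngood in enumerate(colors[:-1]):
        remaining -= ngood           # items of the colors not yet sampled
        if nsample == 0:
            break
        # how many of the remaining draws hit color i
        sample[i] = np.random.hypergeometric(ngood, remaining, nsample)
        nsample -= sample[i]
    sample[-1] = nsample             # leftover draws land in the last color
    return sample

print multivariate_hypergeometric([10, 5, 20], 12)

The C version would need the same chaining, plus care with degenerate
cases such as empty colors and nsample equal to 0 or to the whole urn.)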