I reinstalled it on a different machine and got the same error with sndfile;
no warning anymore with audiolab.
I followed these points:
audiolab requires the following software:
- a python interpreter.
- numpy (any version >= 1.2 should work).
On Ubuntu, you can install the dependencies as follows:
sudo apt-get install python-dev python-numpy python-setuptools libsndfile-dev
Audiolab can optionally install audio backends. For now, only alsa (Linux)
and Core Audio (Mac OS X) are supported. On Linux, you need alsa headers for
this to work; on Ubuntu, you can install them with the following command:
sudo apt-get install libasound2-dev
For Mac OS X, you need the CoreAudio framework, which comes with Apple's
developer tools.
For Unix users, if libsndfile is installed in a standard location (e.g.
/usr/lib, /usr/local/lib), the installer should be able to find it
automatically, and you only need to do a "python setup.py install".
Concerning the optional site.cfg file, I have no idea what to do with it. It
might be the problem; maybe you can give advice?
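For reference, a minimal site.cfg pointing the installer at a non-standard
libsndfile might look something like this (the [sndfile] section name is how
I remember it from the audiolab docs, and the paths are placeholders for
your own install prefix):

    [sndfile]
    library_dirs = /opt/libsndfile/lib
    include_dirs = /opt/libsndfile/include

If libsndfile really is in /usr/lib or /usr/local/lib, you should not need a
site.cfg at all.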
2010/5/28 David Cournapeau <cournape(a)gmail.com>
> On Fri, May 28, 2010 at 1:58 AM, arthur de conihout
> <arthurdeconihout(a)gmail.com> wrote:
> > Thank you for the answer.
> > I would love to try audiolab, but I got this error while importing (I'm
> > running Python 2.6):
> >>>> import audiolab
> > UserWarning: Could not import alsa backend; most probably, you did not
> > have alsa headers when building audiolab
> How did you install it? Your installation looks bogus (if you built
> it yourself, please give the *exact* instructions you used)
I changed the subject line for this thread, since I didn't want to
hijack another thread. Anyway, I am not proposing that we actually
decide whether to move to git and github now, but I am just curious
how people would feel. We had a conversation about this a few years
ago and it was quite contentious at the time. Since then, I believe a
number of us have started using git and github for most of our work.
And there are a number of developers using git-svn to develop numpy
now. So I was curious to get a feeling for what people would think
about it, if we moved to git. (I don't want to rehash the arguments
for the move.)
Anyway, Chuck listed the main concerns we had previously when we
discussed moving from svn to git. See the discussion below. Are
there any other concerns? Am I right in thinking that most of the
developers would prefer git at this point? Or are there still a
number of developers who would prefer to keep using svn?
On Wed, May 26, 2010 at 12:54 PM, Charles R Harris wrote:
> I think the main problem has been windows compatibility. Git is best from
> the command line whereas the windows command line is an afterthought.
> Another box that needs a check-mark is the buildbot. If svn clients are
> supported then it may be that neither of those are going to be a problem.
I was under the impression that there were a number of decent git
clients for Windows now, but I don't know anyone who develops on
Windows. Are there any NumPy developers who use Windows who could
check out the current situation?
Pulling from github with an svn client works very well, so buildbot
could continue working as is:
And if it turns out the Windows clients are still not good enough, we
could look into the recently added svn write support on github:
No need for us to make any changes immediately. I am just curious how
people would feel about it at this point.
>> I am trying to implement a real-time convolution module driven by the
>> listener's head position (angle from a reference point). Convolving with
>> binaural filters (HRTFs) lets me spatialize a monophonic wav file. I am
>> having trouble with this, since my convolution doesn't seem to work
>> properly:
>> np.convolve() doesn't convolve the entire input signal
>> -> trouble with extracting numpy arrays from the wav audio filters and
>> the monophonic input
>> -> trouble then with encapsulating the resulting array in a proper wav
>> file... it is not read by Audacity
>> Do you have any idea how this could work, or do you have any
>> implementation of stereo filtering by impulse response you could send me?
>> Thank you very much
>> Arthur de Conihout
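For what it's worth, here is a rough sketch of the offline (non-real-time)
version of that pipeline, assuming scipy is available and the source and
HRTFs are mono wav files; the file names are placeholders:

    import numpy as np
    from scipy.io import wavfile

    # Placeholder files: a mono source and a left/right HRTF pair.
    rate, mono = wavfile.read("source_mono.wav")
    _, hrtf_l = wavfile.read("hrtf_left.wav")
    _, hrtf_r = wavfile.read("hrtf_right.wav")

    # Work in float to avoid integer overflow during the convolution.
    mono = mono.astype(np.float64)

    # mode='full' convolves the whole input signal; the output has
    # len(mono) + len(hrtf) - 1 samples.
    left = np.convolve(mono, hrtf_l.astype(np.float64), mode='full')
    right = np.convolve(mono, hrtf_r.astype(np.float64), mode='full')

    # Interleave into an (n, 2) array and rescale to the int16 range so
    # the result is a well-formed stereo wav file.
    stereo = np.column_stack((left, right))
    stereo = (stereo / np.abs(stereo).max() * 32767).astype(np.int16)
    wavfile.write("spatialized.wav", rate, stereo)

A real-time version would convolve block by block (overlap-add) and
crossfade between HRTFs as the head angle changes; skipping the rescaling
and int16 conversion is one common way to end up with a file that players
refuse to open.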
A while ago we had a brief discussion about this.
Is this a feature, or should there be a ticket for it?
>>> a = np.sqrt('5')
This is a one-time message to announce the availability of version 0.91
of the StarCluster package.
Why should you care? StarCluster allows you to create NumPy/SciPy
clusters configured with NFS-shared filesystems and the Sun Grid Engine
queueing system out of the box on Amazon's Elastic Compute Cloud (EC2).
The NumPy/SciPy installations have been compiled against a
custom-compiled ATLAS for the larger EC2 instances.
There is an article about StarCluster on www.hpcinthecloud.com:
There is also a screencast of installing, configuring, launching, and
terminating an HPC cluster on Amazon EC2:
Project description from PyPI:
StarCluster is a utility for creating and managing scientific computing
clusters hosted on Amazon's Elastic Compute Cloud (EC2). StarCluster
utilizes Amazon's EC2 web service to create and destroy clusters of
Linux virtual machines on demand.
To get started, the user creates a simple configuration file with their
AWS account details and a few cluster preferences (e.g. number of
machines, machine type, ssh keypairs, etc). After creating the
configuration file and running StarCluster's "start" command, a cluster
of Linux machines configured with the Sun Grid Engine queuing system, an
NFS-shared /home directory, and OpenMPI with password-less ssh is
created and ready to go out-of-the-box. Running StarCluster's "stop"
command will shut down the cluster and stop the billing, so the user only
pays for what they use.
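To make that concrete, a stripped-down configuration and session might look
roughly like the following (section and option names as I recall them from
the StarCluster docs; all values are placeholders):

    [aws info]
    aws_access_key_id = <your access key>
    aws_secret_access_key = <your secret key>
    aws_user_id = <your user id>

    [cluster smallcluster]
    keyname = mykey
    cluster_size = 4
    node_instance_type = m1.small

    $ starcluster start smallcluster
    ... run jobs via SGE/MPI ...
    $ starcluster stop smallcluster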
StarCluster provides an Ubuntu-based Amazon Machine Image (AMI) in 32-bit
and 64-bit architectures. The AMI contains an optimized
NumPy/SciPy/Atlas/Blas/Lapack installation compiled for the larger
Amazon EC2 instance types. The AMI also comes with Sun Grid Engine (SGE)
and OpenMPI compiled with SGE support. The public AMI can easily be
customized by launching a single instance of the public AMI, installing
additional software on the instance, and then using StarCluster's
"createimage" command to create a new AMI from the customized instance.
StarCluster can also utilize Amazon's Elastic Block Storage (EBS)
volumes to provide persistent data storage for a cluster. EBS volumes
allow you to store large amounts of data in the Amazon cloud and are
also easy to back up and replicate in the cloud. StarCluster will mount
and NFS-share any volumes specified in the config. StarCluster's
"createvolume" command provides the ability to automatically create,
format, and partition new EBS volumes for use with StarCluster.
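For example, creating a new 50 GB volume in a given availability zone should
be a one-liner along these lines (arguments as I recall them from the docs):

    $ starcluster createvolume 50 us-east-1c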
StarCluster is available on PyPI
(http://pypi.python.org/pypi/StarCluster) and also on the project's website:
You will find the docs as well as links to the StarCluster mailing list
on the website.
New in this version:
* support for launching and managing multiple clusters on EC2
* added "listclusters" command for showing all active clusters on EC2
* support for attaching and NFS-sharing multiple EBS volumes
* added createimage and createvolume commands for easily creating new
AMIs and EBS volumes for use with StarCluster
* experimental support for launching clusters using spot instances
* added support for StarCluster "plugins" that provide the ability to
perform additional configuration/setup routines on top of StarCluster's
default cluster configuration (see the sketch after this list)
* added "listpublic" command for listing all available public StarCluser
AMIs that can be used with StarCluster
* bash/zsh command line completion for StarCluster's command line interface
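Regarding the new plugin support: if I am reading the docs right, a plugin
is just a Python class with a run() method, registered in the config. A
minimal sketch (the class name, package path, and installed package are all
placeholders):

    from starcluster.clustersetup import ClusterSetup

    class InstallHtop(ClusterSetup):
        # run() is called after the default cluster setup completes.
        def run(self, nodes, master, user, user_shell, volumes):
            for node in nodes:
                # Each node object exposes an ssh connection.
                node.ssh.execute('apt-get -y install htop')

and in the config:

    [plugin htop]
    setup_class = mypackage.InstallHtop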
I'm getting an error in check_long_double_representation on a
linux/powerpc64 box. Has anybody seen this before, or does anyone know a
fix? -Chris
> python -V
> uname -a
Linux chl-29-200 2.6.32-21-powerpc64-smp #32-Ubuntu SMP Fri Apr 16 10:28:57
UTC 2010 ppc64 GNU/Linux
>python setup.py install
C compiler: gcc -m32 -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core
-Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath
removing: _configtest.c _configtest.o
Traceback (most recent call last):
File "setup.py", line 187, in <module>
File "setup.py", line 180, in setup_package
line 186, in setup
line 152, in setup
line 975, in run_commands
line 995, in run_command
line 37, in run
line 134, in run
File "/home/cekees/src/proteus/linux/lib/python2.6/distutils/cmd.py", line
333, in run_command
line 995, in run_command
line 152, in run
line 169, in build_sources
line 328, in build_extension_sources
sources = self.generate_sources(sources, ext)
line 385, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 413, in generate_config_h
rep = check_long_double_representation(config_cmd)
File "numpy/core/setup_common.py", line 136, in
type = long_double_representation(pyod(object))
File "numpy/core/setup_common.py", line 244, in long_double_representation
raise ValueError("Unrecognized format (%s)" % saw)
ValueError: Unrecognized format (['001', '043', '105', '147', '211', '253',
'315', '357', '301', '235', '157', '064', '124', '000', '000', '000', '000',
'000', '000', '000', '000', '000', '000', '000',
'376', '334', '272', '230', '166', '124', '062', '020'])
I'm wondering if we could extend the current documentation format to the C
source code. The string blocks would be implemented something like:
    /**
     * Answer the Ultimate Question of Life, the Universe, and Everything.
     *
     * Parameters
     * ----------
     * We don't need no stinkin' parameters.
     *
     * Notes
     * -----
     * The run time of this routine may be excessive.
     */
and the source scanned to generate the usual documentation. Thoughts?
How do I determine whether an array's dtype (or a column's in a structured
array) is a number or a string? I can see how to determine the actual dtype,
but all I want to know is whether it is a string or a number.
my blog <http://vincentdavis.net>
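For reference, one way to answer this is via dtype.kind, a one-character
type code (the arrays below are just examples):

    >>> import numpy as np
    >>> a = np.array([1.0, 2.0])
    >>> s = np.array(['a', 'b'])
    >>> a.dtype.kind
    'f'
    >>> a.dtype.kind in 'iufc'   # int/unsigned/float/complex -> numeric
    True
    >>> s.dtype.kind in 'SU'     # bytes or unicode string
    True

np.issubdtype(a.dtype, np.number) is an equivalent spelling for the numeric
case.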
Ok, let's try sending this message again, since it looks like I can't
send from gmane...
(See discussion on python-list at
http://permalink.gmane.org/gmane.comp.python.general/661328 for context)
numpy.zeros_like contains the following code:
if isinstance(a, ndarray):
    res = ndarray.__new__(type(a), a.shape, a.dtype, order=a.flags.fnc)
This is a problem because basetype.__new__(subtype, ...) raises an
exception when subtype is defined from C (specifically, when
Py_TPFLAGS_HEAPTYPE is not set). There's a check in Objects/typeobject.c
in tp_new_wrapper that disallows this (you can grep for "is not safe" to
find where the exception is raised).
The end result is that it's impossible to use zeros_like, ones_like or
empty_like with ndarray subtypes defined in C.
While I'm still not sure why Python needs this check in general, Robert
Kern pointed out that the problem can be fixed pretty easily in NumPy by
changing zeros_like and friends to something like this (with some
modifications from me):
if isinstance(a, ndarray):
    res = numpy.zeros(a.shape, a.dtype, order=a.flags.fnc)
    res = res.view(type(a))
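Spelled out as a complete function, the proposed change would look roughly
like this sketch (not the final patch; the bool-to-string order conversion
is mine, since older NumPy accepted a bare a.flags.fnc as the order
argument):

    import numpy as np

    def zeros_like(a):
        if isinstance(a, np.ndarray):
            # np.zeros always returns a base ndarray, so we never hit
            # ndarray.__new__(subtype, ...), which Python rejects for
            # subtypes defined in C (no Py_TPFLAGS_HEAPTYPE set).
            order = 'F' if a.flags.fnc else 'C'
            res = np.zeros(a.shape, a.dtype, order=order)
            # View-cast the fresh array back to the original subclass.
            return res.view(type(a))
        # Fall back for generic array-likes.
        a = np.asarray(a)
        return np.zeros(a.shape, a.dtype)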