This works. A big array of eight-bit random numbers is constructed:

import numpy as np
spectrumArray = np.random.randint(0, 255, (2**20, 2**12)).astype(np.uint8)

This fails. It eats up all 64 GB of RAM:

spectrumArray = np.random.randint(0, 255, (2**21, 2**12)).astype(np.uint8)
The difference is a factor of two, 2**21 rather than 2**20, for the extent
of the first axis.
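A sketch of the likely cause, assuming a 64-bit platform: randint returns a default-integer array (8 bytes per element), and astype only copies it down to uint8 afterwards, so the temporary alone needs 2**21 * 2**12 * 8 bytes = 64 GiB. The chunked workaround below (chunk size is an arbitrary choice of mine) keeps the temporary small:

```python
import numpy as np

# Memory of the intermediate default-integer array vs. the final uint8 array.
rows, cols = 2**21, 2**12
intermediate_gib = rows * cols * 8 / 2**30   # 64.0 GiB: exhausts RAM
final_gib = rows * cols * 1 / 2**30          # 8.0 GiB: the array actually wanted

def chunked_uint8_randint(rows, cols, chunk=2**16):
    # Fill the uint8 output a slab of rows at a time, so the temporary
    # default-integer array from randint never exceeds chunk * cols elements.
    out = np.empty((rows, cols), dtype=np.uint8)
    for i in range(0, rows, chunk):
        stop = min(i + chunk, rows)
        out[i:stop] = np.random.randint(0, 255, (stop - i, cols)).astype(np.uint8)
    return out
```

(NumPy 1.11 later added a dtype argument to randint, e.g. `np.random.randint(0, 255, shape, dtype=np.uint8)`, which avoids the temporary entirely.)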
David P. Saroff
Rochester Institute of Technology
54 Lomb Memorial Dr, Rochester, NY 14623
david.saroff(a)mail.rit.edu | (434) 227-6242
Sorry - I'll join there.

At 10:00 AM 12/8/2015, you wrote:
>On Tue, Dec 8, 2015 at 9:30 AM, R Schumacher wrote:
>[quoted question trimmed; the full message follows below]
>This should go to either ... or scipy-dev(a)scipy.org
>NumPy-Discussion mailing list
We have a function which describes a frequency response correction to
piezo devices we use. To flatten the FFT, it is similar to:

Cdis_t = .5
N = 8192
for n in range(8192):
    B3 = n * 2560 / N
    Fc(n) = 1 / ((B3/((1/(Cdis_t*2*pi))**2+B3**2)**0.5)*(-0.01*log(B3) + 1.04145))

In practice it really only matters for low frequencies.
I suggested that we might be able to do a time domain correction as a
forward-reverse FFT filter using the function, but another said it
can also be applied in the time domain using a bilinear transform.
So, can one use ... and, how does one generate b, a from the given
Fourier domain flattening function?
I'd guess someone here has done this...
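A minimal sketch of the forward/inverse-FFT correction suggested above, built from the Fc formula in the message. The signal length and the choice to leave the DC bin at unity gain (Fc is undefined at n = 0 because of log(0)) are my assumptions, not part of the original:

```python
import numpy as np

Cdis_t = 0.5
N = 8192                                    # number of rfft bins used below

n = np.arange(1, N, dtype=float)            # skip n = 0 (log of zero)
B3 = n * 2560 / N
Fc = 1.0 / ((B3 / ((1.0 / (Cdis_t * 2 * np.pi))**2 + B3**2)**0.5)
            * (-0.01 * np.log(B3) + 1.04145))

gain = np.ones(N)                           # flattening gain per frequency bin
gain[1:] = Fc                               # bin 0 left at unity by assumption

x = np.random.randn(2 * (N - 1))            # illustrative time-domain signal
X = np.fft.rfft(x)                          # forward FFT -> N bins
y = np.fft.irfft(X * gain)                  # apply correction, inverse FFT
```

A bilinear-transform (b, a) version would instead fit an analog prototype to this response and map it to the digital domain (e.g. with scipy.signal.bilinear); that design step is not shown here.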
>> > Is the interp fix in the google pipeline or do we need a workaround?
>> Oooh, if someone is looking at changing interp, is there any chance
>> that fp could be extended to take complex128 rather than just float
>> values? I.e. so that I could write:
>> >>> y = interp(mu, theta, m)
>> rather than
>> >>> y = interp(mu, theta, m.real) + 1.0j*interp(mu, theta, m.imag)
>> which *sounds* like it might be simple and more (Num)pythonic.
> That sounds like an excellent improvement and you should submit a PR
> implementing it :-).
> "The interp fix" in question though is a regression in 1.10 that's blocking
> 1.10.2, and needs a quick minimal fix asap.
Good answer - as soon as I hit 'send' I wondered how many bugs get
introduced by people trying to attach feature requests to bug fixes. I
will take a look at the code later and pm you if I get anywhere...
On 07/12/2015 09:38, numpy-discussion-request(a)scipy.org wrote:
> Message: 4
> Date: Sun, 06 Dec 2015 22:01:40 -0500
> From: "DAVID SAROFF (RIT Student)" <dps7802(a)rit.edu>
> To: Discussion of Numerical Python <numpy-discussion(a)scipy.org>
> Cc: Stefi Baum <stefibaumodea(a)gmail.com>
> Subject: Re: [Numpy-discussion] array of random numbers fails to
> Content-Type: text/plain; charset="utf-8"
> I see with a google search on your name that you are in the physics
> department at Rutgers. I got my BA in Physics there. 1975. Biological
> physics. A thought: Is there an entropy that can be assigned to the DNA in
> an organism? I don't mean the usual thing, coupled to the heat bath.
> Evolution blindly explores metabolic and signalling pathways, and tends
> towards disorder, as long as it functions. Someone working out signaling
> pathways some years ago wrote that they were senselessly complex, branched
> and interlocked. I think that is to be expected. Evolution doesn't find
> minimalist, clear, rational solutions. Look at the Amazon rain forest. What
> are all those beetles and butterflies and frogs for? It is the wrong
> question. I think some measure of the complexity could be related to the
> amount of time that ecosystem has existed. Similarly for genomes.
You are mistaken in this remark in your message:
> Evolution blindly explores metabolic and signalling pathways, and
> tends towards disorder, as long as it functions.
In fact, biological evolution does just the opposite.
It overcomes disorder and creates complexity at the expense of 'pulling
in' energy from the outside, from the environment.
Of course you are correct that biological evolution does NOT look for,
nor does it achieve, optimum solutions. It merely replaces the current
mechanism with another mechanism biologically derived from the current
mechanism, provided only that the replacement mechanism is marginally,
fractionally superior in the totality of the life of the ecosystem.
Have a good day,
> Is the interp fix in the google pipeline or do we need a workaround?
Oooh, if someone is looking at changing interp, is there any chance
that fp could be extended to take complex128 rather than just float
values? I.e. so that I could write:
>>> y = interp(mu, theta, m)
rather than
>>> y = interp(mu, theta, m.real) + 1.0j*interp(mu, theta, m.imag)
which *sounds* like it might be simple and more (Num)pythonic.
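Until interp itself accepts complex fp, the workaround above can be wrapped in a small helper (the function name is my own, not a NumPy API):

```python
import numpy as np

def interp_complex(x, xp, fp):
    # Hypothetical helper wrapping the workaround described above:
    # interpolate the real and imaginary parts separately with np.interp
    # (which only accepts float fp here) and recombine them.
    fp = np.asarray(fp)
    return np.interp(x, xp, fp.real) + 1j * np.interp(x, xp, fp.imag)
```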
For the wafo package we are trying to include the extension compilation
process in setup.py by using setuptools and numpy.distutils. Some
of the extensions have one Fortran interface source file, but it depends on
several other Fortran sources (modules). The manual compilation process
would go as follows:

gfortran -fPIC -c source_01.f
gfortran -fPIC -c source_02.f
f2py -m module_name -c source_01.o source_02.o source_interface.f

Can this workflow be incorporated into setuptools/numpy.distutils?
Something along the lines of:

from numpy.distutils.core import setup, Extension
ext = Extension('module.name', ...)

(note that the above does not work)
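One possible shape for this, sketched under the assumption that numpy.distutils drives the whole build: its Extension compiles every Fortran file listed in sources and links the objects into the f2py module, so the separate gfortran steps should not be needed. File names follow the manual workflow above; this is an untested build-config sketch, not verified against the wafo tree:

```python
# setup.py sketch (untested): numpy.distutils compiles each Fortran source
# listed below and links the objects into the f2py extension, replacing the
# manual "gfortran -fPIC -c ... ; f2py -m ... -c ..." steps.
from numpy.distutils.core import setup, Extension

ext = Extension(
    name='module_name',
    sources=['source_01.f',          # dependency modules, listed first
             'source_02.f',
             'source_interface.f'],  # the f2py interface file
)

setup(name='wafo', ext_modules=[ext])
```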
I am an open-source contributor to Pyston, and I am working on a NumPy
port for Pyston. I disabled some tests which cause segfaults. This is the
current situation (in my local branch):

Ran 3240 tests in 80.811s
FAILED (KNOWNFAIL=3, SKIP=6, errors=334, failures=74)

Indeed, the run took longer than under CPython. I think this is because
of the additional exceptions / frame introspection, etc., and no one has
yet taken a look at the performance.
The Pyston runtime is missing some features. I will try to fix the
problems discovered while running the NumPy test suite.
This is basically an announcement. I will update the situation when I get
a breakthrough, ask for help if I encounter problems, and submit commits
to NumPy.
Looking for feedback.