I'm sure you all noticed that numpy-discussion was down for 4-5 days
this week. Apparently this was due to some unnoticed misconfigurations
that were triggered when moving it from the ancient and unmaintainable
server that it's been on for the last umpteen years. Some more details
There's some hope that this should reduce the frequency of outages
going forward, though it's still not entirely clear how stable this
new setup is or who is responsible for maintaining the list now or in
the future -- if you know then please speak up :-).
Nathaniel J. Smith -- http://vorpus.org
In communications and signal processing, it is frequently necessary to
calculate the power of a signal. This can be done with a function like the
following:

    def magsq(z):
        """
        Return the magnitude squared of the real- or complex-valued input.
        """
        return z.real**2 + z.imag**2
A high percentage of the scripts that I write contain or import a function
like this. It would be great if there were a built-in method in NumPy,
preferably with a name like `magsq`, `mag2`, or `msq`.
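For concreteness, here is a minimal sketch of the proposed helper (the name
`magsq` is just one of the suggested candidates, not an existing NumPy
function):

```python
import numpy as np

def magsq(z):
    """Return the magnitude squared of the real- or complex-valued input."""
    z = np.asarray(z)
    # For complex z this avoids the sqrt hidden inside np.abs(z)**2.
    return z.real ** 2 + z.imag ** 2

# |3+4j|^2 = 9 + 16 = 25; works elementwise on arrays and on real input too.
print(magsq(np.array([3 + 4j, 1 + 0j])))  # → [25.  1.]
print(magsq(2.0))                         # → 4.0
```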
[Tim, ping me if you want to get dropped from the reply chain, as we are
liable to get more into numpy decision-making. I've dropped python-ideas.]
On Mon, Sep 14, 2015 at 4:34 AM, Tim Peters <tim.peters(a)gmail.com> wrote:
> [Robert Kern <robert.kern(a)gmail.com>]
> >> ...
> >> I'll also recommend the PCG paper (and algorithm) as the author's
> >> cross-PRNG comparisons have been bandied about in this thread. The
> >> paper lays out a lot of the relevant issues and balances the various
> >> qualities that are important: statistical quality, speed, and security
> >> (of various flavors).
> >> http://www.pcg-random.org/paper.html
> >> I'm actually not that impressed with Random123. The core idea is nice
> >> and clean, but the implementation is hideously complex.
> [Nathaniel Smith <njs(a)pobox.com>]
> > I'm curious what you mean by this? In terms of the code, or...?
In terms of the code that would be necessary to implement this in a
simultaneously performant and cross-platform way in numpy.random.
> > I haven't looked at the code,
I recommend it! Approach it with the practical goal of "how do I stick this
in as a new core PRNG for numpy.random?"
> > but the simplest generator they
> > recommend in the paper is literally just
> > def rng_stream(seed):
> >     counter = 0
> >     while True:
> >         # AES128 takes a 128-bit key and 128 bits of data and
> >         # returns 128 bits of encrypted data
> >         val = AES128(key=seed, data=counter)
> >         yield low_64_bits(val)
> >         yield high_64_bits(val)
> >         counter += 1
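The counter-mode idea in that pseudocode can be made runnable with only the
standard library by substituting a hash for the block cipher (SHA-256 stands
in for AES128 here purely for illustration; the actual Random123 generator
uses AES, which is not in the Python standard library):

```python
import hashlib

def rng_stream(seed):
    """Counter-mode PRNG sketch: hash(seed || counter) stands in for AES128."""
    counter = 0
    while True:
        # Concatenate the 128-bit seed ("key") with the 128-bit counter,
        # hash it, and keep the first 128 bits of the digest as the block.
        block = seed.to_bytes(16, "little") + counter.to_bytes(16, "little")
        val = int.from_bytes(hashlib.sha256(block).digest()[:16], "little")
        yield val & ((1 << 64) - 1)   # low 64 bits
        yield val >> 64               # high 64 bits
        counter += 1

stream = rng_stream(seed=0x123456789ABCDEF)
draws = [next(stream) for _ in range(4)]
```

Because the state is just (seed, counter), arbitrary jumps ahead and
independent streams per seed come for free, which is the appeal of the
counter-mode design.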
> I assume it's because if you expand what's required to _implement_
> AES128() in C, it does indeed look pretty hideously complex. On HW
> implementing AES primitives, of course the code can be much simpler.
> But to be fair, if integer multiplication and/or addition weren't
> implemented in HW, and we had to write to C code emulating them via
> bit-level fiddling, the code for any of the PCG algorithms would look
> hideously complex too.
> But being fair isn't much fun ;-)
Actually, I meant all of the crap *around* it, the platform-compatibility
testing to see if you have such a hardware instruction or not, and C++
template shenanigans in the surrounding code. It's possible that the
complexity is only due to flexibility, but it was too complex for me to
begin understanding *why* it's so complex before I succumbed to ennui and
moved on to some other productive use of my free time. At least some of the
complexity is due to needing software implementations of reduced-round
crypto for decent performance in the absence of the hardware instruction.
Performing well in the absence of the hardware instruction is very
important to me as I do not seem to have the AES-NI instruction available
on my mid-2012 MacBook Pro. Exposing counter-mode AES128 as a core PRNG is
a nice idea, but it's just low on my wishlist. I want fast, multiple
independent streams on my current hardware first, and PCG gives that to me.
On the "if I had to implement all of the bit-fiddling myself" count, I'd
still prefer PCG. It has a similar problem with 128-bit integers that may
or may not be natively provided (should you want a 128-bit state), and I
have to say that the software implementation of that bit-fiddling is much
nicer to work with than reduced-round Threefry. Its reference
implementation is also overly complex for practical use, as it is showing
off the whole parameterized PCG family to support the paper, even the
explicitly unrecommended members. But it's relatively straightforward to
disentangle the desirable members; their implementations, including the
supporting 128-bit arithmetic code, are small and clean.
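As a reference point for how small a "desirable member" is once disentangled,
here is a Python port of the minimal PCG32 (XSH-RR, 64-bit state / 32-bit
output) generator from pcg-random.org; the class wrapper and parameter names
are my own, but the multiplier, seeding sequence, and output permutation
follow the minimal C implementation:

```python
MULT = 6364136223846793005
MASK64 = (1 << 64) - 1

class PCG32:
    """Minimal PCG32 (XSH-RR) sketch; distinct odd increments give
    independent streams, which is the multiple-streams property at issue."""

    def __init__(self, seed, stream=0):
        # The increment must be odd; the stream id selects which sequence.
        self.inc = ((stream << 1) | 1) & MASK64
        self.state = 0
        self.next()
        self.state = (self.state + seed) & MASK64
        self.next()

    def next(self):
        old = self.state
        # LCG state advance, then output a rotated xorshift of the OLD state.
        self.state = (old * MULT + self.inc) & MASK64
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59  # top 5 bits select the rotation
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

rng = PCG32(seed=42, stream=54)
draws = [rng.next() for _ in range(4)]
```

Note this is the 64-bit-state member; the 128-bit-state variants discussed
above need the (equally small) emulated 128-bit arithmetic on platforms
without a native `__uint128_t`.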
At 10:01 AM 9/20/2015, you wrote:
>Is that not the same as
>    np.abs(z)**2 ?
>It is, but since that involves taking sqrt, it is *much* slower. Even now,
>In : r = np.arange(10000)*(1+1j)
>In : %timeit np.abs(r)**2
>1000 loops, best of 3: 213 µs per loop
>In : %timeit r.real**2 + r.imag**2
>10000 loops, best of 3: 47.5 µs per loop
Ahh yes, a full extra step "back".
Assuming these are spectra, how does timeit do
with scipy.signal.periodogram (using appropriate n and scaling)?
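The quoted IPython timings can be reproduced with a self-contained snippet
(array size and repeat count follow the quoted session; absolute times will
of course differ by machine, and the scipy.signal.periodogram comparison is
left aside since it pulls in SciPy):

```python
import timeit
import numpy as np

r = np.arange(10000) * (1 + 1j)

# Both expressions compute |r|**2, but np.abs takes a sqrt that the
# square then undoes, so the first is typically several times slower.
t_abs = timeit.timeit(lambda: np.abs(r) ** 2, number=1000)
t_msq = timeit.timeit(lambda: r.real ** 2 + r.imag ** 2, number=1000)

print(f"np.abs(r)**2:          {t_abs * 1e3:.1f} µs per loop")
print(f"r.real**2 + r.imag**2: {t_msq * 1e3:.1f} µs per loop")
```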
I have a PyQt app running on iOS and would like to add NumPy to improve
calculation speed. I see a few python interpreters in the app store which
use NumPy so it must be possible, however I have not been able to find any
information on the build process for the iOS cross compile.
We are building a Python with all libraries statically linked. Here is the
Any help getting NumPy compiled into the iOS app?
At 09:16 PM 9/18/2015, you wrote:
>In communications and signal processing, it is
>frequently necessary to calculate the power of a
>signal. This can be done with a function like the following:
>    def magsq(z):
>        """
>        Return the magnitude squared of the real- or complex-valued input.
>        """
>        return z.real**2 + z.imag**2
Is that not the same as
    np.abs(z)**2 ?