[Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

josef.pktd at gmail.com josef.pktd at gmail.com
Sat Apr 26 10:10:47 EDT 2014


On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett <matthew.brett at gmail.com> wrote:

> Hi,
>
> On Thu, Apr 24, 2014 at 5:26 PM,  <josef.pktd at gmail.com> wrote:
> >
> >
> >
> > On Thu, Apr 24, 2014 at 7:29 PM, <josef.pktd at gmail.com> wrote:
> >>
> >>
> >>
> >>
> >> On Thu, Apr 24, 2014 at 7:20 PM, Charles R Harris
> >> <charlesr.harris at gmail.com> wrote:
> >>>
> >>>
> >>>
> >>>
> >>> On Thu, Apr 24, 2014 at 5:08 PM, <josef.pktd at gmail.com> wrote:
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris
> >>>> <charlesr.harris at gmail.com> wrote:
> >>>>>
> >>>>>
> >>>>> Hi Matthew,
> >>>>>
> >>>>> On Thu, Apr 24, 2014 at 3:56 PM, Matthew Brett
> >>>>> <matthew.brett at gmail.com> wrote:
> >>>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
> >>>>>> (main author of ATLAS), I've built 64-bit windows numpy and scipy
> >>>>>> wheels for testing.
> >>>>>>
> >>>>>> The build uses Carl's custom mingw-w64 build with static linking.
> >>>>>>
> >>>>>> There are two harmless test failures on scipy (being discussed on
> >>>>>> the list at the moment) - tests otherwise clean.
> >>>>>>
> >>>>>> Wheels are here:
> >>>>>>
> >>>>>> https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd64.whl
> >>>>>>
> >>>>>> https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_amd64.whl
> >>>>>>
> >>>>>> You can test with:
> >>>>>>
> >>>>>> pip install -U pip  # to upgrade pip to latest
> >>>>>> pip install -f https://nipy.bic.berkeley.edu/scipy_installers numpy scipy
> >>>>>>
> >>>>>> Please do send feedback.
> >>>>>>
> >>>>>> ATLAS binary here:
> >>>>>>
> >>>>>> https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/atlas-64-full-sse2.tar.bz2
> >>>>>>
> >>>>>> Many thanks to Carl in particular for doing all the hard work,
> >>>>>>
> >>>>>
> >>>>> Cool. After all these long years... Now all we need is a box running
> >>>>> tests for CI.
> >>>>>
> >>>>> Chuck
> >>>>>
> >>>>> _______________________________________________
> >>>>> NumPy-Discussion mailing list
> >>>>> NumPy-Discussion at scipy.org
> >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >>>>>
> >>>>
> >>>> I get two test failures with numpy
> >>>>
> >>>> Josef
> >>>>
> >>>> >>> np.test()
> >>>> Running unit tests for numpy
> >>>> NumPy version 1.8.1
> >>>> NumPy is installed in C:\Python27\lib\site-packages\numpy
> >>>> Python version 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)]
> >>>> nose version 1.1.2
> >>>>
> >>>> ======================================================================
> >>>> FAIL: test_iterator.test_iter_broadcasting_errors
> >>>> ----------------------------------------------------------------------
> >>>> Traceback (most recent call last):
> >>>>   File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
> >>>>     self.test(*self.arg)
> >>>>   File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 657, in test_iter_broadcasting_errors
> >>>>     '(2)->(2,newaxis)') % msg)
> >>>>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 44, in assert_
> >>>>     raise AssertionError(msg)
> >>>> AssertionError: Message "operands could not be broadcast together with
> >>>> remapped shapes [original->remapped]: (2,3)->(2,3) (2,)->(2,newaxis) and
> >>>> requested shape (4,3)" doesn't contain remapped operand shape(2)->(2,newaxis)
> >>>>
> >>>> ======================================================================
> >>>> FAIL: test_iterator.test_iter_array_cast
> >>>> ----------------------------------------------------------------------
> >>>> Traceback (most recent call last):
> >>>>   File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
> >>>>     self.test(*self.arg)
> >>>>   File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 836, in test_iter_array_cast
> >>>>     assert_equal(i.operands[0].strides, (-96,8,-32))
> >>>>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 255, in assert_equal
> >>>>     assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k, err_msg), verbose)
> >>>>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 317, in assert_equal
> >>>>     raise AssertionError(msg)
> >>>> AssertionError:
> >>>> Items are not equal:
> >>>> item=0
> >>>>
> >>>>  ACTUAL: 96L
> >>>>  DESIRED: -96
> >>>>
> >>>> ----------------------------------------------------------------------
> >>>> Ran 4828 tests in 46.306s
> >>>>
> >>>> FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
> >>>> <nose.result.TextTestResult run=4828 errors=0 failures=2>
> >>>>
> >>>
> >>> Strange. That second one looks familiar, at least the "-96" part.
> >>> Wonder why this doesn't show up with the MKL builds.
> >>
> >>
> >> ok, tried again, this time deleting the old numpy directories before installing
> >>
> >> Ran 4760 tests in 42.124s
> >>
> >> OK (KNOWNFAIL=10, SKIP=8)
> >> <nose.result.TextTestResult run=4760 errors=0 failures=0>
> >>
> >>
> >> so pip also seems to be reusing leftover files.
> >>
> >> all clear.
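[For anyone hitting the same stale-file problem, a clean reinstall looks something like the following; the paths are illustrative for a Python 2.7 layout and should be adjusted to your environment:]

```shell
# Uninstall, then delete any leftover package directories so pip
# cannot reuse files from the previous install.
pip uninstall -y numpy scipy
rm -rf "C:/Python27/Lib/site-packages/numpy" "C:/Python27/Lib/site-packages/scipy"
# Reinstall from the wheel index given above.
pip install -f https://nipy.bic.berkeley.edu/scipy_installers numpy scipy
```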
> >
> >
> > Running the statsmodels test suite, I get a failure in
> > test_discrete.TestProbitCG, where fmin_cg converges to something that
> > differs in the 3rd decimal.
> >
> > I usually only test the 32-bit version, so I don't know if this is
> > specific to this scipy version, but we haven't seen this in a long time.
> > I used our nightly binaries: http://statsmodels.sourceforge.net/binaries/
>
> That's interesting - you saw we're also getting failures in the tests
> for Powell optimization because of small unit-in-the-last-place (ULP)
> differences in the exp function in mingw-w64.  Is there any chance you
> can track down where the optimization path is diverging and why?
> If this is also the exp function, maybe we can check whether the error
> exceeds reasonable bounds, feed that back to mingw-w64, and fall back
> to the numpy default implementation in the meantime.
>
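[One way to put numbers on the "reasonable bounds" question above is to measure the discrepancy in units in the last place. A generic sketch - `ulp_diff` is an illustrative helper, not code from numpy or mingw-w64:]

```python
import numpy as np

def ulp_diff(a, b):
    # Discrepancy between two doubles measured in units in the last place,
    # using the float spacing at the larger magnitude.
    return abs(a - b) / np.spacing(max(abs(a), abs(b)))

# Compare double-precision exp against a higher-precision reference
# (on platforms where longdouble is wider than double).
x = 0.6931471805599453
ref = float(np.exp(np.longdouble(x)))
print(ulp_diff(float(np.exp(x)), ref))
```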

I'm a bit doubtful it's exp: the probit model is based on the normal
distribution and uses exp only in the gradient, via norm._pdf; the
objective function uses norm._cdf.
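[The structure described here - exp entering only through the normal PDF in the score - can be sketched as follows; this is illustrative, not statsmodels' actual implementation:]

```python
import numpy as np
from scipy import stats

def probit_loglike(beta, y, X):
    # Objective: only the normal CDF appears -- no direct call to exp.
    q = 2 * y - 1                     # map {0, 1} -> {-1, +1}
    return np.sum(np.log(stats.norm.cdf(q * X.dot(beta))))

def probit_score(beta, y, X):
    # Gradient: exp enters only through the normal PDF.
    q = 2 * y - 1
    z = q * X.dot(beta)
    return X.T.dot(q * stats.norm.pdf(z) / stats.norm.cdf(z))
```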

I can look into it.

However: we don't use fmin_cg for anything by default; it is only exercised
as part of "testing all supported scipy optimizers", and we have had
problems with it before on various machines:
https://github.com/statsmodels/statsmodels/issues/109
The test was completely disabled on Windows for a while, and I might have
to turn some screws again.

I'm fighting with more serious problems in fmin_slsqp and fmin_bfgs,
which we really do need to use.
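[The kind of cross-optimizer comparison behind the TestProbitCG failure can be reproduced on a generic smooth test function - the Rosenbrock function here stands in for the probit objective:]

```python
import numpy as np
from scipy import optimize

def rosen(x):
    # Classic Rosenbrock function with minimum at (1, 1).
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])
sol_cg = optimize.fmin_cg(rosen, x0, disp=False)      # conjugate gradient
sol_bfgs = optimize.fmin_bfgs(rosen, x0, disp=False)  # quasi-Newton
# Compare the two solutions decimal by decimal, as in the failing test.
print(np.abs(sol_cg - sol_bfgs).max())
```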

If minor precision issues matter, then the code is not "robust" and should
be fixed. Compared to those precision issues, though, I'm fighting more
with the large-scale properties of exp:
https://github.com/scipy/scipy/issues/3581
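[For the large-argument behaviour of exp, the usual workaround is to stay in the log domain, e.g. computing log(1 + exp(x)) through np.logaddexp instead of exponentiating directly. A generic sketch, not the fix discussed in the scipy issue:]

```python
import numpy as np

def log1p_exp(x):
    # log(1 + exp(x)) without overflow: logaddexp(0, x) = log(exp(0) + exp(x)).
    return np.logaddexp(0.0, x)

x = np.array([-1000.0, 0.0, 1000.0])
print(log1p_exp(x))   # the naive np.log(1 + np.exp(x)) overflows at x = 1000
```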


Nevertheless, I would really like to know why I'm running into so many
platform differences and problems with scipy.optimize.

Cheers,

Josef



>
> Cheers,
>
> Matthew
>