[Numpy-discussion] 64-bit windows numpy / scipy wheels for testing
josef.pktd at gmail.com
Sat Apr 26 12:01:45 EDT 2014
On Sat, Apr 26, 2014 at 10:20 AM, <josef.pktd at gmail.com> wrote:
>
> On Sat, Apr 26, 2014 at 10:10 AM, <josef.pktd at gmail.com> wrote:
>
>>
>> On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett <matthew.brett at gmail.com>wrote:
>>
>>> Hi,
>>>
>>> On Thu, Apr 24, 2014 at 5:26 PM, <josef.pktd at gmail.com> wrote:
>>> >
>>> > On Thu, Apr 24, 2014 at 7:29 PM, <josef.pktd at gmail.com> wrote:
>>> >>
>>> >> On Thu, Apr 24, 2014 at 7:20 PM, Charles R Harris
>>> >> <charlesr.harris at gmail.com> wrote:
>>> >>>
>>> >>> On Thu, Apr 24, 2014 at 5:08 PM, <josef.pktd at gmail.com> wrote:
>>> >>>>
>>> >>>> On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris
>>> >>>> <charlesr.harris at gmail.com> wrote:
>>> >>>>>
>>> >>>>>
>>> >>>>> Hi Matthew,
>>> >>>>>
>>> >>>>> On Thu, Apr 24, 2014 at 3:56 PM, Matthew Brett
>>> >>>>> <matthew.brett at gmail.com> wrote:
>>> >>>>>>
>>> >>>>>> Hi,
>>> >>>>>>
>>> >>>>>> Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
>>> >>>>>> (main author of ATLAS), I've built 64-bit Windows numpy and scipy
>>> >>>>>> wheels for testing.
>>> >>>>>>
>>> >>>>>> The build uses Carl's custom mingw-w64 build with static linking.
>>> >>>>>>
>>> >>>>>> There are two harmless test failures on scipy (being discussed on the
>>> >>>>>> list at the moment) - tests otherwise clean.
>>> >>>>>>
>>> >>>>>> Wheels are here:
>>> >>>>>>
>>> >>>>>>
>>> >>>>>>
>>> https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd64.whl
>>> >>>>>>
>>> >>>>>>
>>> https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_amd64.whl
>>> >>>>>>
>>> >>>>>> You can test with:
>>> >>>>>>
>>> >>>>>> pip install -U pip # to upgrade pip to latest
>>> >>>>>> pip install -f https://nipy.bic.berkeley.edu/scipy_installers numpy
>>> >>>>>> scipy
>>> >>>>>>
>>> >>>>>> Please do send feedback.
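[A quick way to sanity-check an install like this - a generic snippet, not
part of the original instructions; both packages shipped nose-based test
suites at the time:]

import numpy
import scipy
print numpy.__version__, scipy.__version__
numpy.test()   # runs the nose-based test suite
scipy.test()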
>>> >>>>>>
>>> >>>>>> ATLAS binary here:
>>> >>>>>>
>>> >>>>>>
>>> >>>>>>
>>> https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/atlas-64-full-sse2.tar.bz2
>>> >>>>>>
>>> >>>>>> Many thanks to Carl in particular for doing all the hard work,
>>> >>>>>>
>>> >>>>>
>>> >>>>> Cool. After all these long years... Now all we need is a box running
>>> >>>>> tests for CI.
>>> >>>>>
>>> >>>>> Chuck
>>> >>>>>
>>> >>>>>
>>> >>>>
>>> >>>> I get two test failures with numpy
>>> >>>>
>>> >>>> Josef
>>> >>>>
>>> >>>> >>> np.test()
>>> >>>> Running unit tests for numpy
>>> >>>> NumPy version 1.8.1
>>> >>>> NumPy is installed in C:\Python27\lib\site-packages\numpy
>>> >>>> Python version 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)]
>>> >>>> nose version 1.1.2
>>> >>>>
>>> >>>>
>>> >>>> ======================================================================
>>> >>>> FAIL: test_iterator.test_iter_broadcasting_errors
>>> >>>> ----------------------------------------------------------------------
>>> >>>> Traceback (most recent call last):
>>> >>>>   File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
>>> >>>>     self.test(*self.arg)
>>> >>>>   File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 657, in test_iter_broadcasting_errors
>>> >>>>     '(2)->(2,newaxis)') % msg)
>>> >>>>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 44, in assert_
>>> >>>>     raise AssertionError(msg)
>>> >>>> AssertionError: Message "operands could not be broadcast together with
>>> >>>> remapped shapes [original->remapped]: (2,3)->(2,3) (2,)->(2,newaxis) and
>>> >>>> requested shape (4,3)" doesn't contain remapped operand shape(2)->(2,newaxis)
>>> >>>>
>>> >>>>
>>> >>>> ======================================================================
>>> >>>> FAIL: test_iterator.test_iter_array_cast
>>> >>>> ----------------------------------------------------------------------
>>> >>>> Traceback (most recent call last):
>>> >>>>   File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
>>> >>>>     self.test(*self.arg)
>>> >>>>   File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 836, in test_iter_array_cast
>>> >>>>     assert_equal(i.operands[0].strides, (-96,8,-32))
>>> >>>>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 255, in assert_equal
>>> >>>>     assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k, err_msg), verbose)
>>> >>>>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 317, in assert_equal
>>> >>>>     raise AssertionError(msg)
>>> >>>> AssertionError:
>>> >>>> Items are not equal:
>>> >>>> item=0
>>> >>>>
>>> >>>> ACTUAL: 96L
>>> >>>> DESIRED: -96
>>> >>>>
>>> >>>>
>>> ----------------------------------------------------------------------
>>> >>>> Ran 4828 tests in 46.306s
>>> >>>>
>>> >>>> FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
>>> >>>> <nose.result.TextTestResult run=4828 errors=0 failures=2>
>>> >>>>
>>> >>>
>>> >>> Strange. That second one looks familiar, at least the "-96" part.
>>> >>> Wonder why this doesn't show up with the MKL builds.
>>> >>
>>> >>
>>> >> OK, tried again, this time deleting the old numpy directories before
>>> >> installing:
>>> >>
>>> >> Ran 4760 tests in 42.124s
>>> >>
>>> >> OK (KNOWNFAIL=10, SKIP=8)
>>> >> <nose.result.TextTestResult run=4760 errors=0 failures=0>
>>> >>
>>> >>
>>> >> So pip also seems to be reusing leftover files.
>>> >>
>>> >> All clear.
>>> >
>>> >
>>> > Running the statsmodels test suite, I get a failure in
>>> > test_discrete.TestProbitCG where fmin_cg converges to something that
>>> > differs in the 3rd decimal.
>>> >
>>> > I usually only test the 32-bit version, so I don't know if this is
>>> > specific to this scipy version, but we haven't seen this in a long time.
>>> > I used our nightly binaries:
>>> > http://statsmodels.sourceforge.net/binaries/
>>>
>>> That's interesting: you saw also that we're getting failures on the
>>> Powell optimization tests because of small unit-in-the-last-place (ULP)
>>> differences in the exp function in mingw-w64. Is there any chance you
>>> can track down where the optimization path is diverging, and why?
>>> If this is also the exp function, maybe we can check whether the error
>>> exceeds reasonable bounds, feed that back to mingw-w64, and fall back
>>> to the numpy default implementation in the meantime.
>>>
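[One rough way to check whether a platform's exp deviates by more than a
few ULPs is to compare float64 results against a higher-precision
reference. A minimal sketch, not from the original mail; note that
np.longdouble is only wider than double on some platforms, notably not on
MSVC builds:]

import numpy as np

# sample the range where float64 exp is finite
x = np.linspace(-700.0, 700.0, 100001)
ref = np.exp(x.astype(np.longdouble)).astype(np.float64)
got = np.exp(x)
# error expressed in units of the last place (ULP) of the reference value
ulp_err = np.abs(got - ref) / np.spacing(np.abs(ref))
print ulp_err.max()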
>>
>> I'm a bit doubtful it's exp: the probit model is based on the normal
>> distribution and has an exp only in the gradient, via norm._pdf; the
>> objective function uses norm._cdf.
>>
>> I can look into it.
>>
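[For reference, in the standard probit formulation the objective only
evaluates the normal CDF, while exp enters through the normal PDF in the
gradient. A sketch using scipy.stats.norm; statsmodels' internals use
norm._pdf/norm._cdf and may differ in detail:]

import numpy as np
from scipy import stats

def probit_loglike(params, y, X):
    q = 2 * y - 1                  # map {0, 1} to {-1, +1}
    return np.log(stats.norm.cdf(q * X.dot(params))).sum()

def probit_score(params, y, X):
    q = 2 * y - 1
    Xb = X.dot(params)
    # stats.norm.pdf is where exp(-t**2 / 2) enters
    lam = q * stats.norm.pdf(q * Xb) / stats.norm.cdf(q * Xb)
    return X.T.dot(lam)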
>
With the official 32-bit binaries (MinGW 32):
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.400588
Iterations: 75
Function evaluations: 213
Gradient evaluations: 201
Relative and absolute deviation from "desired":
[ -1.26257296e-05 -4.77535711e-05 -9.93794940e-06 -1.78815725e-05]
[ -2.05270407e-05 -2.47024202e-06 -1.41748189e-05 1.33259208e-04]
With your wheels, after increasing maxiter in the test case:
Optimization terminated successfully.
Current function value: 0.400588
Iterations: 766
Function evaluations: 1591
Gradient evaluations: 1591
Relative and absolute deviation from "desired":
[ -1.57311713e-07 -4.25324806e-08 -3.01557919e-08 -1.19794357e-07]
[ -2.55758996e-07 -2.20016050e-09 -4.30121820e-08 8.92745931e-07]
So the 64-bit wheel actually has the better final result; it just needs
more iterations to get close enough to what we had required in the unit
tests.
The trace of the 64-bit version seems to slow down along the way, but it
doesn't run into the "precision loss" warning.
From visual comparison, after the 20th iteration the parameters start to
slowly diverge in the 5th decimal.
Attached is a script that replicates the test case.
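[The script can dump the per-iteration parameter trace via the
commented-out to_csv calls; comparing two such dumps iteration by
iteration could then look like this sketch, which assumes both CSV files
have been written:]

import pandas

t32 = pandas.read_csv('trace_probitcg_32.csv', index_col=0)
t64 = pandas.read_csv('trace_probitcg_64wheel.csv', index_col=0)
n = min(len(t32), len(t64))
# largest absolute parameter difference at each common iteration
print abs(t32.values[:n] - t64.values[:n]).max(axis=1)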
Thanks
Josef
>
>> However:
>> We don't use fmin_cg for anything by default; it's part of "testing all
>> supported scipy optimizers", and we had problems with it before on various
>> machines: https://github.com/statsmodels/statsmodels/issues/109
>> The test was completely disabled on Windows for a while, and I might have
>> to turn some screws again.
>>
>> I'm fighting with more serious problems with fmin_slsqp and fmin_bfgs,
>> which we really need to use.
>>
>> If minor precision issues matter, then the code is not "robust" and
>> should be fixed.
>>
>> Compared to precision issues, I'm fighting more with the large-scale
>> properties of exp:
>> https://github.com/scipy/scipy/issues/3581
>>
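[The large-scale behavior is easy to illustrate: float64 exp overflows
just above x = 709.78. This is a generic property of double precision,
not the specific problem in the scipy issue above:]

import numpy as np
print np.exp(709.7)   # ~1.65e+308, still finite
print np.exp(710.0)   # inf, with a RuntimeWarning about overflow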
>>
>> Nevertheless,
>> I would really like to know why I'm running into many platform
>> differences and problems with scipy.optimize.
>>
>
> To avoid giving a wrong impression:
>
> scipy.optimize works very well for statsmodels in general; we use it
> heavily, and we have a large set of test cases for it.
> It's just the last 5% or so of cases where I spend a considerable amount
> of time figuring out how to get around convergence problems, which are
> sometimes platform specific and sometimes not.
>
> Josef
>
>>
>> Cheers,
>>
>> Josef
>>
>>>
>>> Cheers,
>>>
>>> Matthew
-------------- next part --------------
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 26 11:21:07 2014
Author: josef

Replicates test_discrete.TestProbitCG: fit the Spector probit model with
fmin_cg and compare the estimates against the stored reference results.
"""
import numpy as np
from numpy.testing import assert_almost_equal
import pandas
import statsmodels.api as sm
from statsmodels.discrete.tests.results.results_discrete import Spector

# record the parameter vector at every iteration of fmin_cg
hist = []
def callb(*args):
    hist.append(args[0])

data = sm.datasets.spector.load()
data.exog = sm.add_constant(data.exog, prepend=False)

# reference results used by the test suite
res2 = Spector()
res2.probit()
cls_res2 = res2

# fit with conjugate gradient; maxiter raised well above the test default
cls_res1 = sm.Probit(data.endog, data.exog).fit(method="cg", disp=1,
                                                maxiter=1000, gtol=1e-08,
                                                callback=callb)

print cls_res1.params
print cls_res2.params
print cls_res1.params / cls_res2.params - 1   # relative deviation
print cls_res1.params - cls_res2.params       # absolute deviation
print cls_res1.mle_retvals

# dump the iteration trace for comparison across platforms
res_ = pandas.DataFrame(np.asarray(hist))
print res_.to_string()
#res_.to_csv('trace_probitcg_32.csv')
#res_.to_csv('trace_probitcg_64wheel.csv')

DECIMAL_4 = 4
assert_almost_equal(cls_res1.params, cls_res2.params, DECIMAL_4)