64-bit windows numpy / scipy wheels for testing
Hi,

Thanks to Carl Kleffner's toolchain and some help from Clint Whaley (main author of ATLAS), I've built 64-bit Windows numpy and scipy wheels for testing. The build uses Carl's custom mingw-w64 build with static linking. There are two harmless test failures on scipy (being discussed on the list at the moment) - tests otherwise clean.

Wheels are here:

https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd...
https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_am...

You can test with:

pip install -U pip  # to upgrade pip to latest
pip install -f https://nipy.bic.berkeley.edu/scipy_installers numpy scipy

Please do send feedback.

ATLAS binary here:

https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/atlas-64-full-ss...

Many thanks to Carl in particular for doing all the hard work,

Cheers,

Matthew
On 24.04.2014 23:56, Matthew Brett wrote:
This is great news, thanks for working on this. Have you already documented the procedure used to create the wheels? I would like to be able to reproduce the builds.

Is it possible to add this toolchain and build procedure to the vagrant/fabric based numpy release virtual machine setup? The current version doing linux + win32 builds is available here: https://github.com/juliantaylor/numpy-vendor

The windows builds are done in a linux guest using wine. Wine also seems to support win64. The mac build procedure would also need updating.

Cheers,
Julian
Hi Matthew,

On Thu, Apr 24, 2014 at 3:56 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Cool. After all these long years... Now all we need is a box running tests for CI.

Chuck
On Fri, Apr 25, 2014 at 12:00 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
Cool. After all these long years... Now all we need is a box running tests for CI.
There is http://www.appveyor.com/ though I haven't tried doing anything with it yet... (yes it says ".NET" at the top, but then at the bottom it says that this is a lie and it doesn't care what kind of project you have)

-n

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
I get two test failures with numpy.

Josef
>>> np.test()
Running unit tests for numpy
NumPy version 1.8.1
NumPy is installed in C:\Python27\lib\site-packages\numpy
Python version 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)]
nose version 1.1.2
======================================================================
FAIL: test_iterator.test_iter_broadcasting_errors
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
    self.test(*self.arg)
  File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 657, in test_iter_broadcasting_errors
    '(2)->(2,newaxis)') % msg)
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 44, in assert_
    raise AssertionError(msg)
AssertionError: Message "operands could not be broadcast together with remapped shapes [original->remapped]: (2,3)->(2,3) (2,)->(2,newaxis) and requested shape (4,3)" doesn't contain remapped operand shape(2)->(2,newaxis)

======================================================================
FAIL: test_iterator.test_iter_array_cast
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
    self.test(*self.arg)
  File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 836, in test_iter_array_cast
    assert_equal(i.operands[0].strides, (-96,8,-32))
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 255, in assert_equal
    assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k, err_msg), verbose)
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 317, in assert_equal
    raise AssertionError(msg)
AssertionError: Items are not equal: item=0
 ACTUAL: 96L
 DESIRED: -96

----------------------------------------------------------------------
Ran 4828 tests in 46.306s

FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
<nose.result.TextTestResult run=4828 errors=0 failures=2>
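[For reference, the -96 in the second failure is a stride in bytes: NumPy represents an axis-reversed view with a negative stride, and the test expects that sign to survive the iterator cast. A minimal sketch of the behavior being checked - a hypothetical 1-D array, not the test's actual operands:]

```python
import numpy as np

# Reversing an axis produces a view that walks memory backwards, which
# NumPy records as a negative stride (8 bytes per float64 element here).
a = np.arange(6, dtype=np.float64)   # strides: (8,)
b = a[::-1]                          # reversed view of the same buffer
print(b.strides)                     # (-8,)
print(b[0])                          # 5.0 - last element of a
```

If a build mangled the sign (as the 96L vs -96 mismatch suggests), iteration over such views would read the wrong elements.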
On Thu, Apr 24, 2014 at 5:08 PM, <josef.pktd@gmail.com> wrote:
Strange. That second one looks familiar, at least the "-96" part. Wonder why this doesn't show up with the MKL builds.

Chuck
On Thu, Apr 24, 2014 at 7:20 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
ok tried again, this time deleting the old numpy directories before installing

Ran 4760 tests in 42.124s

OK (KNOWNFAIL=10, SKIP=8)
<nose.result.TextTestResult run=4760 errors=0 failures=0>

so pip also seems to be reusing leftover files.

all clear.

Josef
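[One quick sanity check for the leftover-files problem is to confirm exactly which installation the interpreter imports before running the suite - a generic check, not specific to these wheels:]

```python
import os
import numpy as np

# Report the version and on-disk location of the numpy that actually got
# imported; a stale site-packages directory shows up here immediately.
print(np.__version__)
print(os.path.dirname(np.__file__))
```

If the printed path is not the directory pip just installed into, the old tree should be deleted before re-testing.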
OT: Oh, I hate pip and packages that require numpy

E:\tmp>C:\Python27\python C:\Python27\Scripts\pip-script.py install -U patsy
Downloading/unpacking patsy
  Running setup.py (path:c:\users\josef\appdata\local\temp\pip_build_josef\patsy\setup.py) egg_info for package patsy
    no previously-included directories found matching 'doc\_build'
Downloading/unpacking numpy from https://pypi.python.org/packages/source/n/numpy/numpy-1.8.1.tar.gz#md5=be95b... (from patsy)
....
  Found existing installation: numpy 1.6.2
    Uninstalling numpy:
      Successfully uninstalled numpy
  Running setup.py install for numpy
...
    C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
      warnings.warn(msg)
    error: Unable to find vcvarsall.bat
    ----------------------------------------
Rolling back uninstall of numpy
Cleaning up...
-------------------------

user error? I have a stale numpy-1.6.2-py2.7.egg-info file in site-packages.

But that's a nice new feature of pip, "Rolling back uninstall of numpy" - numpy is still here.

Josef
On Thu, Apr 24, 2014 at 7:29 PM, <josef.pktd@gmail.com> wrote:
Running the statsmodels test suite, I get a failure in test_discrete.TestProbitCG where fmin_cg converges to something that differs in the 3rd decimal.

I usually only test the 32-bit version, so I don't know if this is specific to this scipy version, but we haven't seen this in a long time. I used our nightly binaries http://statsmodels.sourceforge.net/binaries/

Josef
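[For readers unfamiliar with the failure mode: the statsmodels test runs `scipy.optimize.fmin_cg` and compares the solution at decimal-level precision. A sketch of that kind of check, with a hypothetical toy objective - TestProbitCG itself fits a probit model, not this function:]

```python
import numpy as np
from scipy.optimize import fmin_cg

def f(x):
    # a mildly ill-conditioned banana-shaped objective, minimum at (1, 1)
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

# fmin_cg uses a numerical gradient here; tiny libm differences (e.g. in exp)
# can nudge the search path, so two platforms may disagree in late decimals.
xopt = fmin_cg(f, x0=[0.0, 0.0], disp=False)
print(np.round(xopt, 3))
```

Comparing `xopt` against a hard-coded reference to 3 decimals, as the test effectively does, is exactly what breaks when the path diverges.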
Hi,

On Thu, Apr 24, 2014 at 5:26 PM, <josef.pktd@gmail.com> wrote:
That's interesting - you saw also that we're getting failures on the tests for Powell optimization because of small unit-in-the-last-place differences in the exp function in mingw-w64.

Is there any chance you can track down where the optimization path is diverging and why? It's just that - if this is also the exp function - maybe we can see if the error is exceeding reasonable bounds, and then feed back to mingw-w64 and fall back to the numpy default implementation in the meantime.

Cheers,

Matthew
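[Such unit-in-the-last-place differences can be measured by comparing the IEEE-754 bit patterns of the two results; a sketch assuming float64 and same-sign finite values - the `ulp_diff` helper is illustrative, not a numpy API:]

```python
import numpy as np

def ulp_diff(a, b):
    # Distance in units in the last place between two float64 values:
    # for same-sign finite doubles, adjacent floats have adjacent
    # integer bit patterns, so the int64 difference counts ULPs.
    ia = np.array([a], dtype=np.float64).view(np.int64)[0]
    ib = np.array([b], dtype=np.float64).view(np.int64)[0]
    return abs(int(ia) - int(ib))

x = np.exp(np.float64(1.0))
y = np.nextafter(x, np.inf)   # the next representable double above x
print(ulp_diff(x, x))         # 0
print(ulp_diff(x, y))         # 1
```

Running this across a grid of inputs against a reference exp is one way to bound how far a libm implementation strays.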
On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
I'm a bit doubtful it's exp - the probit model is based on the normal distribution and has an exp only in the gradient via norm._pdf; the objective function uses norm._cdf. I can look into it.

However: we don't use fmin_cg for anything by default; it's part of "testing all supported scipy optimizers", and we had problems with it before on various machines https://github.com/statsmodels/statsmodels/issues/109 The test was completely disabled on Windows for a while, and I might have to turn some screws again.

I'm fighting with more serious problems with fmin_slsqp and fmin_bfgs, which we really need to use. If minor precision issues matter, then the code is not "robust" and should be fixed. Compared to precision issues, I'm fighting more with the large-scale properties of exp: https://github.com/scipy/scipy/issues/3581

Nevertheless, I would really like to know why I'm running into so many platform differences and problems with scipy.optimize.

Cheers,

Josef
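[The "large-scale properties of exp" problem can be sketched as follows (illustrative values, not the statsmodels code): exp overflows to inf for float64 arguments above roughly 709.78, so likelihood code has to work in the log domain, e.g. via `np.logaddexp`:]

```python
import numpy as np

with np.errstate(over='ignore'):
    big = np.exp(1000.0)             # overflows: inf

# log(exp(1000.0) + exp(1000.1)) computed without ever forming exp(1000.x):
safe = np.logaddexp(1000.0, 1000.1)  # ~ 1000.744, stays finite
print(big, safe)
```

Precision-in-the-last-place issues and overflow-at-large-arguments issues are independent failure modes, which is the distinction being drawn here.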
On Sat, Apr 26, 2014 at 10:10 AM, <josef.pktd@gmail.com> wrote:
On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett <matthew.brett@gmail.com>wrote:
Hi,
On Thu, Apr 24, 2014 at 5:26 PM, <josef.pktd@gmail.com> wrote:
On Thu, Apr 24, 2014 at 7:29 PM, <josef.pktd@gmail.com> wrote:
On Thu, Apr 24, 2014 at 7:20 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Thu, Apr 24, 2014 at 5:08 PM, <josef.pktd@gmail.com> wrote:
On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris <charlesr.harris@gmail.com> wrote: > > > Hi Matthew, > > On Thu, Apr 24, 2014 at 3:56 PM, Matthew Brett > <matthew.brett@gmail.com> wrote: >> >> Hi, >> >> Thanks to Cark Kleffner's toolchain and some help from Clint Whaley >> (main author of ATLAS), I've built 64-bit windows numpy and scipy >> wheels for testing. >> >> The build uses Carl's custom mingw-w64 build with static linking. >> >> There are two harmless test failures on scipy (being discussed on
the
>> list at the moment) - tests otherwise clean. >> >> Wheels are here: >> >> >> https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd... >> >> https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_am... >> >> You can test with: >> >> pip install -U pip # to upgrade pip to latest >> pip install -f https://nipy.bic.berkeley.edu/scipy_installersnumpy >> scipy >> >> Please do send feedback. >> >> ATLAS binary here: >> >> >> https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/atlas-64-full-ss... >> >> Many thanks for Carl in particular for doing all the hard work, >> > > Cool. After all these long years... Now all we need is a box running > tests for CI. > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion >
I get two test failures with numpy
Josef
>>> np.test()
Running unit tests for numpy
NumPy version 1.8.1
NumPy is installed in C:\Python27\lib\site-packages\numpy
Python version 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)]
nose version 1.1.2

======================================================================
FAIL: test_iterator.test_iter_broadcasting_errors
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
    self.test(*self.arg)
  File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 657, in test_iter_broadcasting_errors
    '(2)->(2,newaxis)') % msg)
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 44, in assert_
    raise AssertionError(msg)
AssertionError: Message "operands could not be broadcast together with remapped shapes [original->remapped]: (2,3)->(2,3) (2,)->(2,newaxis) and requested shape (4,3)" doesn't contain remapped operand shape(2)->(2,newaxis)

======================================================================
FAIL: test_iterator.test_iter_array_cast
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
    self.test(*self.arg)
  File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py", line 836, in test_iter_array_cast
    assert_equal(i.operands[0].strides, (-96,8,-32))
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 255, in assert_equal
    assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k, err_msg), verbose)
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 317, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
item=0

 ACTUAL: 96L
 DESIRED: -96

----------------------------------------------------------------------
Ran 4828 tests in 46.306s

FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
<nose.result.TextTestResult run=4828 errors=0 failures=2>
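Some context for the second failure: the assertion compares array strides, and a stride goes negative whenever a view reverses an axis. A minimal illustration of the mechanism (not the actual test, which exercises np.nditer):

```python
import numpy as np

# A C-contiguous 2x3x4 float64 array has byte strides (96, 32, 8).
a = np.arange(24, dtype=np.float64).reshape(2, 3, 4)
print(a.strides)

# Reversing an axis flips the sign of that axis's stride,
# which is what the failing assertion is sensitive to.
b = a[::-1, :, ::-1]
print(b.strides)
```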
Strange. That second one looks familiar, at least the "-96" part. Wonder why this doesn't show up with the MKL builds.
ok tried again, this time deleting the old numpy directories before installing
Ran 4760 tests in 42.124s
OK (KNOWNFAIL=10, SKIP=8) <nose.result.TextTestResult run=4760 errors=0 failures=0>
so pip also seems to be reusing leftover files.
all clear.
Running the statsmodels test suite, I get a failure in test_discrete.TestProbitCG where fmin_cg converges to something that differs in the 3rd decimal.
I usually only test the 32-bit version, so I don't know if this is specific to this scipy version, but we haven't seen this in a long time. I used our nightly binaries http://statsmodels.sourceforge.net/binaries/
That's interesting - as you saw, we're also getting failures on the tests for Powell optimization because of small unit-in-the-last-place differences in the exp function in mingw-w64. Is there any chance you can track down where the optimization path is diverging and why? If this is also the exp function, maybe we can check whether the error exceeds reasonable bounds, feed that back to mingw-w64, and fall back to the numpy default implementation in the meantime.
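One way to check whether a platform's exp deviates by more than a few units in the last place is to compare it against values computed in extended precision. A rough sketch (on platforms where long double is the same as double, the measured difference collapses to zero):

```python
import numpy as np

x = np.linspace(-700.0, 700.0, 10001)
y = np.exp(x)
# Reference: evaluate in extended precision where available, then round back.
ref = np.exp(x.astype(np.longdouble)).astype(np.float64)
# Deviation measured in units in the last place (ULP) of the reference value.
ulp_err = np.abs(y - ref) / np.spacing(np.abs(ref))
print(ulp_err.max())
```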
I'm a bit doubtful it's exp, the probit model is based on the normal distribution and has an exp only in the gradient via norm._pdf, the objective function uses norm._cdf.
I can look into it.
However: We don't use fmin_cg for anything by default, it's part of "testing all supported scipy optimizers" and we had problems with it before on various machines https://github.com/statsmodels/statsmodels/issues/109 The test was completely disabled on Windows for a while, and I might have to turn some screws again.
I'm fighting with more serious problems with fmin_slsqp and fmin_bfgs, which we really need to use.
If minor precision issues matter, then the code is not "robust" and should be fixed.
Compared to precision issues, I'm fighting more with the large-scale properties of exp: https://github.com/scipy/scipy/issues/3581
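For context on what "large-scale properties" can mean for exp: in double precision, exp overflows for arguments above about 709, so expressions like the logistic function need rearranging to stay finite. A generic sketch (a hypothetical helper, not code from statsmodels or the linked issue):

```python
import numpy as np

def stable_logistic(x):
    """Evaluate 1 / (1 + exp(-x)) without overflow for large |x|."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    # For x >= 0, exp(-x) <= 1, so this form never overflows.
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    # For x < 0, exp(x) <= 1; the equivalent form exp(x)/(1+exp(x)) is safe.
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out

print(stable_logistic(np.array([-1000.0, 0.0, 1000.0])))
```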
Nevertheless, I would really like to know why I'm running into so many platform differences and problems with scipy.optimize.
To avoid giving a wrong impression: Scipy.optimize works in general very well for statsmodels, we use it heavily and we have a large set of test cases for it. It's just the last 5% or so of cases where I spend a considerable amount of time figuring out how to get around convergence problems, which are sometimes platform specific and sometimes not. Josef
Cheers,
Josef
Cheers,
Matthew
On Sat, Apr 26, 2014 at 10:20 AM, <josef.pktd@gmail.com> wrote:
[clip]
with 32-bit official binaries, MinGW 32:

Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.400588
         Iterations: 75
         Function evaluations: 213
         Gradient evaluations: 201
relative and absolute deviation from "desired"
[ -1.26257296e-05  -4.77535711e-05  -9.93794940e-06  -1.78815725e-05]
[ -2.05270407e-05  -2.47024202e-06  -1.41748189e-05   1.33259208e-04]

with your wheels, after increasing maxiter in the test case:

Optimization terminated successfully.
         Current function value: 0.400588
         Iterations: 766
         Function evaluations: 1591
         Gradient evaluations: 1591
relative and absolute deviation from "desired"
[ -1.57311713e-07  -4.25324806e-08  -3.01557919e-08  -1.19794357e-07]
[ -2.55758996e-07  -2.20016050e-09  -4.30121820e-08   8.92745931e-07]

So the 64-bit wheel actually has the better final result; it just needs more iterations to get close enough to what we had required in the unit tests. The trace of the 64-bit version seems to slow down in its movement, but then doesn't run into the "precision loss" warning.
From visual comparison, after the 20th iteration the parameters start to slowly diverge in the 5th decimal.
Attached is a script that replicates the test case.

Thanks,

Josef
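Per-iteration traces like the one described above can be recorded with fmin_cg's callback argument, and the saved iterates from two platforms compared element-wise to find where they first diverge. A generic sketch on the Rosenbrock test function (not the statsmodels test case):

```python
import numpy as np
from scipy.optimize import fmin_cg, rosen, rosen_der

# Record every iterate; running the same script on two platforms and
# diffing the traces shows the iteration where the paths split.
trace = []
x0 = np.array([-1.2, 1.0])
xopt = fmin_cg(rosen, x0, fprime=rosen_der, callback=trace.append, disp=False)
print(len(trace), xopt)
```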
On Thu, Apr 24, 2014 at 7:08 PM, <josef.pktd@gmail.com> wrote:
On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
Hi Matthew,
[clip]
Cool. After all these long years... Now all we need is a box running tests for CI.
Chuck
I get two test failures with numpy
scipy looks good, just two powell trace failures Josef
Josef
[clip]
On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:

[clip]
Cool. After all these long years... Now all we need is a box running tests for CI.
Very good news, after 3 years interruption I might be able to build scipy again, and switch now to a 64bit development version. Thanks for pushing for this, and doing all the hard work. Josef
Matthew Brett <matthew.brett@gmail.com> wrote:
Thanks to Cark Kleffner's toolchain and some help from Clint Whaley (main author of ATLAS), I've built 64-bit windows numpy and scipy wheels for testing.
Thanks for your great effort to solve this mess.

By Murphy's law, I do not have access to a Windows computer on which to test now. :-(

This approach worries me a bit though: Will we have to maintain a fork of MinGW-w64 for building NumPy and SciPy? Should this toolset be distributed along with NumPy and SciPy on Windows? I presume it is needed to build C and Cython extensions?

On the positive side: Does this mean we finally can use gfortran on Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy now? Or is Mac OS X a blocker?

Sturla
On 25.04.2014 08:57, Sturla Molden wrote: [clip]
On the positive side: Does this mean we finally can use gfortran on Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy now? Or is Mac OS X a blocker?
Yes, Windows is the only platform on which Fortran was problematic. OSX is somewhat saner in this respect. -- Pauli Virtanen
Pauli Virtanen <pav@iki.fi> wrote:
Yes, Windows is the only platform on which Fortran was problematic. OSX is somewhat saner in this respect.
Oh yes, it seems there are official "unofficial gfortran binaries" available for OSX: http://gcc.gnu.org/wiki/GFortranBinaries#MacOS Cool :) Sturla
On Mon, Apr 28, 2014 at 12:39 AM, Sturla Molden <sturla.molden@gmail.com>wrote:
Pauli Virtanen <pav@iki.fi> wrote:
Yes, Windows is the only platform on which Fortran was problematic. OSX is somewhat saner in this respect.
Oh yes, it seems there are official "unofficial gfortran binaries" available for OSX:
I'd be interested to hear if those work well for you. For people that just want to get things working, I would recommend to use the gfortran installers recommended at http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython. Those work for sure, and alternatives have usually proven to be problematic in the past. Ralf
Cool :)
Sturla
Ralf Gommers <ralf.gommers@gmail.com> wrote:
I'd be interested to hear if those work well for you. For people that just want to get things working, I would recommend to use the gfortran installers recommended at http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython. Those work for sure, and alternatives have usually proven to be problematic in the past.
No problems thus far, but I only installed it yesterday. :-) I am not sure gcc-4.2 is needed anymore. Apple has retired it as platform C compiler on OS X. We need a Fortran compiler that can be used together with clang as C compiler. Sturla
On Mon, Apr 28, 2014 at 6:06 PM, Sturla Molden <sturla.molden@gmail.com>wrote:
Ralf Gommers <ralf.gommers@gmail.com> wrote:
I'd be interested to hear if those work well for you. For people that just want to get things working, I would recommend to use the gfortran installers recommended at http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython. Those work for sure, and alternatives have usually proven to be problematic in the past.
No problems thus far, but I only installed it yesterday. :-)
Sounds good. Let's give it a bit more time; once you've given it a good workout, we can add to the scipy build instructions that those gfortran 4.8.x compilers seem to work fine.

I am not sure gcc-4.2 is needed anymore. Apple has retired it as platform C compiler on OS X. We need a Fortran compiler that can be used together with clang as C compiler.
Clang together with gfortran 4.2 works fine on OS X 10.9. Ralf
Ralf Gommers <ralf.gommers@gmail.com> wrote:
Sounds good. Let's give it a bit more time, once you've given it a good workout we can add that those gfortran 4.8.x compilers seem to work fine to the scipy build instructions.
Yes, it needs to be tested properly.

The build instructions for OS X Mavericks should also mention where to obtain Xcode (App Store) and the secret command to retrieve the command-line utils after Xcode is installed:

$ /usr/bin/xcode-select --install

Probably it should also mention how to use alternative BLAS and LAPACK versions (MKL and OpenBLAS), although all three are equally performant on Mavericks (except Accelerate is not fork safe): https://twitter.com/nedlom/status/437427557919891457

Sturla
On 28/04/14 18:21, Ralf Gommers wrote:
No problems thus far, but I only installed it yesterday. :-)
Sounds good. Let's give it a bit more time, once you've given it a good workout we can add that those gfortran 4.8.x compilers seem to work fine to the scipy build instructions.
I have not looked at building SciPy yet, but I was able to build MPICH 3.0.4 from source without a problem. It worked on the first attempt, without any error or warnings. That is more than I hoped for...

Using BLAS and LAPACK from Accelerate also worked correctly, with flags -ff2c and -framework Accelerate. I can use it from Python (NumPy) with ctypes and Cython. I get correct results and it does not segfault. (It does segfault without -ff2c, but that is as expected, given that Accelerate has the f2c/g77 ABI.)

I was also able to build OpenBLAS with Clang as C compiler and gfortran as Fortran compiler. It works correctly as well (both the build process and the binaries I get).

So far it looks damn good :-) The next step is to build NumPy and SciPy and run some tests :-)

Sturla

P.S. Here is what I did to build MPICH from source, for those interested:

$ ./configure CC=clang CXX=clang++ F77=gfortran FC=gfortran --enable-fast=all,O3 --with-pm=gforker --prefix=/opt/mpich
$ make
$ sudo make install
$ export PATH="/opt/mpich/bin:$PATH"   # actually in ~/.bash_profile

Now testing with some hello worlds:

$ mpif77 -o hello hello.f
$ mpiexec -np 4 ./hello
 Hello world
 Hello world
 Hello world
 Hello world
$ rm hello
$ mpicc -o hello hello.c
$ mpiexec -np 4 ./hello
Hello world from process 0 of 4
Hello world from process 1 of 4
Hello world from process 2 of 4
Hello world from process 3 of 4

The hello world programs looked like this:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

      program hello_world
      include 'mpif.h'
      integer ierr
      call MPI_INIT(ierr)
      print *, "Hello world"
      call MPI_FINALIZE(ierr)
      stop
      end
Hi,

basically the toolchain was created with a local fork of the "mingw-builds" build process, along with some addons and patches. It is NOT a mingw-w64 fork. BTW: there are numerous mingw-w64 based toolchains out there, most of them built without any information about the build process and patches they used.

As long as the "mingw-builds" maintainers continue working on their project, maintaining a usable toolchain for Python development on Windows should be feasible.

More details are given here: http://article.gmane.org/gmane.comp.python.numeric.general/57446

Regards

Carl

2014-04-25 7:57 GMT+02:00 Sturla Molden <sturla.molden@gmail.com>:
[clip]
Hi Carl, On Sat, Apr 26, 2014 at 11:10 AM, Carl Kleffner <cmkleffner@gmail.com> wrote:
[clip]
I hope you don't mind, but I took the liberty of putting some of your email explanations and notes into the numpy wiki:

https://github.com/numpy/numpy/wiki/Mingw-w64-faq
https://github.com/numpy/numpy/wiki/Mingw-static-toolchain

Do you have anywhere a description of what you did to create your fork of the build process? Maybe we can automate this using a Fedora or other cross-compiler? I think we need to make sure that the build system doesn't die if you get hired by some great company and can't work on this anymore. Do you think that is possible?

Thanks again for all the hard work you've done here. I think we are getting very close to a good solution, and that has seemed a long way off until now...

Cheers,

Matthew
Hi,

On 25.04.2014 00:56, Matthew Brett wrote:

Thanks to Cark Kleffner's toolchain and some help from Clint Whaley (main author of ATLAS), I've built 64-bit windows numpy and scipy wheels for testing.
Where can I get your numpy.patch scipy.patch and what's in them? Cheers, Pauli
Hi, On Sun, Apr 27, 2014 at 6:09 AM, Pauli Virtanen <pav@iki.fi> wrote:
[clip]
They are Carl's patches - here:

https://bitbucket.org/carlkl/mingw-w64-for-python/downloads

The scipy patch is tiny, the numpy patch more substantial.

Carl - any interest in working up a pull request for these? I'm happy to do it if you don't have time.

Cheers,

Matthew
Hi,

I definitely won't have time until Thursday this week to work out the GitHub workflow for a numpy pull request, so feel free to do it for me.

BTW: There is a missing feature in the mingw-w64 toolchain. By now it features linking to the msvcr90 runtime only. I would have to extend the specs file to allow linking to msvcr100 with an additional flag, or create a dedicated toolchain - what do you think?

Cheers,

Carl

2014-04-27 20:29 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
[clip]
Hi, On Sun, Apr 27, 2014 at 2:34 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi,
I definitely won't have time until Thursday this week to work out the GitHub workflow for a numpy pull request, so feel free to do it for me.
OK - I will have a go at this tomorrow.
BTW: There is a missing feature in the mingw-w64 toolchain. By now it features linking to the msvcr90 runtime only. I would have to extend the specs file to allow linking to msvcr100 with an additional flag, or create a dedicated toolchain - what do you think?
I don't know. Is this a discussion that should go to the mingw-w64 list, do you think? It must be a very commonly needed feature.

As you know, I'm really hoping it will be possible to make a devkit for Python similar to the Ruby devkits [1].

The ideal would be a devkit that transparently picked up 32- vs 64-bit, and the MSVC runtime according to the Python version. For example, OSX compilation automatically picks up the OSX SDK with which the relevant Python was built. Do you think something like this is possible? That would be a great improvement for people building extensions and wheels on Windows.

Cheers,

Matthew

[1] http://rubyinstaller.org/add-ons/devkit/
A possible option is to install the toolchain inside site-packages and to deploy it as a PyPI wheel or wininst package. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third-party libraries.

Cheers,

Carl

2014-04-27 23:46 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
[clip]
Hi, On Sun, Apr 27, 2014 at 3:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as PYPI wheel or wininst packages. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the devkit package. It sounds like OpenBLAS will be much easier to build, so we could start with ATLAS binaries as a default, expecting OpenBLAS to be built more often with the toolchain. I think that's how numpy binary installers are built at the moment - using old binary builds of ATLAS. I'm happy to provide the builds of ATLAS - e.g. here: https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds I can also give access to the dedicated machine doing the builds. Cheers, Matthew
Aha, On Sun, Apr 27, 2014 at 3:19 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Sun, Apr 27, 2014 at 3:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as PYPI wheel or wininst packages. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the devkit package. It sounds like OpenBLAS will be much easier to build, so we could start with ATLAS binaries as a default, expecting OpenBLAS to be built more often with the toolchain. I think that's how numpy binary installers are built at the moment - using old binary builds of ATLAS.
I'm happy to provide the builds of ATLAS - e.g. here:
I just found the official numpy binary builds of ATLAS: https://github.com/numpy/vendor/tree/master/binaries But - they are from an old version of ATLAS / Lapack, and only for 32-bit. David - what say we update these to latest ATLAS stable? Cheers, Matthew
On Sun, Apr 27, 2014 at 11:50 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Aha,
On Sun, Apr 27, 2014 at 3:19 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Sun, Apr 27, 2014 at 3:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as PYPI wheel or wininst packages. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the devkit package. It sounds like OpenBLAS will be much easier to build, so we could start with ATLAS binaries as a default, expecting OpenBLAS to be built more often with the toolchain. I think that's how numpy binary installers are built at the moment - using old binary builds of ATLAS.
I'm happy to provide the builds of ATLAS - e.g. here:
I just found the official numpy binary builds of ATLAS:
https://github.com/numpy/vendor/tree/master/binaries
But - they are from an old version of ATLAS / Lapack, and only for 32-bit.
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval!). How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above. David
Cheers,
Matthew
Hi, On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
On Sun, Apr 27, 2014 at 11:50 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Aha,
On Sun, Apr 27, 2014 at 3:19 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Sun, Apr 27, 2014 at 3:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as PYPI wheel or wininst packages. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the devkit package. It sounds like OpenBLAS will be much easier to build, so we could start with ATLAS binaries as a default, expecting OpenBLAS to be built more often with the toolchain. I think that's how numpy binary installers are built at the moment - using old binary builds of ATLAS.
I'm happy to provide the builds of ATLAS - e.g. here:
I just found the official numpy binary builds of ATLAS:
https://github.com/numpy/vendor/tree/master/binaries
But - they are from an old version of ATLAS / Lapack, and only for 32-bit.
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval!).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week. I did some analysis of SSE2 prevalence here: https://github.com/numpy/numpy/wiki/Window-versions Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess. I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2. Cheers, Matthew
On 09/05/14 02:51, Matthew Brett wrote:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
Ok, so that is 1 % of Windows users. https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
Supporting Pentium II and Pentium III might not be the highest priority today. I would say just let the install fail and tell them to compile from source. Sturla
On Fri, May 9, 2014 at 1:51 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
On Sun, Apr 27, 2014 at 11:50 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Aha,
On Sun, Apr 27, 2014 at 3:19 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Sun, Apr 27, 2014 at 3:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as PYPI wheel or wininst packages. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the devkit package. It sounds like OpenBLAS will be much easier to build, so we could start with ATLAS binaries as a default, expecting OpenBLAS to be built more often with the toolchain. I think that's how numpy binary installers are built at the moment - using old binary builds of ATLAS.
I'm happy to provide the builds of ATLAS - e.g. here:
I just found the official numpy binary builds of ATLAS:
https://github.com/numpy/vendor/tree/master/binaries
But - they are from an old version of ATLAS / Lapack, and only for 32-bit.
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval !).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
I did some analysis of SSE2 prevalence here:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only (as you need some ASM). I can take a quick look at a simple cython extension that could be imported before anything else, and would raise an ImportError if the wrong arch is detected. David
Cheers,
Matthew
On 09.05.2014 12:42, David Cournapeau wrote:
On Fri, May 9, 2014 at 1:51 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
Fine by me (not that you need my approval!).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
I did some analysis of SSE2 prevalence here:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only (as you need some ASM).
I can take a quick look at a simple cython extension that could be imported before anything else, and would raise an ImportError if the wrong arch is detected.
assuming mingw is new enough

#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif

in import_array() should do it
On Fri, May 9, 2014 at 11:49 AM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
On 09.05.2014 12:42, David Cournapeau wrote:
On Fri, May 9, 2014 at 1:51 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
Fine by me (not that you need my approval!).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
I did some analysis of SSE2 prevalence here:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only (as you need some ASM).
I can take a quick look at a simple cython extension that could be imported before anything else, and would raise an ImportError if the wrong arch is detected.
assuming mingw is new enough
#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif
We need to support it for VS as well, but it looks like win32 API has a function to do it: http://msdn.microsoft.com/en-us/library/ms724482%28VS.85%29.aspx Makes it even easier. David
in import_array() should do it
this is from: http://gcc.gnu.org/onlinedocs/gcc/X86-Built-in-Functions.html

// ifunc resolvers fire before constructors, explicitly call the init function.
__builtin_cpu_init ();
if (__builtin_cpu_supports ("sse2"))
  <code>
else
  <code>

Cheers, Carl 2014-05-09 13:06 GMT+02:00 David Cournapeau <cournape@gmail.com>:
On Fri, May 9, 2014 at 11:49 AM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
On 09.05.2014 12:42, David Cournapeau wrote:
On Fri, May 9, 2014 at 1:51 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
Fine by me (not that you need my approval!).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
I did some analysis of SSE2 prevalence here:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only (as you need some ASM).
I can take a quick look at a simple cython extension that could be imported before anything else, and would raise an ImportError if the wrong arch is detected.
assuming mingw is new enough
#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif
We need to support it for VS as well, but it looks like win32 API has a function to do it: http://msdn.microsoft.com/en-us/library/ms724482%28VS.85%29.aspx
Makes it even easier.
David
in import_array() should do it
Hi, On Fri, May 9, 2014 at 4:06 AM, David Cournapeau <cournape@gmail.com> wrote:
On Fri, May 9, 2014 at 11:49 AM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
On 09.05.2014 12:42, David Cournapeau wrote:
On Fri, May 9, 2014 at 1:51 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
Fine by me (not that you need my approval!).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
I did some analysis of SSE2 prevalence here:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only (as you need some ASM).
I can take a quick look at a simple cython extension that could be imported before anything else, and would raise an ImportError if the wrong arch is detected.
assuming mingw is new enough
#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif
We need to support it for VS as well, but it looks like win32 API has a function to do it: http://msdn.microsoft.com/en-us/library/ms724482%28VS.85%29.aspx
Makes it even easier.
Nice. So all we would need is something like:

try:
    from ctypes import windll, wintypes
except (ImportError, ValueError):
    pass
else:
    has_feature = windll.kernel32.IsProcessorFeaturePresent
    has_feature.argtypes = [wintypes.DWORD]
    if not has_feature(10):
        msg = ("This version of numpy needs a CPU capable of SSE2, "
               "but Windows says - not so.\n"
               "Please reinstall numpy using a superpack installer")
        raise RuntimeError(msg)

At the top of numpy/__init__.py. What would be the best way of including that code in the 32-bit wheel? (The 64-bit wheel can depend on SSE2.) Cheers, Matthew
On Fri, May 23, 2014 at 2:41 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Fri, May 9, 2014 at 4:06 AM, David Cournapeau <cournape@gmail.com> wrote:
On Fri, May 9, 2014 at 11:49 AM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
On 09.05.2014 12:42, David Cournapeau wrote:
On Fri, May 9, 2014 at 1:51 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
Fine by me (not that you need my approval!).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
I did some analysis of SSE2 prevalence here:
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without SSE2. I suspect that people running new installs of numpy will have slightly better machines on average than Firefox users, but it's only a guess.
I wonder if we could add a CPU check on numpy import to give a polite 'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only (as you need some ASM).
I can take a quick look at a simple cython extension that could be imported before anything else, and would raise an ImportError if the wrong arch is detected.
assuming mingw is new enough
#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif
We need to support it for VS as well, but it looks like win32 API has a function to do it: http://msdn.microsoft.com/en-us/library/ms724482%28VS.85%29.aspx
Makes it even easier.
Nice. So all we would need is something like:
try:
    from ctypes import windll, wintypes
except (ImportError, ValueError):
    pass
else:
    has_feature = windll.kernel32.IsProcessorFeaturePresent
    has_feature.argtypes = [wintypes.DWORD]
    if not has_feature(10):
        msg = ("This version of numpy needs a CPU capable of SSE2, "
               "but Windows says - not so.\n"
               "Please reinstall numpy using a superpack installer")
        raise RuntimeError(msg)
At the top of numpy/__init__.py
What would be the best way of including that code in the 32-bit wheel? (The 64-bit wheel can depend on SSE2).
Maybe write a separate file `_check_win32_sse2.py.in`, and ensure that when you generate `_check_win32_sse2.py` from setup.py you only end up with the above code when you go through the if len(sys.argv) >= 2 and sys.argv[1] == 'bdist_wheel': branch. Ralf
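Ralf's suggestion could be sketched roughly like this. The `@CHECK@` placeholder, the function name, and the template mechanism are all illustrative assumptions, not numpy's actual build code - the point is only that setup.py embeds the real runtime check when the command is `bdist_wheel` and a no-op otherwise:

```python
# Hypothetical sketch of generating _check_win32_sse2.py from a template.
# The @CHECK@ placeholder and render_check_module() are illustrative only.
import sys

TEMPLATE = """\
# Auto-generated; do not edit.
@CHECK@
"""

def render_check_module(check_code, argv=None):
    """Return the contents of _check_win32_sse2.py: the real SSE2 check
    for wheel builds, a harmless no-op for every other build command."""
    argv = sys.argv if argv is None else argv
    building_wheel = len(argv) >= 2 and argv[1] == 'bdist_wheel'
    return TEMPLATE.replace('@CHECK@', check_code if building_wheel else 'pass')
```

setup.py would then write the rendered text into the numpy package before building, and numpy/__init__.py would import the generated module unconditionally.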
Cheers,
Matthew
Hi Matthew and Ralf, Has anyone managed to build working whl packages for numpy and scipy on win32 using the static mingw-w64 toolchain? -- Olivier
Hi all, I do regular builds for python-2.7. Due to my limited resources I haven't built for 3.3 or 3.4 right now. I haven't updated my toolchain since February, but I do regular builds of OpenBLAS. OpenBLAS is under heavy development right now, thanks to Werner Saar, see: https://github.com/wernsaar/OpenBLAS . A lot of bugs have been fixed, at the cost of performance; see the kernel TODO list: https://github.com/xianyi/OpenBLAS/wiki/Fixed-optimized-kernels-To-do-List . Many bugs related to Windows have been corrected, including a very weird one: https://github.com/xianyi/OpenBLAS/issues/394 and https://github.com/JuliaLang/julia/issues/5574 . I got the impression that the Julia community (and maybe the R and octave communities) is very interested in getting to a stable Windows OpenBLAS. OpenBLAS is the only free OSS optimized BLAS/Lapack solution maintained for Windows today. Atlas seems not to be maintained for Windows anymore (is this true Matthew?). Somewhat older test wheels for python-2.7 can be downloaded here: http://figshare.com/articles/search?q=numpy&quick=1&x=0&y=0 (2014-06-10, numpy and scipy wheels for py-2.7). The scipy test suite (amd64) emits segfaults with multithreaded OpenBLAS, but is stable with a single thread (see the log files). I didn't dig into this further. Win32 works with MT OpenBLAS, but has some test failures with atan2 and hypot. That is more or less the status today. I can upload new wheels linked against a recent OpenBLAS, maybe tomorrow on Binstar. Regards, Carl 2014-07-02 9:24 GMT+02:00 Olivier Grisel <olivier.grisel@ensta.org>:
Hi Matthew and Ralf,
Has anyone managed to build working whl packages for numpy and scipy on win32 using the static mingw-w64 toolchain?
-- Olivier
Hi, On Wed, Jul 2, 2014 at 10:36 AM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi all,
I do regular builds for python-2.7. Due to my limited resources I haven't built for 3.3 or 3.4 right now. I haven't updated my toolchain since February, but I do regular builds of OpenBLAS. OpenBLAS is under heavy development right now, thanks to Werner Saar, see: https://github.com/wernsaar/OpenBLAS . A lot of bugs have been fixed, at the cost of performance; see the kernel TODO list: https://github.com/xianyi/OpenBLAS/wiki/Fixed-optimized-kernels-To-do-List . Many bugs related to Windows have been corrected, including a very weird one: https://github.com/xianyi/OpenBLAS/issues/394 and https://github.com/JuliaLang/julia/issues/5574 . I got the impression that the Julia community (and maybe the R and octave communities) is very interested in getting to a stable Windows OpenBLAS. OpenBLAS is the only free OSS optimized BLAS/Lapack solution maintained for Windows today. Atlas seems not to be maintained for Windows anymore (is this true Matthew?)
No, it's not true, but it's not really false either. Clint Whaley is the ATLAS maintainer and his interests are firmly in high-performance-computing so he is much more interested in exotic new chips than in Windows. But, he does aim to make the latest stable release buildable on Windows, and he's helped me do that for the latest stable, with some hope he'll continue to work on the 64-bit Windows kernels which are hobbled at the moment because of differences in the Windows / other OS 64-bit ABI. Builds here: https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/
Somewhat older test wheels for python-2.7 can be downloaded here: http://figshare.com/articles/search?q=numpy&quick=1&x=0&y=0 (2014-06-10, numpy and scipy wheels for py-2.7). The scipy test suite (amd64) emits segfaults with multithreaded OpenBLAS, but is stable with a single thread (see the log files). I didn't dig into this further. Win32 works with MT OpenBLAS, but has some test failures with atan2 and hypot. That is more or less the status today. I can upload new wheels linked against a recent OpenBLAS, maybe tomorrow on Binstar.
I built some 64-bit wheels against Carl's toolchain and the ATLAS above, I think they don't have any threading issues, but the scipy wheel fails one scipy test due to some very small precision differences in the mingw runtime. I think we agreed this failure wasn't important. https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd... https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_am... Cheers, Matthew
Hi, On Wed, Jul 2, 2014 at 11:29 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Wed, Jul 2, 2014 at 10:36 AM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi all,
I do regular builds for python-2.7. Due to my limited resources I haven't built for 3.3 or 3.4 right now. I haven't updated my toolchain since February, but I do regular builds of OpenBLAS. OpenBLAS is under heavy development right now, thanks to Werner Saar, see: https://github.com/wernsaar/OpenBLAS . A lot of bugs have been fixed, at the cost of performance; see the kernel TODO list: https://github.com/xianyi/OpenBLAS/wiki/Fixed-optimized-kernels-To-do-List . Many bugs related to Windows have been corrected, including a very weird one: https://github.com/xianyi/OpenBLAS/issues/394 and https://github.com/JuliaLang/julia/issues/5574 . I got the impression that the Julia community (and maybe the R and octave communities) is very interested in getting to a stable Windows OpenBLAS. OpenBLAS is the only free OSS optimized BLAS/Lapack solution maintained for Windows today. Atlas seems not to be maintained for Windows anymore (is this true Matthew?)
No, it's not true, but it's not really false either. Clint Whaley is the ATLAS maintainer and his interests are firmly in high-performance-computing so he is much more interested in exotic new chips than in Windows. But, he does aim to make the latest stable release buildable on Windows, and he's helped me do that for the latest stable, with some hope he'll continue to work on the 64-bit Windows kernels which are hobbled at the moment because of differences in the Windows / other OS 64-bit ABI. Builds here:
https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/
somewhat older test wheels for python-2.7 can be downloaded here: see: http://figshare.com/articles/search?q=numpy&quick=1&x=0&y=0 (2014-06-10) numpy and scipy wheels for py-2.7. The scipy test suite (amd64) emits segfaults with multithreaded OpenBLAS, but is stable with a single thread (see the log files). I didn't dig into this further. Win32 works with MT OpenBLAS, but has some test failures with atan2 and hypot. This is more or less the status today. I can upload new wheels linked against a recent OpenBLAS, maybe tomorrow on Binstar.
I built some 64-bit wheels against Carl's toolchain and the ATLAS above, I think they don't have any threading issues, but the scipy wheel fails one scipy test due to some very small precision differences in the mingw runtime. I think we agreed this failure wasn't important.
https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd... https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_am...
Sorry - I wasn't paying attention - you asked about 32-bit wheels. Honestly, using the same toolchain, they wouldn't be at all hard to build. One issue is that the ATLAS builds depend on SSE2. That isn't an issue for 64-bit builds because the 64-bit ABI requires SSE2, but it is an issue for 32-bit, where we have no such guarantee. It looks like 99% of Windows users do have SSE2 though [1]. So I think what is required is:

* Build the wheels for 32-bit (easy)
* Patch the wheels to check and give a helpful error in the absence of SSE2 (fairly easy)
* Get agreement these should go up on pypi and be maintained (feedback anyone?)

Cheers, Matthew

[1] https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2
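The "check and give a helpful error" step could look something like the sketch below. This is a hypothetical illustration, not the patch that actually went into any wheel: the Windows branch uses the real Win32 `IsProcessorFeaturePresent` call, but the fallback logic and the error message are assumptions.

```python
# Hypothetical sketch of the proposed "check for SSE2 and give a helpful
# error" patch -- not the code actually shipped in any wheel.
import ctypes
import sys

PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10  # Win32 constant for SSE2


def has_sse2():
    """Best-effort check for SSE2 support on the running CPU."""
    if sys.platform == "win32":
        kernel32 = ctypes.windll.kernel32
        return bool(kernel32.IsProcessorFeaturePresent(
            PF_XMMI64_INSTRUCTIONS_AVAILABLE))
    try:
        # On Linux the feature flags are whitespace-separated in cpuinfo.
        with open("/proc/cpuinfo") as f:
            return "sse2" in f.read().split()
    except OSError:
        return True  # cannot tell; assume a modern CPU


def check_sse2():
    # Would run at package import time in the patched wheel.
    if not has_sse2():
        raise ImportError(
            "This numpy build requires a CPU with SSE2 support. "
            "Please use the 'superpack' .exe installer instead, "
            "which includes a no-SSE build.")
```

Calling `check_sse2()` from the package's `__init__` would turn the hard crash into a readable install hint.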
Hi, The mingw-w64 based wheels (Atlas and OpenBLAS) are based on a patched numpy version that hasn't yet been submitted as a numpy pull request for review (my fault). I could try to do this tomorrow in the evening. Another important point is that the toolchain capable of compiling numpy/scipy was adapted to allow for MSVC / mingw runtime compatibility and no longer creates any gcc/mingw runtime dependency. OpenBLAS has one advantage over Atlas: numpy/scipy are linked dynamically against OpenBLAS. Statically linked BLAS libraries like MKL or ATLAS create huge Python extensions and have considerably higher memory consumption compared to dynamic linkage. On the other hand correctness is more important, so ATLAS has to be preferred for now. Users with non-SSE processors could be provided with wheels distributed on Binstar. Regards Carl 2014-07-02 12:37 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
Hi,
Hi,
On Wed, Jul 2, 2014 at 10:36 AM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi all,
I do regular builds for python-2.7. Due to my limited resources I didn't build for 3.3 or 3.4 right now. I haven't updated my toolchain since February, but I do regular builds of OpenBLAS. OpenBLAS is under heavy development right now, thanks to Werner Saar, see: https://github.com/wernsaar/OpenBLAS . A lot of bugs have been ironed out at the cost of performance, see the kernel TODO list:
https://github.com/xianyi/OpenBLAS/wiki/Fixed-optimized-kernels-To-do-List .
Many bugs related to Windows have been corrected. A very weird bug e.g.: https://github.com/xianyi/OpenBLAS/issues/394 and https://github.com/JuliaLang/julia/issues/5574 . I got the impression that the Julia community (and maybe the R and octave communities) is very interested in getting to a stable Windows OpenBLAS. OpenBLAS is the only free OSS optimized BLAS/Lapack solution maintained for Windows today. Atlas seems not to be maintained for Windows anymore (is this true Matthew?)
No, it's not true, but it's not really false either. Clint Whaley is the ATLAS maintainer and his interests are firmly in high-performance-computing so he is much more interested in exotic new chips than in Windows. But, he does aim to make the latest stable release buildable on Windows, and he's helped me do that for the latest stable, with some hope he'll continue to work on the 64-bit Windows kernels which are hobbled at the moment because of differences in the Windows / other OS 64-bit ABI. Builds here:
https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/
somewhat older test wheels for python-2.7 can be downloaded here: see: http://figshare.com/articles/search?q=numpy&quick=1&x=0&y=0 (2014-06-10) numpy and scipy wheels for py-2.7. The scipy test suite (amd64) emits segfaults with multithreaded OpenBLAS, but is stable with a single thread (see the log files). I didn't dig into this further. Win32 works with MT OpenBLAS, but has some test failures with atan2 and hypot. This is more or less the status today. I can upload new wheels linked against a recent OpenBLAS, maybe tomorrow on Binstar.
On Wed, Jul 2, 2014 at 11:29 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
I built some 64-bit wheels against Carl's toolchain and the ATLAS above, I think they don't have any threading issues, but the scipy wheel fails one scipy test due to some very small precision differences in the mingw runtime. I think we agreed this failure wasn't important.
https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd...
https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_am...
Sorry - I wasn't paying attention - you asked about 32-bit wheels. Honestly, using the same toolchain, they wouldn't be at all hard to build.
One issue is that the ATLAS builds depend on SSE2. That isn't an issue for 64 bit builds because the 64-bit ABI requires SSE2, but it is an issue for 32-bit where we have no such guarantee. It looks like 99% of Windows users do have SSE2 though [1]. So I think what is required is
* Build the wheels for 32-bit (easy)
* Patch the wheels to check and give a helpful error in the absence of SSE2 (fairly easy)
* Get agreement these should go up on pypi and be maintained (feedback anyone?)
Cheers,
Matthew
[1] https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2 _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Hi, On Wed, Jul 2, 2014 at 12:18 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi,
The mingw-w64 based wheels (Atlas and OpenBLAS) are based on a patched numpy version that hasn't yet been submitted as a numpy pull request for review (my fault). I could try to do this tomorrow in the evening.
That would be really good. I'll try and help with review if I can.
Another important point is that the toolchain capable of compiling numpy/scipy was adapted to allow for MSVC / mingw runtime compatibility and no longer creates any gcc/mingw runtime dependency.
OpenBLAS has one advantage over Atlas: numpy/scipy are linked dynamically against OpenBLAS. Statically linked BLAS libraries like MKL or ATLAS create huge Python extensions and have considerably higher memory consumption compared to dynamic linkage. On the other hand correctness is more important, so ATLAS has to be preferred for now.
Do you have any index of what the memory cost is? If it's in the order of 20M presumably that won't have much practical impact?
Users with non-SSE processors could be provided with wheels distributed on Binstar.
The last plan we seemed to have was to continue making the 'superpack' exe installers which contain no-SSE, SSE2 and SSE3 builds, where the installer selects which one to install at runtime. The warning from the wheel would point to these installers as the backup option. If we did want to produce alternative wheels, I guess a specific static https directory would be easiest; otherwise the user would get the odd effect that they'd get a hobbled wheel by default when installing from binstar (assuming they did in fact have SSE2). I mean, this:

pip install -f https://somewhere.org/no_sse_wheels --no-index numpy

seems to make more sense as an alternative install command for non-SSE than this:

pip install -i http://binstar.org numpy

because in the former case, you can see what is special about the command. Cheers, Matthew
Hi, personally I don't have a preference for Binstar over somewhere.org. More important is that one has to agree where to find the binaries. Binstar has the concept of channels and allows wheels. So one could provide a channel for NOSSE and more channels for other specialized builds: ATLAS/OpenBLAS/RefBLAS, SSE4/AVX and so on. A generic binary should be built with generic optimizing GCC switches and SSE2 per default. I propose to provide generic binaries for PyPI instead of superbinaries, and specialized binaries on Binstar or somewhere else. Just thinking two or three steps ahead. Regards Carl 2014-07-02 13:35 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
Hi,
On Wed, Jul 2, 2014 at 12:18 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi,
The mingw-w64 based wheels (Atlas and OpenBLAS) are based on a patched numpy version that hasn't yet been submitted as a numpy pull request for review (my fault). I could try to do this tomorrow in the evening.
That would be really good. I'll try and help with review if I can.
Another important point is that the toolchain capable of compiling numpy/scipy was adapted to allow for MSVC / mingw runtime compatibility and no longer creates any gcc/mingw runtime dependency.
OpenBLAS has one advantage over Atlas: numpy/scipy are linked dynamically against OpenBLAS. Statically linked BLAS libraries like MKL or ATLAS create huge Python extensions and have considerably higher memory consumption compared to dynamic linkage. On the other hand correctness is more important, so ATLAS has to be preferred for now.
Do you have any index of what the memory cost is? If it's in the order of 20M presumably that won't have much practical impact?
Users with non-SSE processors could be provided with wheels distributed on Binstar.
The last plan we seemed to have was to continue making the 'superpack' exe installers which contain no-SSE, SSE2 and SSE3 builds where the installer selects which one to install at runtime. The warning from the wheel would point to these installers as the backup option.
If we did want to produce alternative wheels, I guess a specific static https directory would be easiest; otherwise the user would get the odd effect that they'd get a hobbled wheel by default when installing from binstar (assuming they did in fact have SSE2). I mean, this
pip install -f https://somewhere.org/no_sse_wheels --no-index numpy
seems to make more sense as an alternative install command for non-SSE, than this:
pip install -i http://binstar.org numpy
because in the former case, you can see what is special about the command.
Cheers,
Matthew
Hi, On Wed, Jul 2, 2014 at 2:24 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi,
personally I don't have a preference for Binstar over somewhere.org. More important is that one has to agree where to find the binaries. Binstar has the concept of channels and allows wheels. So one could provide a channel for NOSSE and more channels for other specialized builds: ATLAS/OpenBLAS/RefBLAS, SSE4/AVX and so on.
Having a noSSE channel would make sense.
A generic binary should be built with generic optimizing GCC switches and SSE2 per default. I propose to provide generic binaries for PyPI instead of superbinaries, and specialized binaries on Binstar or somewhere else.
The exe superbinary installers can also go on pypi without causing confusion to pip at least, but it would be good to have wheels as well.
Just thinking two or three steps ahead.
It's good to have a plan :) Cheers, Matthew
On Wed, Jul 2, 2014 at 6:36 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Having a noSSE channel would make sense.
Indeed -- the default (i.e. what you get with pip install numpy) should be SSE2 -- I'd much rather have a few folks with old hardware go through some hoops than have most people get something that is "much slower than MATLAB".
The exe superbinary installers can also go on pypi without causing confusion to pip at least, but it would be good to have wheels as well.
it doesn't hurt to have them, but we really need to get Windows away from the exe installers into the pip / virtualenv / etc world. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On 02/07/14 19:55, Chris Barker wrote:
Indeed -- the default (i.e. what you get with pip install numpy) should be SSE2 -- I'd much rather have a few folks with old hardware go through some hoops than have most people get something that is "much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most users (99.999 %) who want binary wheels have an SSE3 capable CPU. According to Wikipedia:

AMD: Athlon 64 (since Venice Stepping E3 and San Diego Stepping E4), Athlon 64 X2, Athlon 64 FX (since San Diego Stepping E4), Opteron (since Stepping E4), Sempron (since Palermo, Stepping E3), Phenom, Phenom II, Athlon II, Turion 64, Turion 64 X2, Turion X2, Turion X2 Ultra, Turion II X2 Mobile, Turion II X2 Ultra, APU, FX Series

Intel: Celeron D, Celeron (starting with Core microarchitecture), Pentium 4 (since Prescott), Pentium D, Pentium Extreme Edition (but NOT Pentium 4 Extreme Edition), Pentium Dual-Core, Pentium (starting with Core microarchitecture), Core, Xeon (since Nocona), Atom

If you have a Pentium II, you can build your own NumPy... Sturla
On 03.07.2014 05:56, Sturla Molden wrote:
On 02/07/14 19:55, Chris Barker wrote:
Indeed -- the default (i.e. what you get with pip install numpy) should be SSE2 -- I'd much rather have a few folks with old hardware go through some hoops than have most people get something that is "much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most users (99.999 %) who want binary wheels have an SSE3 capable CPU.
While it's true that pretty much all CPUs currently around have it, there is no technical requirement for even new CPUs to have SSE3. Unlike SSE2, you do not have to implement it to sell a compatible 64-bit CPU. Not even the new x32 ABI requires it. In practice I think we could easily get away with using SSE3 as default, but I would still like to see whether it makes any performance difference in benchmarks. In my experience (which is exclusively on pre-Haswell machines) the horizontal operations it offers tend to be slower than other solutions.
Hi, On Thu, Jul 3, 2014 at 4:56 AM, Sturla Molden <sturla.molden@gmail.com> wrote:
On 02/07/14 19:55, Chris Barker wrote:
Indeed -- the default (i.e. what you get with pip install numpy) should be SSE2 -- I'd much rather have a few folks with old hardware go through some hoops than have most people get something that is "much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most users (99.999 %) who want binary wheels have an SSE3 capable CPU.
The 99% for SSE2 comes from the Firefox crash reports, where the large majority are for very recent Firefox downloads. If you can identify SSE3 machines from the reported CPU string (as the Firefox people did for SSE2), please do have a look and see if you can get a count for SSE3 in the Firefox crash reports; if it's close to 99% that would make a strong argument: https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2 https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0 Cheers, Matthew
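For what it's worth, the counting step is trivial once each crash report has been reduced to a CPU feature string; the hard part is the family/model decoding discussed above. An illustrative sketch, in which the flag names and the sample data are invented:

```python
# Illustrative only: tally SSE3 support given a list of CPU feature
# strings.  The sample strings below are made up, not real crash reports.
def sse3_fraction(flag_strings):
    """Fraction of reports whose flags include SSE3 ('pni' on Linux)."""
    def has_sse3(flags):
        tokens = flags.lower().split()
        return "sse3" in tokens or "pni" in tokens
    return sum(has_sse3(s) for s in flag_strings) / float(len(flag_strings))

sample = [
    "fpu sse sse2 pni ssse3 sse4_1",  # SSE3-capable
    "fpu sse sse2",                   # SSE2 only
]
print(sse3_fraction(sample))  # prints 0.5
```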
I guess this one's mainly for Carl: On Thu, Jul 3, 2014 at 11:06 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Thu, Jul 3, 2014 at 4:56 AM, Sturla Molden <sturla.molden@gmail.com> wrote:
On 02/07/14 19:55, Chris Barker wrote:
Indeed -- the default (i.e. what you get with pip install numpy) should be SSE2 -- I'd much rather have a few folks with old hardware go through some hoops than have most people get something that is "much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most users (99.999 %) who want binary wheels have an SSE3 capable CPU.
The 99% for SSE2 comes from the Firefox crash reports, where the large majority are for very recent Firefox downloads.
If you can identify SSE3 machines from the reported CPU string (as the Firefox people did for SSE2), please do have a look and see if you can get a count for SSE3 in the Firefox crash reports; if it's close to 99% that would make a strong argument:
https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2 https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
Jonathan Helmus recently pointed out https://ci.appveyor.com in a discussion on the scikit-image mailing list. The scikit-image team are trying to get builds and tests working there. The configuration file allows arbitrary cmd and powershell commands executed in a clean Windows virtual machine. Do you think it would be possible to get the wheel builds working on something like that? That would be a big step forward, just because the current procedure is rather fiddly, even if not very difficult. Any news on the pull request to numpy? Waiting eagerly :) Cheers, Matthew
Hi Matthew, I can make it in the late evening (MEZ timezone), so you have to wait a bit ... I also will try to create new numpy/scipy wheels. I now have the latest OpenBLAS version ready. Olivier gave me access to Rackspace. I will try it out on the weekend. Regards Carl 2014-07-03 12:46 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
I guess this one's mainly for Carl:
On Thu, Jul 3, 2014 at 11:06 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Thu, Jul 3, 2014 at 4:56 AM, Sturla Molden <sturla.molden@gmail.com> wrote:
On 02/07/14 19:55, Chris Barker wrote:
Indeed -- the default (i.e. what you get with pip install numpy) should be SSE2 -- I'd much rather have a few folks with old hardware go through some hoops than have most people get something that is "much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most users (99.999 %) who want binary wheels have an SSE3 capable CPU.
The 99% for SSE2 comes from the Firefox crash reports, where the large majority are for very recent Firefox downloads.
If you can identify SSE3 machines from the reported CPU string (as the Firefox people did for SSE2), please do have a look and see if you can get a count for SSE3 in the Firefox crash reports; if it's close to 99% that would make a strong argument:
https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2 https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
Jonathan Helmus recently pointed out https://ci.appveyor.com in a discussion on the scikit-image mailing list. The scikit-image team are trying to get builds and tests working there. The configuration file allows arbitrary cmd and powershell commands executed in a clean Windows virtual machine. Do you think it would be possible to get the wheel builds working on something like that? That would be a big step forward, just because the current procedure is rather fiddly, even if not very difficult.
Any news on the pull request to numpy? Waiting eagerly :)
Cheers,
Matthew
Hi, On Thu, Jul 3, 2014 at 12:51 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi Matthew,
I can make it in the late evening (MEZ timezone), so you have to wait a bit ... I also will try to create new numpy/scipy wheels. I now have the latest OpenBLAS version ready. Olivier gave me access to Rackspace. I will try it out on the weekend.
Great - thanks a lot, Matthew
Hi! I gave appveyor a try this WE so as to build a minimalistic Python 3 project with a Cython extension. It works both with 32 and 64 bit MSVC++ and can generate wheel packages. See: https://github.com/ogrisel/python-appveyor-demo However MSVC 2008 is not (yet) installed, so it cannot be used for Python 2.7. Feodor Fitsner seems to be open to installing older versions of MSVC++ on the worker VM image, so this might be possible in the future. Let's see. Of course for numpy / scipy this does not solve the fortran compiler issue, so Carl's static mingw-w64 toolchain still looks like a very promising solution (and could probably be run on the appveyor infra as well). Best, -- Olivier
Feodor updated the AppVeyor nodes to have the Windows SDK matching MSVC 2008 Express for Python 2. I have updated my sample scripts and we now have a working example of a free CI system for: Python 2 and 3 both for 32 and 64 bit architectures. https://github.com/ogrisel/python-appveyor-demo Best, -- Olivier
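The general shape of such an AppVeyor configuration is roughly as follows. This is a sketch only: the matrix entries and script lines are assumptions based on the demo's description, not a copy of Olivier's actual appveyor.yml.

```yaml
# Illustrative appveyor.yml skeleton (field names are AppVeyor's standard
# config keys; the concrete commands are assumptions, not ogrisel's files).
environment:
  matrix:
    - PYTHON: "C:\\Python27"
    - PYTHON: "C:\\Python27-x64"
    - PYTHON: "C:\\Python34"
    - PYTHON: "C:\\Python34-x64"

install:
  - "%PYTHON%\\python.exe -m pip install wheel"

build_script:
  - "%PYTHON%\\python.exe setup.py bdist_wheel"

test_script:
  - "%PYTHON%\\python.exe setup.py test"

artifacts:
  - path: dist\*.whl
```

Each matrix entry spins up a clean Windows VM, so the same file covers Python 2 and 3 on both architectures.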
This is an awesome resource for tons of projects. Thanks Olivier! -Robert On Wed, Jul 9, 2014 at 7:00 AM, Olivier Grisel <olivier.grisel@ensta.org> wrote:
Feodor updated the AppVeyor nodes to have the Windows SDK matching MSVC 2008 Express for Python 2. I have updated my sample scripts and we now have a working example of a free CI system for:
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Best,
-- Olivier
2014-07-10 0:53 GMT+02:00 Robert McGibbon <rmcgibbo@gmail.com>:
This is an awesome resource for tons of projects.
Thanks. FYI here is the PR for sklearn to use AppVeyor CI: https://github.com/scikit-learn/scikit-learn/pull/3363 It's slightly different from the minimalistic sample I wrote for python-appveyor-demo in the sense that for sklearn I decided to actually install the generated wheel package and run the tests on the resulting installed library rather than on the project source folder. -- Olivier
I forked Olivier's example project to use the same infrastructure for building conda binaries and deploying them to binstar, which might also be useful for some projects. https://github.com/rmcgibbo/python-appveyor-conda-example -Robert On Wed, Jul 9, 2014 at 3:53 PM, Robert McGibbon <rmcgibbo@gmail.com> wrote:
This is an awesome resource for tons of projects.
Thanks Olivier!
-Robert
On Wed, Jul 9, 2014 at 7:00 AM, Olivier Grisel <olivier.grisel@ensta.org> wrote:
Feodor updated the AppVeyor nodes to have the Windows SDK matching MSVC 2008 Express for Python 2. I have updated my sample scripts and we now have a working example of a free CI system for:
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Best,
-- Olivier
Hi, on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I uploaded 7z-archives for mingw-w64 and for OpenBLAS-0.2.10, for 32 bit and for 64 bit. To use mingw-w64 for Python >= 3.3 you have to manually tweak the so-called specs file - see readme.txt in the archive. Regards Carl 2014-07-28 4:32 GMT+02:00 Robert McGibbon <rmcgibbo@gmail.com>:
I forked Olivier's example project to use the same infrastructure for building conda binaries and deploying them to binstar, which might also be useful for some projects.
https://github.com/rmcgibbo/python-appveyor-conda-example
-Robert
On Wed, Jul 9, 2014 at 3:53 PM, Robert McGibbon <rmcgibbo@gmail.com> wrote:
This is an awesome resource for tons of projects.
Thanks Olivier!
-Robert
On Wed, Jul 9, 2014 at 7:00 AM, Olivier Grisel <olivier.grisel@ensta.org> wrote:
Feodor updated the AppVeyor nodes to have the Windows SDK matching MSVC 2008 Express for Python 2. I have updated my sample scripts and we now have a working example of a free CI system for:
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Best,
-- Olivier
2014-07-28 15:25 GMT+02:00 Carl Kleffner <cmkleffner@gmail.com>:
Hi,
on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I uploaded 7z-archives for mingw-w64 and for OpenBLAS-0.2.10, for 32 bit and for 64 bit. To use mingw-w64 for Python >= 3.3 you have to manually tweak the so-called specs file - see readme.txt in the archive.
Have the patches to build numpy and scipy with mingw-w64 been merged in the master branches of those projects? -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel
I had to move my development environment to a different Windows box recently (still in progress). Unfortunately I don't have full access on this box. The patch for the scipy build was merged into scipy master some time ago, see https://github.com/scipy/scipy/pull/3484 . I have some additional patches for scipy.test. The pull request for the numpy build has not yet been made for the reasons I mentioned. Cheers, Carl 2014-07-28 16:46 GMT+02:00 Olivier Grisel <olivier.grisel@ensta.org>:
2014-07-28 15:25 GMT+02:00 Carl Kleffner <cmkleffner@gmail.com>:
Hi,
on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I uploaded 7z-archives for mingw-w64 and for OpenBLAS-0.2.10, for 32 bit and for 64 bit. To use mingw-w64 for Python >= 3.3 you have to manually tweak the so-called specs file - see readme.txt in the archive.
Have the patches to build numpy and scipy with mingw-w64 been merged in the master branches of those projects?
-- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel
Hi Carl, All the items you suggest would be very much appreciated. Don't hesitate to ping me if you need me to test new packages. Also the sklearn project has a free Rackspace Cloud account that Matthew is already using to make travis upload OSX wheels for the master branch of various scipy stack projects. Rackspace Cloud can also be used to start Windows VMs if needed. Please tell me if you want some user credentials and an API key. Myself I use the Rackspace Cloud account to build sklearn wheels following these instructions: https://github.com/scikit-learn/scikit-learn/wiki/How-to-make-a-release#buil... We are using msvc express (but only for 32bit Python) right now. I have yet to try to build sklearn with your mingw-w64 static toolchain. Rackspace granted us $2000 worth of cloud resources per month (e.g. bandwidth and VM time), so there is plenty of resource left to help with upstream projects such as numpy and scipy. Best, -- Olivier
On Wed, Jul 2, 2014 at 3:37 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
It looks like 99% of Windows users do have SSE2 though [1]. So I think what is required is
* Build the wheels for 32-bit (easy)
* Patch the wheels to check and give a helpful error in the absence of SSE2 (fairly easy)
* Get agreement these should go up on pypi and be maintained (feedback anyone?)
+Inf It would benefit the community a LOT to have binary wheels up on PyPI, and the very small number of failures due to old hardware will be no big deal, as long as the users get a meaningful message rather than a hard crash. -Chris
Hi, On Thu, May 8, 2014 at 5:51 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Mon, Apr 28, 2014 at 3:29 PM, David Cournapeau <cournape@gmail.com> wrote:
On Sun, Apr 27, 2014 at 11:50 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Aha,
On Sun, Apr 27, 2014 at 3:19 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Sun, Apr 27, 2014 at 3:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as a PyPI wheel or wininst package. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the devkit package. It sounds like OpenBLAS will be much easier to build, so we could start with ATLAS binaries as a default, expecting OpenBLAS to be built more often with the toolchain. I think that's how numpy binary installers are built at the moment - using old binary builds of ATLAS.
I'm happy to provide the builds of ATLAS - e.g. here:
I just found the official numpy binary builds of ATLAS:
https://github.com/numpy/vendor/tree/master/binaries
But - they are from an old version of ATLAS / Lapack, and only for 32-bit.
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval !).
How easy is it to build ATLAS targeting a specific CPU these days? I think we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I think Clint will have some time to help out next week.
Clint spent an hour on the phone working through the 32-bit build. There was a nasty gcc bug revealed by some oddness in the input flags. Fixed now: https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/

Configure flags needed for 32-bit:

config_opts="-b 32 -Si archdef 0 -A 13 -V 384 \
  --with-netlib-lapack-tarfile=${lapack_tarfile} \
  -Fa al '-mincoming-stack-boundary=2 -mfpmath=sse -msse2'"

For 64-bit:

config_opts="-b 64 -V 384 --with-netlib-lapack-tarfile=${lapack_tarfile}"

Cheers, Matthew
On Sun, Apr 27, 2014 at 6:06 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
A possible option is to install the toolchain inside site-packages and to deploy it as a PyPI wheel or wininst package. The PATH to the toolchain could be extended during import of the package. But I have no idea what's the best strategy to additionally install ATLAS or other third party libraries.
What I did in the past is just to download the ATLAS binaries from the scipy/numpy wiki and move them into the Python/DLLs directory. IIRC my impression was that finding ATLAS binaries was the difficult part, not moving them into the right directory.
Cheers,
Carl
2014-04-27 23:46 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
Hi,
On Sun, Apr 27, 2014 at 2:34 PM, Carl Kleffner <cmkleffner@gmail.com> wrote:
Hi,
I definitely won't have time until Thursday this week to work out the github workflow for a numpy pull request. So feel free to do it for me.
OK - I will have a go at this tomorrow.
BTW: There is a missing feature in the mingw-w64 toolchain. For now it supports linking against the msvcr90 runtime only. I have to extend the specs file to allow linking against msvcr100 with an additional flag. Or create a dedicated toolchain - what do you think?
I don't know.
Is this a discussion that should go to the mingw-w64 list do you think? It must be a very common feature.
As you know, I'm really hoping it will be possible make a devkit for Python similar to the Ruby devkits [1].
I got my entire initial setup on the computer I'm using right now through python-xy, including MinGW 32. The only thing I ever had to do was create the `distutils.cfg` in the new Python install. python-xy relies on the availability of an open-source development environment for numpy and scipy, and has been restricted so far to 32-bit Python versions. winpython is only a Python distribution and is also available for 64-bit (with Gohlke binaries, I think).

I think it would be very helpful to get python-xy set up for development of 64-bit versions, now that the toolchain with MinGW is available. I'm skeptical about having lots of distributions that each install their own full toolchain. (I always worry about which one is actually on the path. I deleted my first Git for Windows version because it came with a built-in MSYS/MinGW toolchain, and now just use the nice and small portable version.)
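For readers who haven't done this: the `distutils.cfg` Josef mentions is the standard way to tell distutils to use MinGW instead of MSVC. Its usual location is `Lib\distutils\distutils.cfg` inside the Python installation; the sketch below writes the minimal two-line file to the current directory purely for illustration.

```shell
# Minimal distutils.cfg selecting the MinGW compiler
# (real location: <python install>\Lib\distutils\distutils.cfg):
printf '[build]\ncompiler=mingw32\n' > distutils.cfg
cat distutils.cfg
```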
The ideal would be a devkit that transparently picked the right target - 32-bit vs 64-bit, and the MSVC runtime matching the Python version. For example, OSX compilation automatically picks up the OSX SDK with which the relevant Python was built. Do you think something like this is possible? That would be a great improvement for people building extensions and wheels on Windows.
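A devkit could in principle derive both choices from the running interpreter itself. The sketch below assumes the 2014-era CPython-to-CRT mapping (2.6-3.2 built with msvcr90, 3.3-3.4 with msvcr100) and uses the `python3` command name (on Windows it would typically be plain `python`):

```shell
# Derive the target CRT and word size from the interpreter
# (CRT mapping assumed: CPython 2.6-3.2 -> msvcr90, 3.3+ -> msvcr100):
ver=$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')
if [ "$ver" -ge 33 ]; then crt=msvcr100; else crt=msvcr90; fi
bits=$(python3 -c 'import platform; print(platform.architecture()[0])')
echo "$crt $bits"
```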
How does MinGW-w64 decide whether to build 32-bit or 64-bit versions? Does the Python version matter for MinGW, or should this pick up one of the Visual SDKs that the user needs to install?

Josef
Cheers,
Matthew
[1] http://rubyinstaller.org/add-ons/devkit/

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Sun, Apr 27, 2014 at 2:46 PM, Matthew Brett <matthew.brett@gmail.com>wrote:
As you know, I'm really hoping it will be possible to make a devkit for Python similar to the Ruby devkits [1].
That would be great!
From a really quick glance, it looks like we could almost use the Ruby Devkit, maybe adding a couple of add-ons.
What do they do for 64 bit?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker@noaa.gov
Hi, On Mon, Apr 28, 2014 at 10:54 AM, Chris Barker <chris.barker@noaa.gov> wrote:
On Sun, Apr 27, 2014 at 2:46 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
As you know, I'm really hoping it will be possible to make a devkit for Python similar to the Ruby devkits [1].
That would be great!
From a really quick glance, it looks like we could almost use the Ruby Devkit, maybe adding a couple add-ons..
Carl, please correct me if I'm wrong, but I think the main issues are:

1) Linking to the right MSVC runtime for each Python version (seems to need different gcc specs).
2) Static linking - Carl's toolchain does full static linking, including the C runtimes, so we don't need to deal with the DLL path.

I'm not sure what the Ruby devkit does to solve that.
What do they do for 64 bit?
It looks like they have their own Mingw-w64 toolchain.

Cheers,
Matthew
(1) Yes - support for msvcr100 (Python 3.3 and up) is on the TODO list.
(2) Both toolchains are configured for static linking. No need to deploy libgcc_s_dw2-1.dll, libgomp-1.dll, libquadmath-0.dll, libstdc++-6.dll, libgfortran-3.dll or libwinpthread-1.dll.
(3) I decided to create two dedicated toolchains, one for 32-bit and one for 64-bit.

Regards,
Carl

2014-04-29 11:19 GMT+02:00 Matthew Brett <matthew.brett@gmail.com>:
Hi,
On Mon, Apr 28, 2014 at 10:54 AM, Chris Barker <chris.barker@noaa.gov> wrote:
On Sun, Apr 27, 2014 at 2:46 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
As you know, I'm really hoping it will be possible to make a devkit for Python similar to the Ruby devkits [1].
That would be great!
From a really quick glance, it looks like we could almost use the Ruby Devkit, maybe adding a couple add-ons..
Please, Carl correct me if I'm wrong, but I think the main issues are:
1) Linking to the right MSVC runtime for each Python version (seems to need different gcc specs).
2) Static linking - Carl's toolchain does full static linking, including the C runtimes, so we don't need to deal with the DLL path.

I'm not sure what the Ruby devkit does to solve that.
What do they do for 64 bit?
It looks like they have their own Mingw-w64 toolchain.
Cheers,
Matthew
Matthew Brett <matthew.brett@gmail.com> wrote:
2) Static linking - Carl's toolchain does full static linking including C runtimes
The C runtime cannot be statically linked. It would mean that we get multiple copies of errno and multiple malloc heaps in the process – one of each static CRT. We must use the same C runtime DLL as Python. But loading it is not a problem because Python has done that before NumPy is imported. Sturla
Correction: the gcc (mingw) runtimes are statically linked. The C runtime DLL (msvcrXXX) is linked dynamically.

Carl

2014-04-29 17:10 GMT+02:00 Sturla Molden <sturla.molden@gmail.com>:
Matthew Brett <matthew.brett@gmail.com> wrote:
2) Static linking - Carl's toolchain does full static linking including C runtimes
The C runtime cannot be statically linked. It would mean that we get multiple copies of errno and multiple malloc heaps in the process – one of each static CRT. We must use the same C runtime DLL as Python. But loading it is not a problem because Python has done that before NumPy is imported.
Sturla
participants (13)

- Carl Kleffner
- Charles R Harris
- Chris Barker
- David Cournapeau
- josef.pktd@gmail.com
- Julian Taylor
- Matthew Brett
- Nathaniel Smith
- Olivier Grisel
- Pauli Virtanen
- Ralf Gommers
- Robert McGibbon
- Sturla Molden