Numpy 1.11.0b2 released
Hi All, I'm pleased to announce the Numpy 1.11.0b2 release. The first beta was a damp squib due to files missing from the released sources; this release fixes that. The new source files may be downloaded from sourceforge; no binaries will be released until the mingw toolchain problems are sorted out. Please test and report any problems. Chuck
Hi, On Thu, Jan 28, 2016 at 12:51 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
OSX wheels build OK: https://travis-ci.org/MacPython/numpy-wheels/builds/105521850 Y'all can test with: pip install --pre --trusted-host wheels.scipy.org -f http://wheels.scipy.org numpy Cheers, Matthew
Maybe we should upload to pypi? This allows us to upload binaries for osx at least, and in general will make the beta available to anyone who does 'pip install --pre numpy'. (But not regular 'pip install numpy', because pip is clever enough to recognize that this is a prerelease and should not be used by default.) (For bonus points, start a campaign to convince everyone to add --pre to their ci setups, so that merely uploading a prerelease will ensure that it starts getting tested automatically.) On Jan 28, 2016 12:51 PM, "Charles R Harris" <charlesr.harris@gmail.com> wrote:
On Thu, Jan 28, 2016 at 11:03 PM, Charles R Harris < charlesr.harris@gmail.com> wrote:
One of the things that will probably happen but needs to be avoided is that 1.11b2 becomes the visible release at https://pypi.python.org/pypi/numpy. By default I think the status of all releases but the last uploaded one (or highest version number?) is set to hidden. Other ways that users can get a pre-release by accident are: - they have pip <1.4 (released in July 2013) - other packages have a requirement on numpy with a prerelease version (see https://pip.pypa.io/en/stable/reference/pip_install/#pre-release-versions) Ralf
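As background for the pip behaviour discussed above, pre-release detection follows PEP 440. A minimal sketch with the packaging library (an assumption for illustration only; pip vendors equivalent logic internally rather than exposing this as an API) shows why a plain 'pip install numpy' skips a b2 release:

    # Sketch of PEP 440 pre-release detection, assuming the 'packaging'
    # library is installed; pip applies the same rule unless --pre is given.
    from packaging.version import Version

    for v in ("1.10.4", "1.11.0b2", "1.11.0rc1", "1.11.0"):
        print(v, Version(v).is_prerelease)
    # 1.10.4 False, 1.11.0b2 True, 1.11.0rc1 True, 1.11.0 False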
On Thu, Jan 28, 2016 at 2:23 PM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Huh, I had the impression that if it was ambiguous whether the "latest version" was a pre-release or not, then pypi would list all of them on that page -- at least I know I've seen projects where going to the main pypi URL gives a list of several versions like that. Or maybe the next-to-latest one gets hidden by default and you're supposed to go back and "un-hide" the last release manually. Could try uploading to https://testpypi.python.org/pypi and see what happens...
Other ways that users can get a pre-release by accident are: - they have pip <1.4 (released in July 2013)
It looks like ~a year ago this was ~20% of users -- https://caremad.io/2015/04/a-year-of-pypi-downloads/ I wouldn't be surprised if it dropped quite a bit since then, but if this is something that will affect our decision then we can ping @dstufft to ask for updated numbers. -n -- Nathaniel J. Smith -- https://vorpus.org
On Thu, Jan 28, 2016 at 11:57 PM, Nathaniel Smith <njs@pobox.com> wrote:
That's worth a try, would be good to know what the behavior is.
Hmm, that's more than I expected. Even if it dropped by a factor of 10 over the last year, that would still be a lot of failed installs for the current beta1. It looks to me like this is a bad trade-off. It would be much better to encourage people to test against numpy master instead of a pre-release (and we were trying to do that anyway). So the benefit is then fairly limited: mostly saving people from typing the longer install line with wheels.scipy.org when they want to test a pre-release. Ralf
On Jan 28, 2016 3:25 PM, "Ralf Gommers" <ralf.gommers@gmail.com> wrote:
After the disastrous lack of testing for the 1.10 prereleases, it might almost be a good thing if we accidentally swept up some pip 1.3 users into doing prerelease testing... I mean, if they don't test it now, they'll just end up testing it later, and at least there will be fewer of them to start with? Plus all they have to do to opt out is to maintain a vaguely up-to-date environment, which is a good thing for the ecosystem anyway :-). It's bad for everyone if pip and PyPI are collaborating to provide this rather nice, standard feature for distributing and QAing pre-releases, but we can't actually use it because of people not upgrading pip...

Regarding CI setups and testing against master: I think of these as being complementary. The fact is that master *will* sometimes just be broken, or contain tentative API changes that get changed before the release, etc. So it's really great that there are some projects who are willing to take on the work of testing numpy master directly as part of their own CI setups, but it is going to be extra work and risk for them, they'll probably have to switch it off sometimes and then turn it back on, and they really need to have decent channels of communication with us whenever things go wrong because sometimes the answer will be "doh, we didn't mean to change that, please leave your code alone and we'll fix it on our end".

(My nightmare here is that downstream projects start working around bugs in master, and then we find ourselves having to jump through hoops to maintain backcompat with code that was never even released. __numpy_ufunc__ is stuck in this situation -- we know that the final version will have to change its name, because scipy has been shipping code that assumes a different calling convention than the final released version will have.)

So, testing master is *great*, but also tricky and not really something I think we should be advocating to all 5000 downstream projects [1]. OTOH, once a project has put up a prerelease, then *everyone* wants to be testing that, because if they don't then things definitely *will* break soon. (And this isn't specific to numpy -- this applies to pretty much all upstream dependencies.) So IMO we should be teaching everyone that their CI setups should just always use --pre when running pip install, and this will automatically improve QA coverage for the whole ecosystem.

...It does help if we run at least some minimal QA against the sdist before uploading it though, to avoid the 1.11.0b1 problem :-). (Though the new travis test for sdists should cover that.) Something else for the release checklist I guess... -n

[1] http://depsy.org/package/python/numpy
On Fri, Jan 29, 2016 at 4:21 AM, Nathaniel Smith <njs@pobox.com> wrote:
That's a fair point. And given the amount of brokenness in (especially older versions of) pip, plus how easy it is to upgrade pip, we should probably just say that we expect a recent pip (say, the last 3 major releases).
OK, persuasive argument. In the past this wouldn't have worked, but our CI setup is much better now. Until we had Appveyor testing for example, it was almost the rule that MSVC builds were broken for every first beta. So, with some hesitation: let's go for it.
There's still a large number of ways that one can install numpy that aren't tested (see the list in https://github.com/numpy/numpy/issues/6599), but the only one relevant for pip is when easy_install is triggered by `setup_requires=numpy`. It's actually not too hard to add that to TravisCI testing (just install a dummy package that uses setup_requires). I'll add that to the todo list. Ralf
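Such a dummy package could be a single setup.py; a minimal sketch, with a made-up package name, of what would exercise the setup_requires=numpy (easy_install) code path:

    # setup.py of a hypothetical dummy package used only to exercise the
    # setup_requires=numpy code path on CI. The name is made up.
    from setuptools import setup

    setup(
        name="numpy-setup-requires-check",   # hypothetical name
        version="0.0.1",
        setup_requires=["numpy"],            # fetched via easy_install at build time
        py_modules=[],
    )

Running 'pip install .' on a package like this in a clean environment would trigger the easy_install machinery for numpy at build time, which is the code path mentioned above.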
Is this the point when scikit-learn should build against it? Or do we wait for an RC? Also, we need a scipy build against it. Who does that? Our continuous integration doesn't usually build scipy or numpy, so it will be a bit tricky to add to our config. Would you run our master tests? [did we ever finish this discussion?] Andy On 01/28/2016 03:51 PM, Charles R Harris wrote:
You most likely don't need a scipy build against it. You should be able to use the oldest scipy your project supports. Numpy does try not to break its reverse dependencies; if stuff breaks, it should only occur in edge cases not affecting the functionality of real applications (like warnings or overzealous testing). Of course that only works if people bother to test against the numpy prereleases. On 01/29/2016 06:45 PM, Andreas Mueller wrote:
On Jan 29, 2016 9:46 AM, "Andreas Mueller" <t3kcit@gmail.com> wrote:
Is this the point when scikit-learn should build against it?
Yes please!
Or do we wait for an RC?
This is still all in flux, but I think we might actually want a rule that says it can't become an RC until after we've tested scikit-learn (and a list of similarly prominent packages). On the theory that RC means "we think this is actually good enough to release" :-). OTOH I'm not sure the alpha/beta/RC distinction is very helpful; maybe they should all just be betas.
Also, we need a scipy build against it. Who does that?
Like Julian says, it shouldn't be necessary. In fact using old builds of scipy and scikit-learn is even better than rebuilding them, because it tests numpy's ABI compatibility -- if you find you *have* to rebuild something then we *definitely* want to know that.
We didn't, and probably should... :-) It occurs to me that the best solution might be to put together a .travis.yml for the release branches that does: "for pkg in IMPORTANT_PACKAGES: pip install $pkg; python -c 'import pkg; pkg.test()'" This might not be viable right now, but will be made more viable if pypi starts allowing official Linux wheels, which looks likely to happen before 1.12... (see PEP 513) -n
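A rough Python rendering of that loop, as a script the release-branch .travis.yml might call; the package list, the import-name mapping, and the pkg.test() convention are assumptions for illustration:

    # Sketch: install important downstream packages and run their test suites
    # against the numpy already present on the CI worker. The package list and
    # the assumption that each exposes a .test() entry point are illustrative.
    import importlib
    import subprocess
    import sys

    IMPORTANT_PACKAGES = {   # pip name -> import name
        "scipy": "scipy",
        "pandas": "pandas",
        "astropy": "astropy",
        "scikit-learn": "sklearn",
    }

    failed = []
    for pip_name, import_name in IMPORTANT_PACKAGES.items():
        subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])
        mod = importlib.import_module(import_name)
        result = mod.test()  # return conventions vary by project (assumption)
        ok = result.wasSuccessful() if hasattr(result, "wasSuccessful") else bool(result)
        if not ok:
            failed.append(pip_name)

    print("test failures in:", ", ".join(failed) or "none")
    sys.exit(1 if failed else 0)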
On Fri, Jan 29, 2016 at 11:39 PM, Nathaniel Smith <njs@pobox.com> wrote:
There's also https://github.com/MacPython/scipy-stack-osx-testing by the way, which could have scikit-learn and scikit-image added to it. That's two options that are imho both better than adding more workload for the numpy release manager. Also from a principled point of view, packages should test with new versions of their dependencies, not the other way around. Ralf
On Jan 30, 2016 9:27 AM, "Ralf Gommers" <ralf.gommers@gmail.com> wrote:
On Fri, Jan 29, 2016 at 11:39 PM, Nathaniel Smith <njs@pobox.com> wrote:
It occurs to me that the best solution might be to put together a .travis.yml for the release branches that does: "for pkg in IMPORTANT_PACKAGES: pip install $pkg; python -c 'import pkg; pkg.test()'"

... a rule that says it can't become an RC until after we've tested scikit-learn (and a list of similarly prominent packages). On the theory that RC means "we think this is actually good enough to release" :-). OTOH I'm not sure the alpha/beta/RC distinction is very helpful; maybe they should all just be betas.

packages should test with new versions of their dependencies, not the other way around.

Sorry, that was unclear. I meant that we should finish the discussion, not that we should necessarily be the ones running the tests. "The discussion" being this one: https://github.com/numpy/numpy/issues/6462#issuecomment-148094591 https://github.com/numpy/numpy/issues/6494 I'm not saying that the release manager necessarily should be running the tests (though it's one option). But the 1.10 experience seems to indicate that we need *some* process for the release manager to make sure that some basic downstream testing has happened. Another option would be keeping a checklist of downstream projects and making sure they've all checked in and confirmed that they've run tests before making the release. -n
just my 2c: it's fairly straightforward to add a test to the Travis matrix to grab the built numpy wheels (works for conda or pip installs). So in pandas we're testing 2.7/3.5 against numpy master continuously: https://github.com/pydata/pandas/blob/master/ci/install-3.5_NUMPY_DEV.sh
On 01/30/2016 06:27 PM, Ralf Gommers wrote:
It would be nice but it's not realistic; I doubt most upstreams that are not themselves major downstreams are even subscribed to this list. Testing, or delegating testing, of at least our major downstreams should be the job of the release manager. Thus I also disagree with our more frequent releases. It puts too much porting and testing effort on our downstreams, and it gets infeasible for a volunteer release manager to handle. I fear by doing this we will end up in a situation where more downstreams put upper bounds on their supported numpy releases like e.g. astropy already did. This has bad consequences like the subclass breaking of linspace that should have been caught months ago but was not, because downstreams were discouraging users from upgrading numpy because they could not keep up with porting.
31.01.2016, 12:57, Julian Taylor wrote: [clip]
I'd suggest that some automation could reduce the maintainer burden here. Basically, I think being aware of downstream breakage is something that could be determined without too much manual intervention. For example, an automated test rig that does the following:

- run tests of a given downstream project version against the previous numpy version, record output
- run tests of a given downstream project version against numpy master, record output
- determine which failures were added by the new numpy version
- make this happen with just a single command, e.g. "python run.py", and implement it for several downstream packages and versions.

(Probably good to steal ideas from the travis-ci dependency matrix etc.) This is probably too time intensive and a waste of resources for Travis-CI, but could be run by the Numpy maintainer or someone else during the release process, or periodically on some ad-hoc machine if someone is willing to set it up. Of course, understanding the cause of breakages would take some understanding of the downstream package, but this would at least ensure we are aware of stuff breaking. Provided it's covered by the downstream test suite, of course. -- Pauli Virtanen
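One possible shape for such a rig, sketched with only the standard library; the project choice, numpy specifiers, log file names, and POSIX virtualenv layout are assumptions, not a description of any existing tool:

    # Sketch: run one downstream project's tests against two numpy versions
    # in separate virtualenvs and save the raw logs for later comparison.
    import subprocess
    import venv
    from pathlib import Path

    PROJECT = "scipy"  # illustrative choice
    NUMPY_SPECS = {"old": "numpy==1.10.4",
                   "new": "git+https://github.com/numpy/numpy.git"}

    for label, numpy_spec in NUMPY_SPECS.items():
        env_dir = Path("env-{}".format(label))
        venv.create(str(env_dir), with_pip=True)           # fresh environment
        pip = str(env_dir / "bin" / "pip")                  # POSIX layout assumed
        python = str(env_dir / "bin" / "python")
        subprocess.check_call([pip, "install", numpy_spec, PROJECT])
        with open("log-{}.txt".format(label), "wb") as log:
            subprocess.call(
                [python, "-c", "import {0}; {0}.test(verbose=2)".format(PROJECT)],
                stdout=log, stderr=subprocess.STDOUT)

    # A later step diffs log-old.txt against log-new.txt to find failures
    # that only appear with the new numpy.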
On 31 Jan 2016 13:08, "Pauli Virtanen" <pav@iki.fi> wrote:
A simpler idea: build the master branch of a series of projects and run the tests. In case of failure, we can compare with Travis's logs from the project when they use the released numpy. In most cases the master branch is clean, so an error will likely be a change in behaviour. This can be run automatically once a week, to not hog too much of Travis, and, counting the cost in hours of work, it is very cheap to set up and free to maintain. /David
31.01.2016, 14:41, Daπid wrote:
If you can assume the tests of a downstream project are in an OK state, then you can skip the build against existing numpy. But it's an additional and unnecessary burden for the Numpy maintainers to compare the logs manually (and check the built versions are the same, and that the difference is not due to difference in build environments). I would also avoid depending on the other projects' Travis-CI configurations, since these may change. I think testing released versions of downstream projects is better than testing their master versions here, as the master branch may contain workarounds for Numpy changes and not be representative of what people get on their computers after Numpy release.
It may be that such a project could be runnable on Travis, if split into per-project runs to work around the 50min timeout. I'm not aware of Travis-CI having support for "automatically once per week" builds. Anyway, having any form of central automated integration testing would be better than the current situation, where it's mostly all-manual and relies on the activity of downstream project maintainers. -- Pauli Virtanen
Hi Julian, While the numpy 1.10 situation was bad, I do want to clarify that the problems we had in astropy were a consequence of *good* changes in `recarray`, which solved many problems, but also broke the work-arounds that had been created in `astropy.io.fits` quite a long time ago (possibly before astropy became as good as it tries to be now at moving issues upstream and perhaps before numpy had become as responsive to what happens downstream as it is now; I think it is fair to say many project's attitude to testing has changed rather drastically in the last decade!). I do agree, though, that it just goes to show one has to try to be careful, and like Nathaniel's suggestion of automatic testing with pre-releases -- I just asked on our astropy-dev list whether we can implement it. All the best, Marten
hi, even if they are good changes, I find it reasonable to ask for a delay in the numpy release if you need more time to adapt. Of course this has to be within reason and can be rejected, but it's very valuable to know when changes break existing old workarounds. If pyfits broke, there is probably a lot more code we don't know about that is also broken. Sometimes we might even be able to get the good without breaking the bad. E.g. thanks to Sebastian's heroic efforts in his recent indexing rewrite, only very little broke and a lot of odd stuff could be equipped with deprecation warnings instead of breaking. Of course that cannot often be done or be worthwhile, but it's at least worth considering when we change core functionality. cheers, Julian On 31.01.2016 22:52, Marten van Kerkwijk wrote:
On Sun, Jan 31, 2016 at 11:57 AM, Julian Taylor < jtaylor.debian@googlemail.com> wrote:
I'm pretty sure that some core devs from all major scipy stack packages are subscribed to this list. Testing or delegating testing of at least our major downstreams should be
the job of the release manager.
If we make it (almost) fully automated, like in https://github.com/MacPython/scipy-stack-osx-testing, then I agree that adding this to the numpy release checklist would make sense. But it should really only be a tiny amount of work - we're short on developer power, and many things that are cross-project like build & test infrastructure (numpy.distutils, needed pip/packaging fixes, numpy.testing), scipy.org (the "stack" website), numpydoc, etc. are mostly maintained by the numpy/scipy devs. I'm very reluctant to say yes to putting even more work on top of that. So: it would really help if someone could pick up the automation part of this and improve the stack testing, so the numpy release manager doesn't have to do this. Ralf
01.02.2016, 23:25, Ralf Gommers wrote: [clip]
quick hack: https://github.com/pv/testrig Not that I'm necessarily volunteering to maintain the setup, though, but if it seems useful, move it under numpy org. -- Pauli Virtanen
On Tue, Feb 2, 2016 at 8:45 AM, Pauli Virtanen <pav@iki.fi> wrote:
That's pretty cool :-). I also was fiddling with a similar idea a bit, though much less fancy... my little script cheats and uses miniconda to fetch pre-built versions of some packages, and then runs the tests against numpy 1.10.2 (as shipped by anaconda) + numpy master, and does a diff (with a bit of massaging to make things more readable, like summarizing warnings): https://travis-ci.org/njsmith/numpy/builds/106865202 Search for "#####" to jump between sections of the output. Some observations:

- *matplotlib*: testing it this way doesn't work, b/c they need special test data files that anaconda doesn't ship :-/
- *scipy*: *one new failure*, in test_nanmedian_all_axis; 250 calls to np.testing.rand (wtf), 92 calls to random_integers, 3 uses of datetime64 with timezones. And for some reason the new numpy gives more "invalid value encountered in greater"-type warnings.
- *astropy*: *two weird failures* that hopefully some astropy person will look into; two spurious failures due to over-strict testing of warnings.
- *scikit-learn*: several *new failures*: 1 "invalid slice" (?), 2 "OverflowError: value too large to convert to int". No idea what's up with these. Hopefully some scikit-learn person will investigate? Also 2 np.ma view warnings, 16 multi-character strings used where "C" or "F" expected, 1514 (!!) calls to random_integers.
- *pandas*: zero new failures, the only new warnings are about NaT, as expected. I guess their whole "running their tests against numpy master" thing works!
- *statsmodels*: *absolute disaster*. *261* new failures, I think mostly because of numpy getting pickier about float->int conversions. Also a few "invalid slice", and 102 np.ma view warnings.

I don't have a great sense of whether the statsmodels breakages are ones that will actually impact users, or if they're just like, 1 bad utility function that only gets used in the test suite. (Well, probably not the latter, because they do have different tracebacks.) If this is typical though then we may need to back those integer changes out and replace them by a really loud obnoxious warning for a release or two :-/ The other problem here is that statsmodels hasn't done a release since 2014 :-/ -n -- Nathaniel J. Smith -- https://vorpus.org
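The warning "massaging" mentioned above can be as simple as collapsing repeated messages with a counter; a rough sketch (the warning-line format and the log file name are assumptions carried over from the earlier sketch):

    # Sketch: summarize repeated warnings in a captured test log so that,
    # e.g., 1514 identical warnings collapse to one line with a count.
    import re
    from collections import Counter

    def summarize_warnings(log_text):
        # Assumes lines containing "SomeWarning: message", as in typical
        # captured test output; the exact format is an assumption.
        pattern = re.compile(r"(\w+Warning): (.+)")
        matches = (pattern.search(line) for line in log_text.splitlines())
        counts = Counter(m.group(0) for m in matches if m)
        for message, n in counts.most_common():
            print("{:6d}x  {}".format(n, message))

    with open("log-new.txt") as f:   # file name is an assumption
        summarize_warnings(f.read())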
On Wed, Feb 3, 2016 at 9:18 PM, Nathaniel Smith <njs@pobox.com> wrote:
Whoops, got distracted talking about the results and forgot to say -- I guess we should think about how to combine these? I like the information on warnings, because it helps gauge the impact of deprecations, which is a thing that takes a lot of our attention. But your approach is clearly fancier in terms of how it parses the test results. (Do you think the fanciness is worth it? I can see an argument for crude and simple if the fanciness ends up being fragile, but I haven't read the code -- mostly I was just being crude and simple because I'm lazy :-).) An extra ~2 hours of tests / 6-way parallelism is not that big a deal in the grand scheme of things (and I guess it's probably less than that if we can take advantage of existing binary builds) -- certainly I can see an argument for enabling it by default on the maintenance/1.x branches. Running N extra test suites ourselves is not actually more expensive than asking N projects to run 1 more testsuite :-). The trickiest part is getting it to give actually-useful automated pass/fail feedback, as opposed to requiring someone to remember to look at it manually :-/ Maybe it should be uploading the reports somewhere? So there'd be a readable "what's currently broken by 1.x" page, plus with persistent storage we could get travis to flag if new additions to the release branch causes any new failures to appear? (That way we only have to remember to look at the report manually once per release, instead of constantly throughout the process.) -n -- Nathaniel J. Smith -- https://vorpus.org
On Wed, 3 Feb 2016 21:56:08 -0800 Nathaniel Smith <njs@pobox.com> wrote:
Yes, I think that's where the problem lies. Python had something called "community buildbots" at one time (testing well-known libraries such as Twisted against the Python trunk), but it suffered from lack of attention and finally was dismantled. Apparently having the people running it and the people most interested in it not be the same ones ended up being a bad idea :-) That said, if you do something like that with Numpy, we would be interested in having Numba be part of the tested packages. Regards Antoine.
04.02.2016, 07:56, Nathaniel Smith wrote: [clip]
The fanciness is essentially a question of implementation language and ease of writing the reporting code. At 640 SLOC it's probably not so bad. I guess it's reasonably robust --- the test report formats are unlikely to change, and pip/virtualenv will probably continue to work, esp. with a pinned pip version. It should be simple to also extract the warnings from the test stdout. I'm not sure if the order of test results is deterministic in nose/py.test, so I don't know if just diffing the outputs always works. Building downstream from source avoids future binary compatibility issues. [clip]
This is probably possible to implement. Although I'm not sure how much added value this has compared to a travis matrix, e.g. https://travis-ci.org/pv/testrig/ Of course, if the suggestion is that the results are generated somewhere else than on travis, then that's a different matter. -- Pauli Virtanen
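One way to sidestep the test-ordering concern above is to diff structured reports rather than raw output; a small sketch using py.test's --junit-xml report files (the file names are assumptions):

    # Sketch: compare failures between two runs using py.test's --junit-xml
    # output, so the comparison is independent of test ordering.
    # Generate the inputs with e.g.: py.test --junit-xml=report-old.xml ...
    import xml.etree.ElementTree as ET

    def failing_tests(junit_xml_path):
        # Collect "classname.name" ids of failed or errored test cases.
        failed = set()
        for case in ET.parse(junit_xml_path).iter("testcase"):
            if case.find("failure") is not None or case.find("error") is not None:
                failed.add("{}.{}".format(case.get("classname"), case.get("name")))
        return failed

    new_failures = failing_tests("report-new.xml") - failing_tests("report-old.xml")
    print("{} new failures".format(len(new_failures)))
    for test_id in sorted(new_failures):
        print("  " + test_id)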
05.02.2016, 19:55, Nathaniel Smith wrote:
ABI compatibility. However, as I understand it, backward-incompatible ABI changes in Numpy are not expected in the future. If they were, I note that if you work in the same environment, you can push repeated compilation times close to zero (compared to the time it takes to run the tests) by enabling ccache/f90cache, in a way that requires less configuration.
On Fri, Feb 5, 2016 at 9:55 AM, Nathaniel Smith <njs@pobox.com> wrote:
Others' binary wheels are only available for the versions that are supported. Usually the latest releases, but Anaconda doesn't always have the latest builds of everything. Maybe we want to test against matplotlib master (or a release candidate, or??), for instance. And when we are testing a numpy-abi-breaking release, we'll need to have everything tested against that release. Usually, when you set up a conda environment, it preferentially pulls from the default channel anyway (or any other channel you set up), so we'd only maintain stuff that wasn't readily available elsewhere. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On Fri, Feb 5, 2016 at 1:16 PM, Chris Barker <chris.barker@noaa.gov> wrote:
True, though official project wheels will hopefully solve that soon.
Maybe we want to test against matplotlib master (or a release candidate, or??), for instance.
Generally I think for numpy's purposes we want to test against the latest released version, because it doesn't do end-users much good if a numpy release breaks their environment, and the only fix is hiding in some git repo somewhere :-). But yeah.
And when we are testing a numpy-abi-breaking release, we'll need to have everything tested against that release.
There aren't any current plans to have such a release, but true. -n -- Nathaniel J. Smith -- https://vorpus.org
On Fri, Feb 5, 2016 at 3:24 PM, Nathaniel Smith <njs@pobox.com> wrote:
On Fri, Feb 5, 2016 at 1:16 PM, Chris Barker <chris.barker@noaa.gov> wrote:
OK, this may be more or less helpful, depending on what we want to build against. But a conda environment (maybe tied to a custom channel) really does make a nice contained space for testing that can be set up fast on a CI server. If whoever is setting up a test system/matrix thinks this would be useful, I'd be glad to help set it up. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
FWIW, we (Continuum) are working on a CI system that builds conda recipes. Part of this is testing not only individual packages that change, but also any downstream packages that are also in the repository of recipes. The configuration for this is in https://github.com/conda/conda-recipes/blob/master/.binstar.yml and the project doing the dependency detection is in https://github.com/ContinuumIO/ProtoCI/ This is still being established (particularly, provisioning build workers), but please talk with us if you're interested. Chris, it may still be useful to use docker here (perhaps on the build worker, or elsewhere), also, as the distinction between build machines and user machines is important to make. Docker would be great for making sure that all dependency requirements are met on end-user systems (we've had a few recent issues with libgfortran accidentally missing as a requirement of scipy). Best, Michael On Sat, Feb 6, 2016 at 5:22 PM Chris Barker <chris.barker@noaa.gov> wrote:
(we've had a few recent issues with libgfortran accidentally missing as a requirement of scipy).
On this topic, you may be able to get some milage out of adapting pypa/auditwheel, which can load up extension module `.so` files inside a wheel (or conda package) and walk the shared library dependency tree like the runtime linker (using pyelftools), and check whether things are going to resolve properly and where shared libraries are loaded from. Something like that should be able to, with minimal adaptation to use the conda dependency resolver, check that a conda package properly declares all of the shared library dependencies it actually needs. -Robert On Sat, Feb 6, 2016 at 3:42 PM, Michael Sarahan <msarahan@gmail.com> wrote:
-- -Robert
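For reference, the first step of the dependency walk described above is quite small with pyelftools; a minimal sketch that only lists a module's direct DT_NEEDED entries (the module path is an illustrative assumption, and resolving each entry against the declared package dependencies would still be a separate step):

    # Sketch: list the DT_NEEDED entries (direct shared-library dependencies)
    # of a compiled extension module using pyelftools.
    from elftools.elf.elffile import ELFFile

    def needed_libraries(so_path):
        with open(so_path, "rb") as f:
            elf = ELFFile(f)
            dynamic = elf.get_section_by_name(".dynamic")
            if dynamic is None:
                return []
            return [tag.needed for tag in dynamic.iter_tags()
                    if tag.entry.d_tag == "DT_NEEDED"]

    # The path is an illustrative assumption.
    print(needed_libraries("_fblas.cpython-35m-x86_64-linux-gnu.so"))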
On Sat, Feb 6, 2016 at 3:42 PM, Michael Sarahan <msarahan@gmail.com> wrote:
FWIW, we (Continuum) are working on a CI system that builds conda recipes.
great, could be handy. I hope you've looked at the open-source systems that do this: obvious-ci and conda-build-all. And conda-smithy to help set it all up... Chris, it may still be useful to use docker here (perhaps on the build worker, or elsewhere)
yes -- very handy, I have certainly accidentally brought in other system libs in a build.... Too bad it's Linux only. Though very useful for manylinux. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
Robert, Thanks for pointing out auditwheel. We're experimenting with a GCC 5.2 toolchain, and this tool will be invaluable.

Chris, Both conda-build-all and obvious-ci are excellent projects, and we'll leverage them where we can (particularly conda-build-all). Obvious CI and conda-smithy are in a slightly different space, as we want to use our own anaconda.org build service, rather than write scripts to run on other CI services. With more control, we can do cool things like splitting up build jobs and further parallelizing them on more workers, which I see as very important if we're going to be building downstream stuff.

As I see it, the single, massive recipe repo that is conda-recipes has been a disadvantage for a while in terms of complexity, but now may be an advantage in terms of building downstream packages (how else would dependencies get resolved?). It remains to be seen whether git submodules might replace individual folders in conda-recipes - I think this might give project maintainers more direct control over their packages. The goal, much like ObviousCI, is to enable project maintainers to get their latest releases available in conda sooner, and to simplify the whole CI setup process. We hope we can help each other rather than compete. Best, Michael On Sat, Feb 6, 2016 at 5:53 PM Chris Barker <chris.barker@noaa.gov> wrote:
On Sat, Feb 6, 2016 at 4:11 PM, Michael Sarahan <msarahan@gmail.com> wrote:
I don't think conda-build-all or, for that matter, conda-smithy are fixed to any particular CI server. But anyway, the anaconda.org build service looks nice -- I'll need to give that a try. I've actually been building everything on my own machines anyway so far.
yup -- but the other issue is that conda-recipes didn't seem to be maintained, really...
Great goal! Thanks, -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
One limitation of this approach, AFAIU, is that the downstream versions are pinned by whatever is available from anaconda, correct? Not a big deal per se, just something to keep in mind when looking at the report that there might be false positives. For scipy, for instance, this seems to test 0.16.1. Most (all?) of these are fixed in 0.17.0. At any rate, this is great regardless --- thank you! Cheers, Evgeni
On Wed, Feb 3, 2016 at 10:18 PM, Nathaniel Smith <njs@pobox.com> wrote:
I'm going to do a second beta this weekend and will try putting it up on pypi. The statsmodels failures are a concern; we may need to put off the transition to integer-only indexes. OTOH, if statsmodels can't fix things up, we will have to deal with that at some point. Apparently we also need to do something about invisible deprecation warnings. Python changing the default to ignore them was, IIRC, due to a Python screw-up in backporting PyCapsule to 2.7 and deprecating PyCObject in the process. The easiest way out of that hole was painting it over. Chuck
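For anyone testing the beta against their own code, a minimal way to surface those hidden deprecation warnings (plain warnings-module machinery, nothing numpy-specific):

    import warnings

    # Make DeprecationWarnings visible again; Python ignores them by default.
    # Running the interpreter with -Wd (or -Wall) has the same effect.
    warnings.simplefilter("always", DeprecationWarning)

    import numpy as np
    # ... exercise your own code here; deprecation warnings emitted by numpy
    # will now be printed instead of being silently swallowed.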
On 02/01/2016 04:25 PM, Ralf Gommers wrote:
Well, I don't think anyone else from sklearn picked up on this, and I myself totally forgot the issue for the last two weeks. I think continuously testing against numpy master might actually be feasible for us, but I'm not entirely sure....
participants (17)
- Andreas Mueller
- Antoine Pitrou
- Charles R Harris
- Chris Barker
- Chris Barker - NOAA Federal
- Daπid
- Evgeni Burovski
- Jeff Reback
- Julian Taylor
- Marten van Kerkwijk
- Matthew Brett
- Michael Sarahan
- Nathaniel Smith
- Pauli Virtanen
- Ralf Gommers
- Robert T. McGibbon
- Thomas Caswell