Sorry for the drive-by comments.

Matplotlib uses Cloudflare as an HTTPS proxy with caching.  Jouni K. Seppänen sorted out how to redirect to the Rackspace wheelhouse.

I think monthly releases of a big project are probably too fast (and yearly is too slow).  New versions also contain new bugs >:)


On Tue, Sep 11, 2018 at 6:08 PM Mark Harfouche <> wrote:
Hi Egor, Stephan,

1. I definitely commend Matthew Brett on his work. My suggestion would be to move his infrastructure to run on every PR rather than getting rid of it.
2. (a) Too expensive in terms of workforce:

This is definitely true with the current infrastructure. Simplifying this is non-trivial and would need to be a focused effort.
This discussion aims to find a path toward that goal.
I thank everybody for their input and suggestions, and most importantly, for taking the time to respond to this email.

(b) discounts the release value
Fast releases are probably immensely valuable to the people that raised the underlying issues.

Taking an example from a different library:
A one-line fix in OpenCV that I submitted this week makes it possible for me to pass arrays directly to PIL.
With OpenCV's current release cycle, it may be at least one month (and up to six!) until I see that fix propagate across different computers.
Until then, I either have to recompile OpenCV myself (which is difficult) or just not use the feature I need.
scikit-image is easier to compile (or get from the wheelhouse), but those are all still additional barriers that you have to communicate to your collaborators.

(c) reduces their noticeability
People tend to notice only the features they need. In my projects, I don't always have a ">=" pin for every dependency, because I can often just use whatever version comes from PyPI/conda-forge/Anaconda.
Sometimes even enabling the `conda-forge` channel is difficult for beginners.
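For context, enabling the channel comes down to two standard conda commands (the strict-priority setting is optional but commonly recommended):

```shell
# make conda-forge packages visible (and preferred) for this user
conda config --add channels conda-forge
conda config --set channel_priority strict
```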

(d) very few PRs per release
Once again, even a single PR can be of immense value (e.g. float32 images passing through `img_as_float` unchanged -- I wouldn't know how to change that behavior via monkey patching :/ )
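To make the example concrete, the passthrough behavior that PR enables can be sketched roughly as follows (this is an illustration of the idea, not scikit-image's actual implementation, and it glosses over signed-integer scaling):

```python
import numpy as np

def img_as_float_sketch(image):
    """Convert an image to floating point, but pass float32 (and
    float64) input through unchanged instead of forcing a float64 copy."""
    if image.dtype in (np.float32, np.float64):
        return image  # already floating point: no conversion needed
    # unsigned integer input: scale to [0, 1] as float64
    info = np.iinfo(image.dtype)
    return image.astype(np.float64) / info.max
```

Without the passthrough, every float32 image would silently be doubled in memory, and there is no clean way to patch that from the outside.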


Daily wheel builds are fine; unfortunately, I have a dependency on OpenCV in my pipeline, and as such I am stuck on conda.
I think you were working on getting a shorter URL for the wheelhouse; I think that is great!

Regarding experimenting: I think this definitely needs to be included in the discussion. Thanks for bringing it up.

I think I'll try to summarize your other points in focused issues on github as Juan suggested.


On Tue, Sep 11, 2018 at 4:47 PM Stefan van der Walt <> wrote:
On Tue, 11 Sep 2018 23:16:29 +0300, Egor Panfilov wrote:
> - For the release cycle, I'd, however, agree with you only partially. From
> my standpoint, having weekly/bi-weekly/monthly releases (a) is too expensive
> in terms of workforce, (b) discounts the release value and significantly
> reduces their noticeability, and (c) doesn't really make a lot of sense. In
> my experience, every 4 weeks only a very small number of significant PRs
> are developed and merged. Having said that, I believe, a reasonable cycle
> would be something like 1 release / 2-3 months. Once we see a progression
> in the contribution activity, we can consider discussing further changes in
> the schedule.

In an ideal world, we would be ready to punch out a release at a day's
notice, but would probably only release every three months or so.  A
release is not merely a tagging of the current master, but also a
statement about the readiness of the code it contains.  E.g., we are
more lenient about merging experimental features early on in the release
cycle, and sometimes modify or even remove APIs that we feel are not
mature enough.

Mark, I wonder if some of your issues can be addressed by daily wheel
builds (which we already have)?  Then you can point pip to our Rackspace
wheels container, and install from there.
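For example, pip's `--find-links` flag can point at a wheel directory or index page (the URL below is a placeholder, not the actual Rackspace container address):

```shell
# --pre allows dev builds; --find-links tells pip where the wheels live
pip install --upgrade --pre --find-links <WHEELHOUSE_URL> scikit-image
```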

The 32-bit issues mentioned were long known, because Matthew builds for
that platform daily, and reported the failures a long time ago.  It has
not been fixed because no one has taken the time to investigate
thoroughly (perhaps not that many developers care about 32-bit?).

I fully support attempts to reduce the burden of turning `master` into a release.

One such proposed change was versioneer, but I recall from Mark's previous PR
that it introduced quite a bit of complexity for little gain.  I'm not
sure why the current versioning is considered troublesome; it is stored
in `skimage/`, and is reused everywhere else.
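The single-source pattern being described can be sketched as follows (the version string and the helper are illustrative, not scikit-image's exact layout):

```python
import re

def parse_version(init_text):
    """Extract __version__ from a module's source text without importing
    the package, so setup.py can reuse it at build time."""
    match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", init_text)
    return match.group(1) if match else None

# The package's __init__ carries the single authoritative version string,
# e.g.:  __version__ = "0.15.dev0"
```

setup.py and the docs can then read the version from that one file instead of duplicating the number.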

W.r.t. listing new features for a release, Nathaniel's suggestion of
Towncrier looks sensible, and helps to avoid conflicts.  On the other
hand, the more tools we use, the higher the overhead to contributing.
So, is it really a big problem to update the relevant text files?  If
that process is not documented clearly enough, that is something we can
address right now.
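For reference, a minimal Towncrier setup is a small config plus one news-fragment file per PR (the paths below are hypothetical, not a proposal for an actual layout):

```toml
# pyproject.toml
[tool.towncrier]
package = "skimage"
directory = "doc/release/upcoming_changes"
filename = "doc/release/release_notes.rst"
```

Each PR then adds its own fragment file under that directory, and Towncrier concatenates them at release time, which is how it avoids merge conflicts in a single changelog file.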

W.r.t. adding the wheel building machinery into the skimage source repo, I'm not sure I see how that would be useful, or why that would be a good idea.

W.r.t. backports to 0.14.x: since we no longer support 2.7 on master, this really is the only way to give 2.7 users access to new features. When a backport is easy to do, what motivation do we have for withholding it?

Thanks for your thoughts!
scikit-image mailing list