The #8414 PR on GitHub (https://github.com/scipy/scipy/pull/8414) introduces an `Optimize` class for `scipy.optimize`. The `Optimize` class can be advanced like an iterator, or advanced to convergence with the `solve` method.

The PR is intended to provide a cleaner interface to the minimisation process, with the user able to interact with the solver during iteration. This allows variation of solver hyper-parameters, or other user-definable interactions. As such, `callback`s become redundant, since you can access all the optimizer information required. Other historical parameters, such as `disp`, may also become redundant in these objects. This is also a chance for a ground-zero approach to how people interact with optimizers.

The PR is intended as a Request For Comments to gain feedback on the architecture of the solvers. Given that this is a key part of the infrastructure it may attract a large amount of feedback; if I don't reply to it all, bear in mind that I don't have unlimited time.

In its current state I've implemented `Optimizer`, `Function`, `LBFGS` and `NelderMead` classes. `LBFGS` and `NelderMead` are called by the corresponding functions already in existence. The current test suite passes, but no unit tests have been written for the new functionality; I'd prefer to add such a specific test suite once the overall architecture is settled.

These optimisers were straightforward to translate. However, I don't know how this approach would translate to other minimisers, or to things like `least_squares`. If a design could be agreed, then perhaps the best approach would be to tackle the remaining optimizers on a case-by-case basis once this PR is merged.
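Usage of such an iterate-or-solve interface might look roughly like the following toy sketch. The `GradientDescent` class, its parameters, and the `solve` signature below are invented for illustration; this is not the actual PR code.

```python
import numpy as np

# Toy stand-in for the proposed class-based interface: step with next(),
# or run to convergence with solve(). Purely illustrative.
class GradientDescent:
    def __init__(self, fun, grad, x0, step=0.1, gtol=1e-8):
        self.fun, self.grad = fun, grad
        self.x = np.asarray(x0, dtype=float)
        self.step, self.gtol = step, gtol

    def __iter__(self):
        return self

    def __next__(self):
        g = self.grad(self.x)
        if np.linalg.norm(g) < self.gtol:
            raise StopIteration
        self.x = self.x - self.step * g
        return self.x

    def solve(self, maxiter=1000):
        for _ in range(maxiter):
            try:
                next(self)
            except StopIteration:
                break
        return self.x

# Minimize f(x) = (x - 3)^2; the user can step manually...
opt = GradientDescent(lambda x: (x - 3) ** 2, lambda x: 2 * (x - 3), x0=[0.0])
x1 = next(opt)                 # one iteration
# ...adjust a hyper-parameter mid-run (the kind of interaction the PR
# wants to enable)...
opt.step = 0.2
# ...and then run to convergence.
xmin = opt.solve()
```

This is exactly the kind of interaction that currently requires a `callback`: between calls to `next` the user owns the control flow.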
I can use the MATLAB optimizers and `solve_ivp` as points of reference. When compared to MATLAB, the current interface stacks up pretty well. However, both MATLAB and `scipy.optimize` have a less-than-ideal interface for gathering optimization history.

Perhaps it would be more useful to align `scipy.optimize` with `solve_ivp`. In both cases you're calculating a new point using the history of the previous points (though of course the goal, and the formulas, are different). The difference in user interface means that `scipy.optimize` only returns the last point, whereas `solve_ivp` returns the entire solution history. It's obvious why: a solution to a set of differential equations mathematically must span the domain of interest, whereas an optimization result mathematically consists of only a single point. But there is no reason, at least that I can tell, not to return all the points that we tried while running the optimizer. It would at least provide a much simpler way to understand what the optimizer is doing.
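The contrast between the two current APIs is easy to demonstrate (a small sketch; `rosen` is SciPy's Rosenbrock test function, and the tight `xatol`/`fatol` options are chosen here just to make the run converge tightly):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize, rosen

# solve_ivp hands back the whole trajectory: sol.t and sol.y contain
# every accepted step.
sol = solve_ivp(lambda t, y: -y, (0, 5), [1.0])

# minimize returns only the final point; to see the path the optimizer
# took, the current API forces you to record it yourself via `callback`.
history = []
res = minimize(rosen, x0=[1.3, 0.7], method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8},
               callback=lambda xk: history.append(np.copy(xk)))
# `history` now holds one point per iteration; `res.x` is just the last one.
```

Returning the full history by default would make `minimize` look more like `solve_ivp` in this respect.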
The iteration-based nature will allow the interested user to keep all the history, but at the moment they'll have to do it themselves. One suggestion was to create an optimisation runner to control how the iteration runs through. This is also a golden opportunity to change how we specify bounds, etc.
_______________________________________________ SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
Hi,

On Wed, 2018-02-14 at 14:21 +1100, Andrew Nelson wrote:
The #8414 PR on github (https://github.com/scipy/scipy/pull/8414) introduces an `Optimize` class for `scipy.optimize`. The `Optimize` class can be advanced like an iterator, or be advanced to convergence with the `solve` method.
I think it would be necessary to make a clear case for what is actually gained with such a reorganization. This would be the third optimizer API to the same solvers, and at first sight it's not clear to me that there is a strong need for it.

One point that one can argue is that

    solver = ...
    for foo in solver:
        some_code

is identical with regard to functionality to

    def callback(.., solver_info):
        some_code

    solver(..., callback=callback)

with the difference that the latter does not leak unnecessary private implementation details to the user code, requires only small API changes, and won't require extensive code reorganization.

The use cases where one needs to continuously monitor convergence in minimization are also somewhat expert-oriented, and for most users I expect such an API will not be as useful. (The situation with ODE solvers is, I think, somewhat different, as with them one often does not even know a priori the time up to which to integrate.)

Writing class-based solvers requires rewriting code in a reverse-communication style, which inverts the structure of control flow. This can result in code that is more complex and more difficult to maintain. There are some tricks one can do to retain a more natural structure, such as using generators/coroutines, but in my experience these are not always so nice to use in practice in Python, and they also bring more complexity.

The existing solvers in SciPy, apart from L-BFGS-B, are IIRC not written in a reverse-communication style, and converting them would be a significant rewrite, particularly so for the solvers written in C/Fortran.

The same applies to tuning solver parameters during the iteration. The solvers AFAIK have not been designed with parameters that can be adjusted during a run, and one would probably have to think carefully about each one (instead of e.g. exposing them as class attributes).

So, I think there are several questions that one should consider here.

Pauli
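The generator trick mentioned above can be sketched concretely: the solver keeps its natural loop structure, and `yield` hands control back to the caller after every iteration. This toy gradient-descent generator is invented for illustration; it is not SciPy code.

```python
# A solver written as a generator: the loop reads top-to-bottom (no
# reverse communication), yet the caller regains control each iteration.
def gradient_descent(grad, x0, step=0.1, gtol=1e-8, maxiter=500):
    x = x0
    for _ in range(maxiter):
        g = grad(x)
        if abs(g) < gtol:
            return            # converged; generator finishes
        x = x - step * g
        yield x               # caller can inspect/stop here

# The caller can monitor or abort the iteration without any callback:
xs = list(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))
last = xs[-1]
```

The flip side, as noted, is that generators bring their own complexity, and none of this helps the C/Fortran solvers, which would still need a genuine reverse-communication rewrite.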
Pauli,

What do you think about returning the entire optimization history, similar to what is done with the ODE solvers, as I suggested?

I think it would be necessary to make a clear case for what is actually gained with such a reorganization. This would be the third optimizer API to the same solvers, and at first sight it's not clear to me that there is a strong need for it.

I agree. The ODE solvers were previously implemented with the iterative method, which was found to be less-than-ideal.
This would be the third optimizer API to the same solvers

I agree. The ODE solvers were previously implemented with the iterative method, which was found to be less-than-ideal.

What references are there on these? I've found [PR #6326] on the ODE solvers. What else is out there?

Scott

[PR #6326]: https://github.com/scipy/scipy/pull/6326
A long time ago, I wrote a class-based optimization framework. I'm not sure people actually care about class-based optimizers, but if you want to create one, I would suggest you take a look at scikits.optimization if classes are what you are looking for here.

Cheers,

Matthieu
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Scott Sievert and I have put a lot of work into preparing a draft of the PEP for class-based scalar minimizers:

https://github.com/andyfaff/scipy/blob/a52bb4f9029389da3ab072c92c609d71ed6943c6/PEP/1-Optimizer.rst

where we've tried to address comments already made in this thread, and from the WIP GitHub PR. Scott and I look forward to hearing any comments/concerns/feedback about the proposal. We can field any questions and address them in an updated PEP, as well as on here.

Andrew.

p.s. Ralf/Pauli, could we add the PEP to a scipy/PEP, or scipy/scipep, repo? How should we discuss this, and any further PEPs? Should we have a scipep process, or shall we keep things simple?
From the (beautifully written) draft PEP:
"Different optimization algorithms can inherit from Optimizer, with each of the subclasses overriding the __next__ method ..."

I'm unclear on whether this approach would allow something like a parallel implementation of Nelder-Mead.

Phillip
would allow something like a parallel implementation of Nelder-Mead.
At the moment the __next__ method would consist of the logic of the existing loop inside _minimize_neldermead, which is done serially. I was not aware of a parallel version of NM, but a quick search reveals there is something along those lines. That's not in scope here, but it could be added later.
--
_____________________________________
Dr. Andrew Nelson
_____________________________________
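The subclassing scheme quoted from the draft PEP can be sketched as follows. Only the base-class shape (`Optimizer` owning the iterator protocol and `solve`, subclasses overriding `__next__`) follows the PEP text; the `TernarySearch` subclass and all its parameters are invented here purely to show one step of an algorithm living in `__next__`.

```python
class Optimizer:
    """Base class: owns the iterator protocol and the solve() driver."""
    def __init__(self, fun, x0):
        self.fun = fun
        self.x = x0
        self.nit = 0

    def __iter__(self):
        return self

    def __next__(self):
        # Each algorithm implements one iteration here and raises
        # StopIteration when converged.
        raise NotImplementedError

    def solve(self, maxiter=1000):
        for _ in range(maxiter):
            try:
                next(self)
            except StopIteration:
                break
        return self.x


class TernarySearch(Optimizer):
    """Toy 1-D minimizer: shrink a bracket around a unimodal minimum."""
    def __init__(self, fun, lo, hi, tol=1e-10):
        super().__init__(fun, 0.5 * (lo + hi))
        self.lo, self.hi, self.tol = lo, hi, tol

    def __next__(self):
        if self.hi - self.lo < self.tol:
            raise StopIteration
        third = (self.hi - self.lo) / 3.0
        m1, m2 = self.lo + third, self.hi - third
        if self.fun(m1) < self.fun(m2):
            self.hi = m2         # minimum lies in [lo, m2]
        else:
            self.lo = m1         # minimum lies in [m1, hi]
        self.x = 0.5 * (self.lo + self.hi)
        self.nit += 1
        return self.x


# Minimize f(x) = (x - 2)^2 on [0, 5].
opt = TernarySearch(lambda x: (x - 2.0) ** 2, lo=0.0, hi=5.0)
xmin = opt.solve()
```

A serial Nelder-Mead fits the same mould (its existing loop body becomes `__next__`); a parallel variant would need `__next__` to evaluate several trial points concurrently, which the base class does not preclude.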
For future reference, parallel Nelder-Mead is described at doi:10.1007/s10614-007-9094-2, and some performance results are at https://scwu.io/f/Parallelization_Nelder_Mead_Simplex_Algorithm_Abstract.pdf.

--
_____________________________________
Dr. Andrew Nelson
_____________________________________
Didn't OpenOpt take some steps in this direction?

http://courses.csail.mit.edu/6.867/wiki/images/6/6e/Qp-openopt.pdf

Just making sure anything useful doesn't get overlooked, since I did not see it mentioned.

Alan Isaac
On Mon, Mar 5, 2018 at 8:58 PM, Alan Isaac <alan.isaac@gmail.com> wrote:
Didn't OpenOpt take some steps in this direction? http://courses.csail.mit.edu/6.867/wiki/images/6/6e/Qp-openopt.pdf Just making sure anything useful doesn't get overlooked, since I did not see this mentioned.
Hard to say, since OpenOpt has disappeared AFAIK. There's no repo to be found and openopt.org has been offline for a while. Ralf
On Sun, Mar 4, 2018 at 7:05 PM, Andrew Nelson <andyfaff@gmail.com> wrote:

p.s. Ralf/Pauli, could we add the PEP to a scipy/PEP, or scipy/scipep, repo? How should we discuss this, and any further PEPs? Should we have a scipep process, or shall we keep things simple?
I'd suggest a separate repo, and no custom process, but rather just following what is done for Python Enhancement Proposals: discussion of major things on this list, more detailed things on a PR on that new repo. Unless there are other opinions, I can create the repo.

There's also https://github.com/numpy/neps, for which build infrastructure is in progress (I expect/hope), so we should be able to steal that soon. So for now just open a PR on the new, empty repo, I'd say.

Your proposal looks quite comprehensive and well written. I would suggest following the structure of PEPs a little more closely (https://www.python.org/dev/peps/pep-0001/#what-belongs-in-a-successful-pep), e.g. add the metadata and copyright bits, and use "motivation" and "rationale" as section headers for some of the content that you have.

Ralf
On Sun, 04 Mar 2018 23:41:36 -0800, Ralf Gommers wrote:
Your proposal looks quite comprehensive and well written. I would suggest following the structure of PEPs a little more closely ( https://www.python.org/dev/peps/pep-0001/#what-belongs-in-a-successful-pep), e.g. add the metadata and copyright bits, and use "motivation" and "rationale" as section headers for some of the content that you have.
We've tuned the Python PEP specification a bit for NumPy; see

https://github.com/numpy/numpy/blob/master/doc/neps/nep-0000.rst

We're now finalizing the build machinery (using CircleCI, much the same as what is being used by SciPy).

Best regards,
Stéfan
On Mon, Mar 5, 2018 at 1:34 PM, Stefan van der Walt <stefanv@berkeley.edu> wrote:
We've tuned the Python PEP specification a bit for NumPy. See
https://github.com/numpy/numpy/blob/master/doc/neps/nep-0000.rst
Thanks, that seems fine to copy. The link to the template in nep-0000 is broken; here it is: https://github.com/numpy/numpy/blob/master/doc/neps/nep-template.rst

Ralf
participants (9):

- Alan Isaac
- Andrew Nelson
- Matthieu Brucher
- Pauli Virtanen
- Phillip Feldman
- Ralf Gommers
- Scott Sievert
- Stefan van der Walt
- xoviat