GSoC '15 Ideas: define general integrators; parallelize integration; higher-order ODEs
Hello everyone,

I am a bit late to write to this mailing list, so apologies for that. My name is Max Mertens; I am an Electrical Engineering student at the University of Ulm, Germany. I have done a few projects in C(++) on physical simulations of multibody dynamics with collision detection, and molecular dynamics with interatomic potentials.

I got interested in SciPy because it makes physical simulations easy, but I have not worked with SciPy so far. I would like to contribute to SciPy as part of GSoC, and am willing to start by providing an "easy fix" to one of the open issues. Is there already a way to perform multibody/molecular dynamics in SciPy, or are contributions needed in this area?

The scipy.integrate module with ode and odeint provides several methods to integrate differential equations. For this I have the following ideas:

* add an interface/class/method to define general integration methods: for now you can specify various integrators like "vode" and "dopri5" as a string. The new code would allow entering a Butcher tableau to define implicit/explicit (embedded) Runge-Kutta methods, which would cover "dopri5" and "dop853" (see [0]), and possibly other general integrator descriptions as needed.

* add distributed integration: Runge-Kutta and similar integrators applied to systems of many differential equations can be parallelized to speed up the calculation. This would distribute the work to multiple threads and/or, if needed for complicated systems like multibody dynamics, to multiple physical machines (I have developed a few ideas on how to accomplish this).

* provide methods to integrate higher-order differential equations: is this needed, or are users of the library expected to express these as multiple first-order differential equations? Could this step be automated?

Do you think this would be a useful contribution to SciPy? Do you have any suggestions?

Thank you for your feedback.
Max Mertens

[0] https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods
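To make the Butcher-tableau idea a bit more concrete, here is a rough sketch of the kind of tableau-driven stepper I have in mind (all names and signatures here are invented for illustration; nothing like this exists in SciPy yet):

import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of the explicit RK method defined by the Butcher tableau (A, b, c)."""
    s = len(b)                                   # number of stages
    k = np.zeros((s,) + np.shape(y))
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, yi)
    return y + h * sum(b[i] * k[i] for i in range(s))

# The classical fourth-order method expressed as a tableau:
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1 / 6.0, 1 / 3.0, 1 / 3.0, 1 / 6.0]
c = [0, 0.5, 0.5, 1]

# Test problem y' = -y, y(0) = 1, integrated to t = 1:
t, h, y = 0.0, 0.1, np.array([1.0])
for _ in range(10):
    y = explicit_rk_step(lambda t, y: -y, t, y, h, A, b, c)
    t += h
print(y, np.exp(-1.0))   # agree to roughly 1e-6

An embedded pair would simply add a second weight vector b* to the same tableau and use the difference of the two results as the error estimate.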
2015-03-17 3:37 GMT+11:00 Max Mertens <max.mail@dameweb.de>:
Hello everyone,
I am a bit late to write to this mailing list, so apologies for that.
My name is Max Mertens; I am an Electrical Engineering student at the University of Ulm, Germany. I have done a few projects in C(++) on physical simulations of multibody dynamics with collision detection, and molecular dynamics with interatomic potentials.
I got interested in SciPy because it makes physical simulations easy, but I have not worked with SciPy so far. I would like to contribute to SciPy as part of GSoC, and am willing to start by providing an "easy fix" to one of the open issues. Is there already a way to perform multibody/molecular dynamics in SciPy, or are contributions needed in this area?
The scipy.integrate module with ode and odeint provides several methods to integrate differential equations. For this I have the following ideas:
* add an interface/class/method to define general integration methods: for now you can specify various integrators like "vode" and "dopri5" as a string. The new code would allow entering a Butcher tableau to define implicit/explicit (embedded) Runge-Kutta methods, which would cover "dopri5" and "dop853" (see [0]), and possibly other general integrator descriptions as needed.
Excellent idea. I would like this very much. I had to write something like this with Butcher tableaux for general implicit Runge-Kutta methods (for differential-algebraic equations rather than ordinary differential equations), but it's a bit too in-house and application-specific to publish. Having this as part of SciPy would be great.
* add distributed integration: Runge-Kutta and similar integrators applied to systems of many differential equations can be parallelized to speed up the calculation. This would distribute the work to multiple threads and/or, if needed for complicated systems like multibody dynamics, to multiple physical machines (I have developed a few ideas on how to accomplish this).
Another excellent idea and something I've had on my own TODO list for a while. I'm not sure what the SciPy policy on parallelization is though; SciPy doesn't seem to deal with it much. I would be very interested to read your ideas on this. Ultimately I suppose I would like my integrator to use a sparse parallel linear solver.
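For what it's worth, the most naive form of this that I can picture is simply farming out the evaluation of the right-hand side, roughly as in the sketch below, using the standard-library concurrent.futures (the names and the toy coupling are invented; since the whole state is pickled to each worker, this only pays off when f is genuinely expensive):

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def rhs_block(args):
    # Derivatives for one block of components; the mean() term is a toy
    # stand-in for an expensive coupling (forces, potentials, ...).
    t, y, start, stop = args
    return -y[start:stop] + y.mean()

def parallel_rhs(t, y, pool, nblocks=4):
    bounds = np.linspace(0, len(y), nblocks + 1).astype(int)
    chunks = pool.map(rhs_block,
                      [(t, y, bounds[i], bounds[i + 1]) for i in range(nblocks)])
    return np.concatenate(list(chunks))

if __name__ == '__main__':
    y = np.ones(10000)
    with ProcessPoolExecutor() as pool:
        print(parallel_rhs(0.0, y, pool)[:3])   # all zeros for this toy problem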
* provide methods to integrate higher-order differential equations: is this needed, or are users of the library expected to express these as multiple first-order differential equations? Could this step be automated?
I think this could be worthwhile too, at the very least for second-order equations, which arise so often in physical applications and for which there are popular special methods like https://en.wikipedia.org/wiki/Newmark-beta_method and https://en.wikipedia.org/wiki/Verlet_integration.

Another reason I had to write my own implicit Runge-Kutta methods was that the differential equations I was dealing with weren't 'ordinary' in the sense of being solvable for the first derivative, so they were 'implicit' or 'differential-algebraic'. This wasn't a particularly difficult generalization, but it is not currently covered by anything in scipy.integrate; e.g. scipy.integrate.ode insists on the form y' = f(t, y), but my equation was more like f(t, y, y') = 0, as in

Hairer, E., C. Lubich, & M. Roche (1989). The numerical solution of differential-algebraic systems by Runge-Kutta methods, Volume 1409 of Lecture Notes in Mathematics. Berlin: Springer

Brenan, K. E., S. L. Campbell, & L. R. Petzold (1996). Numerical solution of initial-value problems in differential-algebraic equations, Volume 14 of Classics in Applied Mathematics. Philadelphia: Society for Industrial and Applied Mathematics

DAEs can be much trickier than ODEs though (as described in those two books and Petzold's other papers), so it is harder to write robust general-purpose programs for them for inclusion in something as high-level as SciPy; this is a large part of why I describe my solutions as too application-specific.

Good luck. There is much worthwhile work to be done here.
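P.S. For your third idea, the reduction in question is mechanical; here is a hand-written sketch of what an automated transformation would produce, for the harmonic oscillator y'' = -y with the existing scipy.integrate.odeint:

import numpy as np
from scipy.integrate import odeint

def oscillator(z, t):
    # z = (y, y'); the second-order equation y'' = -y becomes z' = (y', -y).
    y, ydot = z
    return [ydot, -y]

t = np.linspace(0, 10, 101)
z = odeint(oscillator, [1.0, 0.0], t)
print(np.max(np.abs(z[:, 0] - np.cos(t))))   # small: the exact solution is cos(t)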
2015-03-17 0:42 GMT+01:00 Geordie McBain <gdmcbain@freeshell.org>:
2015-03-17 3:37 GMT+11:00 Max Mertens <max.mail@dameweb.de>:
Hello everyone,
I am a bit late to write to this mailing list, so apologies for that.
My name is Max Mertens; I am an Electrical Engineering student at the University of Ulm, Germany. I have done a few projects in C(++) on physical simulations of multibody dynamics with collision detection, and molecular dynamics with interatomic potentials.
I got interested in SciPy because it makes physical simulations easy, but I have not worked with SciPy so far. I would like to contribute to SciPy as part of GSoC, and am willing to start by providing an "easy fix" to one of the open issues. Is there already a way to perform multibody/molecular dynamics in SciPy, or are contributions needed in this area?
The scipy.integrate module with ode and odeint provides several methods to integrate differential equations. For this I have the following ideas:
* add an interface/class/method to define general integration methods: for now you can specify various integrators like "vode" and "dopri5" as a string. The new code would allow entering a Butcher tableau to define implicit/explicit (embedded) Runge-Kutta methods, which would cover "dopri5" and "dop853" (see [0]), and possibly other general integrator descriptions as needed.
Excellent idea. I would like this very much. I had to write something like this with Butcher tableaux for general implicit Runge-Kutta methods (for differential-algebraic equations rather than ordinary differential equations), but it's a bit too in-house and application-specific to publish. Having this as part of SciPy would be great.
* add distributed integration: Runge-Kutta and similar integrators applied to systems of many differential equations can be parallelized to speed up the calculation. This would distribute the work to multiple threads and/or, if needed for complicated systems like multibody dynamics, to multiple physical machines (I have developed a few ideas on how to accomplish this).
Another excellent idea and something I've had on my own TODO list for a while. I'm not sure what the SciPy policy on parallelization is though; SciPy doesn't seem to deal with it much. I would be very interested to read your ideas on this. Ultimately I suppose I would like my integrator to use a sparse parallel linear solver.
This, for stiff problems and implicit methods! The Python code would not care as long as normal arrays can be passed. Most specific implementations have their own data structures, however.

Code like fipy casts PDE problems to equations which can be solved via pytrilinos, http://trilinos.org/packages/pytrilinos/ in parallel via MPI. You get out numpy arrays; internally something else is used.

The cvode solver of sundials for stiff problems has a fully parallel implementation, and sundials has some RK methods in ARKode. I don't think any of the Python bindings expose that at the moment. I suppose the mapping to a parallel array is not straightforward. Having a look at the parallel implementation there might also give ideas for a general interface, though. Pytrilinos interfaces parts of sundials via Rythmos (http://trilinos.org/docs/dev/packages/rythmos/doc/html/classRythmos_1_1Impli...), but it's unclear to me if that is parallel or not. In any case, the data structures used are not ndarray, but must be mapped to Epetra.Vector. All far too high-level an abstraction to be usable in scipy. Scipy should focus on simple interfaces for ode/dae, but the existing examples of parallel implementations seem to indicate that simple is no longer possible then.

Whatever the approach, scipy should not redo work present in high-level packages like pytrilinos, but instead offer the basis to start from, so people can evolve to those packages if needed.

Benny
* provide methods to integrate higher-order differential equations: is this needed, or are users of the library expected to express these as multiple first-order differential equations? Could this step be automated?
I think this could be worthwhile too, at the very least for second-order equations, which arise so often in physical applications and for which there are popular special methods like https://en.wikipedia.org/wiki/Newmark-beta_method and https://en.wikipedia.org/wiki/Verlet_integration.
Another reason I had to write my own implicit Runge-Kutta methods was that the differential equations I was dealing with weren't 'ordinary' in the sense of being solvable for the first derivative, so they were 'implicit' or 'differential-algebraic'. This wasn't a particularly difficult generalization, but it is not currently covered by anything in scipy.integrate; e.g. scipy.integrate.ode insists on the form y' = f(t, y), but my equation was more like f(t, y, y') = 0, as in
Hairer, E., C. Lubich, & M. Roche (1989). The numerical solution of differential-algebraic systems by Runge-Kutta methods, Volume 1409 of Lecture Notes in Mathematics. Berlin: Springer
Brenan, K. E., S. L. Campbell, & L. R. Petzold (1996). Numerical solution of initial-value problems in differential-algebraic equations, Volume 14 of Classics in Applied Mathematics. Philadelphia: Society for Industrial and Applied Mathematics
DAEs can be much trickier than ODEs though (as described in those two books and Petzold's other papers), so it is harder to write robust general-purpose programs for them for inclusion in something as high-level as SciPy; this is a large part of why I describe my solutions as too application-specific.
Good luck. There is much worthwhile work to be done here.
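To make the f(t, y, y') = 0 form discussed above concrete, here is a minimal backward-Euler sketch that works directly from such a residual, using scipy.optimize.fsolve for the per-step nonlinear solve. This is illustrative only, not an existing SciPy interface:

import numpy as np
from scipy.optimize import fsolve

def residual(t, y, ydot):
    # Toy problem y' + y = 0 written in residual form F(t, y, y') = 0.
    return ydot + y

def backward_euler(residual, y0, t):
    y = [np.atleast_1d(np.asarray(y0, dtype=float))]
    for t0, t1 in zip(t[:-1], t[1:]):
        h, yprev = t1 - t0, y[-1]
        # Solve F(t1, y1, (y1 - yprev)/h) = 0 for y1 at each step.
        y.append(fsolve(lambda y1: residual(t1, y1, (y1 - yprev) / h), yprev))
    return np.array(y)

t = np.linspace(0, 1, 11)
print(backward_euler(residual, 1.0, t)[-1], np.exp(-1.0))   # only roughly equal: first-order method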
@Geordie McBain, Benny Malengier: Thank you for your feedback, and for the online/paper references. @Benny:
This, for stiff problems and implicit methods! The Python code would not care as long as normal arrays can be passed. Do you mean this is needed for stiff/implicit ODEs, or rather that it is difficult to implement for them?
Most specific implementations have their own data structures, however.
Code like fipy casts PDE problems to equations which can be solved via pytrilinos, http://trilinos.org/packages/pytrilinos/ in parallel via MPI. You get out numpy arrays; internally something else is used.
The cvode solver of sundials for stiff problems has a fully parallel implementation, and sundials has some RK methods in ARKode. I don't think any of the Python bindings expose that at the moment. I suppose the mapping to a parallel array is not straightforward. Having a look at the parallel implementation there might also give ideas for a general interface, though. Pytrilinos interfaces parts of sundials via Rythmos (http://trilinos.org/docs/dev/packages/rythmos/doc/html/classRythmos_1_1Impli...), but it's unclear to me if that is parallel or not. In any case, the data structures used are not ndarray, but must be mapped to Epetra.Vector. All far too high-level an abstraction to be usable in scipy. Scipy should focus on simple interfaces for ode/dae, but the existing examples of parallel implementations seem to indicate that simple is no longer possible then.
Whatever the approach, scipy should not redo work present in high-level packages like pytrilinos, but instead offer the basis to start from, so people can evolve to those packages if needed. If I understand you correctly, you suggest not implementing parallel solvers, as those already exist in other libraries, but rather providing a general interface to them?
What about the other ideas, a general interface to define RK (or other) integrators, and methods to automate higher-order to first-order ODE transformation? Do you suggest something similar as a project for me? Regards, Max
2015-03-17 11:12 GMT+01:00 Max Mertens <max.mail@dameweb.de>:
@Geordie McBain, Benny Malengier: Thank you for your feedback, and for the online/paper references.
@Benny:
This, for stiff problems and implicit methods! The Python code would not care as long as normal arrays can be passed. Do you mean this is needed for stiff/implicit ODEs, or rather that it is difficult to implement for them?
I mean it would be great if a parallel backend were an option for the cases where a sparse parallel linear solver is possible.
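Concretely: for a stiff problem an implicit step boils down to one sparse linear solve per step, and that solve is exactly where a parallel backend would plug in. A rough sketch (spsolve below is serial and just a stand-in; the problem setup is illustrative):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 1000
# Stiff test problem: 1-D diffusion, J is the second-difference operator (grid spacing ~1/n).
J = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc') * n ** 2

def backward_euler_step(y, h):
    # One implicit step: solve (I - h J) y_new = y.  In real code the matrix would
    # be factorized once; this linear solve is the part a parallel solver would take over.
    return spsolve(sp.identity(n, format='csc') - h * J, y)

x = np.linspace(0, 1, n)
y = np.exp(-(x - 0.5) ** 2 / 0.01)      # initial bump
for _ in range(100):
    y = backward_euler_step(y, 1e-4)
print(y.max())                          # the bump has diffused; the peak is well below 1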
Most specific implementations have their own data structures, however.
Code like fipy casts PDE problems to equations which can be solved via pytrilinos, http://trilinos.org/packages/pytrilinos/ in parallel via MPI. You get out numpy arrays; internally something else is used.
The cvode solver of sundials for stiff problems has a fully parallel implementation, and sundials has some RK methods in ARKode. I don't think any of the Python bindings expose that at the moment. I suppose the mapping to a parallel array is not straightforward. Having a look at the parallel implementation there might also give ideas for a general interface, though. Pytrilinos interfaces parts of sundials via Rythmos (
http://trilinos.org/docs/dev/packages/rythmos/doc/html/classRythmos_1_1Impli... ),
but it's unclear to me if that is parallel or not. In any case, the data structures used are not ndarray, but must be mapped to Epetra.Vector. All far too high-level an abstraction to be usable in scipy. Scipy should focus on simple interfaces for ode/dae, but the existing examples of parallel implementations seem to indicate that simple is no longer possible then.
Whatever the approach, scipy should not redo work present in high-level packages like pytrilinos, but instead offer the basis to start from, so people can evolve to those packages if needed. If I understand you correctly, you suggest not implementing parallel solvers, as those already exist in other libraries, but rather providing a general interface to them?
No, what I mean is that if a parallel solver is offered, it must remain a simple API, and not a very detailed API geared towards a specific area (like molecular modelling). Scipy will not replace dedicated packages. The methods present should be great in general. I'm not a core scipy developer though; they might have other ideas on what is fitting.

Also, for integrate, scipy is mostly a wrapper layer around academic, well-tested codes. I'm not sure self-written solvers, or pure Python solvers, would be accepted. So you would need to select an existing, well-tested parallel solver and then wrap that.

Personally I like the approach of Maple with their student package: http://www.maplesoft.com/support/help/Maple/view.aspx?path=Student%2fNumeric... The basic integrators are present there. It paints a clear picture: look, you learn this stuff in numerical analysis, so here are the methods, but for really doing ode or dae, use dsolve/numeric http://www.maplesoft.com/support/help/maple/view.aspx?path=dsolve%2fnumeric instead (though that has the classical option too, with the common classical methods: http://www.maplesoft.com/support/help/maple/view.aspx?path=dsolve%2fclassica... ).

Some sort of 'student/classical' version for integrate in scipy is something I always missed.
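To illustrate the kind of 'student/classical' layer I mean (the names are invented; nothing like this is in scipy.integrate today): simple reference implementations whose whole point is to let you see the textbook behaviour, here the convergence order of Euler versus classical RK4:

import numpy as np

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(step, f, y0, t):
    y = [y0]
    for t0, t1 in zip(t[:-1], t[1:]):
        y.append(step(f, t0, y[-1], t1 - t0))
    return np.array(y)

f = lambda t, y: -y                      # exact solution exp(-t)
for n in (10, 20, 40):
    t = np.linspace(0, 1, n + 1)
    for name, step in (('euler', euler_step), ('rk4', rk4_step)):
        print(name, n, abs(solve(step, f, 1.0, t)[-1] - np.exp(-1)))
# Euler's error halves per doubling of n, RK4's drops by ~16x: orders 1 and 4.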
What about the other ideas, a general interface to define RK (or other) integrators, and methods to automate higher-order to first-order ODE transformation?
I did not react to the RK idea as I have no experience there. As I said, if there is a good paper that allows an implementation, then yes, it probably is an option. But mostly, a wrapper around an existing, well-tested codebase for RK seems the fastest and best approach for a student. The current dopri5 and dop853 in scipy seem a very specific choice for RK, but the fact that they have step size control makes them in practice better than whatever standard RK you could devise based on textbooks. If the aim is just to have implementations of the classical methods, e.g. RK variants, available in scipy, then adding those via a clear student/classical package is something I would like, though perhaps the core scipy developers would not.

Benny
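P.S. The step size control in question is essentially this accept/reject loop, sketched here with the simplest embedded pair purely for illustration; dopri5 and dop853 do the same thing with far better-tuned formulas and error norms:

import numpy as np

def adaptive_heun(f, y, t, t_end, h, rtol=1e-6):
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1                 # order 1
        y_heun = y + h / 2 * (k1 + k2)       # order 2, used to continue
        # Normalized error estimate from the embedded (lower-order) result.
        err = max(np.max(np.abs(y_heun - y_euler)) /
                  (rtol * max(1.0, np.max(np.abs(y)))), 1e-12)
        if err <= 1.0:                       # accept the step
            t, y = t + h, y_heun
        h *= min(2.0, max(0.1, 0.9 / np.sqrt(err)))  # grow or shrink the step
    return y

print(adaptive_heun(lambda t, y: -y, np.array([1.0]), 0.0, 1.0, 0.1), np.exp(-1))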
Do you suggest something similar as a project for me?
Regards, Max
Hi Benny,

thanks for your reply. So I see that SciPy is a rather mature and stable environment and should stay like that, and parallel solvers should only be integrated if implemented as a wrapper around another solid library.

I like Maple's approach of providing nice visualization commands to easily compare different integrators. Several of the 'classical' integrators presented there (as well as e.g. SciPy's dopri5; but IIRC only explicit ones) could be covered by a Butcher-tableau-based solver, which reduces the amount of code. This solver in turn could be used to provide Maple-like visualizations. The method could be invoked like ode or odeint, or integrated into those, with a string option to specify the method to be used, or a Butcher tableau itself (a rough sketch of the interface I have in mind follows after the quoted text below). If speed and stability are an issue, I am willing to write efficient C code with a Python interface, analyze/profile it, and compare it to the existing methods.

Regards, Max

On 17.03.2015 12:09, Benny Malengier wrote:
I mean it would be great if a parallel backend were an option for the cases where a sparse parallel linear solver is possible.
...
No, what I mean is that if a parallel solver is offered, it must remain a simple API, and not a very detailed API geared towards a specific area (like molecular modelling). Scipy will not replace dedicated packages. The methods present should be great in general. I'm not a core scipy developer though; they might have other ideas on what is fitting.
Also, for integrate, scipy is mostly a wrapper layer around academic, well-tested codes. I'm not sure self-written solvers, or pure Python solvers, would be accepted. So you would need to select an existing, well-tested parallel solver and then wrap that.
Personally I like the approach of Maple with their student package: http://www.maplesoft.com/support/help/Maple/view.aspx?path=Student%2fNumeric... The basic integrators are present there. It paints a clear picture: look, you learn this stuff in numerical analysis, so here are the methods, but for really doing ode or dae, use dsolve/numeric http://www.maplesoft.com/support/help/maple/view.aspx?path=dsolve%2fnumeric instead (though that has the classical option too, with the common classical methods: http://www.maplesoft.com/support/help/maple/view.aspx?path=dsolve%2fclassica...).
Some sort of 'student/classical' version for integrate in scipy is something I always missed.
...
I did not react to the RK idea as I have no experience there. As I said, if there is a good paper that allows an implementation, then yes, it probably is an option. But mostly, a wrapper around an existing, well-tested codebase for RK seems the fastest and best approach for a student. The current dopri5 and dop853 in scipy seem a very specific choice for RK, but the fact that they have step size control makes them in practice better than whatever standard RK you could devise based on textbooks. If the aim is just to have implementations of the classical methods, e.g. RK variants, available in scipy, then adding those via a clear student/classical package is something I would like, though perhaps the core scipy developers would not.
Benny
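Here is the rough calling interface I have in mind (everything is hypothetical: odeint_rk is an invented name and only fixed-step explicit methods are sketched), showing how a method name and a user-supplied Butcher tableau could share one code path:

import numpy as np

TABLEAUX = {
    'euler': ([[0]], [1.0], [0]),
    'heun':  ([[0, 0], [1, 0]], [0.5, 0.5], [0, 1]),
    'rk4':   ([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]],
              [1 / 6.0, 1 / 3.0, 1 / 3.0, 1 / 6.0], [0, 0.5, 0.5, 1]),
}

def odeint_rk(f, y0, t, method='rk4', tableau=None):
    """Integrate y' = f(t, y); pick the method by name or pass (A, b, c) directly."""
    A, b, c = tableau if tableau is not None else TABLEAUX[method]
    y = [np.atleast_1d(np.asarray(y0, dtype=float))]
    for t0, t1 in zip(t[:-1], t[1:]):
        h, k = t1 - t0, []
        for i in range(len(b)):
            yi = y[-1] + h * sum(A[i][j] * k[j] for j in range(i))
            k.append(np.asarray(f(t0 + c[i] * h, yi)))
        y.append(y[-1] + h * sum(b[i] * k[i] for i in range(len(b))))
    return np.array(y)

t = np.linspace(0, 1, 11)
print(odeint_rk(lambda t, y: -y, 1.0, t, method='rk4')[-1])              # by name
print(odeint_rk(lambda t, y: -y, 1.0, t,
                tableau=([[0, 0], [1, 0]], [0.5, 0.5], [0, 1]))[-1])     # by tableau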
participants (3)
- Benny Malengier
- Geordie McBain
- Max Mertens