The future of the scipy.sandbox and a reminder of upcoming doc-day
Hello,

I started a blog, which has two posts about the NumPy/SciPy code sprint/strategic planning meeting early this month. It doesn't have very much content yet, but I thought it would be worth letting everyone know where it is: http://jarrodmillman.blogspot.com/

The most recent entry addresses some of the reasons we decided to retire the sandbox. I also created a ticket for removing the sandbox: http://projects.scipy.org/scipy/scipy/ticket/573

I will work on further documenting what went on at the sprint and some other things tomorrow during the first NumPy/SciPy doc-day. As a reminder, Friday's doc-day will be virtual: anyone with an internet connection can join in on the scipy channel on irc.freenode.net.

Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
Hi, Some of the packages are already in a scikit, like pyem and svm. I'd like to see arpack in the sparse folder (?) very soon, as some of my code needs a sparse solver (I proposed that it could be moved into a scikit, but it makes sense to keep it in scipy so that sparse solvers are available there). BTW, for the sparse solvers in linsolve, the module name does not show that it is for sparse arrays. Matthieu 2007/12/27, Jarrod Millman <millman@berkeley.edu>:
Hello,
I started a blog, which has two posts about the NumPy/SciPy code sprint/strategic planning meeting early this month. It doesn't have very much content yet, but I thought it would be worth letting everyone know where it is: http://jarrodmillman.blogspot.com/
The most recent entry addresses some of the reasons we decided to retire the sandbox. I also created a ticket for removing the sandbox: http://projects.scipy.org/scipy/scipy/ticket/573
I will work on further documenting what went on at the sprint and some other things tomorrow during the first NumPy/SciPy doc-day. As a reminder, Friday's doc-day will be virtual where anyone with an internet connection can join in on the scipy channel on irc.freenode.net.
Thanks,
--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
_______________________________________________
Scipy-dev mailing list
Scipy-dev@scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-dev
-- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher
On Dec 27, 2007 2:01 PM, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:
Some of the packages already are in a scikit, like pyem and svm.
Several sandbox packages have been removed: ann, pyem, svm, arraysetops, cow, gplt, maskedarray, plt, wavelets, xplt, stats, and oliphant; see http://projects.scipy.org/scipy/scipy/ticket/573
I'd like to see arpack in the sparse folder (?) very soon, as some of my code needs a sparse solver (I proposed that it could be moved into a scikit, but it makes sense to keep it in scipy so that sparse solvers are available there).
Yes, arpack should go into the sparse package. If you have the time, it would be great if you could help get it moved over. Ideally, we can get it moved into scipy.sparse before the 0.7 release around the end of March. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
On Dec 27, 2007 9:11 PM, Jarrod Millman <millman@berkeley.edu> wrote:
I'd like to see arpack in the sparse folder (?) very soon, as some of my code needs a sparse solver (I proposed that it could be moved into a scikit, but it makes sense to keep it in scipy so that sparse solvers are available there).
Yes, arpack should go into the sparse package. If you have the time, it would be great if you could help get it moved over. Ideally, we can get it moved into scipy.sparse before the 0.7 release around the end of March.
How do you see sparse being structured? Currently sparse contains only the sparse matrix classes and a handful of creation functions (e.g. spdiags), and the iterative solvers live in scipy.linalg.iterative. It would be strange to put an eigensolver under sparse and iterative methods for linear systems under linalg. Also, lobpcg should live alongside arpack wherever they end up. I could imagine a structure like:

scipy.iterative.linear (for cg/gmres etc.)
scipy.iterative.eigen (for arpack/lobpcg etc.)

--
Nathan Bell
wnbell@gmail.com
Nathan Bell wrote:
On Dec 27, 2007 9:11 PM, Jarrod Millman <millman@berkeley.edu> wrote:
I'd like to see arpack in the sparse folder (?) very soon, as some of my code needs a sparse solver (I proposed that it could be moved into a scikit, but it makes sense to keep it in scipy so that sparse solvers are available there).

Yes, arpack should go into the sparse package. If you have the time, it would be great if you could help get it moved over. Ideally, we can get it moved into scipy.sparse before the 0.7 release around the end of March.
How do you see sparse being structured? Currently sparse contains only the sparse matrix classes and a handful of creation functions (e.g. spdiags) and the iterative solvers live in scipy.linalg.iterative.
IMHO iterative solvers (eigen- or linear) do not care about the format of matrices they work with - all they need is the matrix action on a vector. Due to this I think they do not belong under scipy.sparse - they should work without change for dense matrices, or even for matrix-like objects that have only the matrix action (A*x) implemented. lobpcg.py works like that already - a user can pass in a dense/sparse matrix or a function.
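The point that iterative solvers need only the matrix action A*x can be sketched concretely. This is an illustrative toy conjugate-gradient loop, not scipy's implementation; the helper name cg_action is invented for the example:

```python
import numpy as np

def cg_action(apply_A, b, x0=None, tol=1e-8, maxiter=200):
    # Toy conjugate gradient solver: it only ever calls apply_A(x),
    # so it is indifferent to how (or whether) A is stored.
    x = np.zeros_like(b) if x0 is None else x0.astype(float).copy()
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# The same solver handles a dense matrix wrapped in a closure ...
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg_action(lambda v: A @ v, b)

# ... or a pure matrix action that never materializes A (here A = 2*I).
y = cg_action(lambda v: 2.0 * v, b)
```

A sparse matrix would be passed the same way (lambda v: A * v), which is exactly why such solvers need not live under scipy.sparse.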
It would be strange to put an eigensolver under sparse and iterative methods for linear systems under linalg. Also, lobpcg should live alongside arpack wherever they end up. I could imagine a structure like:
scipy.iterative.linear (for cg/gmres etc.) scipy.iterative.eigen (for arpack/lobpcg etc.)
+1. Maybe instead of 'iterative' I would use something like 'isolve(r(s))' to indicate their purpose better. regards, r.
On Jan 2, 2008 4:03 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
IMHO iterative solvers (eigen- or linear) do not care about the format of matrices they work with - all they need is the matrix action on a vector. Due to this I think they do not belong under scipy.sparse - they should work without change for dense matrices, or even for matrix-like objects that have only the matrix action (A*x) implemented. lobpcg.py works like that already - a user can pass in a dense/sparse matrix or a function.
+1. Maybe instead of 'iterative' I would use something like 'isolve(r(s))' to indicate their purpose better.
Travis suggested creating scipy.splinalg to include the sparse solvers and other functionality. We could have:

splinalg.factor -- direct factorization methods (e.g. SuperLU)
splinalg.eigen -- sparse eigensolvers (e.g. ARPACK, lobpcg)
splinalg.solve -- sparse solvers for linear systems (e.g. CG, GMRES)

In the process we'd eliminate scipy.linsolve and scipy.linalg.iterative and move that code to splinalg.factor and splinalg.solve respectively. We could then draw a distinction between scipy.linalg (dense linear algebra) and splinalg (sparse linear algebra). We could then move the sparse functions spkron and spdiags to splinalg.construct or something like that. I'm not married to any of the names above, so feel free to offer alternatives.

I agree with your statement that such solvers should not be exclusively for sparse matrices. However, it's important to set them apart from the dense solvers so new users can find the most appropriate solver for their needs.

We should also explore the idea of having a scipy.gallery in the spirit of MATLAB's gallery function. This would be extremely helpful in illustrating non-trivial usage of the sparse module (in tutorials and docstrings) and would facilitate more interesting unittests. Any comments?

--
Nathan Bell
wnbell@gmail.com
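A scipy.gallery entry might look like the following hypothetical sketch. The function name poisson1d is invented here, and a real gallery would presumably build scipy.sparse matrices rather than the dense array used for illustration:

```python
import numpy as np

def poisson1d(n):
    # Tridiagonal 1-D Poisson matrix: 2 on the diagonal, -1 on the
    # off-diagonals -- a stock test problem for solvers and tutorials.
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

A = poisson1d(4)
```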
On Jan 2, 2008 9:37 PM, Nathan Bell <wnbell@gmail.com> wrote:
On Jan 2, 2008 4:03 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
IMHO iterative solvers (eigen- or linear) do not care about the format of matrices they work with - all they need is the matrix action on a vector. Due to this I think they do not belong under scipy.sparse - they should work without change for dense matrices, or even for matrix-like objects that have only the matrix action (A*x) implemented. lobpcg.py works like that already - a user can pass in a dense/sparse matrix or a function.
+1. Maybe instead of 'iterative' I would use something like 'isolve(r(s))' to indicate their purpose better.
Travis suggested creating scipy.splinalg to include the sparse solvers and other functionality. We could have:
splinalg.factor -- direct factorization methods (e.g. SuperLU) splinalg.eigen -- sparse eigensolvers (e.g. ARPACK, lobpcg) splinalg.solve -- sparse solvers for linear systems (e.g. CG, GMRES)
In the process we'd eliminate scipy.linsolve and scipy.linalg.iterative and move that code to splinalg.factor and splinalg.solve respectively. We could then draw a distinction between scipy.linalg -- dense linear algebra and splinalg -- sparse linear algebra. We could then move sparse functions spkron and spdiags to splinalg.construct or something like that.
I'm not married to any of the names above, so feel free to offer alternatives.
I agree with your statement that such solvers should not be exclusively for sparse matrices. However it's important to set them apart from the dense solvers so new users can find the most appropriate solver for their needs.
We should also explore the idea of having a scipy.gallery in the spirit of MATLAB's gallery function. This would be extremely helpful in illustrating non-trivial usage of the sparse module (in tutorials and docstrings) and would facilitate more interesting unittests. Any comments?
No comments, I just agree with everything you said. Actually, one comment - how could one plug more solvers into the above architecture? (pysparse comes to mind) Ondrej
On Jan 2, 2008 9:50 PM, Ondrej Certik <ondrej@certik.cz> wrote:
On Jan 2, 2008 9:37 PM, Nathan Bell <wnbell@gmail.com> wrote:
On Jan 2, 2008 4:03 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
IMHO iterative solvers (eigen- or linear) do not care about the format of matrices they work with - all they need is the matrix action on a vector. Due to this I think they do not belong under scipy.sparse - they should work without change for dense matrices, or even for matrix-like objects that have only the matrix action (A*x) implemented. lobpcg.py works like that already - a user can pass in a dense/sparse matrix or a function.
+1. Maybe instead of 'iterative' I would use something like 'isolve(r(s))' to indicate their purpose better.
Travis suggested creating scipy.splinalg to include the sparse solvers and other functionality. We could have:
splinalg.factor -- direct factorization methods (e.g. SuperLU) splinalg.eigen -- sparse eigensolvers (e.g. ARPACK, lobpcg) splinalg.solve -- sparse solvers for linear systems (e.g. CG, GMRES)
In the process we'd eliminate scipy.linsolve and scipy.linalg.iterative and move that code to splinalg.factor and splinalg.solve respectively. We could then draw a distinction between scipy.linalg -- dense linear algebra and splinalg -- sparse linear algebra. We could then move sparse functions spkron and spdiags to splinalg.construct or something like that.
I'm not married to any of the names above, so feel free to offer alternatives.
I agree with your statement that such solvers should not be exclusively for sparse matrices. However it's important to set them apart from the dense solvers so new users can find the most appropriate solver for their needs.
We should also explore the idea of having a scipy.gallery in the spirit of MATLAB's gallery function. This would be extremely helpful in illustrating non-trivial usage of the sparse module (in tutorials and docstrings) and would facilitate more interesting unittests. Any comments?
No comments, just agree with everything you said.
Actually, one comment - how could one plug more solvers into the above architecture? (pysparse comes to mind)
and blzpack - that one is BSD, so it could even go to scipy. Ondrej
On Jan 2, 2008 2:51 PM, Ondrej Certik <ondrej@certik.cz> wrote:
No comments, just agree with everything you said.
Actually, one comment - how could one plug more solvers into the above architecture? (pysparse comes to mind)
I would think a JDSYM solver like the one in pysparse could live in splinalg.eigen
and blzpack - that one is BSD, so it could even go to scipy.
Likewise for blzpack -- Nathan Bell wnbell@gmail.com
Nathan Bell wrote:
On Jan 2, 2008 4:03 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
IMHO iterative solvers (eigen- or linear) do not care about the format of matrices they work with - all they need is the matrix action on a vector. Due to this I think they do not belong under scipy.sparse - they should work without change for dense matrices, or even for matrix-like objects that have only the matrix action (A*x) implemented. lobpcg.py works like that already - a user can pass in a dense/sparse matrix or a function.
+1. Maybe instead of 'iterative' I would use something like 'isolve(r(s))' to indicate their purpose better.
Travis suggested creating scipy.splinalg to include the sparse solvers and other functionality. We could have:
splinalg.factor -- direct factorization methods (e.g. SuperLU) splinalg.eigen -- sparse eigensolvers (e.g. ARPACK, lobpcg) splinalg.solve -- sparse solvers for linear systems (e.g. CG, GMRES)
In the process we'd eliminate scipy.linsolve and scipy.linalg.iterative and move that code to splinalg.factor and splinalg.solve respectively. We could then draw a distinction between scipy.linalg -- dense linear algebra and splinalg -- sparse linear algebra. We could then move sparse functions spkron and spdiags to splinalg.construct or something like that.
Maybe the construction functions could live directly in the splinalg namespace?
I'm not married to any of the names above, so feel free to offer alternatives.
The names are good. Now what to do with umfpack? The current situation, where the wrappers are in linsolve and the solver is external, is not ideal. I did it this way at a time when there were no scikits and the SuperLU solver included in scipy did not perform well for my matrices. Would it be a good scikit? Or we could contact Tim Davis... Like Ondrej said, more solvers are always needed.
I agree with your statement that such solvers should not be exclusively for sparse matrices. However it's important to set them apart from the dense solvers so new users can find the most appropriate solver for their needs.
Yes, it makes sense to separate dense and sparse solvers, at least until there is some notion of a sparse matrix in numpy and most relevant functions work for 2D dense arrays, dense matrices, and sparse matrices.

We could make a new sparse matrix-like class, which just implements a matrix action via a user-specified function. Is the name MatrixAction ok?

Many solvers expect a particular sparse matrix format. Apart from the fact that this should be properly documented, I would like to issue a warning when an implicit conversion of a sparse format occurs. This could be controlled by some warning-level or verbosity parameter.
We should also explore the idea of having a scipy.gallery in the spirit of MATLAB's gallery function. This would be extremely helpful in illustrating non-trivial usage of the sparse module (in tutorials and docstrings) and would facilitate more interesting unittests. Any comments?
+1 I am sure Nils Wagner has lots of interesting matrices to test eigensolvers. :) r.
On Jan 3, 2008 3:18 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
Maybe the construction functions could live directly in the splinalg namespace?
That's possible. I can only think of a few functions, so a submodule is probably unnecessary.
Now what to do with umfpack? The current situation, where the wrappers are in linsolve and the solver is external, is not ideal. I did it this way at a time when there were no scikits and the SuperLU solver included in scipy did not perform well for my matrices. Would it be a good scikit? Or we could contact Tim Davis...
I think it would make a good scikit. It seems highly unlikely that Tim Davis would make a license change for us, but it's worth asking. Perhaps he'd put an older version in the public domain? Are there BSD licensed sparse LU codes that are superior to SuperLU?
We could make a new sparse matrix-like class, which just implements a matrix action via a user-specified function. Is the name MatrixAction ok?
Currently, the iterative solvers only require a matvec(x) method and perhaps a .shape attribute. I suppose you mean a class: MatrixAction(matvec, shape, rmatvec=None) that's essentially a dummy object which satisfies our expectations in the sparse solvers? I added rmatvec because some iterative solvers require matrix vector products by the transpose. I'm not keen on the name, but the idea is quite good. It's easier to document MatrixAction once than describing the expected interface (i.e. .matvec() and .shape) in each iterative solver's docstring.
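A minimal sketch of such a dummy object, assuming only the .matvec()/.shape/.rmatvec() interface discussed in the thread (the class body is illustrative, not the eventual scipy code):

```python
import numpy as np

class MatrixAction:
    """Illustrative sketch of the 'dummy matrix' idea: an object that
    satisfies the interface the iterative solvers expect (.matvec()
    and .shape) while delegating to user-supplied functions."""

    def __init__(self, matvec, shape, rmatvec=None):
        self.matvec = matvec      # computes A*x
        self.shape = shape
        self._rmatvec = rmatvec   # optionally computes x*A (transpose product)

    def rmatvec(self, x):
        if self._rmatvec is None:
            raise NotImplementedError("MatrixAction does not define x*A")
        return self._rmatvec(x)

# A 3x3 scaling operator defined purely by its action, never as an array.
op = MatrixAction(matvec=lambda x: 3.0 * x, shape=(3, 3))
v = op.matvec(np.ones(3))
```

Documenting this one class once would indeed cover the expected interface for every iterative solver.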
Many solvers expect a particular sparse matrix format. Apart from the fact that this should be properly documented, I would like to issue a warning when an implicit conversion of a sparse format occurs. This could be controlled by some warning-level or verbosity parameter.
I've added a SparseEfficiencyWarning for people who construct matrices using the CSR/CSC formats. I suppose we could do something similar for functions which expect a particular format. Should we use SparseEfficiencyWarning for this situation, or should a new Warning be created? -- Nathan Bell wnbell@gmail.com http://graphics.cs.uiuc.edu/~wnbell/
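One hedged sketch of how a format-expecting function could emit such a warning; the helper ensure_csr and the toy objects are invented for illustration, and the warning class is a stand-in for the one Nathan mentions:

```python
import warnings

class SparseEfficiencyWarning(UserWarning):
    """Stand-in for the warning class discussed above."""

def ensure_csr(A):
    # Hypothetical helper: warn when a function that expects CSR input
    # would have to convert the matrix implicitly.
    if getattr(A, "format", None) != "csr":
        warnings.warn("implicit conversion to CSR; pass a CSR matrix "
                      "to avoid this overhead", SparseEfficiencyWarning)
        # ... conversion would happen here ...
    return A

class FakeLIL:
    format = "lil"   # toy object standing in for a LIL matrix

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    ensure_csr(FakeLIL())
```

Whether this reuses SparseEfficiencyWarning or a new subclass is exactly the open question in the thread; the mechanism is the same either way.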
Nathan Bell wrote:
Now what to do with umfpack? The current situation, where the wrappers are in linsolve and the solver is external, is not ideal. I did it this way at a time when there were no scikits and the SuperLU solver included in scipy did not perform well for my matrices. Would it be a good scikit? Or we could contact Tim Davis...
I think it would make a good scikit.
It seems highly unlikely that Tim Davis would make a license change for us, but it's worth asking. Perhaps he'd put an older version in the public domain?
Yeah, even an older version would be good. But as every version gets (significantly) faster than the previous ones, a scikit should exist for the latest version.
Are there BSD licensed sparse LU codes that are superior to SuperLU?
This could be a good project for a student to find out...
We could make a new sparse matrix-like class, which just implements a matrix action via a user-specified function. Is the name MatrixAction ok?
Currently, the iterative solvers only require a matvec(x) method and perhaps a .shape attribute. I suppose you mean a class: MatrixAction(matvec, shape, rmatvec=None) that's essentially a dummy object which satisfies our expectations in the sparse solvers? I added rmatvec because some iterative solvers require matrix vector products by the transpose.
I'm not keen on the name, but the idea is quite good. It's easier to document MatrixAction once than describing the expected interface (i.e. .matvec() and .shape) in each iterative solver's docstring.
DummyMatrix/DummySparseMatrix then? I am bad at inventing names.
Many solvers expect a particular sparse matrix format. Apart from the fact that this should be properly documented, I would like to issue a warning when an implicit conversion of a sparse format occurs. This could be controlled by some warning-level or verbosity parameter.
I've added a SparseEfficiencyWarning for people who construct matrices using the CSR/CSC formats. I suppose we could do something similar for functions which expect a particular format.
Should we use SparseEfficiencyWarning for this situation, or should a new Warning be created?
SparseEfficiencyWarning is perfect! Thanks for adding this. r.
I'd like to see arpack in the sparse folder (?) very soon, as some of my code needs a sparse solver (I proposed that it could be moved into a scikit, but it makes sense to keep it in scipy so that sparse solvers are available there).
Yes, arpack should go into the sparse package. If you have the time, it would be great if you could help get it moved over. Ideally, we can get it moved into scipy.sparse before the 0.7 release around the end of March.
I can help, but not before mid-January (I have to live without it because of some deadlines). Then, I will catch up with what was done. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher
I'd like to see arpack in the sparse folder (?) very soon, as some of my code needs a sparse solver (I proposed that it could be moved into a scikit, but it makes sense to keep it in scipy so that sparse solvers are available there).
Yes, arpack should go into the sparse package. If you have the time, it would be great if you could help get it moved over. Ideally, we can get it moved into scipy.sparse before the 0.7 release around the end of March.
I have some time to do this (besides, I need the package), so if the whereabouts of arpack are settled (i.e. where it should be put and with which interface), I can help moving arpack from the sandbox to the trunk and applying the long-awaited patch (if I'm given commit privileges). Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher
On Jan 22, 2008 7:43 AM, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:
Yes, arpack should go into the sparse package. If you have the time, it would be great if you could help get it moved over. Ideally, we can get it moved into scipy.sparse before the 0.7 release around the end of March.
I have some time to do this (besides, I need the package), so if the whereabouts of arpack are settled (i.e. where it should be put and with which interface), I can help moving arpack from the sandbox to the trunk and applying the long-awaited patch (if I'm given commit privileges).
AFAIK the current proposal is as follows:

scipy.sparse
  Will contain the sparse matrix classes and perhaps construction functions (e.g. spdiags)

scipy.splinalg
  New home for sparse linear algebra (i.e. anything that has a dense analog in scipy.linalg)
  Possible home for sparse construction functions (e.g. spkron)

splinalg.eigen
  Sparse eigensolvers:
    sandbox.lobpcg -> splinalg.eigen.lobpcg
    sandbox.arpack -> splinalg.eigen.arpack
  a function splinalg.eigen.eigs() should support a simplified interface to ARPACK, without exposing many ARPACK-specific parameters (allowing the backend to be changed in the future)

splinalg.isolve
  Iterative solvers for linear systems (e.g. cg, gmres):
    linalg.iterative -> splinalg.isolve

splinalg.dsolve
  Direct solvers for linear systems (e.g. SuperLU):
    scipy.linsolve -> splinalg.dsolve
    scipy.linsolve.umfpack -> scikit

--
Nathan Bell
wnbell@gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
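The simplified eigs() front end proposed here can be sketched as a toy; numpy's dense eigensolver stands in for the ARPACK backend, purely to show the shape of the call (the real routine would accept sparse operators and defer to ARPACK):

```python
import numpy as np

def eigs(A, k=6):
    """Toy stand-in for the proposed splinalg.eigen.eigs(): a simple
    front end that hides backend-specific parameters.  Here the
    'backend' is numpy's dense eigensolver; swapping in ARPACK later
    would not change the signature."""
    A = np.asarray(A)
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-np.abs(vals))[:k]   # keep the k largest-magnitude modes
    return vals[order], vecs[:, order]

vals, vecs = eigs(np.diag([1.0, 5.0, 3.0]), k=2)
```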
AFAIK the current proposal is as follows
scipy.sparse Will contain the sparse matrix classes and perhaps construction functions (e.g. spdiags)
scipy.splinalg New home for sparse linear algebra (i.e. anything that has a dense analog in scipy.linalg) Possible home for sparse construction functions (e.g. spkron)
splinalg.eigen Sparse eigensolvers: sandbox.lobpcg -> splinalg.eigen.lobpcg sandbox.arpack -> splinalg.eigen.arpack
a function splinalg.eigen.eigs() should support a simplified interface to ARPACK, without exposing many ARPACK-specific parameters (allowing the backend to be changed in the future)
splinalg.isolve Iterative solvers for linear systems (e.g. cg, gmres): linalg.iterative -> splinalg.isolve
splinalg.dsolve Direct solvers for linear systems (e.g. SuperLU): scipy.linsolve -> splinalg.dsolve scipy.linsolve.umfpack -> scikit
Is there any concern about this? If not, it may be time to make it happen? Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher
Matthieu Brucher wrote:
AFAIK the current proposal is as follows
scipy.sparse Will contain the sparse matrix classes and perhaps construction functions (e.g. spdiags)
scipy.splinalg New home for sparse linear algebra (i.e. anything that has a dense analog in scipy.linalg) Possible home for sparse construction functions (e.g. spkron)
splinalg.eigen Sparse eigensolvers: sandbox.lobpcg -> splinalg.eigen.lobpcg sandbox.arpack -> splinalg.eigen.arpack
a function splinalg.eigen.eigs() should support a simplified interface to ARPACK, without exposing many ARPACK-specific parameters (allowing the backend to be changed in the future)
splinalg.isolve Iterative solvers for linear systems (e.g. cg, gmres): linalg.iterative -> splinalg.isolve
splinalg.dsolve Direct solvers for linear systems (e.g. SuperLU): scipy.linsolve -> splinalg.dsolve scipy.linsolve.umfpack -> scikit
Is there any concern about this? If not, it may be time to make it happen?
Looks good to me. -Travis
Matthieu Brucher wrote:
AFAIK the current proposal is as follows
scipy.sparse Will contain the sparse matrix classes and perhaps construction functions (e.g. spdiags)
scipy.splinalg New home for sparse linear algebra (i.e. anything that has a dense analog in scipy.linalg) Possible home for sparse construction functions (e.g. spkron)
splinalg.eigen Sparse eigensolvers: sandbox.lobpcg -> splinalg.eigen.lobpcg sandbox.arpack -> splinalg.eigen.arpack
a function splinalg.eigen.eigs() should support a simplified interface to ARPACK, without exposing many ARPACK-specific parameters (allowing the backend to be changed in the future)
splinalg.isolve Iterative solvers for linear systems (e.g. cg, gmres): linalg.iterative -> splinalg.isolve
splinalg.dsolve Direct solvers for linear systems (e.g. SuperLU): scipy.linsolve -> splinalg.dsolve scipy.linsolve.umfpack -> scikit
Is there any concern about this? If not, it may be time to make it happen?
+1 Besides the sparse stuff, I would also like to have support for the symmetric eigenvalue functions of lapack that are currently in the symeig package, but are not in scipy.linalg. lobpcg needs them to work correctly. r.
On Jan 24, 2008 2:31 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
Besides the sparse stuff, I would also like to have support for the symmetric eigenvalue functions of lapack that are currently in the symeig package, but are not in scipy.linalg. lobpcg needs them to work correctly.
We should formalize your idea to make a standard "dummy" matrix soon also. How about the name LinearOperator? Our goal with this should be to eliminate all of the code that checks for matvec(), psolve(), etc. I imagine a definition like so:

class LinearOperator:
    def __init__(self, shape, matvec, rmatvec=None, psolve=None):
        self.shape = shape
        self.matvec = matvec
        if rmatvec is not None:
            self.rmatvec
        else:
            def rmatvec(x):
                raise NotImplementedError('LinearOperator does not define the operation x*A')

A question arises when dealing with psolve() (the preconditioner). We could either continue checking for the existence of psolve() in each method, or we could make LinearOperator have a dummy routine psolve(x) -> x and then write the methods so that the preconditioner is always applied.

The downside to this approach is that unnecessary copies may be performed. OTOH one could write the method to avoid such problems (at worst, by checking for psolve() as is currently done). Ideas?

--
Nathan Bell
wnbell@gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
Nathan Bell wrote:
On Jan 24, 2008 2:31 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
Besides the sparse stuff, I would also like to have support for the symmetric eigenvalue functions of lapack that are currently in the symeig package, but are not in scipy.linalg. lobpcg needs them to work correctly.
We should formalize your idea to make a standard "dummy" matrix soon also. How about the name LinearOperator? Our goal with this should be to eliminate all of the code that checks for matvec(), psolve(), etc.
Good name!
I imagine a definition like so:
class LinearOperator:
    def __init__(self, shape, matvec, rmatvec=None, psolve=None):
        self.shape = shape
        self.matvec = matvec
if rmatvec is not None: self.rmatvec
self.rmatvec = rmatvec
else: def rmatvec(x): raise NotImplementedError('LinearOperator does not define the operation x*A")
A question arises when dealing with psolve() (the preconditioner). We could either continue checking for the existence of psolve() in each method, or we could make LinearOperator have a dummy routine psolve(x) -> x and then write the methods so that the preconditioner is always applied.
The downside to this approach is that unnecessary copies may be performed. OTOH one could write the method to avoid such problems (at worst, by checking for psolve() as is currently done). Ideas?
I am not familiar with the internals of scipy.linalg, but looking at iterative.py, it seems to me that the functions in it check for psolve, and if it is not defined on A, they use the no-op method psolve(x) -> x. So there is a precedent :)

IMHO if one must use sparse matrices, a (possible) copy of a vector or two does not matter much, as the main memory+performance hog is the matrix. -> I would start with the dummy psolve. When it is done, profiling may indicate some better way.

A side-note: currently, the iterative solvers are defined as, for example: bicg(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, callback=None) - the preconditioner is passed in as an attribute of A. This is not too transparent, IMHO. It might be better to use instead bicg(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, callback=None, precond=None), where both A and precond would be LinearOperator instances. The preconditioning would then be performed by precond.matvec/rmatvec.

just my 1.5c
r.
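The proposed calling convention (preconditioner as an explicit argument, with a no-op default) might look like the sketch below; the minimal LinearOperator stand-in and the pcg helper are invented for illustration, not taken from scipy:

```python
import numpy as np

class LinearOperator:
    """Minimal stand-in for the interface discussed in the thread."""
    def __init__(self, shape, matvec):
        self.shape = shape
        self.matvec = matvec

def pcg(A, b, precond=None, tol=1e-10, maxiter=100):
    # Preconditioned CG with the preconditioner as an explicit argument;
    # when precond is None, a no-op (identity) preconditioner is used,
    # mirroring the dummy-psolve idea above.
    M = precond if precond is not None else LinearOperator(A.shape, lambda r: r)
    x = np.zeros_like(b)
    r = b - A.matvec(x)
    z = M.matvec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A.matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M.matvec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

Amat = np.array([[4.0, 1.0], [1.0, 3.0]])
A = LinearOperator((2, 2), lambda v: Amat @ v)
jacobi = LinearOperator((2, 2), lambda r: r / np.diag(Amat))  # diagonal scaling
x = pcg(A, np.array([1.0, 2.0]), precond=jacobi)
```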
On Jan 24, 2008 8:02 AM, Robert Cimrman <cimrman3@ntc.zcu.cz> wrote:
currently, the iterative solvers are defined as, for example: bicg(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, callback=None) - the preconditioner is passed in as an attribute of A. This is not too transparent, IMHO.
It might be better to use instead bicg(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, callback=None, precond = None), where both A, precond would be LinearOperator instances. The preconditioning would then be performed by precond.matvec/rmatvec.
I agree, that would be better. -- Nathan Bell wnbell@gmail.com http://graphics.cs.uiuc.edu/~wnbell/
On Jan 23, 2008 2:46 PM, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:
Is there any concern about this? If not, it may be time to make it happen?
I should have some time this weekend to make the changes. You're welcome to start now if you like. -- Nathan Bell wnbell@gmail.com http://graphics.cs.uiuc.edu/~wnbell/
2008/1/24, Nathan Bell <wnbell@gmail.com>:
On Jan 23, 2008 2:46 PM, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:
Is there any concern about this? If not, it may be time to make it happen?
I should have some time this weekend to make the changes. You're welcome to start now if you like.
Unfortunately I don't think I have commit privileges, so I'm looking forward to seeing your changes :) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher
On Dec 27, 2007 2:10 PM, Jarrod Millman <millman@berkeley.edu> wrote:
Hello,
I started a blog, which has two posts about the NumPy/SciPy code sprint/strategic planning meeting early this month. It doesn't have very much content yet, but I thought it would be worth letting everyone know where it is: http://jarrodmillman.blogspot.com/
Any objections if I merge Min's work into the trunk? I'm afraid if we don't do it soon, with Matthew's work on testing, it will get out of date and hard to merge. All of that is confined to weave alone, and was pure bugfixing.

I'm running the tests right now and will do it in two stages:

1. update the weave cleanup branch to current trunk status, so there's a clear commit of the differences.
2. merge the weave changes back into the trunk and then commit that.

I'm not an svn merge expert, so if anyone spots a huge blunder-in-the-making, by all means speak up...

Cheers,

f
On Dec 27, 2007 2:35 PM, Fernando Perez <fperez.net@gmail.com> wrote:
Any objections if I merge Min's work into the trunk? I'm afraid if we don't do it soon, with Matthew's work on testing, it will get out of date and hard to merge. All of that is confined to weave alone, and was pure bugfixing.
+1
I'm running the tests right now and will do it in two stages:
1. update the weave cleanup branch to current trunk status, so there's a clear commit of the differences. 2. merge the weave changes back into the trunk and then commit that.
I'm not an svn merge expert, so if anyone spots a huge blunder-in-the-making, by all means speak up...
That sounds right to me. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Fernando Perez wrote:
On Dec 27, 2007 2:10 PM, Jarrod Millman <millman@berkeley.edu> wrote:
Hello,
I started a blog, which has two posts about the NumPy/SciPy code sprint/strategic planning meeting early this month. It doesn't have very much content yet, but I thought it would be worth letting everyone know where it is: http://jarrodmillman.blogspot.com/
Any objections if I merge Min's work into the trunk? I'm afraid if we don't do it soon, with Matthew's work on testing, it will get out of date and hard to merge. All of that is confined to weave alone, and was pure bugfixing. I'm running the tests right now and will do it in two stages:
1. update the weave cleanup branch to current trunk status, so there's a clear commit of the differences. 2. merge the weave changes back into the trunk and then commit that.
I'm not an svn merge expert, so if anyone spots a huge blunder-in-the-making, by all means speak up...
Go ahead and do it. I'm never an expert on svn merge until *after* I've done the merge, either, but that's why we have version control :-) -Travis
participants (7)
- Fernando Perez
- Jarrod Millman
- Matthieu Brucher
- Nathan Bell
- Ondrej Certik
- Robert Cimrman
- Travis E. Oliphant