There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2). In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations because my users might have any version... :(

Jon

Jon Peirce
Nottingham University
http://www.psychopy.org
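[Editor's note: the "try both locations" shim Jon describes can be written as a try/except import. This is a sketch; the fallback import targets the old pre-1.0 location and is hypothetical for 2006-era releases, since on any modern NumPy the first import succeeds and numpy.linalg.old no longer exists.]

```python
# Sketch of a version-compatibility shim: prefer the new-style name,
# fall back to the Numeric-era location on old (pre-1.0) releases.
try:
    from numpy.linalg import pinv as generalized_inverse
except ImportError:
    from numpy.linalg.old import generalized_inverse  # pre-1.0 only
```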
Jon Peirce schrieb:
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).
In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations because my users might have any version... :(
Maybe I don't understand, but what's wrong with numpy.linalg.pinv? sven
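[Editor's note: for readers unfamiliar with the function, a minimal sketch of numpy.linalg.pinv, written with today's numpy conventions (the np alias and @ operator are modern, not 2006-era):]

```python
import numpy as np

# a non-square matrix has no ordinary inverse, but pinv still works
a = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])
p = np.linalg.pinv(a)              # the Moore-Penrose pseudoinverse

print(p.shape)                     # (2, 3): transposed shape of a
print(np.allclose(a @ p @ a, a))   # True: defining property holds
```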
Sven Schreiber wrote:
Jon Peirce schrieb:
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).
In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations because my users might have any version... :(
Maybe I don't understand, but what's wrong with numpy.linalg.pinv?
Er, what's a pinv? It doesn't sound anything like a generalized_inverse. Vicki Laidler
pseudoinverse. It's the same name MATLAB uses:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html

Victoria G. Laidler wrote:
Sven Schreiber wrote:
Jon Peirce schrieb:
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).
In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations because my users might have any version... :(
Maybe I don't understand, but what's wrong with numpy.linalg.pinv?
Er, what's a pinv? It doesn't sound anything like a generalized_inverse.
Vicki Laidler
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/numpy-discussion
Jonathan Taylor
Dept. of Statistics, Stanford University
www-stat.stanford.edu/~jtaylo
Jonathan Taylor wrote:
pseudoinverse
it's the same name matlab uses:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html
Thanks for the explanation.

I'm puzzled by the naming choice, however. Standard best practice in writing software is to give understandable names, to improve readability and code maintenance. Obscure abbreviations like "pinv" pretty much went out with the FORTRAN 9-character limit for variable names. It's very unusual to see them in new software nowadays, and it always looks unprofessional to me.

I understand that for interactive use, short names are more convenient; but shouldn't they be available as aliases to the more general names? Since numpy is primarily a software library, I wouldn't expect it to sacrifice a standard best practice in order to make things more convenient for interactive use.

If the concern is for matlab compatibility, maybe a synonym module numpy.as_matlab could define all the synonyms, that matlab users could then use? That would make more sense to me than inflicting obscure matlab names on the rest of the user community.

Vicki Laidler
Victoria G. Laidler wrote:
Sven Schreiber wrote:
Jon Peirce schrieb:
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).
In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations because my users might have any version... :(
Maybe I don't understand, but what's wrong with numpy.linalg.pinv?
Er, what's a pinv? It doesn't sound anything like a generalized_inverse.
Vicki Laidler

Victoria G. Laidler schrieb:
I understand that for interactive use, short names are more convenient; but shouldn't they be available as aliases to the more general names? Since numpy is primarily a software library, I wouldn't expect it to sacrifice a standard best practice in order to make things more convenient for interactive use.
I don't necessarily agree that numpy should aim to be primarily a library, but I'm with you concerning the alias idea. However, iirc there was some discussion recently on this list about the dual solution (long names as well as short ones in parallel), and some important numpy people had some reservations, although I don't remember exactly what those were -- probably some Python Zen issues ("there should be only one way to get to Rome", was that it? -- just kidding).
If the concern is for matlab compatibility, maybe a synonym module numpy.as_matlab could define all the synonyms, that matlab users could then use? That would make more sense to me than inflicting obscure matlab names on the rest of the user community.
From superficially browsing through the numpy guide my subjective impression is that function names are mostly pretty short. So maybe the alias thing should work the other way around, making long names available in a module numpy.long_names_for_typing_addicts (again, a bad joke...)
cheers, Sven
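[Editor's note: the alias idea being discussed could be prototyped in a few lines. This is a sketch; the long names and the module itself are hypothetical, not part of numpy:]

```python
# longnames.py -- hypothetical module giving descriptive aliases for
# numpy.linalg's short names, in the spirit of the discussion above
import numpy as np
import numpy.linalg as la

inverse = la.inv
pseudoinverse = la.pinv
least_squares = la.lstsq
determinant = la.det

# both spellings refer to the very same function object
assert inverse is la.inv
print(determinant(np.eye(3)))  # 1.0
```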
Victoria G. Laidler wrote:
Jonathan Taylor wrote:
pseudoinverse
it's the same name matlab uses:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html
Thanks for the explanation.
I'm puzzled by the naming choice, however. Standard best practice in writing software is to give understandable names, to improve readability and code maintenance. Obscure abbreviations like "pinv" pretty much went out with the FORTRAN 9-character limit for variable names. It's very unusual to see them in new software nowadays, and it always looks unprofessional to me.
I appreciate this feedback. It's a question that comes up occasionally, so I'll at least give my opinion on the matter which may shed some light on it.

I disagree with the general "long-name" concept when it comes to "very-common" operations. It's easy to take an idea and overgeneralize it for the sake of consistency. I've seen too many codes where very long names actually get in the way of code readability.

Someone reading code will have to know what an operation actually is to understand it. A name like "generalized_inverse" doesn't convey any intrinsic meaning to the non-practitioner anyway. You always have to "know" what the function is "really" doing. All that's needed is a "unique" name. I've found that long names are harder to remember (there's more opportunity for confusion about how much of the full name was actually used and how many words were combined).

A particularly ludicrous case, for example, was the fact that the very common SVD (whose acronym everybody doing linear algebra uses) was named in LinearAlgebra (an unnecessarily long module name to begin with) with the horribly long and unsightly name of singular_value_decomposition. I suppose this was done just for the sake of "code readability."

It's not that we're concerned with MATLAB compatibility. But, frankly, I've never heard that the short names MATLAB uses for some very common operations are a liability. So, when a common operation has a short, easily-remembered name that is in common usage, why not use it?

That's basically the underlying philosophy. NumPy has too many very basic operations to try and create very_long_names for them.

I know there are differing opinions out there. I can understand that. That's why I suspect that many codes I will want to use will be written with easy_to_understand_but_very_long names and I'll grin and bear the extra horizontal space that it takes up in my code.

Travis
On Jul 16, 2006, at 00:21 , Travis Oliphant wrote:
Victoria G. Laidler wrote:
Jonathan Taylor wrote:
pseudoinverse
it's the same name matlab uses:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html
Thanks for the explanation.
I'm puzzled by the naming choice, however. Standard best practice in writing software is to give understandable names, to improve readability and code maintenance. Obscure abbreviations like "pinv" pretty much went out with the FORTRAN 9-character limit for variable names. It's very unusual to see them in new software nowadays, and it always looks unprofessional to me.
I appreciate this feedback. It's a question that comes up occasionally, so I'll at least give my opinion on the matter which may shed some light on it.
I disagree with the general "long-name" concept when it comes to "very-common" operations. It's easy to take an idea and overgeneralize it for the sake of consistency. I've seen too many codes where very long names actually get in the way of code readability.
How are pseudoinverse and inverse "very common"? (Especially given that one of the arguments for not having a .I attribute for inverse on matrices is that that's usually the wrong way to go about solving equations.)
Someone reading code will have to know what an operation actually is to understand it. A name like "generalized_inverse" doesn't convey any intrinsic meaning to the non-practitioner anyway. You always have to "know" what the function is "really" doing. All that's needed is a "unique" name. I've found that long names are harder to remember (there's more opportunity for confusion about how much of the full name was actually used and how many words were combined).
As has been argued before, short names have their own problems with remembering what they are. I also find that when reading code with short names, I go slower, because I have to stop and think what that short name is (particularly bad are short names that drop vowels, like lstsq -- I can't pronounce that!). I'm not very good at creating hash tables in my head from short names to long ones.

The currently exported names in numpy.linalg are solve, inv, cholesky, eigvals, eigvalsh, eig, eigh, svd, pinv, det, lstsq, and norm. Of these, 'lstsq' is the worst offender, IMHO (superfluous dropped vowels). 'inv' and 'pinv' are the next, then the 'eig*' names.

'least_squares' would be better than 'lstsq'. 'inverse' is not much longer than 'inv', and is more descriptive. I don't think 'pinv' is common enough to need a short name; 'pseudoinverse' would be better (not all generalized inverses are pseudoinverses). Give me these three and I'll be happy :)

Personally, I'd prefer 'eigenvalues' and 'eigen' instead of 'eigvals' and 'eig', but I can live with the current names. 'det' is fine, as it's used in mathematical notation. 'cholesky' is also fine, as it's a word at least. I'd have to look at the docstring to find how to use it, but that would be the same for "cholesky_decomposition".

[btw, I'm ok with numpy.dft now: the names there make sense, because they're constructed logically. Once you know the scheme, you can see right away that 'irfftn' is 'inverse real FFT, n-dimensional'.]
A particularly ludicrous case, for example, was the fact that the very common SVD (whose acronym everybody doing linear algebra uses) was named in LinearAlgebra (an unnecessarily long module name to begin with) with the horribly long and unsightly name of singular_value_decomposition. I suppose this was done just for the sake of "code readability."
I agree; that's stupid.
It's not that we're concerned with MATLAB compatibility. But, frankly, I've never heard that the short names MATLAB uses for some very common operations are a liability. So, when a common operation has a short, easily-remembered name that is in common usage, why not use it?
That's basically the underlying philosophy. NumPy has too many very basic operations to try and create very_long_names for them.
I know there are differing opinions out there. I can understand that. That's why I suspect that many codes I will want to use will be written with easy_to_understand_but_very_long names and I'll grin and bear the extra horizontal space that it takes up in my code.
David M. Cooke
http://arbutus.physics.mcmaster.ca/dmc/
cookedm@physics.mcmaster.ca
On Sun, 16 Jul 2006, "David M. Cooke" apparently wrote:
'inverse' is not much longer than 'inv', and is more descriptive
But 'inv' is quite universal (can you name a matrix language that uses 'inverse' instead?) and I think unambiguous (what might it be confused with?). Cheers, Alan Isaac
On Jul 16, 2006, at 11:47 AM, Alan G Isaac wrote:
On Sun, 16 Jul 2006, "David M. Cooke" apparently wrote:
'inverse' is not much longer than 'inv', and is more descriptive
But 'inv' is quite universal (can you name a matrix language that uses 'inverse' instead?) and I think unambiguous (what might it be confused with?).
IDL uses invert, so inv is not exactly universal.

I'm personally a fan of names that can be used in interactive sessions at the command line, which argues for shorter names. But it is nice to have names where you can type just a few characters and use tab-completion to fill in the rest of the name. Then the important thing is not the full length of the name but having the first 3 or 4 characters be memorable. So I'd rather have "pseudoinverse" because I can probably find it by just typing "ps<tab>".
On 7/16/06, Rick White <rlw@stsci.edu> wrote:
On Jul 16, 2006, at 11:47 AM, Alan G Isaac wrote:
On Sun, 16 Jul 2006, "David M. Cooke" apparently wrote:
'inverse' is not much longer than 'inv', and is more descriptive
But 'inv' is quite universal (can you name a matrix language that uses 'inverse' instead?) and I think unambiguous (what might it be confused with?).
IDL uses invert, so inv is not exactly universal.
I'm personally a fan of names that can be used in interactive sessions at the command line, which argues for shorter names. But it is nice to have names where you can type just a few characters and use tab-completion to fill in the rest of the name. Then the important thing is not the full length of the name but having the first 3 or 4 characters be memorable. So I'd rather have "pseudoinverse" because I can probably find it by just typing "ps<tab>".
I prfr shrtr nams lke inv eig and sin.
Keith Goodman wrote:
I prfr shrtr nams lke inv eig and sin.
Dwn wth vwls!! Srsly thgh:
>>> from numpy import linalg
>>> help(linalg.inv)
Help on function inv in module numpy.linalg.linalg:

inv(a)

???
While I prefer inverse to inv, I don't really care as long as the word "inverse" appears in the docstring and it faithfully promises that it is going to try to do an inverse in the matrix linear algebra sense, and not in the one_over_x or power_minus_one sense or the bit inversion (~x, numpy.invert) sense.

It would be handy to know which exception will be raised for a singular matrix, what are valid arguments (eg, square shape, numbers), what is the type of the output ('<f8' for me) and what is the expected cost (O(n^3)). Other handy information -- like a pointer to pinv and matrix.I, and a note that this is implemented by "solve"ing Identity = b = A.x. The docs for solve should indicate they use an LU factorisation in lapack, as this is of interest (ie it is not the matlab \ operator and not cholesky). If you have all that, then I suspect people might accept any even vaguely sane naming scheme, as they don't have to guess or read the source to find out what it actually does.

The documentation for "invert" is for a generic ufunc in my interpreter (python 2.4.3 and numpy 0.9.8), which could cause confusion -- one might imagine it is going to invert a matrix.

The ".I" attribute (instead of ".I()") of a matrix implies to me that the inverse is already known and is an O(1) lookup -- this seems seriously misleading -- it seems you need to store the temporary yourself if you want to reuse the inverse without recomputing it, but maybe I miss some deep magic?

I did buy the book, and it doesn't contain all the information about "inv" that I've listed above, and I don't want Travis to spend his time putting all that in the book as then it'd be too long for me to print out and read ;) I think numpy would benefit from having a great deal more documentation available than it does right now in the form of docstrings and doctests -- or am I the only person who relies on the interpreter and help(thing) as being the ultimate reference?
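[Editor's note: the "which exception" question can be answered empirically. In current NumPy, inv raises numpy.linalg.LinAlgError on a singular matrix, while pinv still returns an answer. A sketch using the modern API, not the 0.9.8 of this thread:]

```python
import numpy as np

a = np.array([[1., 2.],
              [2., 4.]])        # rank 1, hence singular

try:
    np.linalg.inv(a)
except np.linalg.LinAlgError as err:
    print("inv raised LinAlgError:", err)

# the pseudoinverse is defined even for singular matrices
p = np.linalg.pinv(a)
print(np.allclose(a @ p @ a, a))   # True
```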
This is an area that many of us might be able to help with fixing, via the wiki for example. Has there been a decision on the numpy wiki examples not being converted to doctests? (these examples could usefully be linked from the numeric.scipy.org home page). I saw "rundocs" in testing.numpytest, which sort of suggests the possibility is there. With the wiki being a moving target I can understand that synchronisation is an issue, but perhaps there could be a wiki dump to the numpy.doc directory each time a new release is made, along with a doctest? Catching and fixing misinformation would be as useful as catching actual bugs. Let me know if you are against the "convert examples to doctests" idea or if it has already been done. Perhaps this increases the testsuite coverage for free...?

It would be equally possible to place that example code into the actual docstrings in numpy, along with more detailed explanations which could also be pulled in from the wiki. I don't think this would detract much from Travis' book, since that contains a lot of information that doesn't belong in docstrings anyway?

Jon

PS: Doubtless someone might do better, but here is what I mean: copy and paste the ascii (editor) formatted wiki text into a file wiki.txt from the wiki example page, and get rid of the {{{ python formatting that confuses doctest:

$ grep -v "{{{" wiki.txt | grep -v "}}}" > testnumpywiki.txt

==testwiki.py:==
import doctest
doctest.testfile("testnumpywiki.txt")

$ python testwiki.py > problems.txt

problems.txt is 37kB in size (83 failures of 1028 examples). Throwing out the blank-lines issues via:

doctest.testfile("testnumpywiki.txt", optionflags=doctest.NORMALIZE_WHITESPACE)

reduces this to 24kB (62 of 1028). ...
most cases are not important, just needing to be fixed for formatting on the wiki or flagged as version dependent, but a few are worth checking out the intentions, eg:

**********************************************************************
File "testnumpywiki.txt", line 69, in testnumpywiki.txt
Failed example:
    a[:,b2]
Exception raised:
    Traceback (most recent call last):
      File "c:\python24\lib\doctest.py", line 1243, in __run
        compileflags, 1) in test.globs
      File "<doctest testnumpywiki.txt[18]>", line 1, in ?
        a[:,b2]
    IndexError: arrays used as indices must be of integer type
**********************************************************************
File "testnumpywiki.txt", line 893, in testnumpywiki.txt
Failed example:
    ceil(a)   # nearest integers greater-than or equal to a
Expected:
    array([1., 1., 0., 1., 2., 2.])
Got:
    array([1., 1., 0., 1., 2., 2.])
**********************************************************************
File "testnumpywiki.txt", line 1162, in testnumpywiki.txt
Failed example:
    cov(T,P)   # covariance between temperature and pressure
Expected:
    3.9541666666666657
Got:
    array([[ 1.97583333, 3.95416667],
           [ 3.95416667, 8.22916667]])
**********************************************************************
File "testnumpywiki.txt", line 2235, in testnumpywiki.txt
Failed example:
    type(a[0])
Expected:
    <type 'int32_arrtype'>
Got:
    <type 'int32scalar'>
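[Editor's note: the whitespace-only failures Jon mentions can be demonstrated in miniature. This sketch writes a tiny examples file (the file name is made up) and runs it with and without NORMALIZE_WHITESPACE:]

```python
import doctest
import os
import tempfile

# two examples: the second's expected output differs only in spacing
sample = (
    ">>> 1 + 1\n"
    "2\n"
    ">>> print('a   b')\n"
    "a b\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(sample)
    path = f.name

strict = doctest.testfile(path, module_relative=False)
relaxed = doctest.testfile(path, module_relative=False,
                           optionflags=doctest.NORMALIZE_WHITESPACE)
os.unlink(path)

print(strict.failed, relaxed.failed)   # 1 0
```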
On 7/15/06, Travis Oliphant <oliphant.travis@ieee.org> wrote:
Victoria G. Laidler wrote:
Jonathan Taylor wrote:
<snip> It's not that we're concerned with MATLAB compatibility. But, frankly
I've never heard that the short names MATLAB uses for some very common operations are a liability. So, when a common operation has a short, easily-remembered name that is in common usage, why not use it?
That's basically the underlying philosophy. NumPy has too many very basic operations to try and create very_long_names for them.
I know there are differing opinions out there. I can understand that. That's why I suspect that many codes I will want to use will be written with easy_to_understand_but_very_long names and I'll grin and bear the extra horizontal space that it takes up in my code.
What is needed in the end is a good index with lots of cross-references. Name choices are just choices; there is no ISO standard for function names that I know of. There are short names that have been used for so long that everyone knows them (sin, cos, ...), some names come in two standard forms (arcsin, asin), some are fortran conventions (arctan2), some are matlab conventions (pinv, chol). One always has to learn what the names for things are in any new language, so the best thing is to make it easy to find out.

Chuck
On Sun, 16 Jul 2006, Charles R Harris apparently wrote:
What is needed in the end is a good index with lots of cross-references. Name choices are just choices.
I mostly agree with this (although I think Matlab made some bad choices in naming). As a point of reference for a useful index, see http://www.mathworks.com/access/helpdesk/help/techdoc/ref/refbookl.html

Cheers,
Alan Isaac
Victoria G. Laidler wrote:
Sven Schreiber wrote:
Jon Peirce schrieb:
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).
In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations because my users might have any version... :(
Maybe I don't understand, but what's wrong with numpy.linalg.pinv?
Er, what's a pinv? It doesn't sound anything like a generalized_inverse.
'pseudo'-inverse. It's the name MATLAB uses for the thing. There are many choices for a "generalized inverse", which is actually a misnomer for what is being done. The Moore-Penrose pseudoinverse is a particular form of the generalized inverse (and the one being computed).

Travis
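[Editor's note: Travis's distinction can be checked numerically -- numpy's pinv satisfies all four Moore-Penrose conditions that single out this particular generalized inverse. A sketch using the modern numpy random API:]

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3))
p = np.linalg.pinv(a)

# the four Moore-Penrose conditions
print(np.allclose(a @ p @ a, a))        # True
print(np.allclose(p @ a @ p, p))        # True
print(np.allclose((a @ p).T, a @ p))    # True: A P is symmetric
print(np.allclose((p @ a).T, p @ a))    # True: P A is symmetric
```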
Jon Peirce wrote:
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).
In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped?
No. We are just emphasizing the new names. The old names are just there for compatibility with Numeric. The new names have been there from the beginning of NumPy releases. So, just call it numpy.linalg.pinv and it will work in all versions.

Travis
participants (11)
- Alan G Isaac
- Charles R Harris
- David M. Cooke
- Jon Peirce
- Jon Wright
- Jonathan Taylor
- Keith Goodman
- Rick White
- Sven Schreiber
- Travis Oliphant
- Victoria G. Laidler