
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays. It first tests that an array is 2d, then if it is square. This seems redundant to me.

This currently works:

In [20]: np.linalg.det([[1]])
Out[20]: 1.0

but

In [21]: np.linalg.det([1])
<snip>
LinAlgError: 1-dimensional array given. Array must be two-dimensional

The diff attached to this ticket <http://projects.scipy.org/numpy/ticket/1556> gives

In [3]: np.linalg.det([[1]])
Out[3]: 1.0

In [4]: np.linalg.det([1])
Out[4]: 1.0

In [5]: np.linalg.det(1)
Out[5]: 1.0

Skipper

On 7/26/2010 12:45 PM, Skipper Seabold wrote:
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays.
I don't have a real opinion on changing this, but I am curious to know the use case, as the current behavior seems a) correct and b) to provide an error check. Cheers, Alan

On Mon, Jul 26, 2010 at 5:48 PM, Alan G Isaac <aisaac@american.edu> wrote:
On 7/26/2010 12:45 PM, Skipper Seabold wrote:
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays.
I don't have a real opinion on changing this, but I am curious to know the use case, as the current behavior seems
Use case is just so that I can have less atleast_2d's in my code, since checks are done in linalg.det anyway.
a) correct and b) to provide an error check.
Isn't the determinant defined for a scalar b such that det(b) == det([b]) == det([[b]])? The error check is redundant, I think, if the determinant is defined for a scalar (if not, then shouldn't det([[b]]) fail?). Right now det checks that something is 2d, then checks that it is square. Since a square array is by definition 2d, if you replace the asarray call and the 2d check with atleast_2d, you can handle the scalar case and then go on to check that it's square. Basically, it saves a tiny bit of time in det and saves me from writing atleast_2d. Skipper
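For concreteness, a minimal sketch of the kind of change being described here; the wrapper name is made up for illustration and this is not the diff attached to the ticket:

import numpy as np
from numpy.linalg import LinAlgError

def det_accepting_scalars(a):
    # Illustrative only: promote scalars and 1-element 1d arrays to 2d,
    # then keep the existing squareness check before calling det.
    a = np.atleast_2d(np.asarray(a))
    if a.ndim != 2 or a.shape[0] != a.shape[1]:
        raise LinAlgError("Array must be square")
    return np.linalg.det(a)

# det_accepting_scalars(1), det_accepting_scalars([1]) and
# det_accepting_scalars([[1]]) would all return 1.0 under this scheme.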

On Mon, Jul 26, 2010 at 5:05 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:48 PM, Alan G Isaac <aisaac@american.edu> wrote:
On 7/26/2010 12:45 PM, Skipper Seabold wrote:
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays.
I don't have a real opinion on changing this, but I am curious to know the use case, as the current behavior seems
Use case is just so that I can have less atleast_2d's in my code, since checks are done in linalg.det anyway.
a) correct and b) to provide an error check.
Isn't the determinant defined for a scalar b such that det(b) == det([b]) == det([[b]])?
Well, no ;) Matrices have determinants, scalars don't. Where are you running into a problem? Is something returning a scalar where a square array would be more appropriate? <snip> Chuck

On Mon, Jul 26, 2010 at 7:38 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:05 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:48 PM, Alan G Isaac <aisaac@american.edu> wrote:
On 7/26/2010 12:45 PM, Skipper Seabold wrote:
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays.
I don't have a real opinion on changing this, but I am curious to know the use case, as the current behavior seems
Use case is just so that I can have less atleast_2d's in my code, since checks are done in linalg.det anyway.
a) correct and b) to provide an error check.
Isn't the determinant defined for a scalar b such that det(b) == det([b]) == det([[b]])?
Well, no ;) Matrices have determinants, scalars don't. Where are you running into a problem? Is something returning a scalar where a square array would be more appropriate?
No, linalg.det always returns a scalar, and I, of course, could be more careful and always ensure that whatever the user supplies becomes a 2d array, but I don't like putting atleast_2d everywhere if I don't need to. I thought that the determinant of a scalar was by definition the scalar itself (e.g., google "determinant of a scalar is"), hence

np.linalg.det(np.array([[2]])) # 2.0

which should either fail or, if not, then I think np.linalg.det should also handle scalars and 1-element 1d arrays. So instead of having to do

b = np.array([2])
b = np.atleast_2d(b)
np.linalg.det(b) # 2.0

I could just do

b = np.array([2])
np.linalg.det(b) # 2.0

Regardless, doing asarray, checking if something is 2d, and then checking if it's square seems redundant and could be replaced by an atleast_2d in linalg.slogdet, which 1) takes a view as an array, 2) ensures that we have a 2d array, and 3) handles the scalar case. Then we check if it's square. It doesn't really change much except keeping me from having to put atleast_2d's in my code. Skipper

On Mon, Jul 26, 2010 at 4:18 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 7:38 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:05 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:48 PM, Alan G Isaac <aisaac@american.edu> wrote:
On 7/26/2010 12:45 PM, Skipper Seabold wrote:
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays.
I don't have a real opinion on changing this, but I am curious to know the use case, as the current behavior seems
Use case is just so that I can have less atleast_2d's in my code, since checks are done in linalg.det anyway.
a) correct and b) to provide an error check.
Isn't the determinant defined for a scalar b such that det(b) == det([b]) == det([[b]])?
Well, no ;) Matrices have determinants, scalars don't. Where are you running into a problem? Is something returning a scalar where a square array would be more appropriate?
No, linalg.det always returns a scalar, and I, of course, could be more careful and always ensure that whatever the user supplies becomes a 2d array, but I don't like putting atleast_2d everywhere if I don't need to. I thought that the determinant of a scalar was by definition the scalar itself (e.g., google "determinant of a scalar is"), hence
np.linalg.det(np.array([[2]])) # 2.0
which should either fail or, if not, then I think np.linalg.det should also handle scalars and 1-element 1d arrays.
So instead of having to do
b = np.array([2])
b = np.atleast_2d(b)
np.linalg.det(b) # 2.0
I could just do
b = np.array([2])
np.linalg.det(b) # 2.0
Regardless, doing asarray, checking if something is 2d, and then checking if it's square seems redundant and could be replaced by an atleast_2d in linalg.slogdet, which 1) takes a view as an array, 2) ensures that we have a 2d array, and 3) handles the scalar case. Then we check if it's square. It doesn't really change much except keeping me from having to put atleast_2d's in my code.
Skipper
imo, the determinant of a scalar should be defined as itself, based on the definition of the determinant. I don't have a vested interest in linalg's behavior in this respect, though. --Josh

On Mon, Jul 26, 2010 at 6:22 PM, Joshua Holbrook <josh.holbrook@gmail.com> wrote:
On Mon, Jul 26, 2010 at 4:18 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 7:38 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:05 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 5:48 PM, Alan G Isaac <aisaac@american.edu> wrote:
On 7/26/2010 12:45 PM, Skipper Seabold wrote:
Right now np.linalg.det does not handle scalars or 1d (scalar) arrays.
I don't have a real opinion on changing this, but I am curious to know the use case, as the current behavior seems
Use case is just so that I can have less atleast_2d's in my code, since checks are done in linalg.det anyway.
a) correct and b) to provide an error check.
Isn't the determinant defined for a scalar b such that det(b) == det([b]) == det([[b]])?
Well, no ;) Matrices have determinants, scalars don't. Where are you running into a problem? Is something returning a scalar where a square array would be more appropriate?
No, linalg.det always returns a scalar, and I, of course, could be more careful and always ensure that whatever the user supplies becomes a 2d array, but I don't like putting atleast_2d everywhere if I don't need to. I thought that the determinant of a scalar was by definition the scalar itself (e.g., google "determinant of a scalar is"), hence
np.linalg.det(np.array([[2]])) # 2.0
which should either fail or, if not, then I think np.linalg.det should also handle scalars and 1-element 1d arrays.
So instead of having to do
b = np.array([2])
b = np.atleast_2d(b)
np.linalg.det(b) # 2.0
I could just do
b = np.array([2])
np.linalg.det(b) # 2.0
Regardless, doing asarray, checking if something is 2d, and then checking if it's square seems redundant and could be replaced by an atleast_2d in linalg.slogdet, which 1) takes a view as an array, 2) ensures that we have a 2d array, and 3) handles the scalar case. Then we check if it's square. It doesn't really change much except keeping me from having to put atleast_2d's in my code.
Skipper
imo, the determinant of a scalar should be defined as itself, based on the definition of the determinant. I don't have a vested interest in linalg's behavior in this respect, though.
And the definition of a determinant is? There are several, but the common form these days is an antisymmetric multilinear form acting on a set of column vectors, scaled so that applying it to the columns of the identity matrix gives one. In that case one would have to treat a scalar as a column vector. As a practical matter I don't have a problem with det handling scalars if it is useful to do so. Chuck
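For reference, that characterization can be written out; this is standard material rather than anything specific to the ticket. The determinant is the unique function of the columns a_1, ..., a_n of a matrix satisfying

\det(a_1,\dots,\alpha a_k + \beta b,\dots,a_n)
  = \alpha \det(a_1,\dots,a_k,\dots,a_n) + \beta \det(a_1,\dots,b,\dots,a_n)
  \quad \text{(multilinear in each column)}

\det(\dots,a_i,\dots,a_j,\dots) = -\det(\dots,a_j,\dots,a_i,\dots)
  \quad \text{(antisymmetric)}

\det(e_1,\dots,e_n) = \det(I_n) = 1
  \quad \text{(normalized on the identity)}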

On Mon, Jul 26, 2010 at 4:51 PM, Alan G Isaac <aisaac@american.edu> wrote:
On 7/26/2010 8:22 PM, Joshua Holbrook wrote:
imo, the determinant of a scalar should be defined as itself, based on the definition of the determinant.
What definition do you have in mind?
Alan Isaac
It seems I may be a bit out of my element here! Nonetheless, I believe I had http://en.wikipedia.org/wiki/Laplace_expansion in mind--that is, expansion of cofactors. It's been a while, but I recall this process scaling down to, well, scalars. I might be wrong! --Josh
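The Laplace (cofactor) expansion being referred to, expanding along row i of an n x n matrix A, with M_ij the determinant of A with row i and column j removed; the 1 x 1 case is the base of the recursion:

\det A = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij},
\qquad
\det\bigl( [a] \bigr) = a .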

On 7/26/2010 8:18 PM, Skipper Seabold wrote:
np.linalg.det(np.array([[2]])) #2.0
which should either fail or if not, then I think np.linalg.det should handle scalars and scalars as 1d arrays
It should not fail, because it follows from standard definitions. (E.g., it is the base case of a recursive definition.) And it does not imply that the determinant is defined for anything but square matrices. But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant? Cheers, Alan Isaac

On Mon, Jul 26, 2010 at 10:05 PM, Alan G Isaac <aisaac@american.edu> wrote:
But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant?
Recently I ran into this trying to make the log-likelihoods of multivariate and univariate autoregressive processes use the same function. One has log(sigma_scalar) and one calls for logdet(Sigma_matrix). I also ran into it again yesterday working on the Kalman filter, where, depending on the process being modeled and on how the user writes the function, the needed coefficient arrays can come out as scalars if they depend on parameters. To be more general, I have to put in atleast_2d, even though these checks are already done in slogdet. Skipper
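A stripped-down illustration of the mismatch being described; the variable names here are invented and this is not statsmodels code:

import numpy as np

# Univariate AR: the scale is a plain variance; multivariate AR: a
# covariance matrix.  Only the matrix case goes through slogdet as-is.
sigma2 = 2.0
Sigma = np.eye(3) * 2.0

print(np.linalg.slogdet(Sigma))        # works; logabsdet == 3*log(2)
try:
    np.linalg.slogdet(sigma2)          # fails without an atleast_2d first
except np.linalg.LinAlgError as err:
    print("LinAlgError:", err)
print(np.linalg.slogdet(np.atleast_2d(sigma2)))   # logabsdet == log(2)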

On Mon, Jul 26, 2010 at 10:05 PM, Alan G Isaac <aisaac@american.edu> wrote:
But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant?
On 7/27/2010 8:51 AM, Skipper Seabold wrote:
Recently I ran into this trying to make the log-likelihoods of multivariate and univariate autoregressive processes use the same function. One has log(sigma_scalar) and one calls for logdet(Sigma_matrix). I also ran into it again yesterday working on the Kalman filter, where, depending on the process being modeled and on how the user writes the function, the needed coefficient arrays can come out as scalars if they depend on parameters. To be more general, I have to put in atleast_2d, even though these checks are already done in slogdet.
OK, I see. Two comments, without going over the code.

1. It seems the problem really arises earlier, when computing the residuals. I suppose the single equation code produces a 1d array, while the multi-equation code must produce a 2d array of residuals. This seems like the better place to fix things if you want general handling: make sure the residuals are always 2d.

2. If you don't want to do this, you could always branch on the LinAlgError.

Cheers,
Alan Isaac
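A minimal sketch of the second option, catching the LinAlgError and falling back to the scalar case; the wrapper name and the exact fallback behavior are made up for illustration:

import numpy as np
from numpy.linalg import LinAlgError

def logdet_or_log(sigma):
    # Try the matrix path first; fall back to the scalar interpretation
    # when linalg complains about the dimensionality.
    try:
        return np.linalg.slogdet(sigma)
    except LinAlgError:
        s = float(np.squeeze(sigma))
        return np.sign(s), np.log(abs(s))

# logdet_or_log(np.eye(2) * 2.0) and logdet_or_log(2.0) both return a
# (sign, log-determinant) pair under this interpretation.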

On Tue, Jul 27, 2010 at 10:01 AM, Alan G Isaac <aisaac@american.edu> wrote:
On Mon, Jul 26, 2010 at 10:05 PM, Alan G Isaac <aisaac@american.edu> wrote:
But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant?
On 7/27/2010 8:51 AM, Skipper Seabold wrote:
Recently I ran into this trying to make the log-likelihoods of multivariate and univariate autoregressive processes use the same function. One has log(sigma_scalar) and one calls for logdet(Sigma_matrix). I also ran into it again yesterday working on the Kalman filter, where, depending on the process being modeled and on how the user writes the function, the needed coefficient arrays can come out as scalars if they depend on parameters. To be more general, I have to put in atleast_2d, even though these checks are already done in slogdet.
OK, I see. Two comments, without going over the code.
1. It seems the problem really arises earlier, when computing the residuals. I suppose the single equation code produces a 1d array, while the multi-equation code must produce a 2d array of residuals. This seems like the better place to fix things if you want general handling: make sure the residuals are always 2d.
This is usually where it shows up:

a = np.ones(10)
np.dot(a, a)/1. # 10.0

Changing to all 2d arrays is probably the sensible thing to do, though we would have to change some (many?) results held as floats to 1x1 arrays. When I last tried to use all 1d vectors as 2d arrays, I recall having to insert both atleast_2d and squeeze calls all over the place. Maybe I will have another look. Skipper

On Tue, Jul 27, 2010 at 07:51, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 10:05 PM, Alan G Isaac <aisaac@american.edu> wrote:
But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant?
Recently I ran into this trying to make the log-likelihoods of multivariate and univariate autoregressive processes use the same function. One has log(sigma_scalar) and one calls for logdet(Sigma_matrix). I also ran into it again yesterday working on the Kalman filter, where, depending on the process being modeled and on how the user writes the function, the needed coefficient arrays can come out as scalars if they depend on parameters. To be more general, I have to put in atleast_2d, even though these checks are already done in slogdet.
Not necessarily. In this context, you are treating a scalar as a 1x1 matrix. Or rather, in full generality, you have a 1x1 matrix and 1-element vectors and are only doing operations on them that map fairly neatly onto a subset of scalar properties. Consequently, you can use scalars in their place without much problem (to add confusion, the scalar case was formulated first, then generalized to the multivariate case, but that doesn't change the mathematics unless you believe certain ethnomathematicians).

However, there are other contexts in which scalars are used where the determinant would come into play. For example, scalar-vector multiplication is defined. If you have an n-vector, then scalar-vector multiplication behaves like matrix-vector multiplication provided that the matrix is a diagonal matrix with the diagonal entries each being the scalar value. In this context, the determinant is not just the scalar value itself, but rather value**n.

Many of the references you found stating that the "determinant of a scalar value is" the scalar itself were actually referring to 1x1 matrices, not true scalars. 1x1 matrices behave like scalars, but not all scalars behave like 1x1 matrices. linalg.det() does not know the context in which you are treating the scalar, so it rightly complains.

That said, I expect you will be running into this 1x1<->scalar special case reasonably frequently in statsmodels. Writing a dwim_logdet() utility function there that does what you want is a perfectly reasonable thing to do.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
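To make the diagonal-matrix point concrete, a small example; nothing here is from the ticket or from statsmodels:

import numpy as np

c, n = 2.0, 3
v = np.arange(1.0, n + 1)

# Scalar-vector multiplication acts like multiplying by the diagonal
# matrix c * I_n ...
print(np.allclose(c * v, np.dot(c * np.eye(n), v)))   # True

# ... and in that reading the determinant is c**n, not c itself.
print(np.linalg.det(c * np.eye(n)))   # 8.0 (up to floating point)
print(c ** n)                         # 8.0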

On Tue, Jul 27, 2010 at 12:00 PM, Robert Kern <robert.kern@gmail.com> wrote:
On Tue, Jul 27, 2010 at 07:51, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 10:05 PM, Alan G Isaac <aisaac@american.edu> wrote:
But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant?
Recently I ran into this trying to make the log-likelihoods of multivariate and univariate autoregressive processes use the same function. One has log(sigma_scalar) and one calls for logdet(Sigma_matrix). I also ran into it again yesterday working on the Kalman filter, where, depending on the process being modeled and on how the user writes the function, the needed coefficient arrays can come out as scalars if they depend on parameters. To be more general, I have to put in atleast_2d, even though these checks are already done in slogdet.
Not necessarily. In this context, you are treating a scalar as a 1x1 matrix. Or rather, in full generality, you have a 1x1 matrix and 1-element vectors and are only doing operations on them that map fairly neatly onto a subset of scalar properties. Consequently, you can use scalars in their place without much problem (to add confusion, the scalar case was formulated first, then generalized to the multivariate case, but that doesn't change the mathematics unless you believe certain ethnomathematicians).
However, there are other contexts in which scalars are used where the determinant would come into play. For example, scalar-vector multiplication is defined. If you have an n-vector, then scalar-vector multiplication behaves like matrix-vector multiplication provided that the matrix is a diagonal matrix with the diagonal entries each being the scalar value. In this context, the determinant is not just the scalar value itself, but rather value**n.
Ok, but I'm not sure I see why this would make automatic handling of scalars as 2d 1x1 arrays a bad idea.
Many of the references you found stating that the "determinant of a scalar value is" the scalar itself were actually referring to 1x1 matrices, not true scalars. 1x1 matrices behave like scalars, but not all scalars behave like 1x1 matrices. linalg.det() does not know the context in which you are treating the scalar, so it rightly complains.
That said, I expect you will be running into this 1x1<->scalar special case reasonably frequently in statsmodels. Writing a dwim_logdet() utility function there that does what you want is a perfectly reasonable thing to do.
Fair enough.

Can someone mark the ticket invalid or won't fix then? It doesn't look like I can do it.

http://projects.scipy.org/numpy/ticket/1556

Skipper
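One possible shape for the dwim_logdet() utility suggested above; the body is only a guess at what "does what you want" would mean, committing to the 1x1-matrix interpretation, and is not an existing statsmodels function:

import numpy as np

def dwim_logdet(a):
    # Sketch only: commit to the 1x1-matrix reading of a scalar, then let
    # slogdet do the real 2d checks and the computation.
    a = np.atleast_2d(np.asarray(a, dtype=float))
    return np.linalg.slogdet(a)

# Under this interpretation, dwim_logdet(2.0), dwim_logdet([2.0]) and
# dwim_logdet([[2.0]]) all return sign 1.0 and log-determinant log(2).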

On Tue, Jul 27, 2010 at 11:36, Skipper Seabold <jsseabold@gmail.com> wrote:
On Tue, Jul 27, 2010 at 12:00 PM, Robert Kern <robert.kern@gmail.com> wrote:
On Tue, Jul 27, 2010 at 07:51, Skipper Seabold <jsseabold@gmail.com> wrote:
On Mon, Jul 26, 2010 at 10:05 PM, Alan G Isaac <aisaac@american.edu> wrote:
But I am still confused about the use case. What is the scalar- (or 1d-array-) returning procedure invoked before taking the determinant?
Recently I ran into this trying to make the log-likelihoods of multivariate and univariate autoregressive processes use the same function. One has log(sigma_scalar) and one calls for logdet(Sigma_matrix). I also ran into it again yesterday working on the Kalman filter, where, depending on the process being modeled and on how the user writes the function, the needed coefficient arrays can come out as scalars if they depend on parameters. To be more general, I have to put in atleast_2d, even though these checks are already done in slogdet.
Not necessarily. In this context, you are treating a scalar as a 1x1 matrix. Or rather, in full generality, you have a 1x1 matrix and 1-element vectors and are only doing operations on them that map fairly neatly onto a subset of scalar properties. Consequently, you can use scalars in their place without much problem (to add confusion, the scalar case was formulated first, then generalized to the multivariate case, but that doesn't change the mathematics unless you believe certain ethnomathematicians).
However, there are other contexts in which scalars are used where the determinant would come into play. For example, scalar-vector multiplication is defined. If you have an n-vector, then scalar-vector multiplication behaves like matrix-vector multiplication provided that the matrix is a diagonal matrix with the diagonal entries each being the scalar value. In this context, the determinant is not just the scalar value itself, but rather value**n.
Ok, but I'm not sure I see why this would make automatic handling of scalars as 2d 1x1 arrays a bad idea.
Because scalars might not be representing 1x1 matrices but rather NxN diagonal matrices.
Many of the references you found stating that the "determinant of a scalar value is" the scalar itself were actually referring to 1x1 matrices, not true scalars. 1x1 matrices behave like scalars, but not all scalars behave like 1x1 matrices. linalg.det() does not know the context in which you are treating the scalar, so it rightly complains.
That said, I expect you will be running into this 1x1<->scalar special case reasonably frequently in statsmodels. Writing a dwim_logdet() utility function there that does what you want is a perfectly reasonable thing to do.
Fair enough.
Can someone mark the ticket invalid or won't fix then? It doesn't look like I can do it.
Done.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
participants (5)
- Alan G Isaac
- Charles R Harris
- Joshua Holbrook
- Robert Kern
- Skipper Seabold