[Numpy-discussion] Consider improving numpy.outer's behavior with zero-dimensional vectors

josef.pktd at gmail.com
Fri Apr 17 11:11:32 EDT 2015


On Fri, Apr 17, 2015 at 10:59 AM, Sebastian Berg
<sebastian at sipsolutions.net> wrote:
> On Fri, 2015-04-17 at 10:47 -0400, josef.pktd at gmail.com wrote:
>> On Fri, Apr 17, 2015 at 10:07 AM, Sebastian Berg
>> <sebastian at sipsolutions.net> wrote:
>> > On Thu, 2015-04-16 at 15:28 -0700, Matthew Brett wrote:
>> >> Hi,
>> >>
>> > <snip>
>> >>
>> >> So, how about a slight modification of your proposal?
>> >>
>> >> 1) Raise a deprecation warning from np.outer for non-1-D arrays for a
>> >> few versions, deprecating it in favor of np.multiply.outer (rough
>> >> sketch below), then
>> >> 2) Raise an error for np.outer on non-1-D arrays
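>> >>
>> >> A rough sketch of step 1), untested and not an actual patch, just to
>> >> make the idea concrete:
>> >>
>> >>     import warnings
>> >>     import numpy as np
>> >>
>> >>     def outer(a, b, out=None):
>> >>         a, b = np.asarray(a), np.asarray(b)
>> >>         if a.ndim != 1 or b.ndim != 1:
>> >>             # step 1): warn for a few releases, later turn this into
>> >>             # an error (step 2)
>> >>             warnings.warn("np.outer on non-1-D arrays is deprecated; "
>> >>                           "use np.multiply.outer instead",
>> >>                           DeprecationWarning, stacklevel=2)
>> >>         # current behavior: flatten the inputs, then broadcast
>> >>         return np.multiply(a.ravel()[:, np.newaxis],
>> >>                            b.ravel()[np.newaxis, :], out)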
>> >>
>> >
>> > I think that was Neil's proposal a bit earlier, too. +1 for it in any
>> > case, since at least for the moment I doubt outer is used a lot for
>> > non-1-D arrays. A possible step 3) would be to make it work on higher
>> > dimensions after a long period.
>>
>> sounds ok to me
>>
>> Some random comments on what I remember or guess in terms of usage:
>>
>> I think there are at most very few np.outer usages with arrays that are
>> 2-D or higher. (statsmodels has two models that switch between 2-D and
>> 1-D parameterizations; we don't use outer there, but it has similar
>> characteristics. However, we need to control the ravel order, which
>> IIRC is Fortran.)
>>
>> The current behavior for 0-D scalars shown in the initial post might be
>> useful when a numpy function returns a scalar instead of a 1-D array of
>> size 1. np.diag, which is a common case, doesn't return a scalar (in my
>> version of numpy).
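>>
>> For example (from a quick check), because np.outer flattens its inputs,
>> a 0-D input goes through as if it were a length-1 vector:
>>
>>     >>> import numpy as np
>>     >>> np.outer(np.float64(2.0), np.arange(3.0))
>>     array([[ 0.,  2.,  4.]])
>>
>> so code that sometimes gets a scalar back keeps working.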
>>
>> I don't know of any use case where I would ever want the 2-D behavior
>> of np.multiply.outer.
>> I guess we will or would have applications for outer along an axis;
>> for example, if x.shape = (100, 10), then we have
>> x[:, None, :] * x[:, :, None]     (I guess)
>> Something like this shows up reasonably often in econometrics as an
>> "outer product". However, in most cases we can avoid constructing this
>> array and get the final result in a more memory-efficient or faster
>> way.
>> (An example is an array of covariance matrices.)
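>>
>> A rough sketch of what I mean (untested):
>>
>>     import numpy as np
>>
>>     x = np.random.randn(100, 10)
>>     # one outer product per observation, shape (100, 10, 10)
>>     outer_i = x[:, None, :] * x[:, :, None]
>>     # often only a reduction is needed, e.g. summing over observations
>>     # gives the same as x.T.dot(x) without storing every term
>>     np.allclose(outer_i.sum(axis=0), x.T.dot(x))   # True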
>>
>
> So basically the outer product of stacked vectors (fitting into how the
> np.linalg functions now work). I think that might be a good idea, but
> even then we first need to do the deprecation, and it would be a
> long-term project. Or you add np.linalg.outer or some such sooner, and
> in the longer run np.outer would become an alias for that instead of
> np.multiply.outer.
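>
> Hypothetically, and only to illustrate the stacked behavior (the name
> and location are placeholders, nothing like this exists yet):
>
>     import numpy as np
>
>     def stacked_outer(x, y):
>         # outer product over the last axis, broadcasting the leading
>         # axes, analogous to how np.linalg handles stacks of matrices
>         x, y = np.asarray(x), np.asarray(y)
>         return x[..., :, np.newaxis] * y[..., np.newaxis, :]
>
>     # on two (100, 10) arrays this gives shape (100, 10, 10)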


Essentially yes, but I don't have an opinion about the location or
implementation in numpy, nor do I know enough about it.

I have always considered np.outer as conceptually belonging to linalg,
providing a more convenient interface than np.dot when both arrays are
1-D (no need to add an extra axis and transpose).
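
For two 1-D arrays the equivalence I mean is roughly (a quick sketch):

    import numpy as np

    a = np.arange(3.0)
    b = np.arange(4.0)
    # np.outer does the reshaping for you
    r1 = np.outer(a, b)                    # shape (3, 4)
    # with np.dot you have to add the axes by hand
    r2 = np.dot(a[:, None], b[None, :])    # same values
    np.allclose(r1, r2)                    # True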

Josef

>
>
>> Josef
>>
>>
>>
>>
>> >
>> > - Sebastian
>> >
>> >
>> >> Best,
>> >>
>> >> Matthew