We use `np.atleast_2d` extensively in scikit-image, and I also use it in a *lot* of my own code now that scikit-learn no longer accepts 1D arrays as feature vectors.
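For illustration, a minimal sketch of that pattern (scikit-learn expects data shaped `(n_samples, n_features)`, so a single sample has to become a one-row 2-D array; the sample values here are made up):

```python
import numpy as np

x = np.array([0.5, 1.2, 3.4])  # a single sample as a 1-D array, shape (3,)
X = np.atleast_2d(x)           # prepends an axis -> shape (1, 3)
print(X.shape)                 # (1, 3): one sample, three features,
                               # the 2-D layout scikit-learn expects
```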
What is the advantage of `np.atleast_nd` over `np.array(a, copy=False, ndmin=n)`?
Readability, clearly. My only concern is the described behavior of `np.atleast_3d`, which came as a surprise: I would certainly expect the "atleast" family to all work the same way as broadcasting, i.e., by prepending singleton dimensions. Prepend/append behavior could be controlled either by a keyword or simply by using `.T`; I don't mind either way.

Juan

On 6 July 2016 at 10:22:15 AM, Marten van Kerkwijk (m.h.vankerkwijk@gmail.com) wrote:

Hi All,

I'm with Nathaniel here, in that I don't really see the point of these routines in the first place: broadcasting takes care of many of the initial use cases one might think of, and the others are generally not all that well served by them. The examples from scipy do not, to me, really support `atleast_?d`; rather, they suggest that little thought has been put into higher-dimensional objects, which should be treated as stacks of row or column vectors. My sense is that we're better off developing the direction started with `matmul`, perhaps adding `matvecmul`, etc.

More to the point of the initial inquiry: what is the advantage of having a general `np.atleast_nd` routine over doing

```
np.array(a, copy=False, ndmin=n)
```

or, for a list of inputs,

```
[np.array(a, copy=False, ndmin=n) for a in input_list]
```

All the best,

Marten
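Concretely, the prepend-vs-append distinction the two messages above turn on looks like this (a small sketch; the shapes follow NumPy's documented behavior for `atleast_2d`, `atleast_3d`, and `ndmin`):

```python
import numpy as np

v = np.arange(3)                 # 1-D input, shape (3,)
a = np.arange(6).reshape(2, 3)   # 2-D input, shape (2, 3)

# Broadcasting-style: singleton dimensions are prepended.
print(np.atleast_2d(v).shape)                   # (1, 3)
print(np.array(a, copy=False, ndmin=4).shape)   # (1, 1, 2, 3)

# The surprise: np.atleast_3d appends (or sandwiches) the new axes instead.
print(np.atleast_3d(v).shape)    # (1, 3, 1), not (1, 1, 3)
print(np.atleast_3d(a).shape)    # (2, 3, 1), not (1, 2, 3)
```

(One caveat for readers running current NumPy: since NumPy 2.0, `np.array(..., copy=False)` raises if a copy cannot be avoided, and the older copy-only-if-needed semantics assumed in this thread moved to `np.asarray`. The calls above only need a view, so they work either way.)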