Clean way in Python to test for None, empty, scalar, and list/ndarray? A prayer to the gods of Python
As much as I love Python, the following drives me crazy, and I wish that some future version would come up with a more consistent approach for this. And please don't reply with "Too bad if you don't know what type your data are...": if I want to implement some generic functionality, I want to avoid constraints on the user where not absolutely necessary, and I believe this approach is truly pythonic.

OK, here comes the problem set. These are in fact several related issues. Clearly, a solution exists for each of them, but you will have to admit that they are all very different in style and therefore unnecessarily complicate matters. If you want to write code which works under many circumstances, it will become quite convoluted.

1. Testing for None:

From what I read elsewhere, the preferred solution is `if x is None` (or `if not x is None` in the opposite case). This is fine, and it works for scalars, lists, sets, numpy ndarrays, ...

2. Testing for empty lists or empty ndarrays:

In principle, `len(x) == 0` will do the trick. **BUT** there are several caveats here:

- `len(scalar)` raises a TypeError, so you will have to use try/except or find some other way of testing for a scalar value
- `len(numpy.array(0))` (i.e. a scalar coded as a numpy array) also raises a TypeError ("unsized object")
- `len([[]])` returns a length of 1, which is somehow understandable, but - I would argue - perhaps not what one might expect initially

Alternatively, numpy arrays have a `size` attribute, and `numpy.array([]).size`, `numpy.array(8.).size`, and `numpy.array([8.]).size` all return what you would expect. Even `numpy.array([[]]).size` gives you 0. Now, if I could convert everything to a numpy array, this might work. But have you ever tried to assign a list of mixed data types to a numpy array? `numpy.array(["a",1,[2,3],(888,9)])` will fail, even though the list inside is perfectly fine as a list.

3. Testing for scalar:

Let's suppose we knew the number of non-empty elements, and this is 1. Then there are occasions when you want to know if you have in fact `6` or `[6]` as an answer (or maybe even `[[6]]`). Obviously, this question is also relevant for numpy arrays. For the latter, a combination of `size` and `ndim` can help. For other objects, I would be tempted to use something like `isiterable()`; however, this function doesn't exist, and there are numerous discussions of how one could or should find out if an object is iterable - none of them truly intuitive. (And is it true that **every** iterable object is a descendant of collections.Iterable?)

4. Finding the number of elements in an object:

From the discussion above, it is already clear that `len(x)` is not very robust for doing this. Just to mention another complication: `len("abcd")` returns 4, even though this is only one string. Of course this is correct, but it's a nuisance if you need to find the number of elements of a list of strings and if it can happen that you have a scalar string instead of a 1-element list. And, believe me, such situations do occur!

5. Forcing a scalar to become a 1-element list:

Unfortunately, `list(77)` throws an error, because 77 is not iterable. `numpy.array(77)` works, but - as we saw above - there will be no len defined for it. Simply writing `[x]` is dangerous, because if x is a list already, it will create `[[77]]`, which you generally don't want. Also, `numpy.array([x])` would create a 2-D array if x is already a 1-D array or a list. Often, it would be quite useful to know for sure that a function result is provided as a list, regardless of how many elements it contains (because then you can write `res[0]` without risking an exception). Does anyone have a good suggestion for this one?

6. Detecting None values in a list:

This is just for completeness. I have seen solutions using `all` which solve this problem (see [question #1270920][1]). I haven't dug into them extensively, but I fear that these will also suffer from the above-mentioned issues if you don't know for sure whether you are starting from a list, a numpy array, or a scalar.

Enough complaining. Here comes my prayer to the Python gods: **Please**

- add a good `isiterable` function
- add a `size` attribute to all objects (I wouldn't mind if this is None in case you don't really know how to define the size of something, but it would be good to have it, so that `anything.size` would never throw an error)
- add an `isscalar` function which would at least try to test if something is a scalar (meaning a single entity). Note that this might give different results compared to `isiterable`, because one would consider a scalar string as a scalar even though it is iterable. And if `isscalar` would throw exceptions in cases where it doesn't know what to do: fine - this can be easily captured.
- enable the `len()` function for scalar variables such as integers or floats. I would tend to think that 1 is a natural answer to what the length of a number is.

[1]: http://stackoverflow.com/questions/1270920/most-concise-way-to-check-whether...
On Fri, Jun 14, 2013 at 3:12 PM, Martin Schultz <maschu09@gmail.com> wrote:
1. Testing for None:
From what I read elsewhere, the preferred solution is `if x is None` (or `if not x is None` in the opposite case). This is fine and it works for scalars, lists, sets, numpy ndarrays,...
Should actually be ``if x is not None``. [SNIP]
 add a good `isiterable` function
Done: isinstance(x, collections.abc.Iterable)
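For instance (Python 3.3+, where the ABCs live in collections.abc; note that strings count as iterable under this test, which matters for the later points in the thread):

```python
from collections.abc import Iterable

# Containers and strings are all iterable...
assert isinstance([1, 2], Iterable)
assert isinstance((1, 2), Iterable)
assert isinstance({1, 2}, Iterable)
assert isinstance("abc", Iterable)

# ...while bare numbers are not.
assert not isinstance(5, Iterable)
assert not isinstance(5.0, Iterable)
```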
add a `size` attribute to all objects (I wouldn't mind if this is None in case you don't really know how to define the size of something, but it would be good to have it, so that `anything.size` would never throw an error)
This is what len() is for. I don't know why numpy doesn't define the __len__ method on their array types for that. We can't force every object to have it as it doesn't make sense in all cases.
add an `isscalar` function which would at least try to test if something is a scalar (meaning a single entity). Note that this might give different results compared to `isiterable`, because one would consider a scalar string as a scalar even though it is iterable. And if `isscalar` would throw exceptions in cases where it doesn't know what to do: fine - this can be easily captured.
The numbers module has a bunch of ABCs you can use to test for integrals, etc. just as I suggested for iterables.
 enable the `len()` function for scalar variables such as integers or floats. I would tend to think that 1 is a natural answer to what the length of a number is.
The len() function is specifically for containers, so it would not make sense to ask for the length of something you can't put something into or iterate over. This is actually a perfect case for using the new singledispatch generic function work that has landed in Python 3.4 (http://docs.python.org/3.4/library/functools.html#functools.singledispatch). With that you could write your own custom_len() function that dispatches on the type to return exactly what you are after. E.g.:

```python
@functools.singledispatch
def custom_length(ob):
    return len(ob)

@custom_length.register(int)
def _(ob):
    return 1
```

And on you go for all the types you want to special-case.
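Brett's sketch, made runnable (requires Python 3.4+ for functools.singledispatch; registering float as well is an addition for illustration, not part of the original example):

```python
import functools

@functools.singledispatch
def custom_length(ob):
    # Default behaviour: defer to the built-in len().
    return len(ob)

@custom_length.register(int)
@custom_length.register(float)
def _(ob):
    # Treat a bare number as a single entity of length 1.
    return 1

assert custom_length([1, 2, 3]) == 3
assert custom_length("abcd") == 4
assert custom_length(77) == 1
assert custom_length(3.14) == 1
```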
On 2013-06-14 21:03, Brett Cannon wrote:
On Fri, Jun 14, 2013 at 3:12 PM, Martin Schultz <maschu09@gmail.com <mailto:maschu09@gmail.com>> wrote:
add a `size` attribute to all objects (I wouldn't mind if this is None in case you don't really know how to define the size of something, but it would be good to have it, so that `anything.size` would never throw an error)
This is what len() is for. I don't know why numpy doesn't define the __len__ method on their array types for that.
It does. It gives the length of the first axis, i.e. the one accessed by simple indexing with an integer: some_array[i]. The `size` attribute gives the total number of items in the possibly-multidimensional array. However, one of the other axes can be 0-length, so the array will have no elements but the length will be nonzero:

```python
>>> np.empty([3, 4, 0])
array([], shape=(3, 4, 0), dtype=float64)
>>> np.empty([3, 4, 0])[1]
array([], shape=(4, 0), dtype=float64)
>>> len(np.empty([3, 4, 0]))
3
```

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
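To see the len/size distinction concretely (an editorial aside; assumes numpy is importable):

```python
import numpy as np

a = np.empty([3, 4, 0])
assert len(a) == 3   # length of the first axis only
assert a.size == 0   # total element count: one axis is 0-length

# A 0-d ("scalar") array has size 1 but no len() at all.
s = np.array(5)
assert s.size == 1
try:
    len(s)
    raised = False
except TypeError:
    raised = True  # "len() of unsized object"
assert raised
```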
On Fri, 14 Jun 2013 21:12:00 +0200, Martin Schultz <maschu09@gmail.com> wrote:
2. Testing for empty lists or empty ndarrays:
In principle, `len(x) == 0` will do the trick. **BUT** there are several caveats here:

- `len(scalar)` raises a TypeError, so you will have to use try/except or find some other way of testing for a scalar value
- `len(numpy.array(0))` (i.e. a scalar coded as a numpy array) also raises a TypeError ("unsized object")
- `len([[]])` returns a length of 1, which is somehow understandable, but - I would argue - perhaps not what one might expect initially

Alternatively, numpy arrays have a `size` attribute, and `numpy.array([]).size`, `numpy.array(8.).size`, and `numpy.array([8.]).size` all return what you would expect. Even `numpy.array([[]]).size` gives you 0. Now, if I could convert everything to a numpy array, this might work. But have you ever tried to assign a list of mixed data types to a numpy array? `numpy.array(["a",1,[2,3],(888,9)])` will fail, even though the list inside is perfectly fine as a list.
In general you test whether or not something is empty in Python by testing its truth value. Empty things are False. Numpy seems to follow this using size, from the limited examples you have given:
```python
>>> bool(numpy.array([[]]))
False
>>> bool(numpy.array([[1]]))
True
```
I have no idea what the definition of numpy.array.size is that it would return 0 for [[]], so its return value obviously defies my intuition as much as len([[]]) seems to have initially defied yours :)
3. Testing for scalar:
Let's suppose we knew the number of non-empty elements, and this is 1. Then there are occasions when you want to know if you have in fact `6` or `[6]` as an answer (or maybe even `[[6]]`). Obviously, this question is also relevant for numpy arrays. For the latter, a combination of `size` and `ndim` can help. For other objects, I would be tempted to use something like `isiterable()`; however, this function doesn't exist, and there are numerous discussions of how one could or should find out if an object is iterable - none of them truly intuitive. (And is it true that **every** iterable object is a descendant of collections.Iterable?)
No, but...I'm not 100% sure about this as I tend to stay away from ABCs myself, but my understanding is that collections.Iterable checks if an object is iterable when you use it in an isinstance check. There are probably ways to fool it, but I think you could argue that any such data types are broken.
4. Finding the number of elements in an object:
From the discussion above, it is already clear that `len(x)` is not very robust for doing this. Just to mention another complication: `len("abcd")` returns 4, even though this is only one string. Of course this is correct, but it's a nuisance if you need to find the number of elements of a list of strings and if it can happen that you have a scalar string instead of a 1-element list. And, believe me, such situations do occur!
len is robust when you consider that it only applies to sequences (see below). (I don't know what it means to "code a scalar as a numpy array", but if it is still a scalar, it makes sense that it raises a TypeError on len...it should.)
5. Forcing a scalar to become a 1element list:
Unfortunately, `list(77)` throws an error, because 77 is not iterable. `numpy.array(77)` works, but - as we saw above - there will be no len defined for it. Simply writing `[x]` is dangerous, because if x is a list already, it will create `[[77]]`, which you generally don't want. Also, `numpy.array([x])` would create a 2-D array if x is already a 1-D array or a list. Often, it would be quite useful to know for sure that a function result is provided as a list, regardless of how many elements it contains (because then you can write `res[0]` without risking an exception). Does anyone have a good suggestion for this one?
Well, no. If the list is empty res[0] will throw an error. You need to know both that it is indexable (note: not iterable...an object can be iterable without being indexable) and that it is non-empty. Well-behaved objects should, I think, pass an isinstance check against collections.Sequence. (I can't think of a good way to check for indexability without the abc.)
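A sketch of that combined check (the helper name first_or_none is invented here for illustration, not from the thread):

```python
from collections.abc import Iterable, Sequence

def first_or_none(res):
    # Safe to index only if res is an indexable, non-empty sequence.
    if isinstance(res, Sequence) and len(res) > 0:
        return res[0]
    return None

assert first_or_none([7, 8]) == 7
assert first_or_none(()) is None     # indexable but empty
gen = (x for x in [1, 2])
assert isinstance(gen, Iterable)     # a generator is iterable...
assert first_or_none(gen) is None    # ...but not indexable, so rejected
```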
Enough complaining. Here comes my prayer to the python gods: **Please**
 add a good `isiterable` function
That would be spelled isinstance(x, collections.Iterable), it seems.
add a `size` attribute to all objects (I wouldn't mind if this is None in case you don't really know how to define the size of something, but it would be good to have it, so that `anything.size` would never throw an error)
Why? What is the definition of 'size' that makes it useful outside of numpy?
add an `isscalar` function which would at least try to test if something is a scalar (meaning a single entity). Note that this might give different results compared to `isiterable`, because one would consider a scalar string as a scalar even though it is iterable. And if `isscalar` would throw exceptions in cases where it doesn't know what to do: fine - this can be easily captured.
This I sort of agree with. I've often enough wanted to know if something is a non-string iterable. But you'd have to decide if bytes/bytearray is a sequence of integers or a scalar...
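A common hand-rolled version of that test; whether bytes and bytearray belong on the "scalar" side is exactly the judgment call being described here (the helper name is hypothetical):

```python
from collections.abc import Iterable

def is_nonstring_iterable(x):
    # Iterable, but not one of the "stringy" types we treat as scalars.
    return (isinstance(x, Iterable)
            and not isinstance(x, (str, bytes, bytearray)))

assert is_nonstring_iterable([1, 2])
assert is_nonstring_iterable((1, 2))
assert not is_nonstring_iterable("abc")
assert not is_nonstring_iterable(b"abc")
assert not is_nonstring_iterable(42)
```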
 enable the `len()` function for scalar variables such as integers or floats. I would tend to think that 1 is a natural answer to what the length of a number is.
That would screw up the ABC type hierarchy...the existence of len indicates that an iterable is indexable.

--David
On 2013-06-14 21:55, R. David Murray wrote:
[SNIP]
In general you test whether nor not something is empty in Python by testing its truth value. Empty things are False. Numpy seems to follow this using size, from the limited examples you have given
```python
>>> bool(numpy.array([[]]))
False
>>> bool(numpy.array([[1]]))
True
```
numpy does not do so. Empty arrays are extremely rare and testing for them rarer (rarer still is testing for emptiness not knowing if it is an array or some other sequence). What people usually want from bool(some_array) is either some_array.all() or some_array.any(). In the face of this ambiguity, numpy refuses the temptation to guess and raises an exception explaining matters.
On 2013-06-14 23:31, Robert Kern wrote:
[SNIP]
numpy does not do so. Empty arrays are extremely rare and testing for them rarer (rarer still is testing for emptiness not knowing if it is an array or some other sequence). What people usually want from bool(some_array) is either some_array.all() or some_array.any(). In the face of this ambiguity, numpy refuses the temptation to guess and raises an exception explaining matters.
Actually, that's a bit of a lie. In the empty case and the one-element case, we do return a bool: False for empty and bool(element) for whatever that one element is. Anything else raises the exception, since we don't know whether it is all() or any() that was desired.
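In code (an editorial aside; assumes numpy, and since the empty-array case has been deprecated in newer numpy releases, only the one- and many-element cases are asserted):

```python
import numpy as np

# One element: bool() is the truth value of that single element.
assert bool(np.array([3])) is True
assert bool(np.array([0])) is False
assert bool(np.array(0)) is False    # 0-d scalar arrays work too

# More than one element: ambiguous (any()? all()?), so numpy raises.
try:
    bool(np.array([1, 2]))
    raised = False
except ValueError:
    raised = True
assert raised
```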
On 06/14/2013 04:55 PM, R. David Murray wrote:
This I sort of agree with. I've often enough wanted to know if something is a non-string iterable. But you'd have to decide if bytes/bytearray is a sequence of integers or a scalar...
In fifteen years of Python programming, I have literally *never* wanted to iterate over 'str' (or now 'bytes'). I've always considered the fact that Python made them iterable by default (rather than e.g. defining a method / property to get to an iterable "view" on the underlying string) a wart and a serious bug magnet.

Tres.

--
Tres Seaver          +1 540-429-0999          tseaver@palladion.com
Palladion Software   "Excellence by Design"   http://palladion.com
I never want to iterate, but I love slice syntax and indexing. Don't think you can have that w/o being able to loop over it can you? Maybe I'm just thinking slow since I just woke up.

On Jun 15, 2013, at 8:53 AM, Tres Seaver <tseaver@palladion.com> wrote:
[SNIP]
On 06/15/2013 09:15 AM, Donald Stufft wrote:
I never want to iterate, but I love slice syntax and indexing. Don't think you can have that w/o being able to loop over it can you? Maybe I'm just thinking slow since I just woke up.
You could if '__iter__' raised an error ('NotIterable', maybe) which defeated the '__getitem__'/'__len__'-based fallback.

Tres.
On 6/15/2013 8:53 AM, Tres Seaver wrote:
In fifteen years of Python programming, I have literally *never* wanted to iterate over 'str' (or now 'bytes').
If so, it is because you have always been able to use pre-written methods and functions that internally do the iteration for you.
I've always considered the fact that Python made them iterable by default (rather than e.g. defining a method / property to get to an iterable "view" on the underlying string)
.__iter__ is that method. But this is off-topic for python-dev.

--
Terry Jan Reedy
On 06/15/2013 04:11 PM, Terry Reedy wrote:
On 6/15/2013 8:53 AM, Tres Seaver wrote:
In fifteen years of Python programming, I have literally *never* wanted to iterate over 'str' (or now 'bytes').
If so, it is because you have always been able to use pre-written methods and functions that internally do the iteration for you.
Given that strings are implemented in C, there is no real "iteration" happening (in the Python sense) in their methods. What stdlib code can you point to that does iterate over them in Python? I.e.:

```python
for c in stringval:
    ...
```

Even if there were, aren't you proving my point? The fact that strings are implicitly iterable injects all kinds of fur into methods which take either a single value or an iterable as an argument, e.g.:

```python
def api_function(value_or_values):
    if isinstance(value_or_values, STRING_TYPES):
        value_or_values = [value_or_values]
    elif isinstance(value_or_values, collections.abc.Iterable):
        value_or_values = list(value_or_values)
    else:
        value_or_values = [value_or_values]
```

The bugs that leaving the first test out yields pop up all over the place.
I've always considered the fact
that Python made them iterable by default (rather than e.g. defining a method / property to get to an iterable "view" on the underlying string)
.__iter__ is that method.
But it is *on by default*: I was describing something which has to be requested in ways that don't get triggered by syntax, and doesn't make strings look "non-scalar" for APIs like the one above.

Tres.
On 6/15/2013 5:45 PM, Tres Seaver wrote:
Given that strings are implemented in C,
That is a current implementation detail. String functions were originally written in Python in string.py. Some used 'for c in s:'. The functions only became methods after 2.2. I presume PyPy starts from Python code again.
there is no real "iteration" happing (in the Python sense) in their methods.
I disagree, but not relevant.
What stdlib code can you point to that does iterate over them in Python? I.e.:
for c in stringval:
Using Idle's grep (Find in Files) for ...Python33/Lib/*.py:

- 'for c in': 193 hits
- 'for ch in': 30 hits
- 'for chr in': 0 hits
- 'for char in': 14 hits

Some, especially in the 193, are false positives, but most are not. There are at least a few other 'for name in string' uses: for instance, 'for a in _hexdig for b in _hexdig', where _hexdig is '0123456789ABCDEFabcdef'.

--
Terry Jan Reedy
Your questions/suggestions are off-topic for this list. This belongs on python-ideas.

On 14 June 2013 20:12, Martin Schultz <maschu09@gmail.com> wrote:
2. Testing for empty lists or empty ndarrays:
4. Finding the number of elements in an object:
6. Detecting None values in a list:
Each of the problems above relates to wanting to use generic Python containers interchangeably with numpy arrays. The solution is simply not to do this; there are too many semantic inconsistencies for this interchangeability to be safe. Here are just a few of them:
```python
>>> import numpy as np
>>> [1, 2, 3] + [4, 5, 6]
[1, 2, 3, 4, 5, 6]
>>> np.array([1, 2, 3]) + np.array([4, 5, 6])
array([5, 7, 9])
>>> [1, 2, 3] * 3
[1, 2, 3, 1, 2, 3, 1, 2, 3]
>>> np.array([1, 2, 3]) * 3
array([3, 6, 9])
>>> [[1, 2, 3], [4, 5, 6]] + [7, 8, 9]
[[1, 2, 3], [4, 5, 6], 7, 8, 9]
>>> np.array([[1, 2, 3], [4, 5, 6]]) + np.array([7, 8, 9])
array([[ 8, 10, 12],
       [11, 13, 15]])
>>> bool([1, 2, 3])
True
>>> bool(np.array([1, 2, 3]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
Numpy provides a very convenient function that will convert objects of all the mentioned types to ndarray objects: the numpy.array function. If you have a function and want to accept lists (of lists ...) and ndarrays interchangeably, simply convert your input with numpy.array at the start of the function, i.e.:

```python
def square(numbers):
    numbers = numpy.array(numbers)
    print(numbers.size, numbers.shape)
    # etc.
    return numbers ** 2  # Would be an error for a list of lists
```
3. Testing for scalar:
5. Forcing a scalar to become a 1-element list:
```python
numbers = np.array(possibly_scalar_input)
if not numbers.shape:
    print('Numbers was scalar: coercing...')
    numbers = np.array([numbers])
```

Usually it is bad design to have an API that interchangeably accepts scalars and sequences. There are some situations where it does make sense, such as numpy's ufuncs. These are scalar mathematical functions that can operate distributively on lists of lists, ndarrays etc., and numpy provides a convenient way for you to define these yourself:
```python
>>> def square(x):
...     return x ** 2
...
>>> square = np.frompyfunc(square, 1, 1)
>>> square(2)
4
>>> square([2])
array([4], dtype=object)
>>> square([2, 3, 4])
array([4, 9, 16], dtype=object)
>>> square([[2, 3, 4]])
array([[4, 9, 16]], dtype=object)
>>> square([[]])
array([], shape=(1, 0), dtype=object)
```
The frompyfunc function unfortunately doesn't quite work like a decorator, but you can easily wrap it to do so:

```python
def ufunc(nin, nout):
    def wrapper_factory(func):
        return np.frompyfunc(func, nin, nout)
    return wrapper_factory

@ufunc(1, 1)
def square(x):
    return x ** 2
```

Oscar
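For the scalar-coercion problem specifically, numpy also ships np.atleast_1d, which wraps scalars into 1-element arrays without adding a spurious dimension to input that is already 1-D (an editorial aside; assumes numpy is importable):

```python
import numpy as np

assert np.atleast_1d(77).shape == (1,)                # scalar -> 1-element array
assert np.atleast_1d([1, 2, 3]).shape == (3,)         # list stays 1-D
assert np.atleast_1d(np.array([1, 2])).shape == (2,)  # 1-D array untouched
assert np.atleast_1d([[1, 2]]).shape == (1, 2)        # higher dims preserved
assert np.atleast_1d(77)[0] == 77                     # res[0] is now always safe
```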
participants (8)

- Brett Cannon
- Donald Stufft
- Martin Schultz
- Oscar Benjamin
- R. David Murray
- Robert Kern
- Terry Reedy
- Tres Seaver