Hi all,

There's a striking asymmetry between the wonderful flexibility in passing values into functions (positional args, keyword args, default values, *args, **kwargs, ...) and the limited options for processing the return values (assignment). Hence, whenever I upgrade a function with a new keyword arg and a default value, I do not have to change any of the existing calls, whereas whenever I add a new element to its output tuple, I find myself chasing all existing code to upgrade the corresponding assignments with an additional (unused) variable.

So I was wondering whether this was ever discussed before (and recorded) inside the Python community. (Naively, what seems to be missing is the ability to use the assignment machinery that binds a function's formal parameters to the given actual argument list also in the context of a return-value assignment.)

cheers,
Luc
To deal specifically with adding a new value to a returned tuple, you could write your function calls to truncate the tuple to the expected length, e.g.

    def myfunc():
        ...
        return (result1, result2, newresult)

    x, y = myfunc()[:2]
    x, y, z = myfunc()[:3]

So you would have to change all the relevant function calls, but only once.

More generally, perhaps you could return a dictionary, although this makes the function calls a bit more awkward:

    results = myfunc()
    x, y = results['result1'], results['result2']

Best wishes
Rob Cliffe

On 13/01/2011 14:30, Luc Goossens wrote:
cheers, Luc _______________________________________________ Python-ideas mailing list Python-ideas@python.org http://mail.python.org/mailman/listinfo/python-ideas
On 01/13/2011 09:30 AM, Luc Goossens wrote:
You can achieve something similar with PEP 3132's Extended Iterable Unpacking:
>>> def f(): return 0, 1, 2, 3
...
>>> a, b, c, d, *unused = f()
>>> a, b, c, d, unused
(0, 1, 2, 3, [])
If you add more return values, they show up in unused.
>>> def f(): return 0, 1, 2, 3, 4
...
>>> a, b, c, d, *unused = f()  # note caller is unchanged
>>> a, b, c, d, unused
(0, 1, 2, 3, [4])
Or you could return dicts. Eric.
Luc Goossens
Hence, whenever I upgrade a function with a new keyword arg and a default value, I do not have to change any of the existing calls, whereas whenever I add a new element to its output tuple, I find myself chasing all existing code to upgrade the corresponding assignments with an additional (unused) variable.
If your function is returning a bunch of related values in a tuple, and that tuple keeps changing as you re-design the code, that's a code smell. The tuple should instead be a user-defined type (defined with 'class'), the elements of the tuple should instead be attributes of the type, and the return value should be a single object of that type. The type can grow new attributes as you change the design, without the calling code needing to know every attribute.

This refactoring is called "Replace Array With Object" <http://www.refactoring.com/catalog/replaceArrayWithObject.html> in the Java world, but it's just as applicable in Python.

-- 
"How wonderful that we have met with a paradox. Now we have some hope of making progress." --Niels Bohr

Ben Finney
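For concreteness, a minimal sketch of the "Replace Array With Object" refactoring in Python; the class and attribute names here are invented for illustration:

```python
# New attributes can be added to DivisionResult later without
# touching existing callers, which name only what they use.
class DivisionResult:
    def __init__(self, quotient, remainder):
        self.quotient = quotient
        self.remainder = remainder

def divide(x, y):
    return DivisionResult(x // y, x % y)

r = divide(17, 5)
q = r.quotient    # callers pick the attributes they care about
```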
Hi Eric (and Rob, and Ben, ...),

Sorry, maybe this was not clear from my mail, but I am not so much interested in possible work-arounds as in why this asymmetry exists in the first place. I mean, is there a reason why it is the way it is, or is it just that nobody ever asked for anything else?

cheers,
Luc

On Jan 13, 2011, at 4:00 PM, Eric Smith wrote:
Luc Goossens wrote:
Hi all,
There's a striking asymmetry between the wonderful flexibility in passing values into functions (positional args, keyword args, default values, *args, **kwargs, ...) and the limited options for processing the return values (assignment).
You're not limited to returning tuples. You could return an object with named attributes, or a namedtuple, or even just a dict. There's precedent in the standard library, for example os.stat. Except in the case of tuple unpacking, this does mean that assigning the result of the function call is a two-stage procedure:

    t = func(x, y, z)
    a, b, c = t.spam, t.ham, t.cheese

but it does give you flexibility in adding new return fields without having to update function calls that don't use the new fields.

-- 
Steven
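A minimal sketch of the namedtuple variant of this advice; the function body and field names are invented for illustration:

```python
from collections import namedtuple

# The result keeps working as a plain tuple, so old-style unpacking
# survives while new fields can be added at the end later.
Spam = namedtuple('Spam', ['spam', 'ham', 'cheese'])

def func(x, y, z):
    return Spam(spam=x + y, ham=y + z, cheese=x * z)

t = func(1, 2, 3)
a, b, c = t.spam, t.ham, t.cheese   # the two-stage access from the post
p, q, r = t                         # ordinary tuple unpacking still works
```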
On 01/13/2011 10:21 AM, Luc Goossens wrote:
Hi Eric (and Rob, and Ben, ...),
Sorry maybe this was not clear from my mail but I am not so much interested in possible work-arounds but in why this asymmetry exists in the first place. I mean is there a reason as to why it is the way it is, or is it just that nobody ever asked for anything else.
If the system automatically ignored "new" return values (for whatever "new" might mean), I think it would be too easy to miss return values that you don't mean to be ignoring. Eric.
If the return value is an instance of a class, then to extend the return
value you just add a new instance attribute to the class. If a class feels
too heavy duty, use a single named tuple and access its elements with dot
notation.
Either method is guaranteed not to break until you remove an instance
attribute or element, at which point it doesn't make sense to do anything
else.
On Thu, Jan 13, 2011 at 10:33 AM, Eric Smith
Sounds like you'd be happier returning named tuples. I'm sure I saw something about a named tuple package change recently but I can't find it now. Perhaps someone else will have it. --rich On 1/13/11 06:30 , Luc Goossens wrote:
On Thu, Jan 13, 2011 at 10:21 AM, Luc Goossens
Sorry maybe this was not clear from my mail but I am not so much interested in possible work-arounds but in why this asymmetry exists in the first place.
It looks like you are asking why tuple-unpacking syntax does not support all the options available to argument passing. Part of it (variable-length unpacking) is the subject of PEP 3132 <http://www.python.org/dev/peps/pep-3132/>, which was approved, but the implementation was postponed due to the moratorium on language changes in effect for the 3.2 release. Note that PEP 3132 does not really achieve symmetry with argument passing, because it makes (a, *x, b) = ... valid while f(a, *x, b) is not.
I mean is there a reason as to why it is the way it is, or is it just that nobody ever asked for anything else.
No one has ever proposed a design in which tuple unpacking and argument passing is "symmetric". This may very well be impossible.
On Thu, Jan 13, 2011 at 2:12 PM, Alexander Belopolsky
.. Part of it (the variable length unpacking) is the subject of PEP 3132 http://www.python.org/dev/peps/pep-3132/, which was approved, but the implementation was postponed due to the moratorium on language changes in effect for 3.2 release.
I should have checked before posting. In py3k:
>>> a, *b = range(10)
>>> b
[1, 2, 3, 4, 5, 6, 7, 8, 9]
See also http://bugs.python.org/issue2292 .
On 2011-01-13, at 20:12 , Alexander Belopolsky wrote:
I mean is there a reason as to why it is the way it is, or is it just that nobody ever asked for anything else.
No one has ever proposed a design in which tuple unpacking and argument passing is "symmetric". This may very well be impossible.
Indeed. Barring partial dictionary matching (take PEP 3132 and now do the same with dicts) *and* a new object which combines attributes of tuples and dictionaries (and lets the user match both at once; some kind of ordered dictionary with offsets, so the initial values can be purely positional), I don't see how it would be possible to replicate Python's breadth of argument unpacking in return values. And considering Python's attitude toward functional programming and pattern matching (these days it runs the opposite way as fast as possible and doesn't stop until it's gone through a few borders), I don't see it happening any time soon, even ignoring the debatable value of the scheme.
On Thu, Jan 13, 2011 at 8:30 AM, Luc Goossens
I have often thought that I'd like a way to represent the arguments to a function. (args, kwargs) is what I usually use, but func(*thing[0], **thing[1]) is very unsatisfying. I'd like, um, func(***thing) ;)

Interestingly, you have traditionally been able to do things like "def func(a, (b, c))" (removed in py3, right?) -- but it created a sense of symmetry between assignment and function signatures. But of course keyword arguments aren't quite the same (nor are named parameters, but I'll ignore that). So it would be neat if you could do:

    (a, b, c=3) = func(...)

where this was essentially like:

    result = func(...)
    (a, b) = result.args
    c = result.kwargs.get('c', 3)

where result was some new tuple-dict hybrid object.

-- 
Ian Bicking | http://blog.ianbicking.org
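The proposed assignment syntax doesn't exist, but the runtime half of the idea can be sketched today. Everything below (the Result class, func, and this calling convention) is hypothetical illustration, not an existing API:

```python
# Hypothetical tuple-dict hybrid return object.
class Result:
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs
    def __iter__(self):
        # plain tuple unpacking still sees the positional part
        return iter(self.args)

def func():
    return Result(1, 2, c=30)

result = func()
(a, b) = result.args
c = result.kwargs.get('c', 3)   # the default fires only if 'c' is absent
print(a, b, c)                  # 1 2 30
```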
On Thu, Jan 13, 2011 at 2:27 PM, Masklinn
.. I don't see how it would be possible to replicate Python's breadth of arguments unpacking in return values.
What I do miss sometimes is the ability to inject the contents of a dictionary into locals. For example, when I get the results of a database query as a list of dictionaries or named tuples, I would like to do something like

    for <locals> in sql('select name, age from students'):
        print(name, age)

I can achieve that with hacks like

    for x in sql('select name, age from students'):
        locals().update(x)
        print(name, age)

but I don't think this is guaranteed to work, and it is ugly and inefficient.
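That doubt is well founded: in CPython, locals() inside a function returns a snapshot, and updating it is not guaranteed to create or rebind local variables. A sketch of a workaround that stays within defined behavior, using operator.itemgetter as a reusable unpacking spec; the rows below stand in for the results of the sql() call in the post:

```python
from operator import itemgetter

# Pretend these dictionaries came back from the database query.
rows = [{'name': 'ann', 'age': 21}, {'name': 'bob', 'age': 22}]

get = itemgetter('name', 'age')   # the unpacking spec, written once
for name, age in map(get, rows):
    print(name, age)
```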
On Thu, Jan 13, 2011 at 8:30 AM, Luc Goossens
wrote:

There's a striking asymmetry between the wonderful flexibility in passing values into functions (positional args, keyword args, default values, *args, **kwargs, ...) and the limited options for processing the return values (assignment).
As others have mentioned, if you return a dictionary or a named tuple from the function, you will have a little more flexibility with respect to argument order. In the end, no matter what is done, there is still going to be a pretty tight semantic coupling between what a function returns and how the caller accesses it, so there are limits to what you can achieve with syntax.

I would like to note that the complexity of passing arguments into functions is not a pure win. The flexibility has a cost in terms of complexity, learnability, speed, and implementation challenges. ISTM that very few people fully grok all of the existing capabilities. I don't think that adding yet more complexity to the language would be a net win. As C++ has shown, when you start getting too feature-rich, the features will interact in unexpected ways. For example, how would all those options for processing return values interact with augmented assignment?

Raymond

FWIW, here's an excerpt from Grammar/Grammar:

    funcdef: 'def' NAME parameters ['->' test] ':' suite
    parameters: '(' [typedargslist] ')'
    typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [','
           ['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]]
         | '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef)
    tfpdef: NAME [':' test]
    varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [','
           ['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]]
         | '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef)
    vfpdef: NAME
On Thu, Jan 13, 2011 at 9:21 AM, Luc Goossens
Sorry maybe this was not clear from my mail but I am not so much interested in possible work-arounds but in why this asymmetry exists in the first place. I mean is there a reason as to why it is the way it is, or is it just that nobody ever asked for anything else.
There are two kinds of asymmetry here. One is semantic and one is syntactic.

1. Semantically, function calls are fundamentally asymmetric in Python. A call takes as its input a tuple of arguments and a dictionary of keyword arguments, but its output is either a single return value or a single raised exception.

2. Syntactically, the syntax for composing a value (tuple expressions, list/set/dict displays, constructor calls) differs from the syntax for decomposing a value into its parts (unpacking assignment).

The ML family of programming languages eliminates both asymmetries about as completely as I can imagine. ML functions take one argument and return one value; either can be a tuple. The same pattern-matching syntax is used to cope with parameters and return values. To a very great degree, the syntax for composing a tuple, record, or list is the same as the syntax for decomposing it. So what you're asking for is demonstrably possible, at least in other languages.

So why does Python have these asymmetries?

1. The semantic asymmetry (functions taking multiple parameters but returning a single value) is a subtle thing. Even in Scheme, where conceptual purity and treating continuations as procedures are core design principles of the entire language, this asymmetry is baked into (lambda) and the behavior of function calls. And even in ML there is *some* asymmetry; a function can die with an error rather than return anything. (You can "error out" but not "error in".)

In Python's design, I imagine Guido found this particular asymmetry made the language fit the brain better. It's more like C. The greater symmetry in languages like ML may have felt like too much -- one more unfamiliar thing for new users to trip over. In any case, it would be impractical to change this in Python: it's baked into the language, the implementation, and the C API.
The syntactic asymmetry is made up of lots of little asymmetries, and I think it's enlightening to take a few of them case by case.

(a) You can write

    [a, b] = [1, 2]

but not

    {a, b} = {1, 2}

or

    {"A": a, "B": b} = {"A": 1, "B": 2}

Sets have no deterministic order, so the second possibility is misleading. The third is not implemented, I imagine, purely for usability reasons: it would do more harm than good.

(b) You can write

    x = complex(1, 2)

but not

    complex(a, b) = x

In ML-like languages, you can identify constructors at compile time, so it's clear on the left-hand side of something like this what variables are being defined. In Python it's not so obvious what this is supposed to do.

(c) Unlike ML, you can write

    (a, b) = [1, 2]

or generally

    a, b = any_iterable

It is useful for unpacking to depend on the iterable protocol rather than the exact type of the right-hand side. This is a nicety that ML-like languages don't bother with, afaik.

(d) You can write

    def f(x, y, a): ...
    f(**dct)

but not

    (x, y, a) = **dct

and conversely you can write

    lst[1] = x

but not

    def f(lst[1]): ...
    f(x)

In both cases, I find the accepted syntax sane, and the symmetric-but-currently-not-accepted syntax baffling. Note that in the case of lst[1] = x, we are mutating an existing object, something ML does not bother to make easy.

All four of these cases seem to boil down to what's useful vs. what's confusing. You could go on for some time in that vein.

Hope this helps.
-j
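Case (c) is easy to check interactively; a short sketch showing that unpacking really does go through the iterable protocol rather than requiring a tuple or list:

```python
# Any iterable can appear on the right-hand side of an unpacking
# assignment: lists, generator expressions, ranges, and so on.
a, b = [1, 2]                     # a list
c, d = (n * n for n in (3, 4))    # a generator expression
e, f = range(5, 7)                # a range object
print(a, b, c, d, e, f)           # 1 2 9 16 5 6
```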
On 2011-01-13, at 21:11 , Jason Orendorff wrote:
(c) Unlike ML, you can write (a, b) = [1, 2] or generally a, b = any_iterable It is useful for unpacking to depend on the iterable protocol rather than the exact type of the right-hand side. This is a nicety that ML-like languages don't bother with, afaik.
In no small part because, in ML-type languages (or more generally in functional languages, Erlang could hardly be called an ML-like language) lists (or more generally sequences) and tuples are very different beasts and entirely incompatible. So the language has no way to make sense of a pattern match between a tuple and a list, except maybe by considering that the list is a cons and that a tuple is a dotted pair.
On Thu, Jan 13, 2011 at 3:10 PM, Masklinn
(c) Unlike ML, you can write (a, b) = [1, 2] or generally a, b = any_iterable. It is useful for unpacking to depend on the iterable protocol rather than the exact type of the right-hand side. This is a nicety that ML-like languages don't bother with, afaik.

In no small part because, in ML-type languages (or more generally in functional languages; Erlang could hardly be called an ML-like language), lists (or more generally sequences) and tuples are very different beasts and entirely incompatible.
Well, sure, as far as tuples go. But the point I was making was more general. Python has a notion of "iterable" which covers many types, not just "list". The iterable protocol is used by Python's for-loops, sorted(), str.join() and so on; it's only natural for unpacking assignment to use it as well. As far as I know, most ML languages don't have that notion.* So Python has a reason for this asymmetry that those languages don't have. -j *Haskell, to be sure, has several typeclasses that generalize List, but for whatever reason it is List, and not any of the generalizations, that is baked into the language.
On 01/13/2011 01:41 PM, Ian Bicking wrote:
I think what you're thinking of is a single function signature object that can be passed around as is. In essence, it separates the signature-handling parts of a function out into a separate object. The tricky part is making it easy to get to from inside the function.

    def foo(a, b, c=3) >> foosig:
        return bar(foosig)    # No packing or unpacking here!

    result = foo(*args, **kwds)
    (a, b) = result.args
    c = result.kwds['c']

or...

    locals().update(result)

Ron A.
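A later standard-library addition gets part of the way to this "signature object": inspect.signature (PEP 362, Python 3.3, so it postdates this thread) can bind actual arguments to formal parameters and apply defaults without calling the function at all:

```python
import inspect

def foo(a, b, c=3):
    return a + b + c

sig = inspect.signature(foo)
bound = sig.bind(1, 2)        # same binding rules as a real call
bound.apply_defaults()        # fill in c=3
print(dict(bound.arguments))  # {'a': 1, 'b': 2, 'c': 3}
```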
On 2011-01-13, at 22:31 , Jason Orendorff wrote:
Well, sure, as far as tuples go. But the point I was making was more general. Python has a notion of "iterable" which covers many types, not just "list". The iterable protocol is used by Python's for-loops, sorted(), str.join() and so on; it's only natural for unpacking assignment to use it as well. As far as I know, most ML languages don't have that notion. So Python has a reason for this asymmetry that those languages don't have.

Well yeah, but even if those languages had a higher-level "iterable" abstraction, tuples wouldn't be part of it.
Hi all, Thanks to everybody for your feedback! So I guess the answer to my question (which - I noticed just now - did not end with a question mark), is ... no.
If your function is returning a bunch of related values in a tuple, and that tuple keeps changing as you re-design the code, that's a code smell.
The use cases I have in mind are functions that return a set of weakly related values, or more importantly report on different aspects of the calculation. An example of the first is a divmod function that returns the div and the mod while callers might only be interested in the div; examples of the latter are the time it took to calculate the value, possible warnings that were encountered, ... like the good old errorcode/stdout/stderr trio.
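The built-in divmod has exactly this shape, and the idioms for a caller that wants only the div are already short, if somewhat manual:

```python
# Three ways a caller can ignore part of divmod's return tuple.
q, r = divmod(17, 5)     # both values
q = divmod(17, 5)[0]     # quotient only, by indexing
q, _ = divmod(17, 5)     # conventional throwaway name for the rest
```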
[various workarounds suggested]
The problem with (all) the workarounds that were suggested is that they help with migrating from 2 to more return values; for the 1-to-2 case (the most common case) they don't help a lot, as the amount of work to put the workaround in place exceeds the amount of work to cope with the migration directly. I would say it is a requirement that the simple case -- a single variable gets the single (or first) return value -- retains its current simple notation.
If the system automatically ignored "new" return values (for whatever "new" might mean), I think it would be too easy to miss return values that you don't mean to be ignoring.
This, I guess, is only valid in the case where multiple return values are so strongly related that they probably should be an object instead of a bunch of values.
So it would be neat if you could do:
(a, b, c=3) = func(...)
or, adding keywords to the mix:

    a, b, c = kw1, d = kw2 (defval2) = function(...)

Now for the can of worms...

- one would need some syntactic means to distinguish the returning of two values from the returning of a single pair with two values
- there's a complication with nested function calls (i.e. fun1(fun2(...), fun3(...))); the only simple semantic I could associate with this is to simply drop all return values except for the first, but that is incompatible with returning the full return value of a function without needing to manipulate it ...

Hmm, maybe the second worm above hints at the root problem with multiple return values: there is just no simple way of accommodating them. Too bad :-(

Luc
On Tue, Jan 18, 2011 at 1:04 AM, Luc Goossens
If the system automatically ignored "new" return values (for whatever "new" might mean), I think it would be too easy to miss return values that you don't mean to be ignoring.
this I guess is only valid in the case where multiple return values are so strongly related they probably should be an object instead of a bunch of values
If the relationship between the return values is so weak, I would seriously question the viability of returning them at all.
So it would be neat if you could do:
(a, b, c=3) = func(...)
or adding keywords to the mix
a, b, c = kw1, d = kw2 (defval2) = function(...)
now for the can of worms ...
- one would need some syntactic means to distinguish the returning of two values from the returning of a single pair with two values - there's a complication with nested function calls (i.e. fun1 ( fun2(...), fun3(...)); the only simple semantic I could associate with this, is to simply drop all return values except for the first, but that is incompatible with returning the full return value of a function without needing to manipulate it ...
Hmm, maybe the second worm above hints at the root problem with multiple return values: there is just no simple way of accommodating them.
If you want additional independent return values, use a container (such as a list or dictionary) as an output variable. Even better, if you want to change the return value without having to change every location that calls the function, *create a new function* instead of modifying the existing one. Yes, this means you can sometimes end up with lousy names for functions because the original function used up the best one. Such is life in a world where you need to cope with backwards compatibility issues. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 17 January 2011 15:04, Luc Goossens
Hi all,
Thanks to everybody for your feedback! So I guess the answer to my question (which - I noticed just now - did not end with a question mark), is ... no.
If your function is returning a bunch of related values in a tuple, and that tuple keeps changing as you re-design the code, that's a code smell.
the use cases I have in mind are the functions that return a set of weakly related values, or more importantly report on different aspects of the calculation; an example of the first is a divmod function that returns the div and the mod while callers might only be interested in the div; examples of the latter are the time it took to calculate the value, possible warnings that were encountered, ...
like the good old errorcode/stdout/stderr trio
[various workarounds suggested]
the problem with (all) the workarounds that were suggested is that they help with migrating from 2 to more return values; for the 1 to 2 case (the most common case) they don't help a lot, as the amount of work to put the workaround in place exceeds the amount of work to cope with the migration directly; I would say it is a requirement that the simple case of single variable gets single (or first) return value, retains its current simple notation
If the system automatically ignored "new" return values (for whatever "new" might mean), I think it would be too easy to miss return values that you don't mean to be ignoring.
this I guess is only valid in the case where multiple return values are so strongly related they probably should be an object instead of a bunch of values
So it would be neat if you could do:
(a, b, c=3) = func(...)
or adding keywords to the mix
a, b, c = kw1, d = kw2 (defval2) = function(...)
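Short of new syntax, the closest approximation today is probably a dict return combined with dict.get for defaults — a rough sketch, where the keys and default mirror the hypothetical names above:

```python
def function():
    # hypothetical function returning positional values plus "keyword" extras
    return {'pos': (1, 2, 3), 'kw1': 'extra'}

res = function()
a, b, c = res['pos']
d = res.get('kw2', 'defval2')   # default kicks in when kw2 isn't supplied
```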
now for the can of worms ...
- one would need some syntactic means to distinguish the returning of two values from the returning of a single pair with two values - there's a complication with nested function calls (i.e. fun1(fun2(...), fun3(...))); the only simple semantic I could associate with this is to simply drop all return values except for the first, but that is incompatible with returning the full return value of a function without needing to manipulate it
LISP has a notion of multiple return values. I can't easily find an authoritative reference, but here is a short explanation: http://abhishek.geek.nz/docs/features-of-common-lisp/#Multiple_values

Based on this, you could define a decorator class:

class multiple_values:
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwargs):
        return self.f(*args, **kwargs)[0]
    def all_values(self, *args, **kwargs):
        return self.f(*args, **kwargs)

@multiple_values
def div(x, y):
    return x//y, x%y

Then:
>>> q = div(10, 3)
>>> q
3
>>> q, r = div.all_values(17, 5)
-- Arnaud
If the values involved are sufficiently weakly related, then I question whether it's appropriate to calculate them at all. If the most frequent use is to select out a subset of the values, then even calculating the other values seems like a wasted effort.

To take "average" and "stdev" as an example... If you use an object to represent not the range of return values, but the domain of input values, then you can use @property accessors for the results.

class Statistics(object):
    def __init__(self, list):
        self.list = list

    @property
    def avg(self):
        return ...

    @property
    def stdev(self):
        return ...

    @property
    def inputs(self):
        return self.list

    @property
    def outputs(self):
        return self.avg, self.stdev

Now you have the syntactic appearance of selecting from multiple values in either one step or two, your choice.

x = Statistics([1, 2, 3]).stdev
y, z = Statistics([1, 2, 3]).outputs

p = Statistics([4, 5, 6])
q = p.avg

--rich
You could shorten this...

def __call__(self):
    return self.avg, self.stdev

Now it's even more dense and allows for indexing the results:

p = Statistics([4, 5, 6])()[0]

--rich

On 1/17/11 09:54, K. Richard Pixley wrote:
On 01/17/2011 09:04 AM, Luc Goossens wrote:
Hi all,
Thanks to everybody for your feedback! So I guess the answer to my question (which - I noticed just now - did not end with a question mark), is ... no.
If your function is returning a bunch of related values in a tuple, and that tuple keeps changing as you re-design the code, that's a code smell.
the use cases I have in mind are the functions that return a set of weakly related values, or more importantly report on different aspects of the calculation; an example of the first is a divmod function that returns the div and the mod while callers might only be interested in the div; examples of the latter are the time it took to calculate the value, possible warnings that were encountered, ...
You could use a class instead of a function to get different variations on a function.
>>> class DivMod:
...     def div(self, x, y):
...         return x//y
...     def mod(self, x, y):
...         return x%y
...     def __call__(self, x, y):
...         return x//y, x%y
...
>>> dmod = DivMod()
>>> dmod(100, 7)
(14, 2)
>>> dmod.div(100, 7)
14
>>> dmod.mod(100, 7)
2
Adding methods to time and/or get warnings should be fairly easy. If you do a bunch of these, you can make a base class and reuse the common parts. For timing, logging, and checking returned values of functions, decorators can be very useful.

Cheers,
Ron
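A minimal sketch of the decorator approach Ron mentions (names are illustrative): record the duration of the last call as an attribute on the wrapper, so timing is available out of band and the return value is untouched:

```python
import functools
import time

def timed(f):
    """Expose the duration of the most recent call as wrapper.last_elapsed."""
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = f(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

@timed
def mydivmod(x, y):
    return divmod(x, y)

q, r = mydivmod(100, 7)   # return value unchanged for existing callers
mydivmod.last_elapsed     # timing reported on the side
```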
participants (15)
- Alexander Belopolsky
- Arnaud Delobelle
- Ben Finney
- Daniel da Silva
- Eric Smith
- Ian Bicking
- Jason Orendorff
- K. Richard Pixley
- Luc Goossens
- Masklinn
- Nick Coghlan
- Raymond Hettinger
- Rob Cliffe
- Ron Adam
- Steven D'Aprano