
I know this is going to get rejected, but I want to put the idea out there nevertheless. I have loved kwargs and named arguments from the beginning (roughly 18 years now).

I guess you have come across this several times before: in the first version of the API, one variable gets returned. Example:

    status = backend.transmit_data()

But later you want to add something to the API. For the input side of a method this is solved: you can add an optional kwarg. But for the output side of a method, you can't change the interface easily up to now.

Use case: you want to add an optional list of messages which could get returned. You want to change to

    status, messages = backend.transmit_data()

If you have 10 different backend implementations, then you need to change all of them. This is difficult if the backends reside in different repos, and maybe you don't even own some of those repos. Current drawback: you need to change all of them at once.

Of course you could work around it by using

    status_messages = backend.transmit_data()

and then doing some fancy guessing about whether the variable contains only the status, or a tuple containing status and messages.

Some days ago I had the idea that kwargs for return would help here. This should handle both cases:

Case 1: the old backend returns only the status, and the caller wants both status and messages. Somehow the default for messages needs to be defined; in my case it would be the empty list.

Case 2: the new backends return the status and messages. The old caller just gets the status; the messages get discarded.

Above is the use case. What could kwargs for return look like? Maybe like this: .....

Sorry, I could not find a nice, clean and simple syntax for this up to now. Maybe someone else is more creative than I am.

What do you think about this?

Regards,
  Thomas Güttler

-- 
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines
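To make the scenario concrete, here is a minimal sketch of the breakage being described (the Backend classes are invented for illustration; only `transmit_data` comes from the post):

```python
class OldBackend:
    def transmit_data(self):
        return True  # version 1 of the API: a bare status

class NewBackend:
    def transmit_data(self):
        return True, ["retried once"]  # version 2: status plus messages

status = OldBackend().transmit_data()            # old caller, old backend: fine
status, messages = NewBackend().transmit_data()  # new caller, new backend: fine

# New caller, old backend: breaks, which is exactly the problem described.
try:
    status, messages = OldBackend().transmit_data()
except TypeError as exc:
    print("cannot mix old and new:", exc)
```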

On Sat, Jan 26, 2019 at 02:04:12PM +0100, Thomas Güttler Lists wrote:
Example:
status = backend.transmit_data()
But later you want to add something to the API. [...] How could kwargs for return look like?
    return {'status': True, 'messages': []}

Or perhaps better:

    return ResultObject(status=True, messages=[])

I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.

-- 
Steve
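As a sketch of this suggestion (field names taken from the original post; the `defaults` argument to namedtuple requires Python 3.7+):

```python
from collections import namedtuple

# Version 1 of the API returned a bare status; version 2 grows an optional
# field with a default, so old construction sites keep working and old
# callers can keep reading .status (or index 0).
Result = namedtuple("Result", ["status", "messages"], defaults=[()])

old_style = Result(True)                # backend that knows nothing of messages
new_style = Result(True, ("retried",))  # richer backend

assert old_style.status is True
assert old_style.messages == ()
status, messages = new_style            # tuple unpacking still works
```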

I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.
That assumes you knew beforehand to do that. The question is about the normal situation when you didn't.

Also, you totally disregarded the call site, where there is no way to do nice dict unpacking in Python. The tuple case is super special and convenient, but strictly worse than having properly named fields.

To me this question sounds like it's about dict unpacking, with one special case to keep backwards compatibility. This should be possible with a simple dict subclass in some cases...

/ Anders
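One way the closing "simple dict subclass" remark could be read (a speculative sketch, not anything Anders specified): a dict whose iteration yields values in a fixed field order, so tuple-style unpacking and keyed access both work on the same return value:

```python
class Result(dict):
    """Dict whose iteration yields values in declared field order (sketch)."""
    _fields = ("status", "messages")

    def __iter__(self):
        # NOTE: this deliberately changes normal dict iteration semantics
        # (values instead of keys), which is what makes the idea fragile.
        return (self[name] for name in self._fields)

def transmit_data():
    return Result(status=True, messages=["ok"])

res = transmit_data()
status, messages = res        # unpacks like a tuple
assert res["status"] is True  # and still reads like a dict
```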

On Sat, Jan 26, 2019, 6:30 AM Anders Hovmöller <boxed@killingar.net> wrote:
I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.
That assumes you knew before hand to do that. The question is about the normal situation when you didn't.
Also you totally disregarded the call site where there is no way to do a nice dict unpacking in python. The tuple case is super special and convenient but strictly worse than having properly named fields.
To me this question sounds like it's about dict unpacking with one special case to keep backwards compatibility.
My "destructure" module might help. I was playing around with the idea of dict unpacking and extended it to a kind of case matching. https://github.com/selik/destructure Grant Jenks independently came to almost the same idea and implementation.

On Sat, Jan 26, 2019 at 03:29:59PM +0100, Anders Hovmöller wrote:
I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.
That assumes you knew before hand to do that. The question is about the normal situation when you didn't.
Exactly the same can be said about the given scenario with or without this hypothetical "kwargs for return". Thomas talks about having to change a bunch of backends. Okay, but he still has to change them to use "kwargs for return", because they're not using them yet. So there is no difference here.

The point is, you can future-proof your API *right now*, today, without waiting for "kwargs for return" to be added to Python 3.8 or 3.9 or 5000. Return a dict or some object with named fields. Then you can add new fields to the object in the future without breaking backwards compatibility again, since callers that don't expect the new fields will simply ignore them.

Of course, if we aren't doing that *yet*, then doing so for the first time will be a breaking change. But that can be said about any time we change our mind about what we're doing and do something different.
Also you totally disregarded the call site where there is no way to do a nice dict unpacking in python.
It wasn't clear to me that Thomas is talking about dict unpacking. It still isn't. He makes the analogy with passing keyword arguments to a function, where they are collected in a **kwargs dict. That parameter isn't automatically unpacked; you get a dict. So I expect that "kwargs for return" should work the same way: it returns a dict. If you want to unpack it, you can unpack it yourself in any way you see fit.

But perhaps you are correct, and Thomas actually is talking about dict unpacking and not "kwargs for return". Perhaps if he had spent more time demonstrating what he wanted to do with some pseudo-code, and less explaining why he wanted to do it, I might have found his intention more understandable.
The tuple case is super special and convenient but strictly worse than having properly named fields.
In what way is it worse, given that returning a namedtuple with named fields is backwards compatible with returning a regular tuple? We can have our cake and eat it too. Unless the caller does a type-check, there is no difference. Sequence unpacking will still work, and namedtuples unlike regular tuples can support optional attributes.
To me this question sounds like it's about dict unpacking with one special case to keep backwards compatibility. This should be possible with a simple dict subclass in some cases...
This is hardly the first time dict unpacking has been proposed. Each time, the proposals flounder and go nowhere.

What is this "simple dict subclass" that is going to solve the problem? Perhaps you should start by telling us *precisely* what the problem is that your subclass will solve, because I don't know what your idea of dict unpacking is, or how it compares or differs from previous times it has been proposed.

Are there any other languages which support dict unpacking? How does it work there?

-- 
Steve

On Saturday, January 26, 2019, Steven D'Aprano <steve@pearwood.info> wrote:
Perhaps you should start by telling us *precisely* what the problem is that your subclass will solve. Because I don't know what your idea of dict unpacking is, and how it compares or differs from previous times it has been proposed.
Dataclass initialization may be the most useful currently-implemented syntactic sugar for a return-value contract that specifies variable names (and datatypes). Is there a better way than dataclasses to specify a return-object interface with type annotations that throws exceptions at runtime?
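A sketch of that contract idea (names invented for illustration). Note one caveat relevant to the runtime-exceptions question: the annotations document the contract, but dataclasses themselves do not enforce types at runtime.

```python
from dataclasses import dataclass, field

# A return-value contract spelled as a dataclass: names, types and
# defaults declared in one place. Annotations are documentation only;
# passing a wrong type raises nothing at runtime by default.
@dataclass
class TransmitResult:
    status: bool
    messages: list = field(default_factory=list)

def transmit_data() -> TransmitResult:
    return TransmitResult(status=True)

res = transmit_data()
assert res.status is True and res.messages == []
```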
Are there any other languages which support dict unpacking? How does it work there?
This piece about object destructuring in JS is worth a read: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/...

Here are two simple cases:

    var o = {p: 42, q: true};
    var {p: foo, q: bar} = o;
    console.log(foo); // 42
    console.log(bar); // true

Does it throw an exception when a value is undefined? You can specify defaults:

    var {a: aa = 10, b: bb = 5} = {a: 3};
    console.log(aa); // 3
    console.log(bb); // 5
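For comparison, the closest plain-Python analogue of the second JS example uses dict.get with a default; there is no dedicated destructuring syntax:

```python
# Mirror of: var {a: aa = 10, b: bb = 5} = {a: 3};
o = {'a': 3}
aa = o.get('a', 10)  # 3: the key is present, default unused
bb = o.get('b', 5)   # 5: key missing, falls back to the default
```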
-- 
Steve
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

On Sat, Jan 26, 2019 at 11:59:36AM -0500, Wes Turner wrote:
This about object destructuring in JS is worth a read:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/...
Thanks. -- Steve

On Sat, Jan 26, 2019 at 10:31 AM Steven D'Aprano <steve@pearwood.info> wrote:
In what way is it worse, given that returning a namedtuple with named fields is backwards compatible with returning a regular tuple? We can have our cake and eat it too. Unless the caller does a type-check, there is no difference. Sequence unpacking will still work, and namedtuples unlike regular tuples can support optional attributes.

I suppose the one difference is where someone improperly relies on tuple unpacking.

Old version:

    def myfun():
        # ...
        return a, b, c

    # Call site
    val1, val2, val3 = myfun()

New version:

    def myfun():
        # ...
        return a, b, c, d

Now the call site will get "ValueError: too many values to unpack". Namedtuples don't solve this problem, of course. But they don't make anything worse either.

The better approach, of course, is to document the API as only using attribute access, not positional. I reckon dataclasses from the start could address that concern... but so can documentation alone. E.g.:

Old version (improved):

    def myfun():
        mydata = namedtuple("mydata", "a b c")
        # ...
        return mydata(a, b, c)

    # Call site
    ret = myfun()
    val1, val2, val3 = ret.a, ret.b, ret.c

New version (improved):

    def myfun():
        mydata = namedtuple("mydata", "a b c d e")
        # ...
        return mydata(a, b, c, d, e)

Now the call site is completely happy with no changes (assuming it doesn't need to care about what values 'ret.d' or 'ret.e' contain... but presumably those extra values are optional in some way).

Moreover, we are even perfectly fine if we had created namedtuple("mydata", "e d c b a") for some reason, completely changing the positions of all the named attributes in the improved namedtuple.

--
Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

On 1/26/2019 12:30 PM, David Mertz wrote:
[...]
Preventing this automatic unpacking (and preventing iteration in general) was one of the motivating factors for dataclasses: https://www.python.org/dev/peps/pep-0557/#id47

Eric
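The contrast Eric points to can be shown directly (a small sketch):

```python
from collections import namedtuple
from dataclasses import dataclass

Point = namedtuple("Point", "x y")
x, y = Point(1, 2)            # namedtuples silently allow tuple unpacking

@dataclass
class PointDC:
    x: int
    y: int

try:
    x, y = PointDC(1, 2)      # dataclasses are not iterable...
except TypeError:
    unpack_failed = True      # ...so accidental positional reliance fails loudly
```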

Indeed! I promise to use dataclass next time I find myself about to use namedtuple. :-) I'm pretty sure that virtually all my uses will allow that.

On Sat, Jan 26, 2019, 1:09 PM Eric V. Smith <eric@trueblade.com> wrote:
[...]
Preventing this automatic unpacking (and preventing iteration in general) was one of the motivating factors for dataclasses: https://www.python.org/dev/peps/pep-0557/#id47
Eric

On Sat, Jan 26, 2019 at 10:13 AM David Mertz <mertz@gnosis.cx> wrote:
Indeed! I promise to use dataclass next time I find myself about to use namedtuple. :-)
Indeed. IIUC, namedtuple was purposely designed to be able to replace tuples as well as adding the named access. But that does indeed cause potential issues.

However, dataclasses seem kind of heavyweight to me -- am I imagining that, or could one make a named_not_tuple that was appreciably lighter weight (in creation time, memory use, that sort of thing)?

-CHB
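One shape a lightweight "named, not tuple" record could take (a sketch, not a benchmark): a plain class with __slots__, which avoids the per-instance __dict__ that ordinary classes (and, at the time of this thread, dataclasses) carry:

```python
class Result:
    __slots__ = ("status", "messages")   # fixed fields, no per-instance __dict__

    def __init__(self, status, messages=()):
        self.status = status
        self.messages = list(messages)

r = Result(True)
assert r.status is True and r.messages == []
assert not hasattr(r, "__dict__")        # slots: no dict overhead
```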
[...]
-- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

I'm not certain of memory usage. But using 'make_dataclass' makes the "noise" pretty much no worse than namedtuple:

    Person = namedtuple("Person", "name age address")
    Person = make_dataclass("Person", "name age address".split())

Unless you have millions of these objects, memory probably isn't that important. But I guess you might... and namedtuple did sell itself as "less memory than small dictionaries".

On Sat, Jan 26, 2019, 1:26 PM Christopher Barker <pythonchb@gmail.com> wrote:
On Sat, Jan 26, 2019 at 10:13 AM David Mertz <mertz@gnosis.cx> wrote:
Indeed! I promise to use dataclass next time I find myself about to use namedtuple. :-)
Indeed IIUC, namedtuple was purposely designed to be able to replace tuples as well as adding the named access.
But that does indeed cause potential issues. However, dataclasses seem kind of heavyweight to me -- am I imagining that, or could one make a named_not_tuple that was appreciably lighter weight (in creation time, memory use, that sort of thing)?
-CHB
[...]

On 27Jan2019 02:30, Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Jan 26, 2019 at 03:29:59PM +0100, Anders Hovmöller wrote:
I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.
That assumes you knew before hand to do that. The question is about the normal situation when you didn't.
Exactly the same can be said about the given scenario with or without this hypothetical "kwargs for return".
Thomas talks about having to change a bunch of backends. Okay, but he still has to change them to use "kwargs for return" because they're not using them yet. So there is no difference here.
I don't think so. It looks to me like Thomas' idea is to offer a facility a little like **kw in a function, but for assignment.

So in his case, he wants to have one backend start returning a richer result _without_ bringing all the other backends up to that level. This is particularly salient when "the other backends" includes third party plugin facilities, where Thomas (or you or I) cannot update their source.

So, he wants the converse of changing a function which previously was like:

    def f(a, b):

into:

    def f(a, b, **kw):

In Python you can freely do this without changing _any_ of the places calling your function. So, for assignment he's got:

    result = backend.foo()

and he'd like to go to something like:

    result, **kw = richer_backend.foo()

while still letting the older, less rich backends be used in the same assignment.
The point is, you can future-proof your API *right now*, today, without waiting for "kwargs for return" to be added to Python 3.8 or 3.9 or 5000. Return a dict or some object with named fields. [...]
Sure, but Thomas' scenario is one where a non-future-proof API is already in the wild.
Also you totally disregarded the call site where there is no way to do a nice dict unpacking in python.
It wasn't clear to me that Thomas is talking about dict unpacking. It still isn't. He makes the analogy with passing keyword arguments to a function where they are collected in a **kwargs dict. That parameter isn't automatically unpacked, you get a dict.
Yeah, but with a function call, not only do you not need to unpack it at the function receiving end, you don't even need to _supply_ it at the calling end, and you can still use **kw at the function receiving end; it will simply be empty.
So I expect that "kwargs for return" should work the same way: it returns a dict. If you want to unpack it, you can unpack it yourself in anyway you see fit.
Yeah. Or even *a. Like this:

    Python 3.6.8 (default, Dec 30 2018, 12:58:01)
    [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> def f(): return 3
    ...
    >>> def f2(): return 3, 4
    ...
    >>> *x = f()
      File "<stdin>", line 1
    SyntaxError: starred assignment target must be in a list or tuple
    >>> a, *x = f()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: 'int' object is not iterable
    >>> a, *x = f2()

Of course this can't work out of the box with current Python, because assignment unpacking expects the right hand side to be a single unpackable entity, and:

    a, *x = ...

is just choosing to unpack only the first of the values, dropping the rest into x. Of course a **kw analogue is better because it lets one associate names with values.

In terms of syntax, we can't go with:

    a, *x = ...

because precedence lets us write "bare" tuples:

    a, *x = 1, 2, 3

so the right hand side isn't 3 distinct expressions; it is one tuple (yes, made of 3 expressions), and it is the left side choosing to unpack it directly. However:

    a, **kw = ...

is an outright syntax error, leaving a convenient syntactic hole to provide Thomas' notion. In current syntax, the right hand side remains a single expression, and kw will always be an empty dict. The tricky bit isn't the left side, it is what to provide on the right.

Idea: what if **kw meant to unpack RHS.__dict__ (for ordinary objects), i.e. to be filled in with the attributes of the RHS expression value. So, Thomas' old API:

    def foo():
        return 3

and:

    a, **kw = foo()

get a=3 and kw={}. But the richer API:

    class Richness(int):
        def __new__(cls, value):
            # int is immutable, so a subclass customises __new__
            self = super().__new__(cls, value)
            self.x = 'x!'
            self.y = 4
            return self

    def foo_rich():
        return Richness(3)

    a, **kw = foo_rich()

gets a=3 and kw={'x': 'x!', 'y': 4}.

I've got mixed feelings about this, but it does supply the kind of mechanism he seems to be thinking about.

Cheers,
Cameron Simpson <cs@cskk.id.au>

On Sun, Jan 27, 2019 at 03:33:15PM +1100, Cameron Simpson wrote:
I don't think so. It looks to me like Thomas' idea is to offer a facility a little like **kw in function, but for assignment.
Why **keyword** arguments rather than **positional** arguments? Aside from the subject line, what part of Thomas' post hints at the analogy with keyword arguments?

    function(spam=1, eggs=2, cheese=3)

Aside from the subject line, I'm not seeing the analogy with keyword parameters here. If he wants some sort of dict unpacking, I don't think he's said so. Did I miss something?

But in any case, regardless of whether he wants dict unpacking or not, Thomas doesn't want the caller to be forced to update their calls. Okay, let's consider the analogy carefully.

Functions that collect extra keyword args need to explicitly include a **kwargs in their parameter list. If we write this:

    def spam(x): ...

    spam(123, foo=1, bar=2, baz=3)

we get a TypeError. We don't get foo, bar, baz silently ignored. So if we follow this analogy, then dict unpacking needs some sort of "collect all remaining keyword arguments", analogous to what we can already do with sequences:

    foo, bar, baz, *extras = [1, 2, 3, 4, 5, 6, 7, 8]

Javascript ignores extra values:

    js> var [x, y] = [1, 2, 3, 4]
    js> x
    1
    js> y
    2

but in Python, this is an error:

    foo, bar, baz = [1, 2, 3, 4]

So given some sort of "return a mapping of keys to values":

    def spam():
        # For now, assume we simply return a dict
        return dict(messages=[], success=True)

let's gloss over the dict-unpacking syntax, whatever it is, and assume that if a function returns a *single* key:value, and the assignment target matches that key, it Just Works:

    success = spam()

But by analogy with **kwargs, that has to be an error, since there is nothing to collect the unused key 'messages'. It needs to be:

    success, **extras = spam()

which gives us success=True and extras={'messages': []}. But Thomas doesn't want the caller to have to update their code either. To do so would be analogous to having function calls start ignoring unexpected keyword arguments:

    assert len([], foo=1, bar=2) == 0

so *not* like **kwargs at all.
And it would require ignoring the Zen:

    Errors should never pass silently.
    Unless explicitly silenced.
    In the face of ambiguity, refuse the temptation to guess.
So in his case, he wants to have one backend start returning a richer result _without_ bringing all the other backends up to that level. This is particularly salient when "the other backends" includes third party plugin facilities, where Thomas (or you or I) cannot update their source.
I've pointed out that we can solve his use-case by planning ahead and returning an object that can hold additional, optional fields. Callers that don't know about those fields can just ignore them. Backends that don't know to supply optional fields can just leave them out.

Or he can wrap the backend functions in one of at least three Design Patterns made for this sort of scenario: Adaptor, Bridge or Facade, whichever is more appropriate. By decoupling the backend from the frontend, he can easily adapt the result even if he cannot change the backends directly.

So Thomas' use-case already has good solutions. But as people have repeatedly pointed out, they all require some foresight and planning. To which my response is: yes, they do. Just like we have to plan ahead to include *extra in our sequence packing assignments, or **kwargs in our function parameter lists.
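A sketch of the adapter idea applied to the post's transmit_data example (the normalizing wrapper and backend classes are invented for illustration):

```python
def call_backend(backend):
    """Normalize old-style (bare status) and new-style (status, messages) results."""
    result = backend.transmit_data()
    if isinstance(result, tuple):   # new-style backend: (status, messages)
        status, messages = result
    else:                           # old-style backend: bare status
        status, messages = result, []
    return status, messages

class OldBackend:
    def transmit_data(self):
        return True

class NewBackend:
    def transmit_data(self):
        return True, ["warning: slow link"]

# The front end now sees one calling convention, regardless of backend age.
assert call_backend(OldBackend()) == (True, [])
assert call_backend(NewBackend()) == (True, ["warning: slow link"])
```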
So, he wants the converse of changing a function which previously was like:

    def f(a, b):

into:

    def f(a, b, **kw):
Yes, but to take this analogy further, he wants to do so without having to actually add that **kw to the parameter list. So he apparently wants errors to pass silently. Since Thomas apparently feels that neither the caller nor the callee should be expected to plan ahead, while still expecting backwards compatibility to hold even in the event of backwards incompatible changes, I can only conclude that he wants the interpreter to guess the intention of the caller AND the callee and Do The Right Thing no matter what: http://www.catb.org/jargon/html/D/DWIM.html (Half tongue in cheek here.)
In Python you can freely do this without changing _any_ of the places calling your function.
But only because the function author has included **kw in their parameter list. If they haven't, it remains an error.
So, for assignment he's got:

    result = backend.foo()

and he'd like to go to something like:

    result, **kw = richer_backend.foo()

while still letting the older, less rich backends be used in the same assignment.
That would be equivalent to having unused keyword arguments (or positional arguments for that matter) just disappear into the aether, silently with no error or notice. Like in Javascript. And what about the opposite situation, where the caller is expecting two results, but the backend only returns one? Javascript packs the extra variable with ``undefined``, but Python doesn't do that. Does Thomas actually want errors to pass silently? I don't wish to guess his intentions. [...]
Idea: what if **kw mean to unpack RHS.__dict__ (for ordinary objects) i.e. to be filled in with the attributes of the RHS expression value.
So, Thomas' old API:
def foo(): return 3
and:
a, **kw = foo()
get a=3 and kw={}.
Um, no, it wouldn't do that -- it would fail, because ints don't have a __dict__.

And don't forget __slots__. What about properties and other descriptors, private attributes, etc.? Is *every* attribute of an object supposed to be a separate part of the return result? If a caller knows about the new API, how do they directly access the newer fields? You might say:

    a, x, y, **kwargs = foo()

to automatically extract a.x and a.y (as in the example class you gave below), but what if I want to give names which are meaningful at the caller end, instead of using the names foo() supplies?

    a, counter, description, **kwargs = foo()

Now my meaningful names don't match the attributes. Nor does the order I give them. Now what happens?
But the richer API:
class Richness(int):
    def __init__(self, value):
        super().__init__(value)
        self.x = 'x!'
        self.y = 4
[...]
I've got mixed feelings about this.
I don't. -- Steve

I was going to write exactly the same idea Steven did. Right now you can simply design APIs to return dictionaries or, maybe better, namedtuples. Namedtuples are really nice since you can define new attributes when you upgrade an API without breaking any old code that used the prior attributes... Of course, you can only add more, not remove old ones, to assure compatibility. Unlike dictionaries, namedtuples cannot contain arbitrary "keywords" at runtime, which is either good or bad depending on your purposes.

Recently, dataclasses are also an option. They are cool, but I haven't yet had a reason to use them. They feel heavier than namedtuples though (as a programming construct, not talking about memory usage or speed or whatever).

On Sat, Jan 26, 2019, 8:52 AM Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Jan 26, 2019 at 02:04:12PM +0100, Thomas Güttler Lists wrote:
Example:
status = backend.transmit_data()
But later you want to add something to the API. [...] How could kwargs for return look like?
return {'status': True, 'messages': []}
Or perhaps better:
return ResultObject(status=True, messages=[])
I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.
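As a concrete sketch of the namedtuple option with optional fields (the names are invented for illustration, and this assumes Python 3.7+ for the ``defaults`` argument):

```python
from collections import namedtuple

# Illustrative names only -- the old version had just one field:
#     Result = namedtuple("Result", ["status"])
# Adding "messages" with a default keeps old construction sites working.
# (The default [] is a single shared list; fine for a sketch, but don't
# mutate it in real code.)
Result = namedtuple("Result", ["status", "messages"], defaults=[[]])

def old_backend():
    return Result(status=True)            # unchanged old-style code

def new_backend():
    return Result(status=True, messages=["sent"])

assert old_backend().messages == []       # the default fills the gap
assert new_backend().messages == ["sent"]
assert old_backend().status is True       # old field still works as before
```

Callers that only care about ``status`` read ``result.status`` and never notice when new fields appear.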
-- Steve _______________________________________________ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/

On Saturday, January 26, 2019, David Mertz <mertz@gnosis.cx> wrote:
I was going to write exactly the same idea Steven did.
Right now you can simply design APIs to return dictionaries or, maybe better, namedtuples. Namedtuples are really nice since you can define new attributes when you upgrade an API without breaking any old code that used the prior attributes... Of course, you can only add more, not remove old ones, to assure compatibility. Unlike dictionaries, namedtuples cannot contain arbitrary "keywords" at runtime, which is either good or bad depending on your purposes.
Tuples are a dangerous (and classic, legacy) interface contract. NamedTuples must be modified to allow additional (i.e. irrelevant to the function under test) data through the return interface.
Recently, dataclasses are also an option. They are cool, but I haven't yet had a reason to use them. They feel heavier than namedtuples though (as a programming construct, not talking about memory usage or speed or whatever).
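For comparison, the dataclass version of the same pattern might look like this (the class name is made up for illustration; ``default_factory`` gives each instance its own fresh list):

```python
from dataclasses import dataclass, field

@dataclass
class TransmitResult:
    # "messages" was added in a later API version; the default means
    # old-style construction sites keep working unchanged.
    status: bool
    messages: list = field(default_factory=list)

old = TransmitResult(status=True)                  # old-style backend
new = TransmitResult(status=True, messages=["retried once"])

assert old.messages == []
assert new.status and new.messages == ["retried once"]
```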
On Sat, Jan 26, 2019, 8:52 AM Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Jan 26, 2019 at 02:04:12PM +0100, Thomas Güttler Lists wrote:
Example:
status = backend.transmit_data()
But later you want to add something to the API. [...] How could kwargs for return look like?
return {'status': True, 'messages': []}
Or perhaps better:
return ResultObject(status=True, messages=[])
I don't see anything here that can't be done by returning a dict, a namedtuple (possibly with optional fields), or some other object with named fields. They can be optional, they can have defaults, and you can extend the object by adding new fields without breaking backwards compatibility.
-- Steve

I think I "get" what Thomas is talking about here:

Starting with the simplest example, when defining a function, you can have one take a single positional parameter:

def fun(x):
    ...

and you can have code all over the place that calls it:

fun(something)

Later on, if you want to expand the API, you can add a keyword parameter:

def fun(x, y=None):
    ...

And all the old code that already calls that function with one argument still works, and newer code can optionally specify the keyword argument -- this is a really nice feature that makes Python very refactorable.

But for return values, there is no such flexibility -- if you have already written your function with the simple API:

def fun(...):
    ...
    return something

And it is being used already as such:

x = fun()

Then you decide that an optional extra return value would be useful, and you re-write your function:

def fun(...):
    ...
    return something, something_optional

now all the call locations will need to be updated:

x, y = fun()

or maybe:

x, __ = fun()

Sure, if you had had the foresight, then you _could_ have written your original function to return a more flexible data structure (dict, NamedTuple, etc), but, well, we usually don't have that foresight :-).

My first thought was that functions return tuples, so you could document that your function should be called as such:

x = fun()[0]

but, alas, tuple unpacking is apparently automatically disabled for single-value tuples (how do you distinguish a tuple with a single value from the value itself??), so you could do this if you started with two or more return values:

x, y = fun()[:2]

OR you could have your original function return a len-1 tuple in the first place:

def test():
    return (5,)

but then that would require the same foresight.
So: IIUC, Thomas's idea is that there be some way to have "optional" return values. Stabbing at a possible syntax to make the case:

Original:

def fun():
    return 5

called as:

x = fun()

Updated:

def fun():
    return 5, *, 6

Now it can still be called as:

x = fun()

and result in x == 5, or:

x, y = fun()

and result in x == 5, y == 6.

So: syntax aside, I'm not sure how this could be implemented -- as I understand it, functions return either a single value, or a tuple of values -- there is nothing special about how assignment is happening when a function is called. That is:

result = fun()
x = result

is exactly the same as:

x = fun()

So to implement this idea, functions would have to return an object that would act like a single object when assigned to a single name:

x = fun()

but an unpackable object when assigned to multiple names:

x, y = fun()

but then, if you had this function:

def fun():
    return x, *, y

and you called it like so:

result = fun()
x, y = result

either x, y = result would fail, or result would be this "function return object", rather than the single value. I can't think of any way to resolve that problem without really breaking the language.

-CHB

On Sat, Jan 26, 2019 at 9:48 AM Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Jan 26, 2019 at 12:01:55PM -0500, Wes Turner wrote:
Tuples are a dangerous (and classic, legacy) interface contract.
What?
-- Steve
-- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On Sat, Jan 26, 2019, 1:21 PM Christopher Barker
As I understand it, functions return either a single value, or a tuple of values -- there is nothing special about how assignment is happening when a function is called.
No. It's simpler than that! Functions return a single value, period. That single value might happen to be a tuple or something else unpackable. This makes it feel like we have multiple return values, but we never actually do. The fact that "tuples are spelled by commas not by parentheses" makes this distinction easy to ignore most of the time.

On Sat, Jan 26, 2019 at 10:43 AM David Mertz <mertz@gnosis.cx> wrote:
No. It's simpler than that! Functions return a single value, period.
That single value might happen to be a tuple or something else unpackable.
d'uh -- I was thinking common use case.
This makes it feel like we have multiple return values, but we never actually do. The fact that "tuples are spelled by commas not by parentheses" makes this distinction easy to ignore most of the time.
yup. And unpacking behavior. So I guess the correct way to phrase that is that functions return a single object, and that object may or may not be unpackable.

Key to this is that unlike function parameters, python isn't doing anything special when returning a value from a function, or when assigning a value to a name:

* functions return a value
* assignment applies unpacking when assigning to multiple names

These two things are orthogonal in the language. The challenge in this case is that when you assign to a single name, there is no unpacking:

x = 3,

So in this case, the 1-tuple doesn't get unpacked -- when you are assigning to a single name, there is no unpacking to be done. But you can unpack a one-tuple, by assigning to a name with a trailing comma:

In [62]: x, = (3,)
In [63]: x
Out[63]: 3

So the challenge is that to support this new feature, we'd need to change assignment, so that:

x = an_object

would look at an_object, and determine if it was one of these function_return_objects that should have the first value unpacked if assigned to a single name, but unpack the others if assigned to a tuple of names -- and THAT is a pretty big change!

And it would still create odd behavior if the function return value were not assigned right away, but stored in some other container:

a_list = [fun1(), fun2()]

So really, there is no way to create this functionality without major changes to the language.

-CHB

-- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython
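That orthogonality -- the function returns one object, and any unpacking belongs to the assignment -- can be checked directly:

```python
def fun():
    return 5, 6   # the function builds and returns ONE object: the tuple (5, 6)

result = fun()     # single name: no unpacking, result is the whole tuple
x, y = fun()       # two names: unpacking happens in the assignment
first = fun()[0]   # subscripting works regardless of how many names you use

assert result == (5, 6)
assert (x, y) == (5, 6)
assert first == 5
```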

On Sat, Jan 26, 2019 at 10:20:11AM -0800, Christopher Barker wrote:
My first thought was that function return tuples, so you could document that your function should be called as such:
x = fun()[0]
but, alas, tuple unpacking is apparently automatically disabled for single value tuples (how do you distinguish a tuple with a single value and the value itself??)
The time machine strikes again. We have not one but THREE ways of doing so (although two are alternate ways of spelling the same thing):

py> def func():
...     return [1]
...
py> (spam,) = func()  # use a 1-element tuple on the left
py> [spam] = func()  # or a list
py> spam
1
py> spam, *ignore = func()
py> spam
1
py> ignore
[]

But if you're extracting a single value using subscripting on the right hand side, you don't need anything so fancy:

py> eggs = func()[0]  # doesn't matter how many items func returns
py> eggs
1

-- Steve

On Sat, Jan 26, 2019 at 4:01 PM Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Jan 26, 2019 at 10:20:11AM -0800, Christopher Barker wrote:
...
but, alas, tuple unpacking is apparently automatically disabled for single
value tuples (how do you distinguish a tuple with a single value and the value itself??)
The time machine strikes again. We have not one but THREE ways of doing so (although two are alternate ways of spelling the same thing):
py> def func():
... return [1]
Sure, but this requires that you actually return something "unpackable" from the function. As David Mertz pointed out, functions always return a single value, but that value may or may not be unpackable. So the OP's desire, that you could extend a function that was originally written returning a single scalar value to instead return multiple values, and have code that expected a single value still work the same simply isn't possible (without other major changes to Python). -CHB -- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On Sat, Jan 26, 2019 at 10:20:11AM -0800, Christopher Barker wrote: [...]
Starting with the simplest example, when defining a function, you can have one take a single positional parameter: [...] Later on, if you want to expand the API, you can add a keyword parameter:
def fun(x, y=None): ...
And all the old code that already calls that function with one argument still works, and newer code can optionally specify the keyword argument -- this is a really nice feature that makes Python very refactorable.
In the above example, the caller doesn't need to specify ``y`` as a keyword argument, they can call fun(obj) with a single positional argument too. Keyword arguments are a red-herring here. What makes this work is not *keyword arguments* but default values. See below.
But for return values, there is no such flexibility
With no default value, your keyword arguments MUST be supplied and backwards compatibility is broken. Given a default value, the called function merely sees the default value if no other value is given. (You all know how this works, I trust I don't need to demonstrate.)

The symmetrical equivalent for arguments with defaults would be if we could supply defaults to the assignment targets, something like this syntax (for illustration purposes only):

spam, eggs, cheese="cheddar", aardvark=42 = func()

Now func() must return between two and four values. I trust the analogy with parameters with default values is obvious.

Calling a function:

- parameters without a default are mandatory;
- parameters with a default are optional;
- supplied arguments are bound to parameters from left to right;
- any parameters which don't get an argument have the default bound;
- if they don't have a default, it is an error.

Returning from a function (hypothetical):

- assignment targets without a default are mandatory;
- assignment targets with a default are optional;
- returned items are bound to targets from left to right;
- any targets which don't get a result have the default bound;
- if they don't have a default, it is an error.

Note that all of this is based on positional arguments, so presumably it would use sequence unpacking and allow the equivalent of *args to collect additional positional arguments (if any):

spam, eggs, cheese="cheddar", aardvark=42, *extra = func()

Javascript already kind of works this way, because it has a default value of undefined, and destructuring assignment (sequence unpacking) assigns undefined to any variable that otherwise wouldn't get a value:

js> var a, b = 1
js> a === undefined
true
js> b
1

But none of this has anything to do with *keyword arguments*, let alone collecting kwargs as in the subject line. The keyword argument analogy might suggest using some form of dict unpacking, but the complexity ramps up even higher:

1.
At the callee's end, the function returns some sort of mapping between keys and items. For the sake of the argument, let's assume keys must be identifiers, and invent syntax to make it easier:

def func():
    return spam=True, eggs=42, messages=[], cheese=(1, 2)

(syntax for illustration purposes only).

2. At the caller's end, we need to supply the key, the binding target (which may not be the same!), a possible default value, and somewhere to stash any unexpected key:value pairs. Let's say:

spam, eggs->foo.bar, aardvark=None, **extra = func()

might bind:

spam = True
foo.bar = 42
aardvark = None
extra = {'messages': [], 'cheese': (1, 2)}

At this point somebody will say "Why can't we make the analogy between calling a function and returning from a function complete, and allow *both* positional arguments / sequence unpacking *and* keyword arguments / dict unpacking at the same time?". [...]
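For what it's worth, this hypothetical keyword-return can be approximated today without any new syntax: return a plain dict and let the caller pop keys, with defaults for the optional ones (a sketch reusing the same invented names):

```python
def func():
    # stands in for the invented: return spam=True, eggs=42, messages=[], cheese=(1, 2)
    return {"spam": True, "eggs": 42, "messages": [], "cheese": (1, 2)}

result = func()
spam = result.pop("spam")                # mandatory: raises KeyError if missing
eggs = result.pop("eggs")                # the caller can bind this to any name
aardvark = result.pop("aardvark", None)  # optional, with a default
extra = result                           # whatever keys remain, like **extra

assert spam is True and eggs == 42
assert aardvark is None
assert extra == {"messages": [], "cheese": (1, 2)}
```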
Sure, if you had had the foresight, then you _could_ have written your original function to return a more flexible data structure (dict, NamedTuple, etc), but, well, we usually don't have that foresight :-).
*shrug* That can apply to any part of the API. I now want to return an arbitrary float, but I documented that I only return positive ints... if only I had the foresight... [...]
So: IIUC, Thomas's idea is that there be some way to have "optional" return values, stabbing at a possible syntax to make the case: [...] Now it can still be called as:
x = fun()
and result in x == 5
or:
x, y = fun()
and result in x == 5, y == 6
How do you distinguish between these three situations?

# I don't care if you return other values, I only care about the first
x = fun()

# Don't bother unpacking the result, just give it to me as a tuple
x = fun()

# Oops I forgot that fun() returns two values
x = fun()

-- Steve

Thomas Güttler Lists writes:
Example:
status = backend.transmit_data()
But later you want to add something to the [API's return value].
Common Lisp has this feature. There, it's called "multiple values". The called function uses a special form that creates a multiple value. The primary value is always available, whether the return is a normal value or a multiple value. Another special form is used to access the secondary values (and if it's not used, any secondary values are immediately discarded). In Python-like pseudo-code:

def divide(dividend, divisor):
    # A special form, represented as syntax.
    return_multiple_values (dividend // divisor, dividend % divisor)

def greatest_smaller_multiple(dividend, divisor):
    result = divisor * divide(dividend, divisor)
    # Secondary values are not accessible from result.
    return result

def recreate_dividend(dividend, divisor):
    # The ``multiple_values`` builtin is magic: there is no other way
    # to "see" multiple values.  It returns a tuple.
    # For keyword returns, have a convention that the second value
    # is a dict or use an appropriate "UnboxedMultipleValues" class.
    vs = multiple_values(divide(dividend, divisor))
    return vs[0] * divisor + vs[1]

But Common Lisp experience suggests (1) if you want to use this feature to change the API, you generally want to do a full refactoring anyway, because (2) the called function can (and often enough to be problematic, *does*) creep in the direction of *requiring* attention to the secondary values, which can (and sometimes *does*) lead to subtle bugs at call sites that only use the primary value. It's unclear to me that this feature can be safely used in the way your example suggests.

Steve
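A rough Python emulation of the primary-value behaviour (a toy sketch, limited to integer results and not how Common Lisp actually implements it) could piggyback on a subclass:

```python
class MultipleValues(int):
    """Toy emulation of Common Lisp multiple values for integer results:
    behaves as its primary value everywhere, while secondary values ride
    along and are only seen if explicitly asked for."""
    def __new__(cls, primary, *secondary):
        self = super().__new__(cls, primary)
        self.secondary = secondary
        return self

def divide(dividend, divisor):
    return MultipleValues(dividend // divisor, dividend % divisor)

q = divide(17, 5)
assert q * 5 == 15          # the primary value acts like a plain int
assert q.secondary == (2,)  # the remainder only appears when requested
```

Note that arithmetic on ``q`` yields plain ints, so the secondary values are silently dropped downstream -- which is exactly the subtle-bug hazard described above.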

Wow, thank you very much for all those answers and hints to my message. David opened my eyes with this:

Functions return a single value, period.

Yes, this means my question is not about a function, it is about assignment. Dictionary unpacking could be used for my use case. Since it does not exist, I will look at dataclasses.

Thank you very much for your feedback.

Thomas Güttler

On 26.01.19 at 14:04, Thomas Güttler Lists wrote:
I know this is going to get rejected, but I want to speak out the idea nevertheless:
I love kwargs and named arguments from the beginning (roughly 18 years now)
I guess you came across this several times before:
In the first version of the API one variable gets returned:
Example:
status = backend.transmit_data()
But later you want to add something to the API.
For the input part of a method this is solved. You can add an optional kwarg.
But for the output part of a method, there you can't change the interface easily up to now.
Use case: you want to add an optional list of messages which could get returned.
You want to change to
status, messages = backend.transmit_data()
If you have 10 different backend implementations, then you need to change all of them.
This is difficult, if the backends reside in different repos and maybe you even don't own some of these repos.
Current draw-back: you need to change all of them at once.
Of course you could work around it by using this
status_messages = backend.transmit_data()
And then do some fancy guessing if the variable contains only the status or a tuple containing status and messages.
Some days ago I had the idea that kwargs for return would help here.
This should handle both cases:
Case1: The old backend returns only the status, and the caller wants both status and messages. Somehow the default for messages needs to be defined. In my case it would be the empty list.
Case2: The new backends returning the status and messages. The old caller just gets the status. The messages get discarded.
Above is the use case.
How could kwargs for return look like?
Maybe like this:
.....
Sorry, I could not find a nice, clean and simple syntax for this up to now.
Maybe someone else is more creative than I am.
What do you think about this?
Regards, Thomas Güttler
-- Thomas Guettler http://www.thomas-guettler.de/ I am looking for feedback: https://github.com/guettli/programming-guidelines
-- Thomas Guettler http://www.thomas-guettler.de/ I am looking for feedback: https://github.com/guettli/programming-guidelines