Why return None?

Alex Martelli aleaxit at yahoo.com
Fri Aug 27 12:02:05 CEST 2004

Antoon Pardon <apardon at forel.vub.ac.be> wrote:
> > Yes you can, and in the general case get very different effects, e.g.:
> And what about
>   a += b    vs   a.extend(b)

I can go on repeating "in the general case [these constructs] get very
different effects" just as long as you can keep proposing, as if they
might be equivalent, constructs that just aren't so in the general case.

Do I really need to point out that a.extend(b) doesn't work for tuples
and strings, while a+=b works as polymorphically as feasible on all
these types?  It should be pretty obvious, I think.  So, if you want to
get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
is clearly the way to go -- if you want para-polymorphic behavior in
those cases, a+=b.  Isn't it obvious, too?
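A quick sketch of the asymmetry described above (the values are just illustrative): += rebinds for immutable sequences and strings, while .extend() simply doesn't exist on them.

```python
# += works across sequence types; .extend() exists only on
# mutable sequences such as list.
a = (1, 2)
a += (3,)          # rebinds a to a brand-new tuple
assert a == (1, 2, 3)

s = "ab"
s += "cd"          # rebinds s to a brand-new string
assert s == "abcd"

try:
    (1, 2).extend((3,))   # tuples have no extend method
except AttributeError:
    pass                  # exactly the error Martelli mentions
```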

> > >>> c=a=range(3)
> > >>> b=range(2)
> > >>> a+=b
> > >>> c
> > [0, 1, 2, 0, 1]
> >
> > versus:
> >
> > >>> c=a=range(3)
> > >>> b=range(2)
> > >>> a=a+b
> > >>> c
> > [0, 1, 2]
> I wouldn't say you get different effects in *general*. You get the
> same effect if you use numbers or tuples or any other immutable
> object.

a+=b is defined to be: identical to a=a+b for immutable objects being
bound to name 'a'; but not necessarily so for mutable objects -- mutable
types get a chance to define __iadd__ and gain efficiency through
in-place mutation for a+=b, while the semantics of a=a+b strictly forbid
in-place mutation.  *IN GENERAL*, the effects of a+=b and a=a+b may
differ, though in specific cases ('a' being immutable, or of a mutable
type which strangely chooses to define __add__ but not __iadd__) they
may be identical.  Like for a+b vs b+a: in general they may differ, but
they won't differ if the types involved just happen to have commutative
addition, or if a and b are equal or identical objects, i.e., in various
special cases.
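The __iadd__ hook can be made concrete with a toy mutable class (Bag is hypothetical, invented purely for illustration): += goes through __iadd__ and mutates the shared object, while a = a + b goes through __add__ and builds a fresh one.

```python
class Bag:
    """Toy mutable type illustrating the __iadd__ hook."""
    def __init__(self, items):
        self.items = list(items)
    def __add__(self, other):
        # a + b: must build and return a NEW object
        return Bag(self.items + other.items)
    def __iadd__(self, other):
        # a += b: free to mutate self in place for efficiency
        self.items += other.items
        return self

a = b = Bag([1, 2])
a += Bag([3])
assert a is b and b.items == [1, 2, 3]   # += mutated the shared object

a = b = Bag([1, 2])
a = a + Bag([3])
assert a is not b and b.items == [1, 2]  # a+b left the original alone
```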

"You get different effects *in general*" does not rule out that there
may be special cases (immutable types for one issue,
commutative-addition types for another, etc, etc) in which the effects
do not differ.  Indeed, if it were "always" true that you got different
effects, it would be superfluous to add that "in general" qualifier.
Therefore, I find your assertion that you "wouldn't say you get
different effects in *general*" based on finding special cases in which
the effects do not differ to be absurd and unsupportable.

> > But let's be sensible: if 'it' is joining two strings which are bound to
> > names b and c, b+c is the only OBVIOUS way to do it.  Building a
> > sequence whose items are b and c and calling ''.join on it is clearly an
> > indirect and roundabout -- therefore NOT "the one obvious way"! -- to
> > achieve a result.  Proof: it's so unobvious, unusual, rarely used if
> > ever, that you typed entirely wrong code for the purpose...
> That is just tradition. Suppose the "+" operator hadn't worked
> on strings and concatenating had from the start been done by joining;
> then that would have been the one obvious way to do it.

In a hypothetical language without any + operator, but with both unary
and binary - operators, the one "obvious" way to add two numbers a and b
might indeed be to code:  a - (-b).  So what?  In a language WITH a
normal binary + operator, 'a - (-b)' is nothing like 'an obvious way'.

> > Nobody ever even wished for there to never be two sequences of code with
> > the same end-result.  The idea (a target to strive for) is that out of
> > all the (probably countable) sequences with that property, ONE stands
> > out as so much simpler, clearer, more direct, more obvious, to make that
> > sequence the ONE OBVIOUS way.
> And what if there are three sequences of code with the same end-result,
> or four? From what number on is it no longer a problem if sequences
> of that length or more produce the same result?

To add N integers that are bound to N separate identifiers, there are
(quite obviously) N factorial "sequences of [the same] length" producing
the same result.  Is it "a problem"?  I guess it may be considered a
minor annoyance, but it would be absurd to try and do something against
it, e.g. by arbitrary rules forbidding addition between variables except
in alphabetical order.  Practicality beats purity.

> > We can't always get even that, as a+b vs
> > b+a show when a and b are bound to numbers, but we can sure get closer
> > to it by respecting most of GvR's design decisions than by offering
> > unfounded, hasty and badly reasoning critiques of them.
> I think that this goal of GvR is a bad one. 

I'm sure you're a better language designer than GvR, since you're
qualified to critique, not just a specific design decision, but one of
the pillars on which he based many of the design decisions that together
made Python.
Therefore, I earnestly urge you to stop wasting your time critiquing an
inferiorly-designed language and go off and design your own, which will
no doubt be immensely superior.  Good bye; don't slam the door on the
way out, please.

> If some way of doing it
> is useful then I think it should be included, and the fact that
> it introduces more than one obvious way to do some things shouldn't
> count for much.

This is exactly Perl's philosophy, of course.

> Sure, you shouldn't go the perl-way where things seem to have
> been introduced just for the sake of having more than one obvious way
> to do things. But eliminating possibilities (method chaining)
> just because you don't like them and because they would create
> more than one obvious way to do things, seems just as bad to
> me.

If a language should not eliminate possibilities because its designer
does not like those possibilities, indeed if it's BAD for a language
designer to omit from his language the possibilities he dislikes, what
else should a language designer do then, except include every
possibility that somebody somewhere MIGHT like?  And that IS a far
better description of Perl's philosophy than "just for the sake" quips
(which are essentially that -- quips).

> What I have heard about the decorators is that one of the
> arguments in favor of decorators is, that you have to
> give the name of the function only once, where traditionally
> you have to repeat the function name and this can introduce
> errors.
> But the same argument goes for allowing method chaining.
> Without method chaining you have to repeat the name of
> the object which can introduce errors.

I've heard that argument in favour of augmented assignment operators
such as += -- and there it makes sense, since the item you're operating
on has unbounded complexity... mydict[foo].bar[23].zepp += 1 may indeed
be better than repeating that horrid LHS (although "Demeter's Law"
suggests that such multi-dotted usage is a bad idea in itself, one
doesn't always structure code with proper assignment of responsibilities
to objects and so forth...).

For a plain name, particularly one which is just a local variable and
therefore you can choose to be as simple as you wish, the argument makes
no sense to me.  If I need to call several operations on an object I'm
quite likely to give that object a 'temporary alias' in a local name
anyway, of course:
  target = mydict[foo].bar[23].zepp

Doing just the same thing when I don't need intermediate access to the
object between calls that mutate the object and currently return None is
no hardship, just as it isn't when such access IS needed.  Note that you
couldn't do chaining here anyway, since pop mutates the object but also
returns a significant value...
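The pop remark can be seen directly at the interpreter: sort mutates and returns None, so there is nothing useful to chain on, while pop both mutates and returns a value that the caller typically needs.

```python
somelist = [3, 1, 2]

# sort() mutates in place and returns None -- nothing to chain.
assert somelist.sort() is None
assert somelist == [1, 2, 3]

# pop() mutates AND returns the removed item, so its return value
# is already spoken for; "chaining" it would mean something else.
last = somelist.pop()
assert last == 3 and somelist == [1, 2]
```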

> >> The difference between
> >> 
> >>   print somelist.sort()
> >> 
> >> and
> >> 
> >>   somelist.sort()
> >>   print somelist
> >> 
> >> 
> >> is IMO of the same order as the difference between
> >> 
> >> 
> >>   print a + b
> >> 
> >> and
> >> 
> >>   r = a + b
> >>   print r
> >
> > For a sufficiently gross-grained comparison, sure.  And so?  In the
> > second case, if you're not interested in having the value of a+b kept
> > around for any subsequent use, then the first approach is the one
> > obvious way;
> No it isn't, because programs evolve. So you may think you don't
> need the result later on, but that may change, so writing it
> the second way will make changes easier later on.

Ridiculous.  Keep around a+b, which for all we know here might be a
million-items list!, by having a name bound to it, without ANY current
need for that object, because some FUTURE version of your program may
have different specs?!
If specs change, refactoring the program written in the sensible way,
the way that doesn't keep memory occupied to no good purpose, won't be
any harder than refactoring the program that wastes megabytes by always
keeping all intermediate results around "just in case".

> > if you ARE, the second, because you've bound a name to it
> > (which you might have avoided) so you can reuse it (if you have no
> > interest in such reuse, it's not obvious why you've bound any name...).
> >
> > In the first case, fortunately the first approach is illegal, the second
> > one is just fine.  Were they exactly equivalent in effect neither would
> > be the one obvious way for all reasonable observer -- some would hate
> > the side effect in the first case, some would hate the idea of having
> > two statements where one might suffice in the second case.
> So? I sometimes get the idea that people here can't cope with
> differences in how people code. So any effort must be made
> to force people to code in one specific way.

When more than one person cooperates in writing a program, the group
will work much better if there is no "code ownership" -- the lack of
individualized, quirky style variations helps a lot.  It's not impossible
to 'cope with differences' in coding style within a team, but it's just
one more roadblock erected to no good purpose.  A language can help the
team reach reasonably uniform coding style (by trying to avoid offering
gratuitous variation which serves no real purpose), or it can hinder the
team in that same goal (by showering gratuitous variation on them).

> > Fortunately the first approach does NOT do the same thing as the second
> > (it prints out None:-) so Python sticks to its design principles.  Let
> > me offer a private libation to whatever deities protect programmers,
> > that Python was designed by GvR rather than by people able to propose
> > analogies such as this last one without following through on all of
> > their implications and seeing why this SHOWS Python is consistent in
> > applying its own design principles!
> That these implications are important is just an implication of the
> design principles. If someone doesn't think particular design principles
> are that important, he doesn't care if, when something is changed, that
> particular design principle will be violated. Personally I'm not
> that impressed with the design of python, it is a very useful language

Great, so, I repeat: go away and design your language, one that WILL
impress you with its design.  Here, you're just wasting your precious
time and energy, as well of course as ours.

> but having operators like '+=' which have a different kind of result
> depending on whether you have a mutable or immutable object is IMO
> not such a good design and I wonder what design principle inspired
> them.

Practicality beats purity: needing to polymorphically concatenate two
sequences of any kind, without caring if one gets modified or not, is a
reasonably frequent need and is quite well satisfied by += for example.
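That polymorphic-concatenation need might be sketched like this (the helper name is invented for illustration): one function serves lists, tuples and strings alike, precisely because += falls back to __add__ rebinding when no __iadd__ exists.

```python
def concat(a, b):
    """Concatenate two sequences of the same kind, not caring
    whether a is mutated in place (list) or rebound (tuple, str)."""
    a += b      # __iadd__ if available, else a = a + b
    return a

assert concat([1, 2], [3]) == [1, 2, 3]     # list: mutated in place
assert concat((1, 2), (3,)) == (1, 2, 3)    # tuple: rebound to new object
assert concat("ab", "cd") == "abcd"         # str: rebound to new object
```

Note the asymmetry the thread is about: when a is a list, the caller's list is modified too; when it is a tuple or string, the caller's object is untouched.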


More information about the Python-list mailing list