[Python-Dev] For review: PEP 285: Adding a bool type
Guido van Rossum
guido@python.org
Sat, 09 Mar 2002 09:51:52 -0500
[MAL]
> > > If you want to dive into logic here, the only correct output
> > > would be:
> > >
> > > >>> True * 10
> > > True
> > > >>> False * 10
> > > False
[Guido]
> > What has logic got to do with this?
[MAL]
> Booleans are by nature truth values. You can do boolean algebra
> on them, but this algebra is completely different from what
> you get if you treat them as numbers having two values 0 and 1
> (bits). Booleans representing a logical value should not
> implement +, -, *, etc. at all. Instead, they would have to
> be converted to integers first, to implement these operations.
So you're reversing your position. First (see above) you argued that
True*10 should yield True. Now you're arguing that it should raise an
exception.
Anyway, in a brand new language the latter might be a good idea, but
since we're trying to evolve Python without breaking (much) existing
code, we have no choice: bools must behave like ints as much as
possible, especially in arithmetic.
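To make that concrete, here's the kind of arithmetic compatibility I
mean (a sketch of the intended behavior; the dict is just illustrative):

>>> True + True
2
>>> True * 10
10
>>> False * 10
0
>>> d = {0: 'no', 1: 'yes'}
>>> d[1 == 1]            # code that keys dicts on comparison results keeps working
'yes'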
> I think in summary, the confusion we have run into here is
> caused by Python's use of bits to represent booleans.
I don't understand this. The way I see it, current Python doesn't use
bits for Booleans. It has no Boolean type, and the integers 0 and 1
are conventional but not the only ways to spell Boolean outcomes.
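Today, for example, a comparison just hands you a plain int, and any
object can serve as a truth value:

>>> 3 > 2
1
>>> type(3 > 2)
<type 'int'>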
> You would like to move this usage more in the direction of booleans,
> but only half way.
Yes, forced by backwards compatibility.
> Exactly; which is why I don't understand the move to
> override __str__ with something which doesn't have anything
> to do with integers :-)
Probably because we have a different intuition on how str() is used.
For me, it's mostly used as an output conversion. But it seems that
for you, conversion to a decimal int is important because you need to
feed that into other applications that require decimal strings.
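Concretely, the two behaviors on the table look like this (a sketch;
the PEP isn't final). Under the PEP:

>>> str(True)
'True'
>>> "%d" % True        # numeric formatting still yields a digit
'1'

Under your proposal:

>>> str(True)
'1'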
> Heck, it's only one method we're arguing about here.
> Why are all these cycles needed to convince you that backwards
> compatibility is better than the cosmetics of having str(True)
> return 'True' ?
Maybe because at the start it wasn't clear that that is your only
*real* objection, and you brought so many other arguments into play?
Or maybe because you and I are using Python in very different ways, so
that what's obvious for you isn't always obvious for me (and vice
versa)?
> You can always implement the latter as a subtype of bool if you care
> enough and without breaking code.
I thought about this last night, and realized that you shouldn't be
allowed to subclass bool at all! A subclass would only be useful when
it has instances, but the mere existence of an instance of a subclass
of bool would break the invariant that True and False are the only
instances of bool! (An instance of a subclass of C is also an
instance of C.) I think it's important not to provide a backdoor to
create additional bool instances, so I think bool should not be
subclassable.
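(Presumably the implementation would simply refuse the subclass at
class-creation time, something like:)

>>> class MyBool(bool):
...     pass
...
TypeError: type 'bool' is not an acceptable base type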
Of course, you can define your own subclass of int similar to the bool
class I show in the PEP, and you can give it any semantics you want --
but that would also defeat the purpose of having a standard bool.
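A home-grown version might look roughly like this (just a sketch, not
the code from the PEP):

class MyBool(int):
    def __new__(cls, value=0):
        # normalize any truth value to exactly 0 or 1
        return int.__new__(cls, value and 1 or 0)
    def __repr__(self):
        return self and 'MyTrue' or 'MyFalse'
    __str__ = __repr__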
What will happen in reality is that people will have to write:
print repr(b)
rather than:
print b
That's not terribly bad. But I'm not sure I'm ready to answer the
complaints from newbies who are confused by this:
>>> isinstance(1, int)
True
>>> print isinstance(1, int)
1
>>>
So I need someone else to chime in with additional evidence that
str(True) should be '1', not 'True'. So far you've been carrying this
particular argument all by yourself.
--Guido van Rossum (home page: http://www.python.org/~guido/)