>>> There's been some discussion about automatic test discovery lately.
>>> Here's a random (not in any way thought through) idea: add a builtin
>>> function test() that runs tests associated with a given function,
>>> class, module, or object.
>> Improved testing is always welcome, but why a built-in?
>> I know testing is important, but is it so common and important that we
>> need it at our fingertips, so to speak, and can't even import a module
>> first before running tests? What's the benefit to making it a built-in
>> instead of part of a test module?
> The advantage would be a uniform and very simple interface for testing any
> module, without having to know whether I should import doctest,
> unittest or something else (and having to remember the commands
> used by each framework). It would certainly not be a replacement for more
> advanced test frameworks.
By making it a builtin it's also pointing out to users that
code-testing is an important part of the python culture (as well as
good development practice). It may seem easy "just to do a module
import and then run the imported test function", but such a construct
says that testing is just an optional thing among many dozens of
modules within python.
As for a name, Guido's criticism aside, I do like it spelled
test() with usage very similar to the builtin help()
function--both would be accessing the same docstrings but for two
different purposes. I think it would add a lot of encouragement for
the use of doctest (one of my favorites) as well as facilitate good
test-driven development. And, regarding the name, if any function
deserves the name test() it would be this builtin--all others would
necessarily be secondary. But if there's rancor regarding the name,
call it testdoc() or something.
Personally, I'm +2 on the idea, but that may only be in cents....
PS. Add test() to the GSoC suggestion of improving doctest with
scope-aware doc-test variables (for easing setup code between
(Originally posted to python-dev, discussion moved here per request by
GvR; also fixed pseudo-code to not use a keyword as local var)
Begin forwarded message:
> I was recently reviewing some Python code for a friend who is a C++
> programmer, and he had code something like this:
> def foo():
>     attempt = 0
>     while attempt < MAX:
>         ret = bar()
>         if ret: break
>         ++attempt
> I was a bit surprised that this was syntactically valid, and because
> the timeout condition only occurred in exceptional cases, the error
> has not yet caused any problems.
> It appears that the grammar treats the above example as the unary +
> op applied twice:
> u_expr ::=
>     power | "-" u_expr
>     | "+" u_expr | "~" u_expr
> Playing in the interpreter, expressions like "1+++++++++5" and
> "1+-+-+-+-+-+-5" evaluate to 6.
> I'm not an EBNF expert, but it seems that we could modify the grammar
> to be more restrictive so the above code would not be silently
> valid. E.g., "++5" and "1+++5" and "1+-+5" are syntax errors, but
> still keep "1++5", "1+-5", "1-+5" as valid. (Although, '~' throws in
> a kink... should '~-5' be legal? Seems so...)
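To see concretely what the current grammar accepts, here is a short runnable sketch (plain Python, nothing assumed beyond the standard library):

```python
import ast

# "1+++++++++5" parses as 1 + (+(+(+(+(+(+(+(+5)))))))):
# one binary plus followed by eight unary pluses.
assert eval("1+++++++++5") == 6
assert eval("1+-+-+-+-+-+-5") == 6  # the six unary minuses cancel out

# The AST makes the nesting explicit: "++5" is UAdd applied twice.
node = ast.parse("++5", mode="eval").body
assert isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.UAdd)
assert isinstance(node.operand, ast.UnaryOp)
```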
(This is a reply to Joe's post on python-dev)
That looks like a good solution.
> The downside I see with your rules is that combinations like
> "~+~-~+~-" would still be valid, but if people want to write obfuscated
code, there are always ways to do it. Forbidding the examples that you
gave (and the ones I gave) is still a positive move, in my opinion.
On 27 Mar 2009, at 12:15, Joe Smith wrote:
> Jared Grubb wrote:
>> I'm not an EBNF expert, but it seems that we could modify the
>> grammar to be more restrictive so the above code would not be
>> silently valid. E.g., "++5" and "1+++5" and "1+-+5" are syntax
>> errors, but still keep "1++5", "1+-5", "1-+5" as valid. (Although,
>> '~' throws in a kink... should '~-5' be legal? Seems so...)
> So you want something like
> u_expr ::=
>     power | "-" xyzzy_expr | "+" xyzzy_expr | "~" u_expr
> xyzzy_expr ::=
>     power | "~" u_expr
> Such that:
> 5 # valid u_expr
> +5 # valid u_expr
> -5 # valid u_expr
> ~5 # valid u_expr
> ~~5 # valid u_expr
> ~+5 # valid u_expr
> +~5 # valid u_expr
> ~-5 # valid u_expr
> -~5 # valid u_expr
> +~-5 # valid u_expr
> ++5 # not valid u_expr
> +-5 # not valid u_expr
> -+5 # not valid u_expr
> --5 # not valid u_expr
> While I'm not a Python developer (just a Python user), that sounds
> reasonable to me, as long as this does not silently change the
> meaning of any expression, but only noisily breaks programs, and
> that the broken constructs are not used frequently.
> Can anybody come up with any expressions that would silently change
> in meaning if the above were applied?
> Obviously a sane name would need to be chosen to replace xyzzy_expr.
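The proposed split can be prototyped as a tiny recursive-descent checker. This is only a sketch: `power` is simplified to a bare integer literal, and the placeholder name `xyzzy_expr` is kept from the post above.

```python
def _power(s):
    # "power" simplified here to one or more digits consuming the whole string
    return s.isdigit()

def u_expr(s):
    # u_expr ::= power | "-" xyzzy_expr | "+" xyzzy_expr | "~" u_expr
    if not s:
        return False
    if s[0] in "+-":
        return xyzzy_expr(s[1:])
    if s[0] == "~":
        return u_expr(s[1:])
    return _power(s)

def xyzzy_expr(s):
    # xyzzy_expr ::= power | "~" u_expr
    if not s:
        return False
    if s[0] == "~":
        return u_expr(s[1:])
    return _power(s)

# The table from the post, reproduced as assertions:
for ok in ["5", "+5", "-5", "~5", "~~5", "~+5", "+~5", "~-5", "-~5", "+~-5"]:
    assert u_expr(ok), ok
for bad in ["++5", "+-5", "-+5", "--5"]:
    assert not u_expr(bad), bad
```

Note that "1+-5" stays valid under this grammar: the binary "+" takes a full u_expr ("-5") as its right operand; only doubled signed prefixes like "++5" are rejected.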
Hello fellow Pythonistas!
On a regular basis I'm bothered and annoyed by the fact that the with
statement takes only one context manager. Often I need to open two files,
to read from one and write to the other. I propose to modify the with
statement to accept multiple context managers.
The nested block::

    with lock:
        with open(infile) as fin:
            with open(outfile, 'w') as fout:
                ...

could be written as::

    with lock, open(infile) as fin, open(outfile, 'w') as fout:
        ...
The context managers' __enter__() methods are called from left to right,
and their __exit__() methods in reverse order, i.e. first in, last out
(FILO). When an __enter__() method raises an exception, the context
managers to its right are skipped.
I'm not sure if I got the grammar right but I *think* the new grammar
should look like::
    with_stmt: 'with' with_vars ':' suite
    with_var: test ['as' expr]
    with_vars: with_var (',' with_var)* [',']
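For reference, the comma form with exactly these semantics did eventually land in Python (2.7/3.1). A quick sketch with toy managers shows the enter/exit ordering described above:

```python
class CM:
    """Toy context manager that records its enter/exit order."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def __enter__(self):
        self.log.append("enter " + self.name)
        return self.name
    def __exit__(self, exc_type, exc, tb):
        self.log.append("exit " + self.name)
        return False  # never suppress exceptions

log = []
with CM("a", log), CM("b", log):
    log.append("body")

# __enter__ ran left to right; __exit__ in reverse (first in, last out)
assert log == ["enter a", "enter b", "body", "exit b", "exit a"]
```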
> On Tue, Mar 24, 2009, Roy Hyunjin Han wrote:
>> I know that Python has iterator methods called "sorted" and "reversed" and
>> these are handy shortcuts.
>> Why not add a new iterator method called "shuffled"?
You can already write:
sorted(s, key=lambda x: random())
But nobody does that. So you have a good
indication that the proposed method isn't needed.
>>>>> "Raymond" == Raymond Hettinger <python(a)rcn.com> writes:
>> Note that using sorting to shuffle is likely very inefficient.
Raymond> Who cares? The OP's goal was to save a few programmer clock
Raymond> cycles so he could in-line what we already get from
Who cares? Jeez... did I say something to get your hackles up?
I'm not sure if I saw the original posting, but the one you first reference
in the mailing list archives doesn't say anything about saving clock
cycles. Supposing that is what he was after, posting a cute but O(n lg n)
alternative without saying it's highly inefficient is directly counter to
what you say he was looking for.
The reason I even said anything was because someone (Roy?) then said
"that's nice". That's like someone saying oh, you could do it like this
with bubblesort, someone else saying "that's nice", and there the record
stands, awaiting future generations of uneducated programmers.
Anyway, apologies if you don't care or for commenting out loud on something
that was perhaps obvious to everyone. BTW, I hadn't noticed Antoine's
earlier message amounting to the same thing. He seems to care too :-)
>>>>> "Steven" == Steven D'Aprano <steve(a)pearwood.info> writes:
Steven> On Wed, 25 Mar 2009 07:20:00 am Steven D'Aprano wrote:
>> On Wed, 25 Mar 2009 03:03:31 am Raymond Hettinger wrote:
>> > > On Tue, Mar 24, 2009, Roy Hyunjin Han wrote:
>> > >> I know that Python has iterator methods called "sorted" and
>> > >> "reversed" and these are handy shortcuts.
>> > >>
>> > >> Why not add a new iterator method called "shuffled"?
>> > You can already write:
>> > sorted(s, key=lambda x: random())
>> > But nobody does that. So you have a good
>> > indication that the proposed method isn't needed.
>> That's nice -- not as readable as random.shuffle(s) but still nice. And
>> fast too: on my PC, it is about twice as fast as random.shuffle() for
>> "reasonable" sized lists (tested up to one million items).
Note that using sorting to shuffle is likely very inefficient.
The sort takes O(n lg n) comparisons whereas you can do a perfect
Fisher-Yates (aka Knuth) shuffle with <= n swaps. The model of
computation here is different (comparisons vs swaps), but there is a vast
literature on number of swaps done by sorting algorithms. In any case
there's almost certainly no reason to use anything other than the standard
Knuth shuffle, which is presumably what random.shuffle implements.
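For comparison, a sketch of both approaches side by side (random.shuffle does implement the Fisher-Yates algorithm; the helper names here are made up for illustration):

```python
import random

def fisher_yates(seq, rand=random.random):
    """Return a shuffled copy of seq using <= n swaps (Fisher-Yates)."""
    out = list(seq)
    for i in range(len(out) - 1, 0, -1):
        j = int(rand() * (i + 1))  # uniform index in [0, i]
        out[i], out[j] = out[j], out[i]
    return out

def sort_shuffle(seq):
    """The O(n lg n) trick from the thread: sort by a random key.

    The key is computed once per element (decorate-sort-undecorate),
    so this is correct, just asymptotically slower than Fisher-Yates.
    """
    return sorted(seq, key=lambda x: random.random())

data = list(range(100))
assert sorted(fisher_yates(data)) == data  # a permutation of the input
assert sorted(sort_shuffle(data)) == data
```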
There's been some discussion about automatic test discovery lately.
Here's a random (not in any way thought through) idea: add a builtin
function test() that runs tests associated with a given function,
class, module, or object.
>>> import myproject
>>> test(myproject)
By default, test(obj) could simply run all doctests in docstrings
attached to obj. For modules, it could also look for unittest.TestCase
instances, and perhaps do some more advanced test discovery. test()
could implement some keyword options to control exactly what and what
not to do. There could perhaps also be a corresponding __test__
method/function for implementing custom test runners.
I've found, as I write more and more decorators, that I often need an
identity function. For example I might write:

    if reason == "good reason":
        return lambda x: x
    # do fancy stuff here
I hate lambdas, so usually I write a small named identity function instead.
It'd be nice to have a shortcut in the stdlib, though. Would this fit
well in the operator or functools module?
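The pattern the post describes, sketched with a made-up `trace` decorator factory (only the identity idea comes from the thread; everything else is illustrative):

```python
def identity(x):
    """Return the argument unchanged."""
    return x

def trace(enabled=True):
    """Hypothetical decorator factory: a no-op when tracing is disabled."""
    if not enabled:
        return identity  # instead of lambda x: x
    def decorator(func):
        def wrapper(*args, **kwargs):
            print("calling", func.__name__)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@trace(enabled=False)
def add(a, b):
    return a + b

assert add(2, 3) == 5
assert add.__name__ == "add"  # identity left the function untouched
```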
Yes, it's true that you can easily do the pack part with
zip(*[iter(l)]*size), and the sliding part with a zip over shifted
slices of the list, e.g. zip(*[l[i:] for i in range(size)]).
You can also do both at once, but what you get is more complicated.
It's also true that with izip you get an iterator.
I use this pack function a lot in my code, and it's more readable
than the zip version. The question is whether people really use this
kind of function on lists, or if it's just me (which is entirely
possible).
On Fri, Mar 20, 2009 at 3:07 PM, Isaac Morland <ijmorlan(a)uwaterloo.ca> wrote:
> On Fri, 20 Mar 2009, paul bedaride wrote:
>> I propose a new function for lists that packs the values of a list
>> and slides over them:
>> then we can do things like this:
>> for i, j, k in pack(range(10), 3, partialend=False):
>> print i, j, k
>> I propose this because I often need pack and slide functions over
>> lists, and this one combines the two in a generator way.
> See the Python documentation for zip():
> And this article in which somebody independently rediscovers the idea:
> Summary: except for the "partialend" parameter, this can already be done in
> a single line. It is not for me to say whether this nevertheless would be
> useful as a library routine (if only perhaps to make it easy to specify
> "partialend" explicitly).
> It seems to me that sometimes one would want izip instead of zip. And I
> think you could get the effect of partialend=True in 2.6 by using
> izip_longest (except with an iterator result rather than a list).
>> def pack(l, size=2, slide=2, partialend=True):
>>     length = len(l)
>>     for p in range(0, length - size, slide):
>>         def packet():
>>             for i in range(size):
>>                 yield l[p + i]
>>         yield packet()
>>     p = p + slide
>>     if partialend or length - p == size:
>>         def packet():
>>             for i in range(length - p):
>>                 yield l[p + i]
>>         yield packet()