Powersets of a list?
David C. Ullrich
ullrich at math.okstate.edu
Tue May 29 10:05:05 EDT 2001
On Mon, 28 May 2001 17:19:30 +0200, "Alex Martelli"
<aleaxit at yahoo.com> wrote:
>"David C. Ullrich" <ullrich at math.okstate.edu> wrote in message
>news:3b125c53.441935 at nntp.sprynet.com...
> ...
[...]
>
>> I think that comparing AlexClass with the others
>> is like testing the relative speed of range()
>> versus xrange(); of course xrange will be faster
>> because it hasn't actually constructed the entire
>> list in the sense that range has. (And of course
>> the AlexClass version is the one you'd want in some
>> situations, for just that reason.)
>
>Please note (I think it's clear in the code I posted)
>that I'm forcing the "virtual sequence" as embodied
>in AlexClass to be "spelled out" in its entirety as
>I measure its performance.
Yes, it's clear - I wasn't paying attention, sorry.
> Look at the lines actually
>being timed in function timetest:
> start=time.clock()
> rs = fo(seq)
> lrs = [x for x in rs]
> stend=time.clock()
>the list comprehension 'copying' rs into lrs is there
>for this purpose (as well as to allow comparison of
>results for correctness in the following lines:
> lrs.sort()
> assert base is None or base == lrs
>but I would have taken the list-comprehension out
>of the timed-code if I had wanted to make the class
>approach look better:-). And indeed, as evidenced
>by the couple of output lines quoted above, AlexClass
>IS slower than Alex when measured in this way.
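[For concreteness, the timed fragment quoted above amounts to something like the following sketch. `timetest` is pieced together from the lines Alex quotes, with the surrounding bookkeeping from his original post omitted; `listpowerset` is a trivial stand-in for whichever powerset function is under test, and `time.perf_counter()` stands in for the `time.clock()` of the original.]

```python
import time

def timetest(fo, seq, base=None):
    """Time fo(seq), forcing any 'virtual sequence' to be spelled out."""
    start = time.perf_counter()      # original used time.clock()
    rs = fo(seq)
    lrs = [x for x in rs]            # the 'copying' list comprehension
    stend = time.perf_counter()
    # Correctness check against a reference result, when one is given.
    lrs.sort()
    assert base is None or base == lrs
    return stend - start, lrs

def listpowerset(seq):
    # Trivial stand-in: build all subsets eagerly, for demonstration.
    result = [[]]
    for item in seq:
        result += [sub + [item] for sub in result]
    return result

elapsed, subsets = timetest(listpowerset, [1, 2])
print(subsets)   # [[], [1], [1, 2], [2]]
```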
>
>It MAY come in handy in memory-constrained situations,
>given that the faster approaches seem hard-ish to turn
>into iterators without some kind of 'generator'
>underlying mechanism (users of stackless must be
>having lots of fun right now:-). I suspect that
>some minor optimizations to it are feasible. E.g.,
>the list of O(N) one-bit bitmasks, taking up O(N*N)
>space, could be precomputed and stored at __init__
>time rather than re-built at each __getitem__ call...
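[For the record, the optimization described here might look like the sketch below. `Powerset` is my name for the class, not anything from the thread; the point is just that the N one-bit masks are built once in `__init__` instead of being rebuilt on every `__getitem__` call.]

```python
class Powerset:
    """Virtual sequence of all 2**N subsets of a list.

    The one-bit masks (1, 2, 4, ..., 2**(N-1)) are precomputed once
    in __init__ rather than re-built at each __getitem__ call.
    """

    def __init__(self, seq):
        self.seq = list(seq)
        # Precompute the N one-bit bitmasks up front.
        self.masks = [1 << i for i in range(len(self.seq))]

    def __len__(self):
        return 1 << len(self.seq)

    def __getitem__(self, n):
        if not 0 <= n < len(self):
            raise IndexError(n)
        # Bit i of n decides whether seq[i] is in the n-th subset.
        return [item for item, mask in zip(self.seq, self.masks)
                if n & mask]

ps = Powerset('abc')
print(ps[0])    # []
print(ps[5])    # ['a', 'c']  (5 is binary 101)
print(len(ps))  # 8
```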
Well, believe it or not, something exactly like the
Powerset class with a __getitem__ that calculates
the n-th subset when requested was one of the first
things I thought of. Didn't mention it because I
didn't see how to make it work without those bitshifts
(or a loop something like

    whichbit = 0
    while n:
        if n % 2:
            # include L[whichbit] in the output
        n = n // 2
        whichbit = whichbit + 1

or a version of that with divmod or something.) Thought
about it a little, looking for the sort of optimization
you're talking about, didn't see how that would go.
Hence the question: Huh? You got me all confused again -
I can't figure out what NxN list of bitmasks is going to
help here.
(The only NxN array of bits that seems at all relevant
is where the n-th bitmask is all 0's except for a 1 in
the n-th spot. I don't see how this helps at all with
__getitem__(self, x) if x is an integer between 0 and
2**N - 1. If x is supposed to be a list of N bits then
I see how the __getitem__ would work, but that's not
what you meant, is it? If _that's_ what you mean then
the question would be where we get the list of N bits
to pass to __getitem__ in the first place...)
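[To spell out the loop version from above as a standalone function (a sketch; `nth_subset` is my name for it): the binary digits of x, for x between 0 and 2**N - 1, pick out which elements of L land in the x-th subset.]

```python
def nth_subset(L, x):
    """Return the x-th subset of L, for 0 <= x < 2**len(L).

    Peels off one binary digit of x per iteration; digit i decides
    whether L[i] goes into the result.
    """
    out = []
    whichbit = 0              # initialized once, outside the loop
    while x:
        x, bit = divmod(x, 2)
        if bit:
            out.append(L[whichbit])
        whichbit = whichbit + 1
    return out

print(nth_subset(['a', 'b', 'c'], 6))  # ['b', 'c']  (6 is binary 110)
print(nth_subset(['a', 'b', 'c'], 0))  # []
```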
??? You'd think I'd be used to missing the point
by now.
>it may still be a reasonable space cost to pay
>compared with the O(N*2**N) space needed all at
>once by faster approaches. Or, if one has gmpy
>around, bit-testing might be faster with it...
>
>
>Alex
David C. Ullrich
*********************
"Sometimes you can have access violations all the
time and the program still works." (Michael Caracena,
comp.lang.pascal.delphi.misc 5/1/01)