Using the heapq module, I found that it's not easy to use; or, at
least, that it could be more straightforward to use.
My main issue is that for correct usage:
- the user must provide an external list, which shouldn't be used for
anything else, so as not to break the invariant
- to alter the heap queue, a bunch of functions must be used, always
passing the external list
I think that changing "external list" to "internal attribute", and
"bunch of functions" to "methods", would make the module easier
to use, safer, and more object oriented.
So, I basically coded it. What do you think about including this class
in the heap module?
"""
class Heap(object):
'''Maintains a heapified list, always conserving the invariant.
Heaps are arrays for which heap[k] <= heap[2*k+1] and
heap[k] <= heap[2*k+2] for all k, counting elements from zero.
'''
def __init__(self, iterable=None):
'''Initializes the heap from any iterable.
>>> Heap([1, 2])
Heap([1, 2])
>>> Heap([])
Heap([])
>>> Heap()
Heap([])
>>> Heap((1, 2, 3))
Heap([1, 2, 3])
>>> Heap(x**2 for x in range(3))
Heap([0, 1, 4])
'''
if iterable is None:
self._queue = []
else:
self._queue = list(iterable)
heapq.heapify(self._queue)
def __repr__(self):
return "Heap(%s)" % self._queue
def push(self, item):
'''Push the item to the heap queue.
>>> h = Heap()
>>> h.push(5)
>>> h.push(4)
>>> h.push(3)
>>> h.push(2)
>>> h.push(1)
>>> h.push(0)
>>> h
Heap([0, 2, 1, 5, 3, 4])
'''
heapq.heappush(self._queue, item)
def pop(self):
'''Pop one item from the heap queue.
>>> h = Heap([0, 2, 1, 5, 3, 4])
>>> h.pop()
0
>>> h.pop()
1
>>> h.pop()
2
>>> h.pop()
3
>>> h.pop()
4
>>> h.pop()
5
>>> h
Heap([])
>>> h.pop()
Traceback (most recent call last):
...
IndexError: index out of range
'''
return heapq.heappop(self._queue)
def pushpop(self, item):
'''Push the item and pop one from the heap queue.
Note that this method is more efficient than calling both
push() and pop() separately.
>>> h = Heap()
>>> h.pushpop(7)
7
>>> h.push(5)
>>> h.push(4)
>>> h.push(3)
>>> h.pushpop(7)
3
>>> h.pushpop(7)
4
>>> h
Heap([5, 7, 7])
'''
return heapq.heappushpop(self._queue, item)
def replace(self, item):
'''Pop one item and push the received one into the heap queue
Note that this method is more efficient than calling both
pop() and push() separately.
>>> h = Heap()
>>> h.replace(3)
Traceback (most recent call last):
...
IndexError: index out of range
>>> h.push(7)
>>> h.replace(2)
7
>>> h
Heap([2])
>>> h.push(3)
>>> h
Heap([2, 3])
>>> h.replace(1)
2
>>> h.replace(9)
1
>>> h
Heap([3, 9])
'''
return heapq.heapreplace(self._queue, item)
"""
Regards,
--
. Facundo
Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/
Hi,
I've created a Python Bytecode Verifier in Python. I'm not a Python
guru, so I borrowed coding patterns from C/C++. I also created this
with C portability in mind. The only reason I used Python was to
experiment with it, and because it was easier to morph the code during
development.
If this program were ported to C it would only need 8 bytes per opcode
(some additional storage to track blocks) and a single pass. I haven't
found backward jumps to previously unused code in any compiled Python
code but it can easily be supported. In that case some more partial
passes are required.
I also was able to successfully verify all Python files in the
Python-2.5.2 source package.
The verification algorithm should be quite complete, but I may have
missed some limitations of the interpreter that could be checked by
the verifier as well.
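The verifier itself isn't reproduced here; as a rough illustration of the raw material such a tool walks over, the standard dis module exposes the opcode stream of any code object (the sample function below is mine, purely for demonstration):

```python
import dis

# A tiny, independent illustration of inspecting a code object's
# opcode stream, one instruction at a time, which is the input a
# bytecode verifier would check for stack and jump consistency.
def sample(x):
    return x * 2 + 1

names = [ins.opname for ins in dis.get_instructions(sample)]
print(names)
```

A real verifier would additionally track the stack depth and block structure at each instruction, as the post describes.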
The ability to create this verifier proved that although Python
bytecode is designed for a dynamically typed interpreter, it is still
easily verifiable. I am willing to port this code to C, but only if
there is a chance of it being included in Python.
I believe that Python in general would benefit from having the ability
to safely load .pyc files and create code objects on the fly.
Both Java and .NET have the ability to safely load compiled byte code.
The .NET Framework, just like Python, also has the ability to create
and execute new code at run-time.
You may feel that enabling closed source applications and/or creating
a multi-language runtime would hurt Python but both of these have
contributed to the success of Java (both the language and the
runtime).
Kornél
There was a somewhat ancient discussion on OrderedDict and OrderedSet
before: http://mail.python.org/pipermail/python-dev/2005-March/051915.html
The resolution seemed to be that neither of them should be in stdlib. Now
that OrderedDict is in and Raymond Hettinger has created a solid OrderedSet
implementation: http://code.activestate.com/recipes/576694/ , could the
latter also be included in collections?
Here's a very generic use-case:
def _update_key(dct, key, val):
    """
    Update a key in dict *dct*. If the key already exists in *dct* but the
    value doesn't, a set of previous values is created and the value added
    to it.
    """
    if key in dct:
        if dct[key] == val:
            return
        s = set(dct[key])
        s.update(val)
        dct[key] = s
    else:
        dct[key] = val
The problem is that I need to both remove duplicates and retain
insertion order, like list.append().
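OrderedSet itself isn't needed to show the desired behaviour; as a minimal sketch (the helper name ordered_unique is mine, not part of Raymond's recipe), dropping duplicates while keeping first-insertion order looks like:

```python
def ordered_unique(items):
    # Keep only the first occurrence of each item, in insertion
    # order, which is exactly what an OrderedSet would give for free.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

ordered_unique([3, 1, 3, 2, 1])  # [3, 1, 2]
```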
Hello,
I have noticed a rather strong and nearly systematic opposition to (new) keywords, and I cannot really figure out why. Would someone clearly expose the reasoning behind keeping the keyword set as small as possible? (I thought of not preventing users from using the same words as names, but this reason does not seem to hold. On the contrary: non-keyword builtins bite painfully!)
Could not find references on the topic -- probably because a search containing "keyword" returns loads of unrelated stuff.
Denis
------
la vita e estrany
I think it would be nice to add a "cache" argument to the property()
constructor. When "cache" is True, property would only ask the getter
function once for the result. This would simplify properties that
require expensive operations to compute.
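No implementation was given; below is a minimal sketch of the intended semantics using a descriptor (the name cached_property and the instance-dict caching strategy are my own illustration, not a proposed API):

```python
class cached_property(object):
    """Illustrative non-data descriptor: the getter runs once per
    instance, then the computed value is stored in the instance dict
    under the same name, shadowing the descriptor, so later lookups
    never call the getter again."""
    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.func(obj)
        obj.__dict__[self.func.__name__] = value
        return value

class Circle(object):
    def __init__(self, r):
        self.r = r
        self.computed = 0  # counts how many times the getter ran

    @cached_property
    def area(self):
        self.computed += 1
        return 3.14159 * self.r ** 2
```

Accessing c.area twice calls the getter only once, since the second lookup finds the cached value in the instance dict.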
Hello
Unless I missed it, I couldn't find a built-in to check whether an
object is iterable, so I wrote this function:
def iterable(ob):
    try:
        iter(ob)
    except TypeError:
        return False
    return True
What about having such a built-in in Python? (with the proper
implementation, of course)
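For completeness, a quick check of the helper on a few values (redefined here so the snippet stands alone):

```python
def iterable(ob):
    # Same logic as in the post: an object is iterable exactly when
    # iter() accepts it without raising TypeError.
    try:
        iter(ob)
    except TypeError:
        return False
    return True

iterable("abc")     # True: strings are iterable
iterable(42)        # False: ints are not
iterable([1, 2])    # True: lists are
```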
Regards
Tarek
--
Tarek Ziadé | http://ziade.org
> spir wrote:
>> On Mon, 27 Apr 2009 20:54:58 +0100,
>> Arnaud Delobelle <arnodel(a)googlemail.com> wrote:
>>
>>
>>>> with A(), B() as a,b:
>>>>     # Code that uses a and b.
>>>>
>>>> This would translate directly to:
>>>>
>>>> with A() as a:
>>>>     with B() as b:
>>>>         # Code that uses a and b.
>>>>
>>> There was a discussion about this on this list:
>>>
>>> http://mail.python.org/pipermail/python-ideas/2009-March/003188.html
>>>
>>> I can't remember the outcome.
>>>
>>
>> There was no clear outcome, for sure ;-)
>>
>> Except maybe it was stated that
>> with A(), B() as a,b:
>> should rather be spelled
>> with A() as a, B() as b:
>> to reuse the syntax of imports.
>>
>> Denis
>>
>> ------
>> la vita e estrany
I agree with the idea of auto-nesting "with"; however, in the case you
pointed out, the main problem was the early evaluation of context
managers. Maybe a solution would be to delay the creation of context
managers, with something like a partial application (cf. functools).
Roughly, we'd need a "delayedNested" function, which takes zero-argument
callables as parameters, and calls/instantiates them inside itself.
Then just call delayedNested(partial(A, ...arguments...), partial(B,
...arguments...)) to have what you want.
Yes I know, it's not pretty (mainly because of the lack of partial
application syntax in Python), but it's just a placeholder ^^
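No such helper exists in the standard library; below is a rough sketch of the idea, named delayed_nested here to mirror the delayedNested above. It is simplified in that an exception raised by the body is not forwarded to the managers' __exit__, unlike a real with statement:

```python
from contextlib import contextmanager
from functools import partial

@contextmanager
def delayed_nested(*factories):
    # Each argument is a zero-argument callable; the context managers
    # are only created, and entered, here, one inside the other, so a
    # failing constructor cannot leak an already-entered manager.
    entered = []
    values = []
    try:
        for factory in factories:
            mgr = factory()
            values.append(mgr.__enter__())
            entered.append(mgr)
        yield tuple(values)
    finally:
        # Simplified cleanup: exits run in reverse order, but any
        # in-flight exception is not passed through to __exit__.
        while entered:
            entered.pop().__exit__(None, None, None)

# A toy manager to show the enter/exit ordering:
@contextmanager
def tag(name, log):
    log.append("enter " + name)
    yield name
    log.append("exit " + name)

log = []
with delayed_nested(partial(tag, "A", log), partial(tag, "B", log)) as (a, b):
    pass
```

After the with block, log holds ["enter A", "enter B", "exit B", "exit A"], the same ordering two nested with statements would produce.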
Regards,
Pascal
>Mathias Panzenböck wrote:
>
>Pascal Chambon wrote:
>>
>> I agree with the idea of auto-nesting "with", however in the case you
>> pointed out, the main problem was the early evaluation of context
>> managers ;
>> maybe a solution would be to delay the creation of context managers,
>> with something like a partial application (cf functools).
>>
>> Roughly, we'd need a "delayedNested" function, which takes
>> zero-argument callables as parameters, and calls/instantiates them
>> inside itself.
>>
>> Then just call delayedNested(partial(A, ...arguments...), partial(B,
>> ...arguments...)) to have what you want.
>>
>
>
>It would be much shorter and more readable to manually nest the with
>statements.
>
> -panzi
Indeed, but the constructs we're talking about would allow nesting dozens
of context managers without any problem.
Well then, you'll ask me, "what kind of pervert would need to
nest dozens of context managers???"; sincerely, I don't know ^^
But since "flat is better than nested", even for just 2 or 3 context
managers, I feel a construct like "with A() as a, B() as b, C() as c:"
is still better than 3 nested with statements.
Regards,
pascal
Hi!
Using contextlib.nested() often gives me headaches... (This is my first
time posting, so please bear with me...) :-)
I often want to write code similar to this:
with contextlib.nested(A(), B()) as (a, b):
    # Code that uses a and b.
My problem is that if B() throws an exception then no context manager
will ever be created to release the resources that A() acquired. This
isn't really contextlib.nested's fault since it never comes into
existence.
It would be really useful to have a shorthand for creating truly
nested with statements. My idea then is this: couldn't the language be
tweaked to handle this? It might look something like this:
with A(), B() as a,b:
    # Code that uses a and b.
This would translate directly to:
with A() as a:
    with B() as b:
        # Code that uses a and b.
I really think that this approach to a shorthand for nested with
statements would be better than using contextlib.nested (and perhaps
this is what contextlib.nested intended to do from the beginning).
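The failure mode is easy to demonstrate: the argument tuple is built before nested(), or any other function, even receives it, so if B's constructor raises, A's cleanup can never run. The toy classes below are mine, purely for illustration:

```python
released = []

class A(object):
    # Stands in for a manager that acquires a resource on creation.
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        released.append("A")

class B(object):
    def __init__(self):
        raise RuntimeError("B failed during construction")

try:
    # Equivalent to calling contextlib.nested(A(), B()): both
    # constructors run while building the argument tuple, so B()
    # raises before any context-manager protocol starts, and A's
    # __exit__ is never invoked.
    args = (A(), B())
except RuntimeError:
    pass

released  # []: the resource A acquired was never released
```

With true language-level nesting, A's __exit__ would run because A would already be entered before B() is evaluated.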
Regards,
Mattias
Mike Meyer wrote:
> This is just arguing about the meaning of "syntactic sugar". In some
> sense, almost all control constructs are syntactic sugar, as all you
> really need is lambda and if
> (http://en.wikipedia.org/wiki/Lambda_Papers). However, anything that
> lets me reduce the line count by removing boilerplate or - even
> better, duplicate code - moves out of that category for me. Please
> feel free to disagree.
Then by your own definition do/while has yet to move out of the category
of "syntactic sugar". do/while does not reduce line count, and if it
could be said to remove boilerplate or duplicate code it does so to a
vanishingly small degree.
So what's left? My working definition of "syntactic sugar" is:
"likely-redundant syntax that makes expressing a program more appealing
somehow". I think do/while is syntactic sugar. Is it worth making the
language bigger to add it? Particularly when we can't agree on what
would be good syntax? The fact that we're still having the debate
implies... well, that people like arguing on the internet I guess. But
do/while can be found in Pascal, and C / C++, and all the C clones
(Java, C#) cloned that too, so perhaps it has merit.
Anyway, this is one reason why I like the "do:" statement I proposed.
(The other one--the one where I also proposed "continue if" and "break
if". But given your reaction to those let's ignore 'em for now.) This
isn't *pure* sugar--it's a mildly novel flow control construct for
structured programming. Certainly Python doesn't have anything like it
right now; "while True:" isn't an exact match (and "while False:" is
equivalent to "if 0:"). My "do:" would allow spelling do/while as:
do:
    something()
    if condition: continue
I think that's a reasonable do/while, and so economical: we only added
one new keyword. This form of "do:" has other uses too, as per my
previous email.
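For comparison, here is how such a loop is usually spelled in today's Python, with a concrete toy body (doubling a value until it reaches 100, numbers chosen only for illustration) and the test at the bottom so the body runs at least once:

```python
# Idiomatic present-day spelling of do/while: "while True" plus a
# bottom-of-loop test, the pattern the proposed "do:" would replace.
n = 1
steps = 0
while True:
    n *= 2
    steps += 1
    if not n < 100:
        break
```

The body always executes at least once, which a plain "while n < 100:" loop would not guarantee.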
Not that I expect this syntax to take the world by storm.
>> do:
>>     something()
>> while condition
>>
> Except Guido has already rejected this - or at least stated that he doesn't really like it.
I can believe that. I still think it's as good as we're going to do for
a literal translation of do/while.
If there was a syntax everyone liked we'd already be using it. The fact
that we haven't hit on one by now--recall that the PEP is six years
old--implies that we're not likely to find one. We've added lots of
syntax to Python over the last six years, stuff that has made the
language more powerful, more flexible, and even faster in some places.
do/while doesn't give us any new power, flexibility, or speed--I daresay
if we never added do/while we wouldn't really miss it.
If you have it handy, can you cite where he said he didn't like that?
Not that I don't believe you; I'd just like to read it.
Has Guido cited a syntax that he *did* like?
/larry/