On Tue, Dec 30, 2014 at 9:33 PM, Nils Bruin <bruin.nils(a)gmail.com> wrote:
> We ran into this for sage (www.sagemath.org) and we implemented
> "MonoDict" with basically the semantics you describe. It is
> implemented as a cython extension class. See:
>
> http://git.sagemath.org/sage.git/tree/src/sage/structure/coerce_dict.pyx#n2…
>
> I can confirm your observation that it's hard to get right, but I think our
> implementation by now is both correct and fast. The idiosyncratic
> name reveals its origin in "TripleDict" defined in the same file, which
> was developed first.
>
> Incidentally, we also have a WeakValueDict implementation that is faster
> and a little safer than at least the Python2 version:
>
> http://git.sagemath.org/sage.git/tree/src/sage/misc/weak_dict.pyx
>
> These implementations receive quite a bit of exercise in sage,
> so I would recommend that you look at them before developing
> something from scratch yourself.
This is quite interesting. Is there any interest in maintaining that
code in a separate library? Sage is a rather large dependency which,
for better or for worse, has little to do with my own work. I'm
reluctant to add something that large for a single utility class. If
there is no such interest, I might consider forking.
In my code, I've worked around it by having the owner class and the
descriptor collude (basically, the owner's instances have a private
attribute which the descriptor uses), but it's far from the cleanest
solution.
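A minimal sketch of that workaround, with hypothetical names:

    class Described:
        """Descriptor that stores its per-instance state directly on the
        instance, under a private name agreed upon with the owner class."""

        def __init__(self, private_name):
            self.private_name = private_name

        def __get__(self, instance, owner):
            if instance is None:
                return self
            return getattr(instance, self.private_name, None)

        def __set__(self, instance, value):
            setattr(instance, self.private_name, value)

    class Owner:
        # The owner "colludes" by reserving this attribute for the
        # descriptor, so no mapping keyed on the instance is needed.
        value = Described('_owner_descriptor_state')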
--
Kevin Norris
On Tue, Dec 30, 2014 at 8:34 PM, Benjamin Peterson <benjamin(a)python.org> wrote:
> Why not just use a wrapper like this for keys?
>
> class IdentityWrapper(object):
>     def __init__(self, obj):
>         self.obj = obj
>     def __hash__(self):
>         return id(self.obj)
>     def __eq__(self, other):
>         return self.obj is other
I considered an approach like this one, but couldn't quite make it
work. How do we make the IdentityWrapper object vanish at the right
time?
If we place it directly into a vanilla WeakKeyDictionary and forget
it, it will immediately die and our WeakKeyDictionary will remove it.
On the other hand, if we modify the IdentityWrapper to hold a weakref
to obj, the IdentityWrapper will not vanish out of the dictionary at
all, and its hash() will become invalid to boot (though we can deal
with the latter issue relatively easily).
If we could somehow make the target incorporate a reference to the
wrapper, that would probably solve these issues, but I can't see a
general way to do that without potentially breaking the target. I
could do something like target._Foo__backref = self (or equivalently,
invoke Python's name mangling), but that's a sloppy solution at best
and a recipe for name collisions at worst (what if the target's class
has multiple instances of Foo() attached to it?). It also falls apart
in the face of __slots__.
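For reference, the usual escape from that chicken-and-egg problem is to
key a plain dict on id(obj) and let a weakref callback remove the entry
when the object dies -- roughly the semantics MonoDict provides, as I
understand it. A toy, non-thread-safe sketch:

    import weakref

    class IdentityWeakKeyDict:
        """Dict keyed on object identity whose entries vanish when the
        key object is garbage collected."""

        def __init__(self):
            self._data = {}  # id(key) -> (weakref to key, value)

        def __setitem__(self, key, value):
            key_id = id(key)
            # The callback fires during the object's teardown, while its
            # id is still reserved, so it removes exactly this entry.
            def _remove(wr, key_id=key_id, data=self._data):
                data.pop(key_id, None)
            self._data[key_id] = (weakref.ref(key, _remove), value)

        def __getitem__(self, key):
            wr, value = self._data[id(key)]
            if wr() is not key:  # guard against a recycled id
                raise KeyError(key)
            return value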
--
Kevin Norris
On Mon, Dec 29, 2014 at 08:39:41AM -0800, Rick Johnson wrote:
>
>
> On Monday, December 29, 2014 8:12:49 AM UTC-6, Steven D'Aprano wrote:
>
> > I just threw this lazy import proxy object together. I haven't tested it
> > extensively, but it seems to work:
[...]
> All you've done is to replace "import decimal" with "decimal =
> lazy_import('decimal')" -- what's the advantage?
It delays the actual import of the module until you try to use it.
For decimal, that's not much of an advantage, but some modules are quite
expensive to import the first time, and you might not want to pay that
cost at application start-up.
Personally, I have no use for this, but people have been talking about
lazy importing recently, and I wanted to see how hard it would be to do.
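The core of the idea fits in a few lines; a stripped-down sketch (not
the exact code I posted earlier):

    import importlib

    class LazyModule:
        """Defers the real import until the first attribute access."""

        def __init__(self, name):
            self._name = name
            self._module = None

        def __getattr__(self, attr):
            # Only reached for attributes not set in __init__.
            if self._module is None:
                self._module = importlib.import_module(self._name)
            return getattr(self._module, attr)

    decimal = LazyModule('decimal')
    # Nothing has been imported yet; the next line triggers the import:
    print(decimal.Decimal('1.5'))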
--
Steven
At the moment, when a Python function like print() invokes a special
method on an object, it does this (if it were written in Python):

    type(obj).__str__(obj)

This can be restrictive in lots of situations, and I think it would be
better to just get the attribute from the object itself:

    obj.__str__()

That would leave you free to override it on a per-object basis, and in
some cases this could lead to huge optimizations. Obviously, it is easy
to work around yourself, but it adds a layer of uncharacteristic
unintuitiveness:

    def __str__(self):
        return self._str()
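To make the current behavior concrete, here is a quick demonstration
that implicit special-method lookup bypasses instance attributes (a
minimal sketch; the class name is made up):

    class Foo:
        pass

    f = Foo()
    f.__str__ = lambda: "per-instance str"

    print(str(f))       # implicit lookup via type(f): default object repr
    print(f.__str__())  # explicit attribute access finds the override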
On Mon, Dec 29, 2014 at 8:09 AM, Steven D'Aprano <steve(a)pearwood.info>
wrote:
>
> On Sun, Dec 28, 2014 at 10:30:41PM +0100, Fetchinson . wrote:
> > On 12/27/14, Russell Keith-Magee <russell(a)keith-magee.com> wrote:
>
> > > This is an entirely reasonable approach if Kivy is what you want
> > > at the end of the day - but it's not what *I* want. I've been
> > > taking a much more atomic approach. The Kivy toolchain is also
> > > anchored in old versions of Python 2; something that definitely
> > > needs to be addressed.
>
> I'm not entirely sure why Russell says this. Kivy claims to support
> Python 3:
>
> http://kivy.org/docs/faq.html#does-kivy-support-python-3-x
>
Read the whole of that FAQ entry:
"""
However, be aware that while Kivy will run in Python 3.3+, packaging
support is not yet complete. If you plan to create mobile apps for Android
or iOS, you should use Python 2.7 for now.
"""
That is - *Kivy* works on Python 3.3, but none of the tools that let you
run Kivy on Android or iOS exist for 3.X. This is the hard part - the
patches against Python and related libraries are non-trivial.
> > As far as I'm concerned python 2 is perfectly okay. If I don't need to
> > worry about python 3, what drawback does kivy have, if any, according
> > to you?
>
> Well, you've just lost me as a potential user. I have zero interest in
> legacy Python versions for Android.
For what it's worth, I agree. I'll probably end up supporting Python 2, but
only because my currently working code (which is derived from Kivy's
toolchain) works on 2.7; but my intention is to start focussing on Python 3
ASAP. IMHO there's no point starting a new project on a platform that has a
known EOL on the horizon.
Yours
Russ Magee %-)
Hi,
I'm only a simple Python developer, not a Type Hinting expert and I don't
know if you already discuss about that somewhere, but:
With the future official support of Type Hinting in Python, does it mean
that CPython could use these pieces of information to store variables in
more efficient data structures, not only to check types?
Could we one day get better performance from Type Hinting, like you have
with Cython (explicit type declarations) or PyPy (type guessing)?
Is it realistic to expect that one day, or have I missed some mountains ;-) ?
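To show what I mean: today the hints are recorded, but CPython does
nothing with them at run time, for example:

    def total(xs: list, scale: float = 1.0) -> float:
        return sum(xs) * scale

    print(total.__annotations__)
    # {'xs': <class 'list'>, 'scale': <class 'float'>, 'return': <class 'float'>}

    # No type check and no specialised storage happens; the error below
    # comes from sum() itself, not from the hints:
    total(["a", "b"])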
If this is correct, better performance would be a great consequence of
Type Hinting, and more people would be interested in this feature, as we
have seen with AsyncIO (BTW, I'm working to publish benchmarks on this;
I'll publish them on the AsyncIO ML).
Regards.
--
Ludovic Gasc
A few months ago we had a long discussion about type hinting. I've thought
a lot more about this. I've written up what I think is a decent "theory"
document -- writing it down like this certainly helped *me* get a lot of
clarity about some of the important issues.
https://quip.com/r69HA9GhGa7J
I should thank Jeremy Siek for his blog post about Gradual Typing, Jukka
Lehtosalo for mypy (whose notation I am mostly borrowing), and Jim Baker
for pushing for an in-person meeting where we all got a better
understanding of several issues.
There's also a PEP draft, written by Łukasz Langa and revised by him based
on notes from the above-mentioned in-person meeting; unfortunately it is
still a bit out of date and I didn't have time to update it yet. Instead of
working on the PEP, I tried to implement a conforming version of typing.py,
for which I also ran out of time -- then I decided to just write up an
explanation of the theory.
I am still hoping to get a PEP out for discussion in early January, and I
am aiming for provisional acceptance by PyCon Montréal, which should allow
a first version of typing.py to be included with Python 3.5 alpha 4. If you
are wondering how I can possibly meet that schedule: (a) the entire runtime
component proposal can be implemented as a single pure-Python module: hence
the use of square brackets for generic types; (b) a static type checker is
not part of the proposal: you can use mypy, or write your own.
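For those who haven't seen the mypy-style notation: generic types are
expressed with ordinary subscription, which is why no grammar change is
needed. Roughly (exact names subject to the PEP):

    from typing import Dict, List

    def word_lengths(words: List[str]) -> Dict[str, int]:
        return {word: len(word) for word in words}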
--
--Guido van Rossum (python.org/~guido)
Hello again
a few days later, I had a new idea:
a user who wants the 2.x modules and the 3.2 and 3.4 modules to work will
probably install all three versions.
So, is it possible to have the <somepath>\Python\python34,
<somepath>\Python\python32, and <somepath>\Python\python27 dirs, plus a
<somepath>\Python\PythonAutoVersion program, using the comment "#py ver
XX" to choose which version will take care of the script?
(same thing with the IDLE)
how about that?
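to show what I mean, a rough sketch of the PythonAutoVersion program
(all paths and the comment format are just my guesses):

    # PythonAutoVersion.py -- dispatch a script to the right interpreter
    import re
    import subprocess
    import sys

    INTERPRETERS = {
        "27": r"<somepath>\Python\python27\python.exe",
        "32": r"<somepath>\Python\python32\python.exe",
        "34": r"<somepath>\Python\python34\python.exe",
    }

    def pick_interpreter(script_path):
        with open(script_path) as f:
            for line in f:
                match = re.match(r"#\s*py\s+ver\s+(\d+)", line)
                if match:
                    return INTERPRETERS.get(match.group(1), sys.executable)
        return sys.executable  # no marker: use the default interpreter

    if __name__ == "__main__":
        script = sys.argv[1]
        sys.exit(subprocess.call([pick_interpreter(script), script] + sys.argv[2:]))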
have a nice day/evening!
I accidentally discovered that the following works, at least in Python 3.4.2:
>>> class foo(object):
... pass
...
>>> setattr(foo, '3', 4)
>>> dir(foo)
['3', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']
>>> getattr(foo, '3')
4
>>> bar = foo()
>>> dir(bar)
['3', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']
>>> getattr(bar, '3')
4
>>> hasattr(foo, '3')
True
>>> hasattr(bar, '3')
True
However, the following doesn't work:
>>> foo.3
File "<stdin>", line 1
foo.3
^
SyntaxError: invalid syntax
>>> bar.3
File "<stdin>", line 1
bar.3
^
SyntaxError: invalid syntax
I'd like to suggest that getattr(), setattr(), and hasattr() all be modified so that syntactically invalid attribute names raise SyntaxError.
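Something like this wrapper shows the behavior I'm suggesting (just a
sketch of the idea, not a patch):

    import keyword

    def checked_setattr(obj, name, value):
        """setattr() variant that rejects names that could never appear
        after a dot in source code."""
        if not isinstance(name, str) or not name.isidentifier() \
                or keyword.iskeyword(name):
            raise SyntaxError("invalid attribute name: %r" % (name,))
        setattr(obj, name, value)

    # checked_setattr(foo, '3', 4)      -> raises SyntaxError
    # checked_setattr(foo, 'three', 4)  -> works as usual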
On a related note, are numbers defined in a module, or are they part of the interpreter? Just to see how awful I could make things, I tried to extend the above to redefine 3 to be 4. Fortunately (unfortunately?) I couldn't find a way, but I'm curious if there is a pythonic way of making this happen, and if there is, I'd like to suggest making that impossible ASAP.
Thanks,
Cem Karan
Currently, the Python profiler works at the function or method level. This
is good for identifying slow functions, but not so good for identifying
things like slow for loops inside larger functions. You can use timers,
but putting in a lot of timers is cumbersome and makes direct comparison
between potential hotspots difficult. And although we should, in practice
it isn't always feasible to refactor our code so there is only one major
hotspot per function or method.
A solution to this is to have a profiler that shows the execution time for
each line (cumulative and per-call). Currently there is the third-party
line-profiler, but at least to me this seems like the sort of common
functionality that belongs in the standard library. line-profiler also
requires the use of a decorator since it has a large performance penalty,
and perhaps a more deeply integrated line-by-line profiler could avoid this.
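To illustrate the kind of data I mean (and why the overhead is large),
here is a toy line-by-line timer built on sys.settrace; a real stdlib
version would presumably hook in at a lower level:

    import sys
    import time
    from collections import defaultdict

    def profile_lines(func, *args, **kwargs):
        """Run func once, printing cumulative time and hit count per line.
        Toy sketch: one function, one thread, heavy tracing overhead."""
        times = defaultdict(float)
        hits = defaultdict(int)
        state = {"lineno": None, "start": None}

        def tracer(frame, event, arg):
            if frame.f_code is func.__code__ and event == "line":
                now = time.perf_counter()
                if state["lineno"] is not None:
                    times[state["lineno"]] += now - state["start"]
                    hits[state["lineno"]] += 1
                state["lineno"], state["start"] = frame.f_lineno, time.perf_counter()
            return tracer

        sys.settrace(tracer)
        try:
            result = func(*args, **kwargs)
        finally:
            sys.settrace(None)
        for lineno in sorted(times):
            print("line %4d: %6d hits, %.6f s" % (lineno, hits[lineno], times[lineno]))
        return result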