Matt Joiner wrote:
> Mark, Good luck with getting this in, I'm also hopeful about coroutines,
> maybe after pushing your dict optimization your coroutine implementation
> will get more consideration.
Shush, don't say the C word or you'll put people off ;)
I'm actually not that fussed about the coroutine implementation.
With "yield from" generators have all the power of asymmetric coroutines.
I think my coroutine implementation is a neater way to do things,
but it is not worth the fuss.
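For the curious, a small sketch of that power (illustrative names, plain PEP 380 mechanics): values sent to the delegating generator flow straight through to the subgenerator, and the subgenerator's return value becomes the value of the `yield from` expression.

```python
def accumulate():
    """Subgenerator: sums values sent in; a sent None ends the round."""
    total = 0
    while True:
        value = yield total
        if value is None:
            return total        # becomes the value of "yield from"
        total += value

def wrapper(results):
    """Delegating generator: just forwards sends/results via yield from."""
    while True:
        results.append((yield from accumulate()))

results = []
coro = wrapper(results)
next(coro)              # prime: runs down into accumulate()
for v in (1, 2, 3):
    coro.send(v)        # sends pass through wrapper untouched
coro.send(None)         # ends one accumulation round
print(results)          # -> [6]
```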
Anyway, I'm working on my next crazy experiment :)
On 1/26/2012 10:25 PM, Gregory P. Smith wrote:
> (and on top of all of this I believe we're all settled on having per
> interpreter hash randomization _as well_ in 3.3; but this AVL tree
> approach is one nice option for a backport to fix the major
If the tree code cures the problem, then randomization just makes
debugging harder. I think that if it is included in 3.3, it needs a
switch to turn it on or off (whichever is not the default).
I'm curious why an AVL tree rather than an RB tree; is it the simpler
implementation? The C++ stdlib includes an RB tree, though, for an even
simpler implementation :)
Can we have a tree type in 3.3, independent of dict?
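For readers curious about the mechanics: a sketch (illustrative names, not the actual patch) of why any self-balancing tree, AVL or RB, caps the damage from colliding keys. Even adversarially ordered insertions keep operations O(log n):

```python
# Sketch only, not CPython code: the kind of self-balancing tree that
# could back colliding dict buckets.  All names here are illustrative.

class Node:
    __slots__ = ("key", "value", "left", "right", "height")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None
        self.height = 1

def _h(node):
    return node.height if node else 0

def _fix(node):
    node.height = 1 + max(_h(node.left), _h(node.right))
    return node

def _rot_left(node):
    r = node.right
    node.right, r.left = r.left, node
    _fix(node)
    return _fix(r)

def _rot_right(node):
    l = node.left
    node.left, l.right = l.right, node
    _fix(node)
    return _fix(l)

def insert(node, key, value):
    if node is None:
        return Node(key, value)
    if key < node.key:
        node.left = insert(node.left, key, value)
    elif key > node.key:
        node.right = insert(node.right, key, value)
    else:
        node.value = value
        return node
    _fix(node)
    balance = _h(node.left) - _h(node.right)
    if balance > 1:                                 # left-heavy
        if _h(node.left.left) < _h(node.left.right):
            node.left = _rot_left(node.left)        # left-right case
        return _rot_right(node)
    if balance < -1:                                # right-heavy
        if _h(node.right.right) < _h(node.right.left):
            node.right = _rot_right(node.right)     # right-left case
        return _rot_left(node)
    return node

root = None
for k in range(1000):      # worst-case sorted insertion order
    root = insert(root, k, str(k))
print(root.height)         # stays around log2(1000), nowhere near 1000
```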
I have a newbie question about CPython.
Looking at the C code I noted that, for example, in tupleobject.c there
is only one include.
Python.h actually includes everything as far as I can see, so:
- it's very hard, with an editor that isn't smart enough, to find out
  where the non-locally defined symbols are actually defined (well,
  sure, that is not a problem for most people)
- if all the files include Python.h, doesn't it generate very big object
  files? Or is it not a problem, since the unused parts are stripped out
  afterwards?
As already mentioned, the vulnerability of 64-bit Python is rather theoretical, not practical: the size of the hash makes an attack extremely unlikely. Perhaps the easiest change to protect 32-bit Python from the vulnerability would be to use a 64-bit (or larger) hash on all platforms. The performance is comparable to randomization, and key-order-dependent code would be broken no more than it already is when you change platform or Python feature version. Maybe all 64 bits could be used only for strings, with other objects using only the lower 32 bits.
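The "size of the hash" argument is just the birthday bound: with a w-bit hash, roughly 2**(w/2) attempts suffice to find a collision, so going from 32 to 64 bits squares the attacker's work. A toy demonstration (SHA-256 truncation stands in for Python's string hash here; the principle is the same):

```python
# Hedged sketch: brute-forcing collisions against a truncated hash.
import hashlib

def truncated_hash(s, bits):
    """Keep only the low `bits` bits of a SHA-256 digest."""
    digest = hashlib.sha256(s.encode()).digest()
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)

def tries_until_collision(bits):
    """Count inputs examined before two hash to the same value."""
    seen = {}
    for i in range(1 << (bits // 2 + 4)):   # a birthday-bound budget
        h = truncated_hash(str(i), bits)
        if h in seen:
            return i
        seen[h] = i
    return None

# A 16-bit hash collides after a few hundred tries (~2**8); every extra
# bit of hash width multiplies the attacker's work by about sqrt(2).
print(tries_until_collision(16))
```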
Something that's maybe worth mentioning is that the "official" Python
benchmark suite, http://hg.python.org/benchmarks/, has a pretty
incomplete set of benchmarks for Python 3 compared to, say, what we run
for PyPy: https://bitbucket.org/pypy/benchmarks. I think a very
worthwhile project would be to try to port other benchmarks (ones that
actually use existing Python projects like sympy or django) for those
that have been ported to Python 3.
Nick Coghlan wrote:
> (redirecting to python-ideas - coroutine proposals are nowhere near
> mature enough for python-dev)
> On Wed, Jan 25, 2012 at 5:35 PM, Matt Joiner <anacrolix(a)gmail.com> wrote:
>> If someone can explain what's stopping real coroutines being added to
>> Python (3.3), that would be great.
> The general issues with that kind of idea:
> - the author hasn't offered the code for inclusion and relicensing
> under the PSF license (thus we legally aren't allowed to do it)
If by the author you mean me, then of course it can be included.
Since it is a fork of CPython and I haven't changed the licence
I assumed it already was under the PSF licence.
> - complexity
> - maintainability
Hard to measure, but it adds about 200 lines of code.
> - platform support
It's all fully portable, standard C.
> In the specific case of coroutines, you have the additional hurdle of
> convincing people whether or not they're a good idea at all.
That may well be the biggest obstacle :)
One other obstacle (and this may be a killer) is that it may not be
practical to refactor Jython to use coroutines since Jython compiles
Python direct to JVM bytecodes and the JVM doesn't support coroutines.
Jython should be able to support yield-from much more easily.
I have had this in mind for a long time, but I hadn't talked about it on
this list; I was only writing on distutils@ or another list we had for
distutils2 (the fellowship of packaging).
AFAIK, we're almost good on packaging in Python 3.3, but there is
still something that keeps bugging me. What we've done (I worked
especially on this bit) is to provide a compatibility layer for the
distributions packaged using setuptools/distribute. What it does,
basically, is install things using setuptools or distribute (whichever
is present on the system) and then convert the metadata to the new
format described in PEP 345.
A few things are not handled yet regarding setuptools: entry points and
namespaces. I would especially like to talk about entry points here.
Entry points are basically a plugin system: they store information in
the metadata and retrieve it when needed. The problem with this, as
with anything that tries to get information from the metadata, is that
we need to parse the metadata of all the installed distributions
(so, O(N)).
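To make the O(N) concern concrete, here is a toy model of entry-point lookup (hypothetical names and metadata layout, not the real setuptools/packaging API): every installed distribution's metadata has to be consulted, even for distributions that provide nothing in the requested group.

```python
# Hypothetical per-distribution metadata, as if parsed from disk:
# {distribution name: {group name: {entry point name: "module:callable"}}}
INSTALLED_DISTRIBUTIONS = {
    "spam-plugin": {"myapp.plugins": {"spam": "spam_plugin:main"}},
    "eggs-plugin": {"myapp.plugins": {"eggs": "eggs_plugin:run"}},
    "unrelated":   {},   # still scanned, contributes nothing
}

def iter_entry_points(group):
    """Find all entry points in a group: walks *all* N distributions."""
    for dist, groups in INSTALLED_DISTRIBUTIONS.items():
        for name, target in groups.get(group, {}).items():
            yield dist, name, target

print(sorted(iter_entry_points("myapp.plugins")))
```

Every lookup is a full scan, which is why indexing (or caching) the metadata is part of the design question.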
I'm wondering if we should support that (a way to have plugins) in the
new packaging thing, or not. If not, this means we should come up with
another solution to support it outside of packaging (maybe in
distribute). If yes, then we should design it, and probably make it a
sub-part of packaging.
What are your opinions on that? Should we do it or not? And if yes,
what's the way to go?
I marked PEP 380 as Final this evening, after pushing the tested and
documented implementation to hg.python.org:
As the list of names in the NEWS and What's New entries suggests, it
was quite a collaborative effort to get this one over the line, and
that's without even listing all the people that offered helpful
suggestions and comments along the way :)
print("\n".join(list((lambda:(yield from ("Cheers,", "Nick")))())))
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia