I have a question, and I would rather have an answer than actually
trying it and getting myself into a messy situation.
Let's say we have the following scenario:
1. A programmer clones hg.python.org.
2. The programmer creates a named branch and starts to develop a new feature.
3. She adds her repository & named branch to the bug tracker.
4. From time to time, she posts updates in the tracker using the
"Create Patch" button.
So far so good. Now, the question:
5. Development of the new feature is taking a long time, and the
canonical Python version keeps moving forward. The clone+branch and the
original Python version are diverging. Eventually there are changes in
Python that the programmer would like in her version, so she does a
"pull" and then merges the original Python branch into her named
branch.
6. What would be posted in the bug tracker when she does a new "Create
Patch"? Only her changes, her changes since the merge, her changes
plus the merged changes, or something else? What if the programmer
cherry-picks changesets from the original Python branch?
Thanks! :-).
- --
Jesus Cea Avion _/_/ _/_/_/ _/_/_/
jcea(a)jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/
jabber / xmpp:jcea@jabber.org _/_/ _/_/ _/_/_/_/_/
. _/_/ _/_/ _/_/ _/_/ _/_/
"Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/
"My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
Hey folks,
I'm pleased to announce that as of changeset 74d182cf0187, the
standard library now includes support for the LZMA compression
algorithm (as well as the associated .xz and .lzma file formats). The
new lzma module has a very similar API to the existing bz2 module; it
should serve as a drop-in replacement for most use cases.
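For anyone who wants to try it out, here is a minimal sketch of the kind of usage the new module supports (the file path below is illustrative; the in-memory and file-oriented APIs mirror bz2):

```python
import lzma
import os
import tempfile

# In-memory round trip (produces an .xz container by default).
data = b"The quick brown fox jumps over the lazy dog.\n" * 100
compressed = lzma.compress(data)
assert lzma.decompress(compressed) == data
assert len(compressed) < len(data)  # repetitive input compresses well

# File-oriented usage, analogous to bz2 file objects.
path = os.path.join(tempfile.mkdtemp(), "example.xz")  # illustrative path
with lzma.open(path, "wb") as f:
    f.write(data)
with lzma.open(path, "rb") as f:
    assert f.read() == data
```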
If anyone has any feedback or any suggestions for improvement, please
let me know.
I'd like to ask the owners of (non-Windows) buildbots to install the
XZ Utils development headers so that they can build the new module.
For Debian-derived Linux distros, the relevant package is named
"liblzma-dev"; on Fedora I believe the correct package is "xz-devel".
Binaries for OS X are available from the project's homepage at
<http://tukaani.org/xz/>.
Finally, a big thanks to everyone who contributed feedback during this
module's development!
Cheers,
Nadeem
Hi,
our current deprecation policy is not so well defined (see e.g. [0]),
and it seems to me that it's something like:
1) deprecate something and add a DeprecationWarning;
2) forget about it after a while;
3) wait a few versions until someone notices it;
4) actually remove it;
I suggest we follow this process instead:
1) deprecate something and add a DeprecationWarning;
2) decide how long the deprecation should last;
3) use the deprecated-remove[1] directive to document it;
4) add a test that fails after the update so that we remember to
remove it[2];
Other related issues:
PendingDeprecationWarnings:
* AFAIK the difference between PDW and DW is that PDW are silenced by
default;
* now DW are silenced by default too, so there is no difference;
* I therefore suggest we stop using it, but we can leave it around[3]
(other projects might be using it for something different);
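To illustrate the point about default silencing: both categories are ignored under an "ignore" filter and both appear under "always", so there is no behavioral difference left between them (a minimal sketch with hypothetical function names):

```python
import warnings

def old_api():
    # Hypothetical deprecated function.
    warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)

def pending_api():
    # Hypothetical soon-to-be-deprecated function.
    warnings.warn("pending_api() will be deprecated",
                  PendingDeprecationWarning, stacklevel=2)

# With an explicit "ignore" filter (the effective default for both
# categories now), neither warning is recorded:
with warnings.catch_warnings(record=True) as log:
    warnings.simplefilter("ignore")
    old_api()
    pending_api()
assert log == []

# With "always", both show up, in call order:
with warnings.catch_warnings(record=True) as log:
    warnings.simplefilter("always")
    old_api()
    pending_api()
assert [w.category for w in log] == [DeprecationWarning,
                                     PendingDeprecationWarning]
```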
Deprecation Progression:
Before, we more or less used to deprecate in release X and remove in
X+1, or add a PDW in X, a DW in X+1, and remove it in X+2.
I suggest we drop this scheme and just use DW until X+N, where N is >=1
and depends on what is being removed. We can decide to leave the DW for
2-3 versions before removing something widely used, or just deprecate in
X and remove in X+1 for things that are less used.
Porting from 2.x to 3.x:
Some people will update directly from 2.7 to 3.2 or even later versions
(3.3, 3.4, ...), without going through earlier 3.x versions.
If something is deprecated in 3.2 but not in 2.7 and is then removed in
3.3, people updating from 2.7 to 3.3 won't see any warning, and this
will make porting even more difficult.
I suggest that:
* nothing that is available and not deprecated in 2.7 should be
removed until 3.x (x needs to be defined);
* possibly we start backporting warnings to 2.7 so that they are
visible while running with -3;
Documenting the deprecations:
In order to advertise the deprecations, they should be documented:
* in their doc, using the deprecated-removed directive (and possibly
not the 'deprecated' one);
* in the what's new, possibly listing everything that is currently
deprecated, and when it will be removed;
Django seems to do something similar[4].
(Another thing I would like is a different rendering for deprecated
functions. Some parts of the docs have a deprecation warning at the top
of the section, and the individual functions look normal if you miss it.
Also, when linking to a deprecated function it would be nice to have it
rendered in a different color or something similar.)
Testing the deprecations:
Tests that fail when a new release is made and the version number is
bumped should be added, to make sure we don't forget to remove the
deprecated code. Each such test should have a related issue with a patch
that removes the deprecated function and the test itself.
Setting the priority of the issue to release blocker or deferred blocker
can be done in addition/instead, but that works well only when N == 1
(the priority could be updated for every release though).
The tests could be marked with an expected failure to give some time
after the release to remove them.
All the deprecation-related tests might be added to the same file, or
left in the test file of their module.
Where to add this:
Once we agree on the process, we should write it down somewhere.
Possible candidates are:
* PEP387: Backwards Compatibility Policy[5] (it has a few lines about
this);
* a new PEP;
* the devguide;
I think having it in a PEP would be good, the devguide can then link to it.
Best Regards,
Ezio Melotti
[0]: http://bugs.python.org/issue13248
[1]: deprecated-removed doesn't seem to be documented in the documenting
doc, but it was added here: http://hg.python.org/cpython/rev/03296316a892
[2]: see e.g.
http://hg.python.org/cpython/file/default/Lib/unittest/test/test_case.py#l1…
[3]: we could also introduce a MetaDeprecationWarning and make
PendingDeprecationWarning inherit from it so that it can be used to
pending-deprecate itself. Once PendingDeprecationWarning is gone, the
MetaDeprecationWarning will become useless and can then be used to
meta-deprecate itself.
[4]: https://docs.djangoproject.com/en/dev/internals/deprecation/
[5]: http://www.python.org/dev/peps/pep-0387/
==================================
PyPy 1.7 - widening the sweet spot
==================================
We're pleased to announce the 1.7 release of PyPy. As has become a habit, this
release brings a lot of bugfixes and performance improvements over the 1.6
release. Unlike previous releases, however, the focus has been on widening
the "sweet spot" of PyPy: the range of Python code that PyPy can greatly
speed up has been vastly improved with this release. You can download the 1.7
release here:
http://pypy.org/download.html
What is PyPy?
=============
PyPy is a very compliant Python interpreter, almost a drop-in replacement for
CPython 2.7. It's fast (`pypy 1.7 and cpython 2.7.1`_ performance comparison)
due to its integrated tracing JIT compiler.
This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or
Windows 32. Windows 64 work is ongoing, but not yet natively supported.
The main topic of this release is widening the range of code which PyPy
can greatly speed up. On average on
our benchmark suite, PyPy 1.7 is around **30%** faster than PyPy 1.6 and up
to **20 times** faster on some benchmarks.
.. _`pypy 1.7 and cpython 2.7.1`: http://speed.pypy.org
Highlights
==========
* Numerous performance improvements. There are too many Python constructs
that now run faster to list them all.
* Bugfixes and compatibility fixes with CPython.
* Windows fixes.
* PyPy now comes with stackless features enabled by default. However,
any loop using stackless features will interrupt the JIT for now, so there is
no real performance improvement yet for stackless-based programs. Contact
pypy-dev for info on how to help remove this restriction.
* The NumPy effort in PyPy was renamed to numpypy. In order to try it, simply
write::

      import numpypy as numpy

at the beginning of your program. There has been huge progress on numpy in
PyPy since 1.6, the main feature being the implementation of dtypes.
* The JSON encoder (but not the decoder) has been replaced with a new one.
It is written in pure Python, but is known to outperform CPython's C
extension by up to **2 times** in some cases. It's about **20 times** faster
than the one we had in 1.6.
* The memory footprint of some of our RPython modules has been drastically
improved. This should benefit any application that uses, for example,
cryptography, such as tornado.
* There has been some progress in exposing even more of the CPython C API
via cpyext.
Things that didn't make it, expect in 1.8 soon
==============================================
There is ongoing work which, while it didn't make it into this release, is
probably worth mentioning here. This is what you should probably expect in
1.8 some time soon:
* Specialized list implementations. There is a branch that implements lists of
integers/floats/strings as compactly as array.array. This should drastically
improve the performance/memory impact of some applications.
* The NumPy effort is progressing, with multi-dimensional arrays coming
soon.
* There are two brand new JIT assembler backends, notably for the PowerPC and
ARM processors.
Fundraising
===========
It is perhaps worth mentioning that we are running fundraising campaigns for
the NumPy effort in PyPy and for Python 3 in PyPy. If you want to see either
of those happen faster, we urge you to donate to the `numpy proposal`_ or the
`py3k proposal`_. If you want PyPy to progress but trust us with the general
direction, you can always donate to the `general pot`_.
.. _`numpy proposal`: http://pypy.org/numpydonate.html
.. _`py3k proposal`: http://pypy.org/py3donate.html
.. _`general pot`: http://pypy.org
When merging from 3.2 to 3.3, "Misc/NEWS" always conflicts (lately).
Instead of copying & pasting the text manually between versions, does
anybody have a better workflow?
Since any change applied to 3.2 should be applied to 3.3 too (except in
very few cases), Mercurial's merge machinery should be able to merge
both versions, except when the changes are very near the version
headers. I haven't checked, but I guess the problem is that the
different issues have been added at different positions in the file,
so the two branches are diverging, instead of differing only in the
Python versions referenced.
If that is the case, would it be acceptable to reorganize the 3.3
version to ease future merges? Would that solve it?
Ideas?
Trying to clear up the licensing issues surrounding my DTrace work
(http://bugs.python.org/issue13405), I am contacting the Sun/Oracle folks.
Checking the documentation about the contributor license agreement, I
encountered a broken HTML link on http://www.python.org/about/help/ :
* "Python Patch Guidelines" points to
http://www.python.org/dev/patches/, which doesn't exist.
The other links on that page seem OK.
PS: The devguide doesn't say anything (AFAIK) about the contributor
agreement.
Hi there,
I was doing some experiments with the buffer interface of bytearray today,
for the purpose of quickly reading a file's contents into a bytearray which
I can then modify. I decided to do some benchmarking and ran into
surprising results. Here are the functions I was timing:
import os

def justread():
    # Just read a file's contents into a string/bytes object
    f = open(FILENAME, 'rb')
    s = f.read()

def readandcopy():
    # Read a file's contents and copy them into a bytearray.
    # An extra copy is done here.
    f = open(FILENAME, 'rb')
    b = bytearray(f.read())

def readinto():
    # Read a file's contents directly into a bytearray,
    # hopefully employing its buffer interface
    f = open(FILENAME, 'rb')
    b = bytearray(os.path.getsize(FILENAME))
    f.readinto(b)
FILENAME is the name of a 3.6MB text file. It is read in binary mode for
fullest compatibility between 2.x and 3.x.
Now, running this under Python 2.7.2 I got these results ($1 just reflects
the executable name passed to a bash script I wrote to automate these runs):
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 461 usec per loop
$1 -m timeit -s'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.81 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
1000 loops, best of 3: 697 usec per loop
Which makes sense: the readinto() approach is much faster than copying the
read buffer into the bytearray.
But with Python 3.2.2 (built from the 3.2 branch today):
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 336 usec per loop
$1 -m timeit -s'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.62 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
100 loops, best of 3: 2.69 msec per loop
Oops, readinto() takes the same time as copying. This is a real shame,
because readinto() in conjunction with the buffer interface was supposed to
avoid the redundant copy.
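For anyone who wants to reproduce this without the timing harness, the pattern under discussion is simply the following (a self-contained sketch using a small temporary file as a stand-in for the 3.6MB file used in the timings):

```python
import os
import tempfile

# Create a small throwaway file to read back.
fd, path = tempfile.mkstemp()
payload = b"spam and eggs\n" * 1024
with os.fdopen(fd, "wb") as f:
    f.write(payload)

# Preallocate a bytearray of the right size and fill it in place via
# readinto(), which is supposed to avoid the extra copy that
# bytearray(f.read()) performs.
buf = bytearray(os.path.getsize(path))
with open(path, "rb") as f:
    n = f.readinto(buf)

assert n == len(payload)
assert bytes(buf) == payload
os.remove(path)
```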
Is there a real performance regression here, is this a well-known issue, or
am I just missing something obvious?
Eli
P.S. The machine is a quad-core i7-2820QM, running 64-bit Ubuntu 10.04.