[RELEASED] Python 3.3.0 release candidate 3

On behalf of the Python development team, I'm delighted to announce the third release candidate of Python 3.3.0.

This is a preview release, and its use is not recommended in production settings.

Python 3.3 includes a range of improvements of the 3.x series, as well as easier porting between 2.x and 3.x. Major new features and changes in the 3.3 release series are:

* PEP 380, syntax for delegating to a subgenerator ("yield from")
* PEP 393, flexible string representation (doing away with the distinction between "wide" and "narrow" Unicode builds)
* A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications
* The import system (__import__) now based on importlib by default
* The new "lzma" module with LZMA/XZ support
* PEP 397, a Python launcher for Windows
* PEP 405, virtual environment support in core
* PEP 420, namespace package support
* PEP 3151, reworking the OS and IO exception hierarchy
* PEP 3155, qualified name for classes and functions
* PEP 409, suppressing exception context
* PEP 414, explicit Unicode literals to help with porting
* PEP 418, extended platform-independent clocks in the "time" module
* PEP 412, a new key-sharing dictionary implementation that significantly saves memory for object-oriented code
* PEP 362, the function-signature object
* The new "faulthandler" module that helps diagnosing crashes
* The new "unittest.mock" module
* The new "ipaddress" module
* The "sys.implementation" attribute
* A policy framework for the email package, with a provisional (see PEP 411) policy that adds much improved unicode support for email header parsing
* A "collections.ChainMap" class for linking mappings to a single unit
* Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()"
* Hash randomization, introduced in earlier bugfix releases, is now switched on by default

In total, almost 500 API items are new or improved in Python 3.3. For a more extensive list of changes in 3.3.0, see:

    http://docs.python.org/3.3/whatsnew/3.3.html

To download Python 3.3.0 visit:

    http://www.python.org/download/releases/3.3.0/

Please consider trying Python 3.3.0 with your code and reporting any bugs you may notice to:

    http://bugs.python.org/

Enjoy!

-- Georg Brandl, Release Manager
georg at python.org
(on behalf of the entire python-dev team and 3.3's contributors)
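[Not part of the announcement, but for readers who haven't followed PEP 380: "yield from" delegates to a subgenerator, forwarding yielded values outward and capturing the subgenerator's return value. A minimal sketch:]

```python
# PEP 380: "yield from" delegates iteration to a subgenerator.
# Values yielded by inner() pass straight through outer(), and
# inner()'s return value becomes the value of the "yield from"
# expression.
def inner():
    yield 1
    yield 2
    return 3  # only legal in a generator since 3.3


def outer():
    result = yield from inner()  # delegates, then captures the return value
    yield result


print(list(outer()))  # -> [1, 2, 3]
```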

Georg Brandl <georg@python.org> wrote:
* A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications
Could you bump up the factor to 120x in the final announcement? There were a couple of performance improvements in the meantime, and this is what I'm consistently measuring now. Thanks, Stefan Krah

Brett Cannon <brett@python.org> wrote:
It's the pi benchmark from bench.py. This is what I'm typically getting on a Core 2 Duo 3.16 GHz:

    Precision: 9 decimal digits
    float:    result: 3.1415926535897927     time: 0.113188s
    cdecimal: result: 3.14159265             time: 0.158313s
    decimal:  result: 3.14159265             time: 18.671457s

    Precision: 19 decimal digits
    float:    result: 3.1415926535897927     time: 0.112874s
    cdecimal: result: 3.141592653589793236   time: 0.348100s
    decimal:  result: 3.141592653589793236   time: 43.241220s

Stefan Krah
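[bench.py itself is not quoted in the thread; for readers who want to reproduce the shape of the comparison, here is a rough sketch along the lines of the pi recipe in the decimal docs, run with identical code for both types. The harness and function name are illustrative, not the actual benchmark:]

```python
import time
from decimal import Decimal, getcontext


def pi(one):
    # Series for pi (same recipe as in the decimal docs), computed
    # entirely in the type of `one` (1.0 for float, Decimal(1) for
    # decimal) so both types execute identical code -- no extra
    # intermediate precision, as in bench.py.
    three = 3 * one
    lasts, t, s, n, na, d, da = 0, three, 3 * one, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    return s


getcontext().prec = 9  # decimal context precision, as in the first run above
for label, one in [("float", 1.0), ("decimal", Decimal(1))]:
    start = time.perf_counter()  # PEP 418 clock, new in 3.3
    result = pi(one)
    elapsed = time.perf_counter() - start
    print("%-8s result: %s  time: %fs" % (label, result, elapsed))
```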

On 29 September 2012 06:51, Paul Moore <p.f.moore@gmail.com> wrote:
Wow! I had no idea cdecimal was that close in speed to float. That's seriously impressive.
If those numbers are similar in other benchmarks, would it be accurate and/or reasonable to include a statement along the lines of: "comparable to float performance - usually no more than 3x slower for calculations within the range of numbers covered by float", or is there not enough evidence to conclude that? (Substitute a suitable factor for 3x, obviously.)

Tim Delaney <timothy.c.delaney@gmail.com> wrote:
For numerical programs, 1.4x (9 digits) to 3x (19 digits) slower would be accurate. On Windows the difference is even less. For output formatting, cdecimal is faster than float (at least it was when I posted a benchmark a couple of months ago). Stefan Krah

On 29 September 2012 10:17, Stefan Krah <stefan@bytereef.org> wrote:
To me, this means that the key point is that for the casual user, float is no longer the "obvious" choice. You'd choose float for the convenience of a built-in type, and Decimal for the more natural rounding and precision semantics. If you are sufficiently interested in performance for it to matter, you're no longer a "casual" user. (Up until now, I'd have said to use float unless your need for the better behaviour justifies the performance loss - that's no longer the case.) Paul.

On Sat, Sep 29, 2012 at 9:07 AM, Paul Moore <p.f.moore@gmail.com> wrote:
Does this mean we want to re-open the discussion about decimal constants? Last time this came up I think we decided that we wanted to wait for cdecimal (which is obviously here) and work out how to handle contexts, the syntax, etc.

On Sat, Sep 29, 2012 at 11:26 AM, Brett Cannon <brett@python.org> wrote:
I think that ought to be a Python 4 feature if we ever want it to be a feature. And I'm not saying this to kill the discussion; I just think it will be a huge change that we have to consider very carefully. While the existing float definitely has problems for beginners, it is incredibly useful for advanced users. Consider e.g. the implications for numpy / scipy, one of the fastest-growing specialized Python user communities. Now if you want to introduce a new notation for decimals, e.g. 3.14d and 1e42d, that would be a fine thing. (Should we also have complex decimals? 1jd anyone?) -- --Guido van Rossum (python.org/~guido)
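[Until any such literal notation exists, today's spelling goes through the Decimal constructor, and the string form matters because a 3.14 float literal has already been rounded to binary. A quick illustration; the hypothetical 3.14d appears only in a comment:]

```python
from decimal import Decimal

# A proposed 3.14d literal would construct the exact decimal value
# directly.  Today, Decimal("3.14") does that, while Decimal(3.14)
# faithfully converts the *binary* double nearest to 3.14:
exact = Decimal("3.14")
via_float = Decimal(3.14)

assert exact != via_float  # the float literal already lost exactness
print(via_float)           # the long exact decimal expansion of the double
```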

On 9/29/2012 2:38 PM, Guido van Rossum wrote:
It is also one of the oldest, if not *the* oldest, application communities.
I think not. The money community does not use complex numbers that I know of, and complex decimals would not be very useful without a complex decimal module (and 3rd-party modules). Even the complex float operations and the cmath library could use work to touch up and test corner cases. I can imagine giving IDLE a calculator mode. If Python itself had a startup switch, IDLE could just restart the remote process. If a new suffix is used, IDLE could add the suffix as literal numbers are entered. -- Terry Jan Reedy

BTW, "What's New": http://www.python.org/download/releases/3.3.0/ still says 80x for decimal performance. Tim Delaney

Also, the example at http://docs.python.org/py3k/whatsnew/3.3.html#pep-409-suppressing-exception-...: ... raise AttributeError(attr) from None... Looks like there's an ellipsis there that shouldn't be. Tim Delaney
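[Ellipses aside, the intended form in that doc example is PEP 409's "raise AttributeError(attr) from None". A small runnable sketch of what the suppression does; the Record class here is made up for illustration:]

```python
class Record:
    _fields = {"name": "py3.3"}

    def __getattr__(self, attr):
        try:
            return self._fields[attr]
        except KeyError:
            # PEP 409: "from None" suppresses the implicit KeyError
            # context, so the traceback shows only the AttributeError,
            # not "During handling of the above exception, ...".
            raise AttributeError(attr) from None


try:
    Record().missing
except AttributeError as exc:
    caught = exc

assert caught.__suppress_context__            # set by "from None"
assert caught.__cause__ is None               # no explicit cause either
assert isinstance(caught.__context__, KeyError)  # still stored, just not shown
```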

On Sat, Sep 29, 2012 at 10:09 PM, R. David Murray <rdmurray@bitdance.com> wrote:
Yes, this was previously discussed here: http://mail.python.org/pipermail/python-dev/2012-August/121467.html where Georg wrote:
--Chris

In article <CAN8CLgmmfVVxmxU8nSme9uphOoYAeVXMjd81mPHMSzoQsV53Ng@mail.gmail.com>, Tim Delaney <timothy.c.delaney@gmail.com> wrote:
BTW, "What's New": http://www.python.org/download/releases/3.3.0/ still says 80x for decimal performance.
Thanks for the report. The page has now been updated to match the final 3.3.0 release announcement post. -- Ned Deily, nad@acm.org

On Fri, 28 Sep 2012 21:51:39 +0100 Paul Moore <p.f.moore@gmail.com> wrote:
I think this means the performance difference is on the same order of magnitude as the CPython interpretation overhead. Still, it's impressive indeed. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net

Stefan Krah <stefan@bytereef.org> wrote:
Apparently there were concerns about the correctness of the result on some message board. To be clear: This is a *benchmark* designed to execute the same code for float, cdecimal and decimal. The *original* algorithm from the decimal docs uses a higher intermediate context precision. Since floats do not have such a context, this step was deliberately dropped from the benchmark. Stefan Krah
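[To make the intermediate-precision point concrete: the recipe in the decimal docs computes with a couple of guard digits and rounds back at the end, which is exactly the step bench.py drops so that float, cdecimal and decimal run identical code. A sketch of the guarded version; localcontext here stands in for the getcontext().prec += 2 idiom in the docs:]

```python
from decimal import Decimal, getcontext, localcontext


def pi_guarded(prec):
    # Same series as the decimal docs recipe, but computed with two
    # extra guard digits so intermediate rounding cannot contaminate
    # the requested digits.
    with localcontext() as ctx:
        ctx.prec = prec + 2
        three = Decimal(3)
        lasts, t, s, n, na, d, da = 0, three, Decimal(3), 1, 0, 0, 24
        while s != lasts:
            lasts = s
            n, na = n + na, na + 8
            d, da = d + da, da + 32
            t = (t * n) / d
            s += t
    return +s  # unary plus rounds back to the caller's context precision


getcontext().prec = 9
print(pi_guarded(9))
```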

participants (14)
- Antoine Pitrou
- Brett Cannon
- Chris Angelico
- Chris Jerdonek
- Ethan Furman
- Georg Brandl
- Guido van Rossum
- Mark Lawrence
- Ned Deily
- Paul Moore
- R. David Murray
- Stefan Krah
- Terry Reedy
- Tim Delaney