I've reported http://bugs.python.org/issue16728, but I am confused
about what a sequence is now.
The glossary defines a sequence as an iterable having __getitem__ and __len__.
Objects that don't have __iter__ are still iterable when they have __getitem__.
> Sequences also support slicing: a[i:j] selects all items with index *k* such that
*i* <= *k* < *j*. When used as an expression, a slice is a sequence of the
same type. This implies that the index set is renumbered so that it starts at 0.
But I think this sentence describes the behavior of the standard types, not the definition of a sequence.
> This module provides *abstract base classes* <http://docs.python.org/3/glossary.html#term-abstract-base-class> that can be used to test whether a class provides a particular interface;
for example, whether it is hashable or whether it is a mapping.
And collections.abc.Sequence requires "index()" and "count()".
What is the requirement for calling something a "sequence"?
Off topic: Sequence.__iter__ uses __len__ and __getitem__, but the default
iterator uses only __getitem__. This difference is ugly.
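A minimal illustration of the distinction (this is standard Python behavior; the class name is made up):

```python
import collections.abc

class GetItemOnly:
    """Has __getitem__ but no __iter__ and no __len__."""
    def __getitem__(self, index):
        if index >= 3:
            raise IndexError(index)
        return index * 10

g = GetItemOnly()
# The legacy iteration protocol makes this iterable via __getitem__ alone,
# calling it with 0, 1, 2, ... until IndexError:
print(list(g))                                   # [0, 10, 20]
# ...but it is not a Sequence in the ABC sense (no __len__, index(), count()):
print(isinstance(g, collections.abc.Sequence))   # False
```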
INADA Naoki <songofacandy(a)gmail.com>
I'm not sure about applying the doc changes to 3.3.
They are very minor.
The main change will be the deprecation of aliases in the docs, which can
be applied only to the upcoming release.
On Wed, Dec 19, 2012 at 7:05 PM, Serhiy Storchaka <storchaka(a)gmail.com> wrote:
> On 19.12.12 09:24, Nick Coghlan wrote:
>> With any of these changes in the docs, please don't forget to include
>> appropriate "versionchanged" directives. Many people using the Python 3
>> docs at "docs.python.org/3/ <http://docs.python.org/3/>" will still be
>> on Python 3.2, and thus relying on the presence of such directives to
>> let them know that while the various OS-related exception names are now
>> just aliases for OSError in 3.3+, the distinctions still matter in 3.2.
> I also propose to apply all these documentation changes to 3.3.
> Python-checkins mailing list
How about folding them???
I did it, now I don't need a power supply anymore :O
On Thu 20/12/12 19:52, Trent Nelson trent(a)snakebite.org wrote:
> No problemo'. If only all the other Snakebite servers could fit in
> my palm and run off 0.25A.
The http.client HTTPConnection._send_output method has an optimization for
avoiding bad interactions between delayed ACK and the Nagle algorithm.
Unfortunately this interacts rather poorly in the case where the
message_body is a bytes instance and is rather large.
If the message_body is bytes it is appended to the headers, which causes a
copy of the data. When message_body is large this duplication of data can
cause a significant spike in memory usage.
(In my particular case I was uploading a 200MB file to 30 hosts at the same
time, leading to memory spikes over 6GB.)
I've solved this by subclassing and removing the optimization; however, I'd
appreciate thoughts on how this could best be solved in the library itself.
Options I have thought of are:
1: Have some size threshold on the copy. A little bit too much magic.
Unclear what the size threshold should be.
2: Provide an explicit argument to turn the optimization on/off. This is
ugly as it would need to be threaded up the call chain to the request methods.
3: Provide a property on the HTTPConnection object which enables the
optimization or not. Optionally configured as part of __init__.
4: Add a class level attribute (similar to auto_open, default_port, etc)
which controls the optimization.
I'd be very interested to get some feedback so I can craft the appropriate patch.
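For illustration, here is a standalone mimic of the concatenation behavior described above (send_output_like is made up for this sketch, not http.client's actual code); wrapping the body in a file-like object is one caller-side way to sidestep the copy:

```python
import io

def send_output_like(header_lines, message_body):
    """Mimics the pattern described above: a bytes body is
    concatenated to the headers (copying it), while a file-like
    body is sent separately in chunks."""
    msg = b"\r\n".join(header_lines) + b"\r\n\r\n"
    sent = []
    if isinstance(message_body, bytes):
        msg = msg + message_body  # full copy of the body happens here
        message_body = None
    sent.append(msg)
    if message_body is not None:
        # File-like bodies are streamed in chunks; no large copy.
        while True:
            chunk = message_body.read(8192)
            if not chunk:
                break
            sent.append(chunk)
    return sent

headers = [b"PUT /upload HTTP/1.1", b"Host: example.com"]
body = b"x" * (1 << 20)  # 1 MiB stand-in for the 200MB upload

copied = send_output_like(headers, body)                # one big buffer
streamed = send_output_like(headers, io.BytesIO(body))  # many small sends
```

With the bytes body, `copied` holds a single buffer roughly twice the size of the body's memory footprint during construction; with the BytesIO wrapper, the body is emitted in 8 KiB pieces and never duplicated wholesale.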
On Wed, Dec 19, 2012 at 7:16 AM, andrew.svetlov
> changeset: 80934:a6ea6f803017
> user: Andrew Svetlov <andrew.svetlov(a)gmail.com>
> date: Tue Dec 18 23:16:44 2012 +0200
> Mention OSError instead of IOError in the docs.
> Doc/faq/library.rst | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
> diff --git a/Doc/faq/library.rst b/Doc/faq/library.rst
> --- a/Doc/faq/library.rst
> +++ b/Doc/faq/library.rst
> @@ -209,7 +209,7 @@
> c = sys.stdin.read(1)
> print("Got character", repr(c))
> - except IOError:
> + except OSError:
> termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
> @@ -222,7 +222,7 @@
> :func:`termios.tcsetattr` turns off stdin's echoing and disables
> mode. :func:`fcntl.fnctl` is used to obtain stdin's file descriptor
> and modify them for non-blocking mode. Since reading stdin when it is
> - results in an :exc:`IOError`, this error is caught and ignored.
> + results in an :exc:`OSError`, this error is caught and ignored.
With any of these changes in the docs, please don't forget to include
appropriate "versionchanged" directives. Many people using the Python 3
docs at "docs.python.org/3/" will still be on Python 3.2, and thus relying
on the presence of such directives to let them know that while the various
OS-related exception names are now just aliases for OSError in 3.3+, the
distinctions still matter in 3.2.
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I'm implementing the buffer API and some of memoryview for Jython. I
have read with interest, and mostly understood, the discussion in Issue
#10181 that led to the v3.3 re-implementation of memoryview and
much-improved documentation of the buffer API. Although Jython is
targeting v2.7 at the moment, and 1-D bytes (there's no Jython NumPy),
I'd like to lay a solid foundation that benefits from the recent CPython
work. I hope that some of the complexity in memoryview stems from legacy
considerations I don't have to deal with in Jython.
I am puzzled that PEP 3118 makes some specifications that seem
unnecessary and complicate the implementation. Would those who know the
API inside out answer a few questions?
My understanding is this: When a consumer requests a buffer from the
exporter it specifies using flags how it intends to navigate it. If the
buffer actually needs more apparatus than the consumer proposes, this
raises an exception. If the buffer needs less apparatus than the
consumer proposes, the exporter has to supply what was asked for. For
example, if the consumer sets PyBUF_STRIDES, and the buffer can only be
navigated by using suboffsets (PIL-style), this raises an exception.
Alternatively, if the consumer sets PyBUF_STRIDES, and the buffer is
just a simple byte array, the exporter has to supply shape and strides
arrays (with trivial values), since the consumer is going to use those.
Is there any harm in supplying shape and strides when they were not
requested? The PEP says: "PyBUF_ND ... If this is not given then shape
will be NULL". It doesn't stipulate that strides will be null if
PyBUF_STRIDES is not given, but the library documentation says so.
suboffsets is different since, even when requested, it will be NULL if not needed.
Similar, but simpler, the PEP says "PyBUF_FORMAT ... If format is not
explicitly requested then the format must be returned as NULL (which
means "B", or unsigned bytes)". What would be the harm in returning "B"?
One place where this really matters is in the implementation of
memoryview. PyMemoryView requests a buffer with the flags PyBUF_FULL_RO,
so even a simple byte buffer export will come with shape, strides and
format. A consumer (of the memoryview's buffer API) might specify
PyBUF_SIMPLE: according to the PEP I can't simply give it the original
buffer since required fields (that the consumer will presumably not
access) are not NULL. In practice, I'd like to: what could possibly go wrong?
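The CPython behavior being referred to is easy to observe from Python code: because memoryview requests a full export, even a trivial 1-D bytes buffer comes back with shape, strides, and format populated:

```python
m = memoryview(b"abc")
print(m.shape)       # (3,)
print(m.strides)     # (1,)
print(m.format)      # 'B'
# suboffsets stays empty for a contiguous buffer, even though
# PyBUF_FULL_RO requested it -- it is only filled in when needed.
print(m.suboffsets)  # ()
```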
Looks like the Windows buildbots were broken by this commit.
On Tue, Dec 18, 2012 at 12:07 AM, antoine.pitrou
> changeset: 80923:a85673b55177
> user: Antoine Pitrou <solipsis(a)pitrou.net>
> date: Mon Dec 17 23:05:59 2012 +0100
> Following issue #13390, fix compilation --without-pymalloc, and make sys.getallocatedblocks() return 0 in that situation.
> Doc/library/sys.rst | 15 ++++++++-------
> Lib/test/test_sys.py | 7 ++++++-
> Objects/obmalloc.c | 7 +++++++
> 3 files changed, 21 insertions(+), 8 deletions(-)
> diff --git a/Doc/library/sys.rst b/Doc/library/sys.rst
> --- a/Doc/library/sys.rst
> +++ b/Doc/library/sys.rst
> @@ -396,16 +396,17 @@
> .. function:: getallocatedblocks()
> Return the number of memory blocks currently allocated by the interpreter,
> - regardless of their size. This function is mainly useful for debugging
> - small memory leaks. Because of the interpreter's internal caches, the
> - result can vary from call to call; you may have to call
> - :func:`_clear_type_cache()` to get more predictable results.
> + regardless of their size. This function is mainly useful for tracking
> + and debugging memory leaks. Because of the interpreter's internal
> + caches, the result can vary from call to call; you may have to call
> + :func:`_clear_type_cache()` and :func:`gc.collect()` to get more
> + predictable results.
> + If a Python build or implementation cannot reasonably compute this
> + information, :func:`getallocatedblocks()` is allowed to return 0 instead.
> .. versionadded:: 3.4
> - .. impl-detail::
> - Not all Python implementations may be able to return this information.
> .. function:: getcheckinterval()
> diff --git a/Lib/test/test_sys.py b/Lib/test/test_sys.py
> --- a/Lib/test/test_sys.py
> +++ b/Lib/test/test_sys.py
> @@ -7,6 +7,7 @@
> import operator
> import codecs
> import gc
> +import sysconfig
> # count the number of test runs, used to create unique
> # strings to intern in test_intern()
> @@ -616,9 +617,13 @@
> "sys.getallocatedblocks unavailable on this build")
> def test_getallocatedblocks(self):
> # Some sanity checks
> + with_pymalloc = sysconfig.get_config_var('WITH_PYMALLOC')
> a = sys.getallocatedblocks()
> self.assertIs(type(a), int)
> - self.assertGreater(a, 0)
> + if with_pymalloc:
> + self.assertGreater(a, 0)
> + else:
> + self.assertEqual(a, 0)
> # While we could imagine a Python session where the number of
> # multiple buffer objects would exceed the sharing of references,
> diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c
> --- a/Objects/obmalloc.c
> +++ b/Objects/obmalloc.c
> @@ -1316,6 +1316,13 @@
> + return 0;
> #endif /* WITH_PYMALLOC */
> #ifdef PYMALLOC_DEBUG
> Repository URL: http://hg.python.org/cpython
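For reference, the documented contract can be sanity-checked from Python with a trivial guard, mirroring the test change above:

```python
import sys

# getallocatedblocks() is new in 3.4 and absent on some builds and
# implementations, hence the hasattr() guard.
if hasattr(sys, "getallocatedblocks"):
    n = sys.getallocatedblocks()
    assert isinstance(n, int)
    # Positive with pymalloc; builds that cannot compute the number
    # of allocated blocks are allowed to return 0.
    assert n >= 0
```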
Scenario: I'm working on a change that I want to actively test on a
bunch of Snakebite hosts. Getting the change working is going to be
an iterative process -- lots of small commits trying to attack the
problem one little bit at a time.
Eventually I'll get to a point where I'm happy with the change. So,
it's time to do all the necessary cruft that needs to be done before
making the change public. Updating docs, tweaking style, Misc/NEWS,
etc. That'll involve at least a few more commits. Most changes
will also need to be merged to other branches, too, so that needs to
be taken care of. (And it's a given that I would have been pulling
and merging from hg.p.o/cpython during the whole process.)
Then, finally, it's time to push.
Now, if I understand how Mercurial works correctly, using the above
workflow will result in all those little intermediate hacky commits
being forever preserved in the global/public cpython repo. I will
have polluted the history of all affected files with all my changes.
That just doesn't "feel" right. But, it appears as though it's an
intrinsic side-effect of how Mercurial works. With git, you have a
bit more flexibility to affect how your final public commits look via
merge fast-forwarding. Subversion gives you the ultimate control of
how your final commit looks (albeit at the expense of having to do
the merging in a much more manual fashion).
As I understand it, even if I contain all my intermediate commits in
a server-side cloned repo, that doesn't really change anything; all
commits will eventually be reflected in cpython via the final `hg push`.
So, my first question is this: is this actually a problem? Is the
value I'm placing on "pristine" log histories misplaced in the DVCS
world? Do we, as a collective, care?
I can think of two alternate approaches I could use:
- Use a common NFS mount for each source tree on every Snakebite
box (and coerce each build to be done in a separate area).
Get everything perfect and then do a single commit of all
changes. The thing I don't like about this approach is that
I can't commit/rollback/tweak/bisect intermediate commits as
I go along -- some changes are complex and take a few attempts
to get right.
- Use a completely separate clone to house all the intermediate
commits, then generate a diff once the final commit is ready,
then apply that diff to the main cpython repo, then push that.
This approach is fine, but it seems counter-intuitive to the
whole concept of DVCS.