I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
http://code.google.com/p/googleappengine/issues/detail?id=671
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the
threads?
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to
deadlocking and to blocking the parent Python script while waiting on data
from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time reading only the data that is available instead of
blocking to wait for the program to produce data [1] [2] [3]. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented [4] [5]. While communicate() can be used to alleviate some
of the buffering issues, it will still cause the parent process to block
while attempting to read data when none is available to be read from the
child process.
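To make the problem concrete, the following Unix-oriented sketch reads only
the data currently available instead of blocking (illustrative only; the
helper name recv_some is made up and is not part of this proposal):

```python
import os
import select
import subprocess
import sys

def recv_some(proc, timeout=0.1):
    """Return whatever bytes are currently readable on proc.stdout."""
    ready, _, _ = select.select([proc.stdout], [], [], timeout)
    if ready:
        return os.read(proc.stdout.fileno(), 4096)
    return b""  # nothing available yet; a plain read() would have blocked

proc = subprocess.Popen(
    [sys.executable, "-c", "print('ping')"],
    stdout=subprocess.PIPE,
)
proc.wait()  # ensure the child has written its output
data = recv_some(proc)
print(data)
```

A plain proc.stdout.read() here would block until EOF; the select-based
check returns immediately when nothing is pending.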
Rationale:
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the
utility of the Python standard library on both Unix-based and Windows
builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such, and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those objects, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation [9], as well as a blog detailing
the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel32 DLL in an asynchronous manner. On Unix-based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
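On Unix, the file-control interface in question is the fcntl module; the
underlying technique looks roughly like this (an illustrative sketch, not
the actual implementation):

```python
import fcntl
import os
import subprocess
import sys

# A child that produces no output for a while:
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(10)"],
    stdout=subprocess.PIPE,
)

# Switch the read end of the pipe to non-blocking mode.
fd = proc.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    chunk = os.read(fd, 4096)
except BlockingIOError:
    chunk = b""  # no data yet; a blocking read would have stalled here

proc.kill()
proc.wait()
print(repr(chunk))
```

With O_NONBLOCK set, the read returns (or raises) immediately instead of
waiting for the child to produce data.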
The Popen._recv function requires the pipe name to be passed as an
argument, so the Popen.recv function exists to select stdout as the pipe
for Popen._recv by default, and Popen.recv_err selects stderr by default.
"Popen.recv" and "Popen.recv_err" are much easier to read and understand
than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
References:
[1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
http://mail.python.org/pipermail/python-bugs-list/2006-December/
036524.html
[2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
http://ivory.idyll.org/blog/feb-07/problems-with-subprocess
[3] How can I run an external command asynchronously from Python? - Stack
Overflow
http://stackoverflow.com/questions/636561/how-can-i-run-an-external-
command-asynchronously-from-python
[4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
[5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
[6] Issue 1191964: asynchronous Subprocess - Python tracker
http://bugs.python.org/issue1191964
[7] Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
http://code.activestate.com/recipes/440554/
[8] subprocess.rst - subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=s…
[9] subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev
[10] Python Subprocess Dev
http://subdev.blogspot.com/
Copyright:
This PEP is licensed under the Open Publication License:
http://www.opencontent.org/openpub/.
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>>
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
>
> Hi Eric,
> One of the reasons you're not getting many response is that you've not
> pasted the contents of the PEP in this message. That makes it really
> easy for people to comment on various sections.
>
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
>
>
> --
> Regards,
> Benjamin
>
In reviewing a fix for the metaclass calculation in __build_class__
[1], I realised that PEP 3115 poses a potential problem for the common
practice of using "type(name, bases, ns)" for dynamic class creation.
Specifically, if one of the base classes has a metaclass with a
significant __prepare__() method, then the current idiom will do the
wrong thing (and most likely fail as a result), since "ns" will
probably be an ordinary dictionary instead of whatever __prepare__()
would have returned.
Initially I was going to suggest making __build_class__ part of the
language definition rather than a CPython implementation detail, but
then I realised that various CPython specific elements in its
signature made that a bad idea.
Instead, I'm thinking along the lines of an
"operator.prepare(metaclass, bases)" function that does the metaclass
calculation dance, invoking __prepare__() and returning the result if
it exists, otherwise returning an ordinary dict. Under the hood we
would refactor this so that operator.prepare and __build_class__ were
using a shared implementation of the functionality at the C level - it
may even be advisable to expose that implementation via the C API as
PyType_PrepareNamespace().
The correct idiom for dynamic type creation in a PEP 3115 world would then be:
from operator import prepare
cls = type(name, bases, prepare(type, bases))
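A hand-rolled version of the dance, for concreteness (operator.prepare()
itself doesn't exist yet; this sketch only shows the logic it would
encapsulate, and the names are illustrative):

```python
def prepare(metaclass, bases):
    """Sketch of the proposed operator.prepare(metaclass, bases)."""
    # The metaclass calculation dance: find the most derived metaclass.
    winner = metaclass
    for base in bases:
        candidate = type(base)
        if issubclass(candidate, winner):
            winner = candidate
    # Invoke __prepare__() and return its result if it exists,
    # otherwise fall back to an ordinary dict.
    if hasattr(winner, "__prepare__"):
        return winner.__prepare__("<dynamic>", bases)
    return {}

class Meta(type):
    @classmethod
    def __prepare__(mcl, name, bases, **kwds):
        # A "significant" __prepare__: seed the namespace.
        return {"marker": True}

class Base(metaclass=Meta):
    pass

# The idiom from above, using the sketched helper:
ns = prepare(type, (Base,))
cls = type("Dynamic", (Base,), ns)
```

With a plain dict instead of prepare()'s result, the seeded "marker"
entry would be silently lost, which is exactly the failure mode described
above.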
Thoughts?
Cheers,
Nick.
[1] http://bugs.python.org/issue1294232
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I recently started looking at some ctypes issues. I dug a bit into
http://bugs.python.org/issue6069 and then I found
http://bugs.python.org/issue11920. They both boil down to the fact that
bitfield allocation is up to the compiler, which is different in GCC and
MSVC. Currently we have a hard-coded allocation strategy based on platform in
cfields.c:
> if (bitsize /* this is a bitfield request */
> && *pfield_size /* we have a bitfield open */
> #ifdef MS_WIN32
> /* MSVC, GCC with -mms-bitfields */
> && dict->size * 8 == *pfield_size
> #else
> /* GCC */
> && dict->size * 8 <= *pfield_size
> #endif
> && (*pbitofs + bitsize) <= *pfield_size) {
> /* continue bit field */
> fieldtype = CONT_BITFIELD;
> #ifndef MS_WIN32
> } else if (bitsize /* this is a bitfield request */
> && *pfield_size /* we have a bitfield open */
> && dict->size * 8 >= *pfield_size
> && (*pbitofs + bitsize) <= dict->size * 8) {
> /* expand bit field */
> fieldtype = EXPAND_BITFIELD;
> #endif
So when creating a bitfield structure, its size can differ between Linux
and Windows.
class MyStructure(ctypes.BigEndianStructure):
    _pack_ = 1  # aligned to 8 bits, not ctypes default of 32
    _fields_ = [
        ("Data0", ctypes.c_uint32, 32),
        ("Data1", ctypes.c_uint8, 3),
        ("Data2", ctypes.c_uint16, 12),
    ]
sizeof for the above structure is 6 on a GCC build and 7 on an MSVC build.
This leads to confusion and issues, because we can't always interoperate
correctly with code compiled with a different compiler than the one Python
itself was compiled with on that platform. The short-term solution is to
add a warning about this to the documentation. Longer term, though, I think
it would be better to
add a property on the Structure class for configurable allocation strategy,
for example Native (default), GCC or MSVC, and use the given strategy when
allocating the bitfield. Native would behave much as things do now, but we
would also allow, for example, GCC-style allocation on Windows, with the
ability to extend this if we ever run into similar issues with other
compilers. This shouldn't be too difficult to implement, would be backwards
compatible, and would improve interop. I would like to hear some opinions
on this.
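Until then, the only portable safeguard is checking sizes at runtime. Note
that adjacent bitfields sharing a single storage unit of one base type pack
identically under both strategies; the divergence only bites when the base
types differ, as in the structure above. A minimal check (the field names
and values here are just for illustration):

```python
import ctypes

# Adjacent bitfields of the same base type that fit in one storage unit
# are packed the same way by GCC and MSVC, so this size is portable:
class Flags(ctypes.Structure):
    _pack_ = 1
    _fields_ = [
        ("a", ctypes.c_uint8, 3),
        ("b", ctypes.c_uint8, 5),
    ]

assert ctypes.sizeof(Flags) == 1  # 3 + 5 bits fit in one byte

f = Flags()
f.a = 5   # fits in 3 bits
f.b = 17  # fits in 5 bits
print(f.a, f.b)
```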
Thank you,
Vlad
Hello,
There is a need for the default Python2 install to place a symlink at
/usr/bin/python2 that points to /usr/bin/python, or for the documentation to
recommend that packagers ensure that python2 is defined. Also, all
documentation should be changed to recommend that "#!/usr/bin/env python2"
be used as the shebang for Python 2 scripts.
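What's being asked of packagers is tiny; demonstrated here in a scratch
directory rather than the real /usr/bin (the fake interpreter script is
just for illustration):

```shell
# Demonstration in a scratch directory (packagers would use /usr/bin):
mkdir -p /tmp/pybin-demo
printf '#!/bin/sh\necho "Python 2.7.1"\n' > /tmp/pybin-demo/python
chmod +x /tmp/pybin-demo/python
# The requested compatibility symlink:
ln -sf python /tmp/pybin-demo/python2
/tmp/pybin-demo/python2
```

With that in place, a "#!/usr/bin/env python2" shebang resolves on every
distribution, regardless of where plain "python" points.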
This is needed because some distributions (Arch Linux, in particular) point
/usr/bin/python at /usr/bin/python3, while others (including Slackware,
Debian, and the BSDs, and probably more) do not even define the python2
command.
This means that a script has no way of achieving cross-platform
compatibility. Many distributions will soon begin aliasing /usr/bin/python
to /usr/bin/python3, and for the next couple of years it would be best to
use a python2 or python3 shebang in all scripts, making no assumptions
about plain python, which should only be invoked interactively. This email
from about 3 years ago seems relevant:
http://mail.python.org/pipermail/python-3000/2008-March/012421.html
Again, this issue needs to be addressed by the Python developers themselves
so that different *nix distributions will handle it consistently, allowing
Python scripts to continue to be cross-platform.
Thanks,
Kerrick Staley
Hello,
I'm writing because discussion in a bug report I submitted
(<http://bugs.python.org/issue12014>) has suggested that, insofar as
at least part of the issue revolves around the interpretation of PEP
3101, that aspect belonged on python-dev. In particular, I was told
that the PEP, not the documentation, is authoritative. Since I'm the
one who thinks something is wrong, it seems appropriate for me to be
the one to bring it up.
Basically, the issue is that the current behavior of str.format is at
variance at least with the documentation
<http://docs.python.org/library/string.html#formatstrings>, is almost
certainly at variance with PEP 3101 in one respect, and is in my
opinion at variance with PEP 3101 in another respect as well, regarding
what characters can be present in what the grammar given in the
documentation calls an element_index, that is, the bit between the
square brackets in "{0.attr[idx]}".
Both discovering the current behavior and interpreting the
documentation are pretty straightforward; interpreting what the PEP
actually calls for is more vexed. I'll do the first two things first.
TOC for the remainder:
1. What does the current implementation do?
2. What does the documentation say?
3. What does the PEP say? [this part is long, but the PEP is not
clear, and I wanted to be thorough]
4. Who cares?
1. What does the current implementation do?
Suppose you have this dictionary:
d = {"@": 0,
     "!": 1,
     ":": 2,
     "^": 3,
     "}": 4,
     "{": {"}": 5},
     }
Then the following expressions have the following results:
(a) "{0[@]}".format(d) --> '0'
(b) "{0[!]}".format(d) --> ValueError: Missing ']' in format string
(c) "{0[:]}".format(d) --> ValueError: Missing ']' in format string
(d) "{0[^]}".format(d) --> '3'
(e) "{0[}]}".format(d) --> ValueError: Missing ']' in format string
(f) "{0[{]}".format(d) --> ValueError: unmatched '{' in format
(g) "{0[{][}]}".format(d) --> '5'
Given (e) and (f), I think (g) should be a little surprising, though
you can probably guess what's going on and it's not hard to see why it
happens if you look at the source: (e) and (f) fail because
MarkupIterator_next (in Objects/stringlib/string_format.h) scans
through the string looking for curly braces, because it treats them as
semantically significant no matter what context they occur in. So,
according to MarkupIterator_next, the *first* right curly brace in (e)
indicates the end of the replacement field, giving "{0[}". In (f), the
second left curly brace indicates (to MarkupIterator_next) the start
of a *new* replacement field, and since there's only one right curly
brace, it complains. In (g), MarkupIterator_next treats the second
left curly brace as starting a new replacement field and the first
right curly brace as closing it. However, actually, those braces don't
define new replacement fields, as indicated by the fact that the whole
expression treats the element_index fields as just plain old strings.
(So the current implementation is internally inconsistent, acting at
one point as if the braces have to be balanced because '{[]}' is a
replacement field and at another point treating the braces as
insignificant.)
The explanation for (b) and (c) is that parse_field (same source file)
treats ':' and '!' as indicating the end of the field_name section of
the replacement field, regardless of whether those characters occur
within square brackets or not.
So, that's what the current implementation does, in those cases.
2. What does the documentation say?
The documentation gives a grammar for replacement fields:
"""
replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"
field_name ::= arg_name ("." attribute_name | "[" element_index "]")*
arg_name ::= [identifier | integer]
attribute_name ::= identifier
element_index ::= integer | index_string
index_string ::= <any source character except "]"> +
conversion ::= "r" | "s"
format_spec ::= <described in the next section>
"""
Given this definition of index_string, all of (a)--(g) should be
legal, and the results should be the strings '0', '1', '2', '3', '4',
"{'}': 5}", and '5'. There is no need to exclude ':', '!', '}', or '{'
from the index_string field; allowing them creates no ambiguity,
because the field is delimited by square brackets.
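For keys that avoid the contested characters entirely, the grammar and the
implementation already agree, which is worth keeping in view:

```python
# Ordinary bracket-free string keys work exactly as the grammar says:
d = {"spam": "eggs", "a-b": 1}
assert "{0[spam]}".format(d) == "eggs"

# A non-integer index is used as a string key; no quoting is needed:
assert "{0[a-b]}".format(d) == "1"

# An all-digit index is treated as an integer:
assert "{0[2]}".format(["x", "y", "z"]) == "z"
```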
Tangent: the definition of attribute_name here also does not describe
the current behavior ("{0. ;}".format(x) works fine and will call
getattr(x, " ;")) and the PEP does not require the attribute_name to
be an identifier; in fact it explicitly states that the attribute_name
doesn't need to be a valid Python identifier. attribute_name should
read (to reflect something like actual behavior, anyway) "<any source
character except '[', '.', ':', '!', '{', or '}'> +". The same goes
for arg_name (with the integer alternation). Observe:
>>> x = lambda: None
>>> setattr(x, ']]', 3)
>>> "{].]]}".format(**{"]":x}) # (h)
'3'
One can also presently do this (hence "something like actual behavior"):
>>> setattr(x, 'f}', 4)
>>> "{a{s.f}}".format(**{"a{s":x})
'4'
But not this:
>>> "{a{s.func_name}".format(**{"a{s":x})
as it raises a ValueError, for the same reason as explains (g) above.
3. What does the PEP say?
Well... It's actually hard to tell!
Summary: The PEP does not contain a grammar for replacement fields,
and is surprisingly nonspecific about what can appear where, at least
when talking about the part of the replacement field before the format
specifier. The most reasonable interpretation of the parts of the PEP
that seem to be relevant favors the documentation, rather than the
implementation.
This can be separated into two sub-questions.
A. What does the PEP say about ':' and '!'?
It says two things that pertain to element_index fields.
The first is this:
"""
The rules for parsing an item key are very simple.
If it starts with a digit, then it is treated as a number, otherwise
it is used as a string.
Because keys are not quote-delimited, it is not possible to
specify arbitrary dictionary keys (e.g., the strings "10" or
":-]") from within a format string.
"""
So it notes that some things can't be used:
- Because anything composed entirely of digits is treated as a
number, you can't get a string composed entirely of digits. Clear
enough.
- What's the explanation for the second example, though? Well, you
can't have a right square bracket in the index_string, so that would
already mean that you can't do this: "{0[:-]]}".format(...) regardless
of whether colons are legal or not. So, although the PEP gives an
example of a string that can't appear in the element_index part of a
replacement field, and that string contains a colon, that string would
have been disallowed for other reasons anyway.
The second is this:
"""
The str.format() function will have
a minimalist parser which only attempts to figure out when it is
"done" with an identifier (by finding a '.' or a ']', or '}',
etc.).
"""
This requires some interpretation. For one thing, the contents of an
element_index aren't identifiers. For another, it's not true that
you're done with an identifier (or whatever) whenever you see *any* of
those; it depends on context. When parsing this: "{0[a.b]}" the parser
should not stop at the '.'; it should keep going until it reaches the
']', and that will give the element_index. And when parsing this:
"{0.f]oo[bar].baz}", it's done with the identifier "foo" when it
reaches the '[', not when it reaches the second '.', and not when it
reaches the ']', either (recall (h)). The "minimalist parser" is, I
take it, one that works like this: when parsing an arg_name you're
done when you reach a '[', a ':', a '!', a '.', '{', or a '}'; the
same rules apply when parsing an attribute_name; when parsing an
element_index you're done when you see a ']'.
Now as regards the curly braces there are some other parts of the PEP
that perhaps should be taken into account (see below), but I can find
no justification for excluding ':' and '!' from the element_index
field; the bit quoted above about having a minimalist parser isn't a
good justification for that, and it's the only part of the entire PEP
that comes close to addressing the question.
B. What does it say about '}' and '{'?
It still doesn't say much explicitly. There's the quotation I just
gave, and then these passages:
"""
Brace characters ('curly braces') are used to indicate a
replacement field within the string:
[...]
Braces can be escaped by doubling:
"""
Taken by itself, this would suggest that wherever there's an unescaped
brace, there's a replacement field. That would mean that the current
implementation's behavior is correct in (e) and (f) but incorrect in
(g). However, I think this is a bad interpretation; unescaped braces
can indicate the presence of a replacement field without that meaning
that *within* a replacement field braces have that meaning, no matter
where within the replacement field they occur.
Later in the PEP, talking about this example:
"{0:{1}}".format(a, b)
We have this:
"""
These 'internal' replacement fields can only occur in the format
specifier part of the replacement field. Internal replacement fields
cannot themselves have format specifiers. This implies also that
replacement fields cannot be nested to arbitrary levels.
Note that the doubled '}' at the end, which would normally be
escaped, is not escaped in this case. The reason is because
the '{{' and '}}' syntax for escapes is only applied when used
*outside* of a format field. Within a format field, the brace
characters always have their normal meaning.
"""
The claim "within a format field, the brace characters always have
their normal meaning" might be taken to mean that within a
*replacement* field, brace characters always indicate the start (or
end) of a replacement field. But the PEP at this point is clearly
talking about the formatting section of a replacement field---the part
that follows the ':', if present. ("Format field" is nowhere defined
in the PEP, but it seems reasonable to take it to mean "the format
specifier of a replacement field".) However, it seems most reasonable
to me to take "normal meaning" to mean "just a brace character".
Note that the present implementation only kinda sorta conforms to the
PEP in this respect:
>>> import datetime
>>> format(datetime.datetime.now(), "{{%Y")
'{{2011'
>>> "{0:{{%{1}}".format(datetime.datetime.now(), 'Y') # (i)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: unmatched '{' in format
>>> "{0:{{%{1}}}}".format(datetime.datetime.now(), 'Y') # (j)
'{2011}'
Here the brace characters in (i) and (j) are treated, again in
MarkupIterator_next, as indicating the start of a replacement field.
In (i), this leads the function to throw an exception; since they're
balanced in (j), processing proceeds further, and the doubled braces
aren't treated as indicating the start or end of a replacement
field---because they're escaped. Given that the format spec part of a
replacement field can contain further replacement fields, this is, I
think, correct behavior, but it's not easy to see how it comports with
the PEP, whose language is not very exact.
The call to the built-in format() bypasses the mechanism that leads to
these results.
The PEP is very, very nonspecific about the parts of the replacement
field that precede the format specifier. I don't know what kind of
discussion surrounded the drawing up of the grammar that appears in
the documentation, but I think that it, and not the implementation,
should be followed.
The implementation only works the way it does because of parsing
shortcuts: it raises ValueErrors for (b) and (c) because it
generalizes something true of the attribute_name field (encountering a
':' or '!' means one has moved on to the format_spec or conversion
part of the replacement field) to the element_index field. And it
raises an error for (e) and (f), but lets (g) through, for the reasons
already mentioned. It is, in that respect, inconsistent; it treats the
curly brace as having one semantic significance at one point and an
entirely different significance at another point, so that it does the
right thing in the case of (g) entirely by accident. There is, I
think, no way to construe the PEP so that it is reasonable to do what
the present implementation does in all three cases (if "{" indicates
the start of a replacement field in (f), it should do so in (g) as
well); I think it's
actually pretty difficult to construe the PEP in any way that makes
what it does in the case of (e) and (f) correct.
4. Who cares?
Well, I do. (Obviously.) I even have a use case: I want to be able to
put arbitrary (or as close to arbitrary as possible) strings in the
element_index field, because I've got some objects that (should!)
enable me to do this:
p.say("I'm warning you, {e.red.underline[don't touch that!]}")
and have this written ("e" for "effects") to p.out:
I'm warning you, \x1b[31m\x1b[4mdon't touch that!\x1b[0m
I have a way around the square bracket problem, but it would be quite
burdensome to have to deal with all of !:{} as well; enough that
I would fall back on something like this:
"I'm warning you, {0}".format(e.red.underline("don't touch that!"))
or some similar interpolation-based strategy, which I think would be a
shame, because of the way it chops up the string.
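For concreteness, a toy stand-in for the effects object (a sketch only;
the real object differs) showing that the fallback does reproduce the
output given earlier:

```python
# A toy stand-in for the "e" (effects) object; the escape codes are the
# ones shown in the example output above:
class Effect:
    def __init__(self, codes=()):
        self.codes = codes

    def __getattr__(self, name):
        table = {"red": "\x1b[31m", "underline": "\x1b[4m"}
        return Effect(self.codes + (table[name],))

    def __call__(self, text):
        # Emit the accumulated codes, the text, then a reset.
        return "".join(self.codes) + text + "\x1b[0m"

e = Effect()
s = "I'm warning you, {0}".format(e.red.underline("don't touch that!"))
print(repr(s))
```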
But I also think the present behavior is extremely counterintuitive,
unnecessarily complex, and illogical (even unpythonic!). Isn't it
bizarre that (g) should work, given what (e) and (f) do? Isn't it
strange that (b) and (c) should fail, given that there's no real
reason for them to do so---no ambiguity that has to be avoided? And
something's gotta give; the documentation and the implementation do
not correspond.
Beyond the counterintuitiveness of the present implementation, it is
also, I think, self-inconsistent. (e) and (f) fail because the
interior brace is treated as starting or ending a replacement field,
even though interior replacement fields aren't allowed in that part of
a replacement field. (g) succeeds because the braces are balanced:
they are treated at one point as if they were part of a replacement
field, and at another (correctly) as if they are not. But this makes
the failure of (e) and (f) unaccountable. It would not have been
impossible for the PEP to allow interior replacement fields anywhere,
and not just in the format spec, in which case we might have had this:
(g') "{0[{][}]}".format(range(10), **{'][':4}) --> '4'
or this:
(g'') "{0[{][}]}".format({'4':3}, **{'][':4}) --> '3'
or something with that general flavor.
As far as I can tell, the only way to consistently maintain that (e)
and (f) should fail requires that one take (g') or (g'') to be
correct: either the interior braces signal replacement fields (hence
must be balanced) or they don't (or they're escaped).
Regarding the documentation, it could of course be rendered correct by
changing *it*, rather than the implementation. But wouldn't it be
tremendously weird to have to explain that, in the part of the
replacement field preceding the conversion, you can't employ any curly
braces, unless they're balanced, and you can't employ ':' or '!' at
all, even though they have no semantic significance? So these are out:
{0[{].foo}
{0[}{}]}
{0[a:b]}
But these are in:
{0[{}{}]}
{0[{{].foo.}}} (k)
((k) does work, if you give it an object with the right structure,
though it probably should not.)
And, moreover, someone would then have to actually change the
documentation, whereas there's a patch already, attached to the bug
report linked way up at the top of this email, that makes (a)--(g) all
work, leaves (i) and (j) as they are, and has the welcome side-effect
of making (k) *not* work (if any code anywhere relies on (k) or things
like it working, I will be very surprised: anyway the fact that (k)
works is, technically, undocumented). It is also quite simple. It
doesn't affect the built-in format(), because the built-in format() is
concerned only with the format-specifier part of the replacement field
and not all the stuff that comes before that, telling str.format
*what* object to format.
Thanks for reading,
--
Ben Wolfson
"Human kind has used its intelligence to vary the flavour of drinks,
which may be sweet, aromatic, fermented or spirit-based. ... Family
and social life also offer numerous other occasions to consume drinks
for pleasure." [Larousse, "Drink" entry]
PEP 397 (Python launcher for Windows) has a reference implementation in Python.
Does anyone know of a C implementation, or is planning/working on one? I realise
this is the final objective, so such implementation might be premature, but
perhaps someone has been experimenting ...
Regards,
Vinay Sajip
Hello!
http://bugs.python.org/issue10403 is a documentation bug which talks
about using the term 'attribute' instead of the term 'member' when it
denotes the class attributes. Agreed.
But the discussion goes on to mention that,
"Members and methods" should just be "attributes".
I find this bit odd. If the two terms are used together, then
replacing it with attributes is fine. But the term 'methods' cannot be
replaced with 'attributes' as it changes the meaning.
Take this case,
:class:`BZ2File` provides all of the methods specified by the
:class:`io.BufferedIOBase`, except for :meth:`detach` and :meth:`truncate`.
Iteration and the :keyword:`with` statement are supported.
is correct, whereas replacing "methods" with "attributes" would make it:
:class:`BZ2File` provides all of the attributes specified by the
:class:`io.BufferedIOBase`, except for :meth:`detach` and :meth:`truncate`.
Iteration and the :keyword:`with` statement are supported.
It does not seem correct.
My stance is: use 'attribute' where the docs say 'member', but leave
'method' as it is wherever methods are actually meant. Can I still hold on
to that and modify the patch which Adam submitted, or does the 'attribute'
substitution everywhere make sense?
Thanks,
Senthil
Hi!
This is a request for clarification for the thread "how to call a function for
evry 10 seconds" from the user mailinglist/newsgroup.
The gist is this:
1. On Linux/Python 2.6, time.sleep(-1.0) raises an IOError.
2. On MS Windows/Python 2.5 or 2.7 this sleeps forever. It seems that
converting this to a 32-bit integer for Sleep() causes an underflow.
Questions:
1. Is the IOError somehow explainable?
2. Is the IOError acceptable or a bug?
3. Is the behaviour under MS Windows acceptable or a bug?
4. Since time.sleep() is documented to possibly sleep a bit longer than
specified, someone suggested that the overall sleep time could also be
truncated to a minimum of zero, i.e. treating negative values as zero.
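For reference, recent Python 3 releases reject the negative value up front
with a ValueError; this quick classification sketch makes the current
behaviour observable (the 2.5/2.6 behaviours above are as reported):

```python
import time

# Modern CPython validates the argument before sleeping:
try:
    time.sleep(-1.0)
    outcome = "slept"
except ValueError as exc:
    outcome = "rejected: %s" % exc
except OSError as exc:  # the 2.6-era Linux IOError described above
    outcome = "ioerror: %s" % exc
print(outcome)
```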
Greetings!
Uli
Not surprisingly, people are still attempting to browse and/or checkout
Python source from svn.python.org. They could be following out-of-date
instructions from somewhere or might be expecting the svn repos are
mirroring the hg ones. This causes wasted time and frustration for
users and for us following up on issues.
Could some text be added to the svn browser pages to at least indicate
that the repo is no longer being updated with a link to the
hg.python.org browser? I'm not sure what to do about the repos
themselves if people attempt to do an svn co. Perhaps that should just
be disabled totally for python?
--
Ned Deily,
nad(a)acm.org