As far as I know, the only way to report a typo or change something in
the documentation is to open an issue in the bug tracker.
This implies that:
1. If the user is not registered on the bug tracker he can't open an
issue, and he probably won't register just to report a small mistake;
2. The user has to spend some time to reach the bug tracker page, open a
new issue, write a brief description of the problem and possibly create
and attach a patch;
3. A Python developer has to read the issue, write a patch or check
that the attached patch is OK, and then apply it (though I think some
developers can edit the doc directly).
In my opinion this is rather clumsy and certainly not user-friendly.
Even if the user is registered on the bug tracker and knows how to
report an issue (and that's already a small subset of the doc readers),
he may not want to go through all these steps just to fix a typo.
The idea is to allow all users to edit the documentation pages
directly (like a wiki), but to wait for a developer's approval before
the changes are applied.
The steps will then be:
1. The user finds a mistake, clicks on an [edit] link and fixes it;
2. A developer checks whether the correction is OK and approves or refuses it.
This will also lead to the following benefits:
1. All the users can contribute even if they are not registered and/or
they don't know how/where to report the problem;
2. The process is simpler and takes less time, so users are more
likely to report mistakes;
3. If the process is easy enough, users may want to submit examples
or tips that could be useful to others;
4. The developers just have to check and approve/refuse the changes.
Again, this will require less time and they will be able to fix several
mistakes in a few minutes (if not seconds);
5. The bug tracker won't be "polluted" by issues regarding typos and
other small mistakes in the doc.
Problems and limitations:
Even if there's probably already something like this out there, I don't
know how easy it would be to find or adapt it. It shouldn't be too hard
to write something from scratch, but then again, someone will have to do it.
Something like this works well for self-explanatory corrections (like
typos), but it may not be the best approach when you have to explain
the reasons for a change. Possible solutions are:
1. Allow the user to write an (optional) comment to the correction (e.g.
"Changed xyz to match the new docstring.");
2. Open an issue to discuss the correction and then edit the page
(some developers could have direct access to the pages so they can
edit them immediately -- I don't know if there's already something like
that now or if they have to apply patches);
3. Have a "discussion page" like the the ones that are commonly used in
wikis.
I don't know how feasible this idea is, but I'd really like to have a
simpler way of editing the doc. It would also be nice if users could
contribute actively to improving the doc, adding more examples and
pointing out possible pitfalls (and the developers' approval would
still ensure correctness).
--
Ezio Melotti
One small side effect of not being able to compare incompatible types
in 3.0 is that None can no longer be used as the smallest element. Yes,
this has always been an implementation artifact and a hack, but it was
very convenient nonetheless. Is it maybe the right time to add builtin
Smallest (and also Largest) objects, i.e. two singletons such that
`Smallest < x` for every x where x is not Smallest, and `Largest > x`
for every x where x is not Largest? Although it's not hard to define
them in pure Python, and one could object that "not every n-liner needs
to be a builtin", the main added value is that they would be endorsed
as the standard; otherwise we risk mymodule.Smallest clashing with
yourmodule.Smallest.
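For instance, a minimal pure-Python sketch of what such singletons
might look like (names and details are illustrative only, not the
proposed standard):

class _Smallest(object):
    """Compares less than any object except itself."""
    def __lt__(self, other): return self is not other
    def __gt__(self, other): return False
    def __le__(self, other): return True
    def __ge__(self, other): return self is other
    def __eq__(self, other): return self is other
    def __ne__(self, other): return self is not other
    def __repr__(self): return "Smallest"

class _Largest(object):
    """Compares greater than any object except itself."""
    def __lt__(self, other): return False
    def __gt__(self, other): return self is not other
    def __le__(self, other): return self is other
    def __ge__(self, other): return True
    def __eq__(self, other): return self is other
    def __ne__(self, other): return self is not other
    def __repr__(self): return "Largest"

Smallest, Largest = _Smallest(), _Largest()

>>> sorted([3, Largest, 1, Smallest])
[Smallest, 1, 3, Largest]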
George
I appreciate the inclusion of the fractions module in Python 2.6 and
therefore in Python 3.0. But I feel there's something missing: no
support for complex numbers with rational (or arbitrary-precision
integer) components. I was just checking the complex number support in
Python, compared, for instance, to Common Lisp and Scheme, and I
realized that there is this subtle omission. The inclusion of rationals
and arbitrary-precision integers is cool, but the numeric tower (say,
compared to Scheme's) is not complete. I don't think there would be a
performance hit if complex rationals were provided: ordinary operations
on complex floats should, in theory, not be affected, being handled
separately. But it would be nice to be able to do:
(3/4 + 1/2j) * (1/4 - j) = 11/16 - 5/8j
with no loss of precision.
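A rough sketch of how such a type could sit on top of the existing
fractions module (the ComplexFraction name and its API are hypothetical,
purely to illustrate the idea):

from fractions import Fraction

class ComplexFraction(object):
    """Exact complex arithmetic with Fraction components (sketch only)."""
    def __init__(self, real, imag=0):
        self.real = Fraction(real)
        self.imag = Fraction(imag)
    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        a, b = self.real, self.imag
        c, d = other.real, other.imag
        return ComplexFraction(a*c - b*d, a*d + b*c)
    def __repr__(self):
        return "(%s + %sj)" % (self.real, self.imag)

>>> ComplexFraction(Fraction(3, 4), Fraction(1, 2)) * ComplexFraction(Fraction(1, 4), -1)
(11/16 + -5/8j)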
Python is heavily used in math and science all over the world. We've
even got a recent symbolic math project (sympy) that looks very
promising, so I guess this could be an important issue.
Note: there exists a library that implements what I'm talking about:
http://calcrpnpy.sourceforge.net/clnum.html
but I personally would still have liked to see this stuff included
natively in the new Python 3.0.
Hi.
Maybe I'm just blind and can't find it, but it seems that Python has no
file-based lock. I wrote one using fcntl (msvcrt on Windows):
http://twoday.tuwien.ac.at/pub/stories/319462/
I think this would be a nice addition to Python's standard library.
I called the lock/unlock methods, well, lock and unlock (and there is a
trylock method, too). I just now saw that the methods of threading.Lock
are called acquire and release. Shall I change the method names? (A matter
of s/\<lock\>/acquire/g, s/\<unlock\>/release/g, s/\<trylock\>/tryacquire/g.)
I will change it if it gets included into Python that way.
I like lock/unlock more, though.
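For what it's worth, the core of the POSIX side is just a couple of
fcntl calls; here is a simplified sketch of the interface (not the
actual code from the URL above, which also covers Windows via msvcrt):

import fcntl

class FileLock(object):
    def __init__(self, path):
        self._f = open(path, "w")
    def lock(self):
        fcntl.flock(self._f, fcntl.LOCK_EX)  # block until the lock is ours
    def trylock(self):
        try:
            fcntl.flock(self._f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except IOError:
            return False  # somebody else holds the lock
    def unlock(self):
        fcntl.flock(self._f, fcntl.LOCK_UN)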
-panzi
We have a special exception handling hook written for the application
platform we developed at work. Mostly it makes sure any exceptions raised
get their tracebacks logged before going belly up. But it also makes sure
the program does go belly up.
When using bits of this platform in an interactive session it can be
annoying (to say the least) for the interpreter to exit when you make a
spelling error. Consequently I wanted something like:
if we are not running interactively:
    sys.excepthook = exception_handler
The only problem was to decide what the proper python-speak was for "we are
not running interactively". It seemed to me that sys.argv[0] was the empty
string when running interactively but that seemed fragile so I decided to
ask on comp.lang.python. Peter Otten responded with
hasattr(sys, "ps1")
and a link to the sys module doc section where the presence of sys.ps1 only
in interactive mode is documented.
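So the hook installation becomes something like this (exception_handler
here standing in for our real platform hook):

import sys
import traceback

def exception_handler(exc_type, exc_value, exc_tb):
    # stand-in for the real hook: log the traceback, then go belly up
    traceback.print_exception(exc_type, exc_value, exc_tb)
    sys.exit(1)

# sys.ps1 exists only in interactive sessions
if not hasattr(sys, "ps1"):
    sys.excepthook = exception_handler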
So now I know the proper spelling in python-speak, but that seems like a
very non-obvious way to do it (and a bit fragile, since it's possible, if
not exactly reasonable, to "del sys.ps1"). It seems to me that it would
be preferable if sys had an "interactive" attribute or an "isinteractive"
method. Maybe in 2.7 and 3.0 or 3.1? Am I way off base? I realize that
would present two ways to do it, but in my mind one would be obvious, the
other not.
Thx,
Skip
On 2008/11/09, at 8:15 pm, Mike Meyer wrote:
> On Sun, 9 Nov 2008 16:56:55 -1000
> Carl Johnson <carl(a)carlsensei.com> wrote:
>
>> This list had a proposal last month to make everything an expression,
>> and it sees not-infrequent attempts to create multi-line lambdas, and I
>> think the reason for this is that people want a better way to create
>> functions that act like Ruby-style blocks since sometimes it makes
>> more sense to write the function that will be passed after the thing
>> it will be passed to and not before. So, here's my proposal. I expect
>> it to get rejected, but hey, someone approved @decorator, so maybe it
>> will make it...
>
> I think you may be on to something. You've identified the real cause
> of the ugly code this kind of thing generates, and attempted to deal
> with it. But it needs some tweaking.
>
>> Instead of
>>
>>>>> def decorator(f):
>> ... def inner(*args, **kwargs):
>> ... print(*args, **kwargs)
>> ... return f(*args, **kwargs)
>> ... return inner
>> ...
>>
>> where the def inner comes before you know what it's going to be used
>> for, why not
>>
>>>>> def decorator(f):
>> ... return @(*args, **kwargs):
>> ... print(*args, **kwargs)
>> ... return f(*args, **kwargs)
>> ...
>
> You seem to have realized *part* of the problem, and attempt to deal
> with it here:
>
>> Caveats: This shouldn't be allowed to work with two @s in one line.
>> If
>> you have two of them, you should give them names ahead of time with
>> def. This also should not be allowed to work inside a for-, if-, with-,
>> or while- statement's initial expression, since the indenting won't
>> work out.
>
> Yup, this makes lots of sense, and would seem to eliminate some simple
> ugly cases. But it doesn't deal with the real ugliness that comes with
> multiple things in a statement. This would look like a godsend for
> creating properties, where you need three functions that could all be
> anonymous, like so:
>
> class Ugly(object):
> x = property(@(self):
> return self._x
> , @(self, value):
> self._x = value
> , @(self), "I'm the x property"):
> del self._x
>
> Except that that's the simple case, and it borders on unreadable all by
> itself.
>
> Maybe @() shouldn't be allowed except for the last line in a
> multi-line statement?
Properties have already been fixed in Python 2.6+ as far as I'm
concerned, but I don't think that should be a valid use of the @, even
without my caveats, since it's mixing blocks around inside an
expression. The expression should be terminated before the beginning
of the block.
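By "fixed in 2.6+" I mean the new getter/setter/deleter decorators,
which handle the property example with no anonymous functions at all; a
quick sketch:

class NotUgly(object):
    @property
    def x(self):
        "I'm the x property"
        return self._x
    @x.setter
    def x(self, value):
        self._x = value
    @x.deleter
    def x(self):
        del self._x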
>> Also, myfunc = @(args) should be preemptively banned, since
>> def is the one right way to do it. Also, you shouldn't be allowed to
>> use this to make one-liners (as you can, e.g., with if-statements),
>> since that's why there's lambda.
>
> I disagree with both of these, mostly on symmetry grounds. I can
> already do:
>
> lowerfunc = lambda word: word.lower()
>
> why should this nifty new construct not be allowed to play? Similarly,
> every statement that is followed by a block can be followed by a
> simple statement, or a list of simple statements separated by
> semicolons. Why is this one different?
Mostly on grounds of TOOWTDI ("there's only one way to do it"). I know
people already consider "x = lambda:" to be poor style, so I want to ban
bad style preemptively with @.
Incidentally, does anyone know why for decorators,
@lambda x: x
def f(): pass
is a syntax error? I guess people just don't want lambda-based
decorators, for whatever reason.
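As far as I can tell, the decorator grammar only accepts a dotted name,
optionally followed by a call, which rules lambda out; the usual
workaround is to bind the lambda to a name first:

identity = lambda f: f

@identity
def f(): pass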
> In particular, this fixes one of the properties of lambda's that
> people complain about most often: the inability to put statements in
> them. Why provide that win, only to basically make it worthless by
> forcing people to use multiple statements? I mean, this:
>
> newlist = sorted(words, key=lambda word, print word; word.lower())
>
> would be great, but isn't allowed because lambda is restricted to an
> expression. You fix it by giving us:
>
> newlist = sorted(words, @(word)): print word; return word.lower()
>
> only to turn around and say "No, this block is special, and you can't
> do that here."????
I think it would look better on four lines. Maybe I'm wrong, but
that's my intuition.
>> And using this on the declaration line of a decorator is just crazy.
>
> Why? One of the most powerful languages I know of came about because
> the designers let people do "crazy" things. Sure, using a @( construct
> for the decorator expression is sort of pointless, but what if you
> want to pass an anonymous function to a decorator. Why is:
>
> @magicdecorator(lambda word, word.lower())
> def function(...
>
> ok, but:
>
> @advancedmagicdecorator(@(word)):
> return word.lower()
> def function(...
>
> not ok?
Ah, to be honest, I hadn't thought it through and just assumed it was
impossible to make it work, since there would be two things needing
indent-dedent tokens in a row. Still, isn't it a little ugly? (And
remember, this proposal is not about adding power to Python, just
making things more readable.)
> I kinda like it, if you can figure out how to deal with things like
> property. On the other hand, it smells a lot like lambda, which some
> people would like removed from the language...
Agreed. I fully expect the BDFL to reject this proposal, but I still
think it's interesting to consider as a "solution" to the insolvable
problem of multiline anonymous functions.
> <mike
> --
> Mike Meyer <mwm(a)mired.org> http://www.mired.org/consulting.html
> Independent Network/Unix/Perforce consultant, email for more
> information.
>
> O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
>
This list had a proposal last month to make everything an expression,
and it sees not-infrequent attempts to create multi-line lambdas, and I
think the reason for this is that people want a better way to create
functions that act like Ruby-style blocks since sometimes it makes
more sense to write the function that will be passed after the thing
it will be passed to and not before. So, here's my proposal. I expect
it to get rejected, but hey, someone approved @decorator, so maybe it
will make it...
Instead of
>>> def decorator(f):
... def inner(*args, **kwargs):
... print(*args, **kwargs)
... return f(*args, **kwargs)
... return inner
...
where the def inner comes before you know what it's going to be used
for, why not
>>> def decorator(f):
... return @(*args, **kwargs):
... print(*args, **kwargs)
... return f(*args, **kwargs)
...
? Here it's very straightforward that what you're doing is returning
an anonymous function, and then you find out what's in the function.
Similarly,
>>> words = ["blah one", "Blah two", " bLAh three"]
>>> sorted(words, key=@(word)):
... word = word.lower()
... word = word.replace("one", "1")
... word = word.replace("two", "2")
... word = word.replace("three", "3")
... word = word.replace(" ", "")
... return word
...
[u'blah one', u'Blah two', u' bLAh three']
This would be equivalent to:
>>> words = ["blah one", "Blah two", " bLAh three"]
>>> def key(word):
... word = word.lower()
... word = word.replace("one", "1")
... word = word.replace("two", "2")
... word = word.replace("three", "3")
... word = word.replace(" ", "")
... return word
...
>>> sorted(words, key=key)
[u'blah one', u'Blah two', u' bLAh three']
Again, I think it's clearer here to be told first, "Oh, we're going to
sort something," and then learn how the sorting will be done, than it is
to read a weird function and only afterwards learn what it's for.
Caveats: This shouldn't be allowed to work with two @s on one line. If
you have two of them, you should give them names ahead of time with
def. This also should not be allowed to work inside a for-, if-, with-,
or while- statement's initial expression, since the indenting won't
work out. Also, myfunc = @(args) should be preemptively banned, since
def is the one right way to do it. Also, you shouldn't be allowed to use
this to make one-liners (as you can, e.g., with if-statements), since
that's why there's lambda. And using this on the declaration line of a
decorator is just crazy.
Of course, for all I know, Python's grammar is too simple to make this
work with all the caveats, but if it's not, I think this might be a
good way to improve readability without killing the indentation-based
nature of Python that we all know and love.
My other thought is that if @ is deemed too much like "Perl-ish" line
noise, def could be used as the keyword instead. Or perhaps a new
keyword like "block" or something. That said, I think there is a good
analogy here to the existing @-decorator, where things are, strictly
speaking, written out of order so that readability is improved and
the function that follows is given as an argument to the initial @ line.
Thoughts?
-- Carl
Several times I find myself using the following idiom:
_Missing = object()  # sentinel

def foo(x, y=_Missing):
    if y is _Missing:
        do_this
    else:
        do_that
The reason for using a "private" sentinel is that any Python object,
including None, might be a valid argument. Another option is using
*args or **kwds instead of y, but that unnecessarily obfuscates the
function signature.
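With the sentinel, an explicit None is distinguishable from an omitted
argument, something a plain y=None default cannot express:

foo(1)        # y omitted: the "y is _Missing" branch runs
foo(1, None)  # None passed on purpose: the else branch runs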
It would be nice if a new object or keyword, say __missing__, was
introduced as a canonical way to address this common scenario.
Specifically, the only valid usages of __missing__ would be:
1. As a default argument in a callable.
2. In identity tests: <var> is __missing__
Anything else would raise either a SyntaxError (e.g. `x =
__missing__`) or a RuntimeError/TypeError (e.g. `x = y` if y is
__missing__). Only the interpreter could assign __missing__ to a name
when binding objects to formal parameters.
If this were accepted, a further generalization could be to allow the
`var is __missing__` expression even if `var` is not a formal
parameter. This would be equivalent to:

try:
    var
except NameError:
    expr = True
else:
    expr = False
Thoughts?
George
I have added support for naming objects in dumps, cross-references,
and declarative support in dumps/loads too.
I can't commit the changes right now; tonight I will commit them to
the sources and update the project pages.
Here are examples of how it works now:
==Examples with loads==
===Example with recursive list===
>>> lst = pyon.loads("""
... lst = ['foo', lst]
... lst""")
>>> lst is lst[1]
True
===Example with recursive dict===
>>> d = pyon.loads("""
... d = {'a':'foo', 'b':d}
... d""")
>>> d is d['b']
True
===Example with cross-references===
>>> ob = pyon.loads("""
... lst = ['foo', lst, d]
... d = {'a':'foo', 'b':d, 'c':lst}
... [d, lst]""")
>>> lst = ob[1]
>>> d = ob[0]
>>> lst[1] is lst
True
>>> d is d['b']
True
>>> lst[2] is d
True
>>> d['c'] is lst
True
===Example with recursive class and cross-references===
>>> ob = pyon.loads("""
... c = C(parent=c, lst=lst, d=d)
... lst = ['foo', lst, d, c]
... d = {'a':'foo', 'b':d, 'c':lst, 'd':c}
... [d, lst, c]""")
>>> lst = ob[1]
>>> d = ob[0]
>>> c = ob[2]
>>> c.parent is c
True
>>> c.lst is lst
True
>>> c.d is d
True
>>> d is d['b']
True
>>> lst[2] is d
True
>>> d['c'] is lst
True
>>> d['d'] is c
True
>>> lst[3] is c
True
==Examples with dumps==
===Example with naming and assignments===
>>> p1 = (1,2)
>>> p2 = [1,2]
>>> pyon.dumps([p1,p2,p1,p2], fast=False)
_p__0=(1,2)
_p__1=[1,2]
[_p__0,_p__1,_p__0,_p__1]
>>> pyon.dumps([p1,p2,p1,p2], fast=False, p1=p1,p2=p2)
p1=(1,2)
p2=[1,2]
[p1,p2,p1,p2]
===Example with recursive list===
>>> lst = ['foo']
>>> lst.append(lst)
>>> pyon.dumps(lst, fast=False)
_p__0=['foo',_p__0]
_p__0
>>> pyon.dumps(lst, fast=False, lst=lst)
lst=['foo',lst]
lst
===Example with recursive dict===
>>> d = {'a':'foo'}
>>> d['b'] = d
>>> pyon.dumps(d, fast=False)
_p__0={'a':'foo', 'b':_p__0}
_p__0
>>> pyon.dumps(d, fast=False, d=d)
d={'a':'foo', 'b':d}
d
===Example with recursion in class instance===
class C(object):
    def __reduce__(self):
        return C, (), self.__dict__
>>> c= C()
>>> c.parent = c
>>> pyon.dumps(c, fast=False)
_p__0=C(parent=_p__0)
_p__0
>>> pyon.dumps(c, fast=False, c=c)
c=C(parent=c)
c
===Example with cross-reference===
class C(object):
    def __reduce__(self):
        return C, (), self.__dict__
>>> lst = ['foo']
>>> lst.append(lst)
>>> d = {'a':'bar','b':lst}
>>> d['c'] = d
>>> c = C()
>>> c.lst = lst
>>> c.d = d
>>> c.parent = c
>>> d['d'] = c
>>> pyon.dumps([lst,d,c], fast=False, lst=lst, d=d)
lst=['foo',lst]
_p__1=C(lst=lst,d=d,parent=_p__1)
d={'a':'bar','c':d,'b':lst,'d':_p__1}
[lst,d,_p__1]
Best regards,
Zaur
I am sure that it is now too early to draw conclusions about PyON.
Best regards,
Zaur
2008/11/7 Josiah Carlson <josiah.carlson(a)gmail.com>:
> On Thu, Nov 6, 2008 at 12:08 AM, Zaur Shibzoukhov <szport(a)gmail.com> wrote:
>> 2008/11/6 Josiah Carlson <josiah.carlson(a)gmail.com>:
>>
>>> I guess I really don't understand the purpose of PyON. Syntactically
>>> it doesn't fit between json and yaml. It supports features that are
>>> more useful for a serialization language for RPC, etc., rather than
>>> configuration/inlining. And it doesn't really offer a reverse of
>>> representation -> object without going through the standard Python
>>> parser and executing the result (which has security implications).
>> PyON doesn't use exec in order to reconstruct objects.
>> It uses the Python parser to construct an AST and then builds the
>> object from the AST using a pickle-like protocol.
>> I guess PyON could be used as a standard reconstructable representation
>> of Python objects.
>
> Then that strictly limits it to Python. No other language can make
> use of it. And in that sense, we may as well use pickle (which is
> fast, already exists, supports recursive object definitions, as well
> as arbitrary Python objects).
>
>>> Again, json is very human readable/writable, is very close to literal
>>> Python syntax, and already has support in just about every language
>>> worth discussing (and Python offers a json loading module in the
>>> standard library). Can you give me a good reason why someone would
>>> want to choose PyON over json in 6 months?
>> If someone needs a standard way to get a reconstructable
>> representation of almost every Python object based on Python syntax,
>> then PyON is a possible choice (probably in the near future).
>> Also, maybe in the future someone will offer a better solution for
>> reconstructable representation of Python objects based on Python
>> syntax.
>>
>> Also I do not think that PyON should be used instead of JSON.
>
> Limiting yourself to Python syntax for something that really isn't
> human readable (in particular recursive structures and arbitrary
> object serialization) doesn't make a lot of sense. It's like limiting
> yourself to a 3rd grade vocabulary level for a doctoral dissertation
> because you think that 3rd graders might want to read it some day.
> People don't really read object representations. And when they do,
> it's because the representations are configuration files, in which
> self-references are a *really* bad idea.
>
> If you want a pure-python representation (rather than json), look at
> 'unrepr'; it uses the Python parser, and by default only supports
> standard data structures. It can be extended to support arbitrary
> classes with standard instantiation methods.
>
> - Josiah
>
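For reference, the 'unrepr' approach mentioned above is close to what
the stdlib's ast.literal_eval offers in Python 2.6+; a minimal sketch
that parses literal syntax only, so nothing is executed:

import ast

# Accepts only literals (strings, numbers, tuples, lists, dicts,
# booleans, None); names and function calls raise ValueError.
config = ast.literal_eval("{'host': 'localhost', 'ports': [8080, 8081]}")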
--
Best regards,
Shibzoukhov Z.M.