There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{unadorned|abc.abstract} x {normal|static|class} x {method|property|non-callable attribute}

concreteness   implicit first arg   type                     name                                       comments
unadorned      unadorned            method                   def foo():                                 exists now
unadorned      unadorned            property                 @property                                  exists now
unadorned      unadorned            non-callable attribute   x = 2                                      exists now
unadorned      static               method                   @staticmethod                              exists now
unadorned      static               property                 @staticproperty                            proposing
unadorned      static               non-callable attribute   {degenerate case}                          unnecessary
unadorned      class                method                   @classmethod                               exists now
unadorned      class                property                 @classproperty or @classmethod;@property   proposing
unadorned      class                non-callable attribute   {degenerate case}                          unnecessary
abc.abstract   unadorned            method                   @abc.abstractmethod                        exists now
abc.abstract   unadorned            property                 @abc.abstractproperty                      exists now
abc.abstract   unadorned            non-callable attribute   @abc.abstractattribute or                  proposing
                                                             @abc.abstract;@attribute
abc.abstract   static               method                   @abc.abstractstaticmethod                  exists now
abc.abstract   static               property                 @abc.abstractstaticproperty                proposing
abc.abstract   static               non-callable attribute   {degenerate case}                          unnecessary
abc.abstract   class                method                   @abc.abstractclassmethod                   exists now
abc.abstract   class                property                 @abc.abstractclassproperty                 proposing
abc.abstract   class                non-callable attribute   {degenerate case}                          unnecessary

({degenerate case} = variables don't have arguments, so the static/class
distinction does not apply.)
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
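For concreteness, here is a minimal sketch of how @classproperty could be implemented as a descriptor (the name classproperty is part of the proposal, not an existing builtin, and the Foo/_registry names are made up for illustration):

```python
class classproperty:
    """Sketch: like @property, but the getter receives the class
    instead of an instance."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        # The class is passed as the implicit first argument,
        # whether accessed on the class or on an instance.
        return self.fget(objtype)


class Foo:
    _registry = ["a", "b"]

    @classproperty
    def count(cls):
        return len(cls._registry)


print(Foo.count)    # 2 -- accessed on the class, no throw-away instance
print(Foo().count)  # 2 -- also works on instances
```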
--rich
I didn't think of this when we were discussing PEP 448. I ran into this today,
so I agree with you that it would be nice to have this.
Best,
Neil
On Monday, December 4, 2017 at 1:02:09 AM UTC-5, Eric Wieser wrote:
>
> Hi,
>
> I've been thinking about the * unpacking operator while writing some numpy
> code. PEP 448 allows the following:
>
> values = 1, *some_tuple, 2
> object[(1, *some_tuple, 2)]
>
> It seems reasonable to me that it should be extended to allow
>
> item = object[1, *some_tuple, 2]
> item = object[1, *some_tuple, :]
>
> Was this overlooked in the original proposal, or deliberately rejected?
>
> Eric
>
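For reference, the parenthesized form from PEP 448 already works today, so the extension is essentially about dropping the parentheses; a small made-up example:

```python
some_tuple = (2, 3)
d = {(1, 2, 3, 4): "value"}

# Legal since Python 3.5: unpacking inside an explicit tuple display
# used as a subscript.
item = d[(1, *some_tuple, 4)]
print(item)  # value

# The proposed form would drop the parentheses:
#   item = d[1, *some_tuple, 4]
```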
At the moment, the array module of the standard library allows one to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items) up front, especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow one
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array creation in such a way that you
could pass an iterable (as now) as the second argument, but if you pass a
single integer value, it would be treated as the number of items to
allocate.
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    with n entries, initialized to the given value (default 0).
    """
    a = array.array(typecode, [value] * bsize)
    x = array.array(typecode)
    r = n
    while r >= bsize:
        x.extend(a)
        r -= bsize
    x.extend([value] * r)
    return x
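As an aside, a much faster workaround is already possible, since array objects support sequence repetition implemented in C; this sketch (filled_array_fast is a made-up name) avoids the large intermediate lists entirely:

```python
import array

def filled_array_fast(typecode, n, value=0):
    # Repeating a one-element array runs in C and allocates the
    # result in a single step, with no large temporary Python list.
    return array.array(typecode, [value]) * n

a = filled_array_fast("q", 10_000_000)
print(len(a), a[0], a[-1])  # 10000000 0 0
```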
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int
this results in the error:
5 @dataclass
----> 6 class Bar(Foo):
7 other_field: int
8
~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py
in dataclass(_cls, init, repr, eq, order, hash, frozen)
751
752 # We're called as @dataclass, with a class.
--> 753 return wrap(_cls)
754
755
~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py
in wrap(cls)
743
744 def wrap(cls):
--> 745 return _process_class(cls, repr, eq, order, hash, init,
frozen)
746
747 # See if we're being called as @dataclass or @dataclass().
~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py
in _process_class(cls, repr, eq, order, hash, init, frozen)
675 # in __init__. Use "self" if
possible.
676 '__dataclass_self__' if 'self' in
fields
--> 677 else 'self',
678 ))
679 if repr:
~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py
in _init_fn(fields, frozen, has_post_init, self_name)
422 seen_default = True
423 elif seen_default:
--> 424 raise TypeError(f'non-default argument {f.name!r} '
425 'follows default argument')
426
TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword only arguments allow an entirely reasonable solution to
this problem:
def __init__(self, *, some_default: dict = <something>, other_field: int):
And have the added benefit of making the fields in the __init__ call
entirely explicit.
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword only arguments:
@dataclass(keyword_only=True)
class Bar(Foo):
    other_field: int
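For illustration, here is roughly the __init__ that such a flag would generate, written by hand (keyword_only itself is the proposed flag, not an existing one):

```python
from dataclasses import dataclass, field

@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

# Hand-written equivalent of what @dataclass(keyword_only=True)
# would generate for Bar: every field becomes keyword-only, so a
# required field may follow an inherited field with a default.
class Bar(Foo):
    def __init__(self, *, some_default=None, other_field):
        super().__init__(some_default if some_default is not None else {})
        self.other_field = other_field

b = Bar(other_field=3)
print(b.some_default, b.other_field)  # {} 3
```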
--George Leslie-Waksman
Good day all,
as a continuation of the thread "OS related file operations (copy, move,
delete, rename...) should be placed into one module"
https://mail.python.org/pipermail/python-ideas/2017-January/044217.html
please consider making pathlib the central file-system module by putting
file operations (copy, move, delete, rmtree, etc.) into pathlib.
BR,
George
Hey List,
this is my very first approach to suggest a Python improvement I'd think
worth discussing.
At some point, maybe with Dart 2.0 or a little earlier, Dart started
supporting multiline strings with "proper" indentation (I tried, but I can't
find the relevant docs at the moment, probably due to the rather large
changes around Dart 2.0 and outdated docs).
What I have in mind is probably best described with an Example:
print("""
    I am a
    multiline
    String.
    """)
the closing quote defines the "margin indentation" - so in this example all
lines would get reduced by their leading 4 spaces, resulting in a "clean"
and unindented string.
anyways, Dart or not, it doesn't matter - I like the idea and I think
Python 3.x could benefit from it, if that's possible at all :)
I could also imagine that this "indentation cleanup" is only applied if the
closing quotes are on their own line? Might be too complicated though; I
can't estimate or fully understand this...
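For comparison, the standard library already gets close at run time with textwrap.dedent, which strips the longest common leading whitespace (it keys off the lines themselves rather than the closing quote's position, and the first newline must be handled separately):

```python
import textwrap

# The backslash after the opening quotes suppresses the leading
# blank line; dedent then removes the common 4-space margin.
text = textwrap.dedent("""\
    I am a
    multiline
    String.
    """)
print(text, end="")
```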
thx for reading,
Marius
Hi folks,
I normally wouldn't bring something like this up here, except I think
that there is possibility of something to be done--a language
documentation clarification if nothing else, though possibly an actual
code change as well.
I've been having an argument with a colleague over the last couple of
days over the proper order of statements when setting up a
try/finally to perform cleanup of some action. On some level we're
both being stubborn I think, and I'm not looking for resolution as to
who's right/wrong or I wouldn't bring it to this list in the first
place. The original argument was over setting and later restoring
os.environ, but we ended up arguing over
threading.Lock.acquire/release which I think is a more interesting
example of the problem, and he did raise a good point that I do want
to bring up.
</prologue>
My colleague's contention is that given
lock = threading.Lock()
this is simply *wrong*:
lock.acquire()
try:
    do_something()
finally:
    lock.release()
whereas this is okay:
with lock:
    do_something()
Ignoring other details of how threading.Lock is actually implemented,
assuming that Lock.__enter__ calls acquire() and Lock.__exit__ calls
release() then as far as I've known ever since Python 2.5 first came
out these two examples are semantically *equivalent*, and I can't find
any way of reading PEP 343 or the Python language reference that would
suggest otherwise.
However, there *is* a difference, and has to do with how signals are
handled, particularly w.r.t. context managers implemented in C (hence
we are talking CPython specifically):
If Lock.__enter__ is a pure Python method (even if it maybe calls some
C methods), and a SIGINT is handled during execution of that method,
then in almost all cases a KeyboardInterrupt exception will be raised
from within Lock.__enter__--this means the suite under the with:
statement is never evaluated, and Lock.__exit__ is never called. You
can be fairly sure the KeyboardInterrupt will be raised from somewhere
within a pure Python Lock.__enter__ because there will usually be at
least one remaining opcode to be evaluated, such as RETURN_VALUE.
Because of how delayed execution of signal handlers is implemented in
the pyeval main loop, this means the signal handler for SIGINT will be
called *before* RETURN_VALUE, resulting in the KeyboardInterrupt
exception being raised. Standard stuff.
However, if Lock.__enter__ is a PyCFunction things are quite
different. If you look at how the SETUP_WITH opcode is implemented,
it first calls the __enter__ method with _PyObject_CallNoArg. If this
returns NULL (i.e. an exception occurred in __enter__) then "goto
error" is executed and the exception is raised. However if it returns
non-NULL the finally block is set up with PyFrame_BlockSetup and
execution proceeds to the next opcode. At this point a potentially
waiting SIGINT is handled, resulting in KeyboardInterrupt being raised
while inside the with statement's suite, so the finally block, and
hence Lock.__exit__, is entered.
Long story short, because Lock.__enter__ is a C function, assuming
that it succeeds normally then
with lock:
    do_something()
always guarantees that Lock.__exit__ will be called if a SIGINT was
handled inside Lock.__enter__, whereas with
lock.acquire()
try:
    ...
finally:
    lock.release()
there is at least a small possibility that the SIGINT handler is called
after the CALL_FUNCTION op but before the try/finally block is entered
(e.g. before executing POP_TOP or SETUP_FINALLY). So the end result
is that the lock is held and never released after the
KeyboardInterrupt (whether or not it's handled somehow).
Whereas, again, if Lock.__enter__ is a pure Python function there's
less likely to be any difference (though I don't think the possibility
can be ruled out entirely).
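The window can be seen with dis; exact opcode names vary across CPython versions, but the call to acquire() and the instruction(s) that begin the protected try/finally region are distinct, and a pending signal may be handled in between:

```python
import dis

src = """
lock.acquire()
try:
    do_something()
finally:
    lock.release()
"""
# The instructions between the call to lock.acquire() and the start
# of the protected region are where a pending SIGINT can fire,
# leaving the lock held with no finally block to release it.
dis.dis(compile(src, "<demo>", "exec"))
```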
At the very least I think this quirk of CPython should be mentioned
somewhere (since in all other cases the semantic meaning of the
"with:" statement is clear). However, I think it might be possible to
gain more consistency between these cases if pending signals are
checked/handled after any direct call to PyCFunction from within the
ceval loop.
Sorry for the tl;dr; any thoughts?
Hi,
For technical reasons, many functions of the Python standard library
implemented in C have positional-only parameters. Example:
-------
$ ./python
Python 3.7.0a0 (default, Feb 25 2017, 04:30:32)
>>> help(str.replace)
replace(self, old, new, count=-1, /) # <== notice "/" at the end
...
>>> "a".replace("x", "y") # ok
'a'
>>> "a".replace(old="x", new="y") # ERR!
TypeError: replace() takes at least 2 arguments (0 given)
-------
When converting the methods of the builtin str type to the internal
"Argument Clinic" tool (tool to generate the function signature,
function docstring and the code to parse arguments in C), I asked if
we should add support for keyword arguments in str.replace(). The
answer was quick: no! It's a deliberate design choice.
Quote of Yury Selivanov's message:
"""
I think Guido explicitly stated that he doesn't like the idea to
always allow keyword arguments for all methods. I.e. `str.find('aaa')`
just reads better than `str.find(needle='aaa')`. Essentially, the idea
is that for most of the builtins that accept one or two arguments,
positional-only parameters are better.
"""
http://bugs.python.org/issue29286#msg285578
I just noticed a module on PyPI to implement this behaviour on Python functions:
https://pypi.python.org/pypi/positional
My question is: would it make sense to implement this feature in
Python directly? If yes, what should be the syntax? Use "/" marker?
Use the @positional() decorator?
Do you see concrete cases where it's a deliberate choice to deny
passing arguments as keywords?
Don't you like writing int(x="123") instead of int("123")? :-) (I know
that Serhiy Storchaka hates the name of the "x" parameter of the int
constructor ;-))
By the way, I read that the "/" marker is unknown to almost all Python
developers, and that the [...] syntax should be preferred, but
inspect.signature() doesn't support that syntax. Maybe we should fix
signature() and use the [...] format instead?
That would replace "replace(self, old, new, count=-1, /)" with
"replace(self, old, new[, count=-1])" (or maybe even not document the
default value?).
Python 3.5 help (docstring) uses "S.replace(old, new[, count])".
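For what it's worth, the "/" marker in pure-Python defs is exactly what PEP 570 later added in Python 3.8; a sketch of how it behaves (find here is a made-up wrapper, requires 3.8+):

```python
def find(s, needle, /):
    # Parameters before "/" are positional-only: they cannot be
    # passed by keyword.
    return s.find(needle)

print(find("aaa", "a"))  # 0

try:
    find(s="aaa", needle="a")
except TypeError as e:
    print("TypeError:", e)
```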
Victor
Apologies for letting this languish; life has an annoying habit of
getting in the way now and then.
Feedback from the previous rounds has been incorporated. From here,
the most important concern and question is: Is there any other syntax
or related proposal that ought to be mentioned here? If this proposal
is rejected, it should be rejected with a full set of alternatives.
Text of PEP is below; formatted version will be live shortly (if it
isn't already) at:
https://www.python.org/dev/peps/pep-0572/
ChrisA
PEP: 572
Title: Syntax for Statement-Local Name Bindings
Author: Chris Angelico <rosuav(a)gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 28-Feb-2018
Python-Version: 3.8
Post-History: 28-Feb-2018, 02-Mar-2018, 23-Mar-2018
Abstract
========
Programming is all about reusing code rather than duplicating it. When
an expression needs to be used twice in quick succession but never again,
it is convenient to assign it to a temporary name with small scope.
By permitting name bindings to exist within a single statement only, we
make this both convenient and safe against name collisions.
Rationale
=========
When a subexpression is used multiple times in a list comprehension, there
are currently several ways to spell this, none of which is universally
accepted as ideal. A statement-local name allows any subexpression to be
temporarily captured and then used multiple times.
Additionally, this syntax can in places be used to remove the need to write an
infinite loop with a ``break`` in it. Capturing part of a ``while`` loop's
condition can improve the clarity of the loop header while still making the
actual value available within the loop body.
Syntax and semantics
====================
In any context where arbitrary Python expressions can be used, a **named
expression** can appear. This must be parenthesized for clarity, and is of
the form ``(expr as NAME)`` where ``expr`` is any valid Python expression,
and ``NAME`` is a simple name.
The value of such a named expression is the same as the incorporated
expression, with the additional side-effect that NAME is bound to that
value for the remainder of the current statement.
Just as function-local names shadow global names for the scope of the
function, statement-local names shadow other names for that statement.
(They can technically also shadow each other, though actually doing this
should not be encouraged.)
Assignment to statement-local names is ONLY through this syntax. Regular
assignment to the same name will remove the statement-local name and
affect the name in the surrounding scope (function, class, or module).
Statement-local names never appear in locals() or globals(), and cannot be
closed over by nested functions.
Execution order and its consequences
------------------------------------
Since the statement-local name binding lasts from its point of execution
to the end of the current statement, this can potentially cause confusion
when the actual order of execution does not match the programmer's
expectations. Some examples::
   # A simple statement ends at the newline or semicolon.
   a = (1 as y)
   print(y)    # NameError

   # The assignment ignores the SLNB - this adds one to 'a'
   a = (a + 1 as a)

   # Compound statements usually enclose everything...
   if (re.match(...) as m):
       print(m.groups(0))
   print(m)    # NameError

   # ... except when function bodies are involved...
   if (input("> ") as cmd):
       def run_cmd():
           print("Running command", cmd)    # NameError

   # ... but function *headers* are executed immediately
   if (input("> ") as cmd):
       def run_cmd(cmd=cmd):    # Capture the value in the default arg
           print("Running command", cmd)    # Works
Function bodies, in this respect, behave the same way they do in class scope;
assigned names are not closed over by method definitions. Defining a function
inside a loop already has potentially-confusing consequences, and SLNBs do not
materially worsen the existing situation.
Differences from regular assignment statements
----------------------------------------------
Using ``(EXPR as NAME)`` is similar to ``NAME = EXPR``, but has a number of
important distinctions.
* Assignment is a statement; an SLNB is an expression whose value is the same
as the object bound to the new name.
* SLNBs disappear at the end of their enclosing statement, at which point the
name again refers to whatever it previously would have. SLNBs can thus
shadow other names without conflict (although deliberately doing so will
often be a sign of bad code).
* SLNBs cannot be closed over by nested functions, and are completely ignored
for this purpose.
* SLNBs do not appear in ``locals()`` or ``globals()``.
* An SLNB cannot be the target of any form of assignment, including augmented.
Attempting to do so will remove the SLNB and assign to the fully-scoped name.
In many respects, an SLNB is akin to a local variable in an imaginary nested
function, except that the overhead of creating and calling a function is
bypassed. As with names bound by ``for`` loops inside list comprehensions,
SLNBs cannot "leak" into their surrounding scope.
Example usage
=============
These list comprehensions are all approximately equivalent::
   # Calling the function twice
   stuff = [[f(x), x/f(x)] for x in range(5)]

   # External helper function
   def pair(x, value): return [value, x/value]
   stuff = [pair(x, f(x)) for x in range(5)]

   # Inline helper function
   stuff = [(lambda y: [y, x/y])(f(x)) for x in range(5)]

   # Extra 'for' loop - potentially could be optimized internally
   stuff = [[y, x/y] for x in range(5) for y in [f(x)]]

   # Iterating over a genexp
   stuff = [[y, x/y] for x, y in ((x, f(x)) for x in range(5))]

   # Expanding the comprehension into a loop
   stuff = []
   for x in range(5):
       y = f(x)
       stuff.append([y, x/y])

   # Wrapping the loop in a generator function
   def g():
       for x in range(5):
           y = f(x)
           yield [y, x/y]
   stuff = list(g())

   # Using a statement-local name
   stuff = [[(f(x) as y), x/y] for x in range(5)]
If calling ``f(x)`` is expensive or has side effects, the clean operation of
the list comprehension gets muddled. Using a short-duration name binding
retains the simplicity; while the extra ``for`` loop does achieve this, it
does so at the cost of dividing the expression visually, putting the named
part at the end of the comprehension instead of the beginning.
Statement-local name bindings can be used in any context, but should be
avoided where regular assignment can be used, just as ``lambda`` should be
avoided when ``def`` is an option. As the name's scope extends to the full
current statement, even a block statement, this can be used to good effect
in the header of an ``if`` or ``while`` statement::
   # Current Python, not caring about function return value
   while input("> ") != "quit":
       print("You entered a command.")

   # Current Python, capturing return value - four-line loop header
   while True:
       command = input("> ")
       if command == "quit":
           break
       print("You entered:", command)

   # Proposed alternative to the above
   while (input("> ") as command) != "quit":
       print("You entered:", command)

   # See, for instance, Lib/pydoc.py
   if (re.search(pat, text) as match):
       print("Found:", match.group(0))

   while (sock.read() as data):
       print("Received data:", data)
Particularly with the ``while`` loop, this can remove the need to have an
infinite loop, an assignment, and a condition. It also creates a smooth
parallel between a loop which simply uses a function call as its condition,
and one which uses that as its condition but also uses the actual value.
Performance costs
=================
The cost of SLNBs must be kept to a minimum, particularly when they are not
used; the normal case MUST NOT be measurably penalized. SLNBs are expected
to be uncommon, and using many of them in a single function should definitely
be discouraged. Thus the current implementation uses a linked list of SLNB
cells, with the absence of such a list being the normal case. This list is
used for code compilation only; once a function's bytecode has been baked in,
execution of that bytecode has no performance cost compared to regular
assignment.
Other Python implementations may choose to do things differently, but a zero
run-time cost is strongly recommended, as is a minimal compile-time cost in
the case where no SLNBs are used.
Forbidden special cases
=======================
In two situations, the use of SLNBs makes no sense, and could be confusing due
to the ``as`` keyword already having a different meaning in the same context.
1. Exception catching::
       try:
           ...
       except (Exception as e1) as e2:
           ...
   The expression ``(Exception as e1)`` has the value ``Exception``, and
   creates an SLNB ``e1 = Exception``. This is generally useless, and creates
   the potential confusion in that these two statements do quite different
   things::

       except (Exception as e1):
       except Exception as e2:
The latter captures the exception **instance**, while the former captures
the ``Exception`` **type** (not the type of the raised exception).
2. Context managers::
       lock = threading.Lock()
       with (lock as l) as m:
           ...
This captures the original Lock object as ``l``, and the result of calling
its ``__enter__`` method as ``m``. As with ``except`` statements, this
creates a situation in which parenthesizing an expression subtly changes
its semantics, with the additional pitfall that this will frequently work
(when ``x.__enter__()`` returns x, eg with file objects).
Both of these are forbidden; creating SLNBs in the headers of these statements
will result in a SyntaxError.
Alternative proposals
=====================
Proposals broadly similar to this one have come up frequently on python-ideas.
Below are a number of alternative syntaxes, some of them specific to
comprehensions, which have been rejected in favour of the one given above.
1. ``where``, ``let``, ``given``::
       stuff = [(y, x/y) where y = f(x) for x in range(5)]
       stuff = [(y, x/y) let y = f(x) for x in range(5)]
       stuff = [(y, x/y) given y = f(x) for x in range(5)]
This brings the subexpression to a location in between the 'for' loop and
the expression. It introduces an additional language keyword, which creates
conflicts. Of the three, ``where`` reads the most cleanly, but also has the
greatest potential for conflict (eg SQLAlchemy and numpy have ``where``
methods, as does ``tkinter.dnd.Icon`` in the standard library).
2. ``with NAME = EXPR``::
       stuff = [(y, x/y) with y = f(x) for x in range(5)]
As above, but reusing the `with` keyword. Doesn't read too badly, and needs
no additional language keyword. Is restricted to comprehensions, though,
and cannot as easily be transformed into "longhand" for-loop syntax. Has
the C problem that an equals sign in an expression can now create a name
binding, rather than performing a comparison. Would raise the question of
why "with NAME = EXPR:" cannot be used as a statement on its own.
3. ``with EXPR as NAME``::
       stuff = [(y, x/y) with f(x) as y for x in range(5)]
As per option 2, but using ``as`` rather than an equals sign. Aligns
syntactically with other uses of ``as`` for name binding, but a simple
transformation to for-loop longhand would create drastically different
semantics; the meaning of ``with`` inside a comprehension would be
completely different from the meaning as a stand-alone statement, while
retaining identical syntax.
4. ``EXPR as NAME`` without parentheses::
       stuff = [[f(x) as y, x/y] for x in range(5)]
Omitting the parentheses from this PEP's proposed syntax introduces many
syntactic ambiguities. Requiring them in all contexts leaves open the
option to make them optional in specific situations where the syntax is
unambiguous (cf generator expressions as sole parameters in function
calls), but there is no plausible way to make them optional everywhere.
5. Adorning statement-local names with a leading dot::
       stuff = [[(f(x) as .y), x/.y] for x in range(5)]
This has the advantage that leaked usage can be readily detected, removing
some forms of syntactic ambiguity. However, this would be the only place
in Python where a variable's scope is encoded into its name, making
refactoring harder. This syntax is quite viable, and could be promoted to
become the current recommendation if its advantages are found to outweigh
its cost.
6. Allowing ``(EXPR as NAME)`` to assign to any form of name.
This is exactly the same as the promoted proposal, save that the name is
bound in the same scope that it would otherwise have. Any expression can
assign to any name, just as it would if the ``=`` operator had been used.
Such variables would leak out of the statement into the enclosing function,
subject to the regular behaviour of comprehensions (since they implicitly
create a nested function, the name binding would be restricted to the
comprehension itself, just as with the names bound by ``for`` loops).
7. Enhancing ``if`` and ``while`` syntax to permit the capture of their
conditions::
       if re.search(pat, text) as match:
           print("Found:", match.group(0))
This works beautifully if and ONLY if the desired condition is based on the
truthiness of the captured value. It is thus effective for specific
use-cases (regex matches, socket reads that return `''` when done), and
completely useless in more complicated cases (eg where the condition is
``f(x) < 0`` and you want to capture the value of ``f(x)``). It also has
no benefit to list comprehensions.
Discrepancies in the current implementation
===========================================
1. SLNBs are implemented using a special (and mostly-invisible) name
mangling. They may sometimes appear in globals() and/or locals() with
their simple or mangled names (but buggily and unreliably). They should
be suppressed as though they were guinea pigs.
2. The forbidden special cases do not yet raise SyntaxError.
References
==========
.. [1] Proof of concept / reference implementation
(https://github.com/Rosuav/cpython/tree/statement-local-variables)
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
I have prepared a PEP draft for unifying function/method classes. You
can find it at
https://github.com/jdemeyer/PEP-functions
This has not officially been submitted as a PEP yet; I want to hear your
comments first.
Thanks,
Jeroen.