There's a whole matrix of these, and I'm wondering why the matrix is
currently sparse rather than fully populated. Or rather, why we can't
stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{unadorned | abc.abstract} x {normal | static | class} x {method | property | non-callable attribute}.
concreteness | implicit first arg | type                   | name                                               | comments
{unadorned}  | {unadorned}        | method                 | def foo():                                         | exists now
{unadorned}  | {unadorned}        | property               | @property                                          | exists now
{unadorned}  | {unadorned}        | non-callable attribute | x = 2                                              | exists now
{unadorned}  | static             | method                 | @staticmethod                                      | exists now
{unadorned}  | static             | property               | @staticproperty                                    | proposing
{unadorned}  | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
{unadorned}  | class              | method                 | @classmethod                                       | exists now
{unadorned}  | class              | property               | @classproperty or @classmethod;@property           | proposing
{unadorned}  | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | {unadorned}        | method                 | @abc.abstractmethod                                | exists now
abc.abstract | {unadorned}        | property               | @abc.abstractproperty                              | exists now
abc.abstract | {unadorned}        | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing
abc.abstract | static             | method                 | @abc.abstractstaticmethod                          | exists now
abc.abstract | static             | property               | @abc.staticproperty                                | proposing
abc.abstract | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | class              | method                 | @abc.abstractclassmethod                           | exists now
abc.abstract | class              | property               | @abc.abstractclassproperty                         | proposing
abc.abstract | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
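For illustration, here is a rough sketch of how a read-only classproperty
could be written today with the descriptor protocol; the names are mine and
this only shows the intended semantics, not a finished implementation:

class classproperty(object):
    """Minimal read-only class property: the getter receives the class."""
    def __init__(self, fget):
        self.fget = fget
    def __get__(self, obj, owner=None):
        if owner is None:
            owner = type(obj)
        return self.fget(owner)

class Foo(object):
    @classproperty
    def bar(cls):
        return 'bar of %s' % cls.__name__

print(Foo.bar)    # 'bar of Foo' - no throw-away instance needed
print(Foo().bar)  # also works on instances

A @staticproperty would be the same, just without handing the class to the
getter.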
--rich
At the moment, the array module of the standard library allows to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items), especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow one
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array construction in such a way that you
could still pass an iterable (as now) as the second argument, but if you
pass a single integer value, it would be treated as the number of items to
allocate.
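Under this proposal, allocating the suffix array would look roughly like
the following (hypothetical semantics for an integer second argument; today
the constructor only accepts an iterable or a buffer there):

import array

# Proposed: an int as the second argument means "allocate this many
# zero-initialized items" instead of "initialize from this iterable".
sa = array.array('l', 6 * 10**9)    # about 48 GB of long ints, all zero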
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    and n entries, each initialized to the given value (default 0).
    """
    a = array.array(typecode, [value] * bsize)   # one pre-filled chunk
    x = array.array(typecode)
    r = n
    while r >= bsize:                            # append whole chunks
        x.extend(a)
        r -= bsize
    x.extend([value] * r)                        # then the remainder
    return x
I just spent a few minutes staring at a bug caused by a missing comma
-- I got a mysterious argument count error because instead of foo('a',
'b') I had written foo('a' 'b').
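For anyone who has not been bitten by this yet, the failure mode looks
roughly like this (foo is of course just a stand-in):

def foo(a, b):
    return a + b

# The missing comma silently concatenates the two literals into a single
# argument, so only 'ab' is passed; on Python 3 the call fails with
# something like: TypeError: foo() missing 1 required positional argument: 'b'
foo('a' 'b')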
This is a fairly common mistake, and IIRC at Google we even had a lint
rule against this (there was also a Python dialect used for some
specific purpose where this was explicitly forbidden).
Now, with modern compiler technology, we can (and in fact do) evaluate
compile-time string literal concatenation with the '+' operator, so
there's really no reason to support 'a' 'b' any more. (The reason was
always rather flimsy; I copied it from C but the reason why it's
needed there doesn't really apply to Python, as it is mostly useful
inside macros.)
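As a quick check of the folding claim, current CPython already collapses a
constant '+' concatenation into a single constant at compile time, so nothing
is lost by spelling the concatenation explicitly (output is from a recent
CPython and may vary slightly by version):

import dis

dis.dis(compile("'a' + 'b'", '<example>', 'eval'))
# Roughly:
#   1      0 LOAD_CONST     0 ('ab')
#          2 RETURN_VALUE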
Would it be reasonable to start deprecating this and eventually remove
it from the language?
--
--Guido van Rossum (python.org/~guido)
This idea has already been mentioned in passing, but it sank deep into the
threads of the discussion, so I am raising it again.
Currently reprs of classes and functions look as:
>>> int
<class 'int'>
>>> int.from_bytes
<built-in method from_bytes of type object at 0x826cf60>
>>> open
<built-in function open>
>>> import collections
>>> collections.Counter
<class 'collections.Counter'>
>>> collections.Counter.fromkeys
<bound method Counter.fromkeys of <class 'collections.Counter'>>
>>> collections.namedtuple
<function namedtuple at 0xb6fc4adc>
What if we changed the default reprs of classes and functions to just the
fully qualified name, __module__ + '.' + __qualname__ (or just __qualname__
if __module__ is builtins)? This would look neater, and such reprs would be
evaluable.
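For comparison, under the proposed rule the same objects would display along
these lines (illustrative output only):

>>> int
int
>>> collections.Counter
collections.Counter
>>> collections.Counter.fromkeys
collections.Counter.fromkeys
>>> collections.namedtuple
collections.namedtuple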
On Mon, Mar 23, 2015 at 2:08 AM, anatoly techtonik <techtonik(a)gmail.com>
wrote:
>
> That's nice to know, but IIRC datetime is from the top 10 Python
> modules that need a redesign. Things contained therein doesn't pass
> human usability check, and are not used as a result.
Where were you when PEP 3108 was discussed? I have not seen any other
list of Python modules that needed a redesign, so I cannot tell what's on
your top ten list.
Speaking of the datetime module, in what sense does it not "pass human
usability check"? It does have a few quirks; for example, I would rather
see date accept a single argument in the constructor, which may be a string,
another date, or a tuple, but I am not even sure this desire is shared by
many other humans. It would be nice if the datetime classes were named in
CamelCase according to PEP 8 conventions, but again this is a very minor
quirk.
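To make that quirk concrete, the kind of single-argument constructor I have
in mind would dispatch on its argument roughly like this hypothetical helper
(not an existing datetime API):

import datetime

def as_date(value):
    """Hypothetical: build a date from a date, an ISO-like string, or a tuple."""
    if isinstance(value, datetime.date):
        return datetime.date(value.year, value.month, value.day)
    if isinstance(value, str):
        year, month, day = map(int, value.split('-'))
        return datetime.date(year, month, day)
    if isinstance(value, tuple):
        return datetime.date(*value)
    raise TypeError('cannot interpret %r as a date' % (value,))

as_date('2015-03-23')   # datetime.date(2015, 3, 23)
as_date((2015, 3, 23))  # same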
In my view, if anyone is to blame for the "human usability" of the datetime
module, it would be Pope Gregory XIII, Benjamin Franklin and scores of
unnamed astronomers who made modern timekeeping such a mess.
Hello everybody,
recently I posted PEP 487, a simpler customization of class creation.
For newcomers: I propose the introduction of a __init_subclass__ classmethod
which initializes subclasses of a class, simplifying what metaclasses can
already do.
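For newcomers it may also help to see the hook in action; here is a minimal
plugin-registry sketch written against the proposed hook (my own toy example,
not taken from the PEP):

class PluginBase:
    registry = []

    @classmethod
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Every class derived from PluginBase registers itself,
        # without any metaclass being involved.
        PluginBase.registry.append(cls)

class CsvImporter(PluginBase):
    pass

class JsonImporter(PluginBase):
    pass

# PluginBase.registry == [CsvImporter, JsonImporter]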
It took me a while to digest all the ideas from the list here, but well, we're
not in a hurry. So, I updated PEP 487, and pushed the changes to github
at https://github.com/tecki/peps/commits/master
I applied the following changes:
PEP 487 contained the possibility to set a namespace for subclasses.
The most important use case for this feature would be to have an
OrderedDict as the class definition namespace. As Eric Snow pointed
out, that will soon be standard anyway, so I took this feature out of
the PEP. The implementation on PyPI now just uses an OrderedDict
as a namespace, anticipating the expected changes to CPython.
I also did some reading on possible use cases for PEP 487, so that it
may actually be used by someone. Some Traits-like thing is a standard
use case, so I looked especially at IPython's traitlets, which are a
simple example of that use case.
Currently traitlets use both __new__ and __init__ of a metaclass. So I
tried to also introduce a __new_subclass__ in the same way I introduced
__init_subclass__. This turned out to be much harder than I thought,
actually impossible, because it is type.__new__ that sets the method
resolution order, so making super() work in a __new_subclass__ hook is a
chicken-and-egg problem: we need the MRO to find the next base class to
call, but the most basic base class is the one creating the MRO. Nick,
how did you solve that problem in PEP 422?
Anyhow, I think that traitlets can also be written using just
__init_subclass__. There is just this weird hint in the docs that you should
use __new__ for metaclasses, not __init__, a hint I never understood, as the
reasons for choosing between __new__ and __init__ are precisely the same
for normal classes and metaclasses. So I think we don't miss out on much
by not having __new_subclass__.
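To illustrate why I think so: the part of the traitlets machinery I looked
at, giving each descriptor the name it is bound to, can be expressed with
__init_subclass__ alone, roughly like this (a simplified sketch of mine, not
the real traitlets code):

class Trait:
    """A minimal data descriptor storing its value in the instance dict."""
    name = None   # filled in by HasTraits.__init_subclass__

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value

class HasTraits:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, attr in vars(cls).items():
            if isinstance(attr, Trait):
                attr.name = name   # no metaclass involved

class Point(HasTraits):
    x = Trait()
    y = Trait()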
I also updated the implementation of PEP 487, it's still at
https://pypi.python.org/pypi/metaclass
Greetings
Martin
I want Python to have macros. This is obviously a hard sell. I'm willing
to do some legwork to demonstrate value.
What would a good proposal include? Are there good examples of failed
proposals on this topic?
Is the core team open to this topic?
Thank you for your time,
- Mathew Rocklin
Sometimes we need a simple class to hold some mutable attributes,
provide a nice repr, support == for testing, and support iterable
unpacking, so you can write:
>>> p = Point(3, 4)
>>> x, y = p
That's very much like the classes built by namedtuple, but mutable.
I propose we add to the collections module another class factory. I am
calling it plainclass, but perhaps we can think of a better name. Here
is how it would be used:
>>> import collections
>>> Point = collections.plainclass('Point', 'x y')
The signature of the plainclass function would be exactly the same as
namedtuple, supporting the same alternative ways of naming the
attributes.
The semantics of the generated Point class would be like this code:
https://gist.github.com/ramalho/fd3d367e9d3b2a659faf
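As a rough approximation (the gist above carries the actual proposed
semantics), the generated Point would behave like this hand-written sketch:

class Point:
    """Sketch of what collections.plainclass('Point', 'x y') might generate."""
    _fields = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __iter__(self):
        # iterable unpacking: x, y = p
        return (getattr(self, name) for name in self._fields)

    def __eq__(self, other):
        return (type(other) is type(self)
                and tuple(self) == tuple(other))

    def __repr__(self):
        return 'Point(x=%r, y=%r)' % (self.x, self.y)

plainclass('Point', 'x y') would simply build such a class from the field
names, much as namedtuple does today.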
What do you think?
Cheers,
Luciano
PS. I am aware that there are "Namespace" classes in the standard
library (e.g. [2]). They solve a different problem.
[2] https://docs.python.org/3/library/argparse.html#argparse.Namespace
--
Luciano Ramalho
| Author of Fluent Python (O'Reilly, 2015)
| http://shop.oreilly.com/product/0636920032519.do
| Professor em: http://python.pro.br
| Twitter: @ramalhoorg
Hi,
any progress on infinite datetimes?
I just got a commitment from the psycopg2 developers (the lib for connecting
to PostgreSQL) that they will support infinite datetimes if Python does:
https://github.com/psycopg/psycopg2/issues/283#issuecomment-88016783
I can't help you with code in this context.
I have never started a crowdfunding campaign. Do you think it would
be successful?
Regards,
Thomas Güttler
--
http://www.thomas-guettler.de/