There's a whole matrix of these, and I'm wondering why the matrix is
currently sparse rather than fully populated. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract} x {'normal'|static|class} x {method|property|non-callable attribute}.
concreteness | implicit first arg | type                   | name                                               | comments
-------------+--------------------+------------------------+----------------------------------------------------+------------
{unadorned}  | {unadorned}        | method                 | def foo():                                         | exists now
{unadorned}  | {unadorned}        | property               | @property                                          | exists now
{unadorned}  | {unadorned}        | non-callable attribute | x = 2                                              | exists now
{unadorned}  | static             | method                 | @staticmethod                                      | exists now
{unadorned}  | static             | property               | @staticproperty                                    | proposing
{unadorned}  | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
{unadorned}  | class              | method                 | @classmethod                                       | exists now
{unadorned}  | class              | property               | @classproperty or @classmethod;@property           | proposing
{unadorned}  | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | {unadorned}        | method                 | @abc.abstractmethod                                | exists now
abc.abstract | {unadorned}        | property               | @abc.abstractproperty                              | exists now
abc.abstract | {unadorned}        | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing
abc.abstract | static             | method                 | @abc.abstractstaticmethod                          | exists now
abc.abstract | static             | property               | @abc.abstractstaticproperty                        | proposing
abc.abstract | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | class              | method                 | @abc.abstractclassmethod                           | exists now
abc.abstract | class              | property               | @abc.abstractclassproperty                         | proposing
abc.abstract | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
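For what it's worth, here is a minimal sketch of how @classproperty could be
implemented today as a (read-only) descriptor; the name classproperty is the
proposal's, not an existing builtin:

class classproperty(object):
    """Like property, but the implicit first argument passed to the
    getter is the class rather than an instance. Read-only sketch."""
    def __init__(self, fget):
        self.fget = fget
    def __get__(self, obj, owner=None):
        # Accessed via the class: obj is None and owner is the class.
        if owner is None:
            owner = type(obj)
        return self.fget(owner)

class Foo(object):
    @classproperty
    def bar(cls):
        return cls.__name__.upper()

print(Foo.bar)    # 'FOO' - no throw-away instance required
print(Foo().bar)  # 'FOO' - also works through an instance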
--rich
At the moment, the array module of the standard library allows you to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items), especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB of memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow you
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array construction in such a way that you
could pass an iterable (as now) as the second argument, but if you pass a
single integer value, it would be treated as the number of items to
allocate.
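In other words (a sketch of the suggested semantics; the integer form does
not exist today):

import array

a = array.array('l', [1, 2, 3])  # existing: initialize from an iterable
b = array.array('l', 1000000)    # proposed: preallocate 1000000 items
                                 # (currently this raises TypeError)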
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    with n entries, initialized to the given value (default 0).
    """
    # Pre-fill one chunk, append it wholesale while at least bsize
    # items remain, then top up with the remainder.
    a = array.array(typecode, [value] * bsize)
    x = array.array(typecode)
    r = n
    while r >= bsize:
        x.extend(a)
        r -= bsize
    x.extend([value] * r)
    return x
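As an aside, sequence repetition on a one-item array already gives a
single-step fill, and may well be faster than the chunked loop above (I have
not benchmarked it):

import array

n = 10 ** 6
x = array.array('l', [0]) * n  # n items, all initialized to 0
assert len(x) == n
y = array.array('l', [7]) * n  # arbitrary fill values work the same way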
I just spent a few minutes staring at a bug caused by a missing comma
-- I got a mysterious argument count error because instead of foo('a',
'b') I had written foo('a' 'b').
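A minimal reproduction (foo here is just an illustrative two-argument
function):

def foo(x, y):
    return x + y

foo('a', 'b')  # OK: two arguments
foo('a' 'b')   # implicit concatenation yields the single argument 'ab',
               # so this raises a TypeError about the argument count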
This is a fairly common mistake, and IIRC at Google we even had a lint
rule against this (there was also a Python dialect used for some
specific purpose where this was explicitly forbidden).
Now, with modern compiler technology, we can (and in fact do) evaluate
compile-time string literal concatenation with the '+' operator, so
there's really no reason to support 'a' 'b' any more. (The reason was
always rather flimsy; I copied it from C but the reason why it's
needed there doesn't really apply to Python, as it is mostly useful
inside macros.)
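Indeed, on current CPython the folding is observable with dis (a quick
check; output details vary by version):

import dis

# The peephole optimizer folds 'a' + 'b' into the single constant 'ab',
# so only one LOAD_CONST appears in the disassembly:
dis.dis(compile("x = 'a' + 'b'", '<example>', 'exec'))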
Would it be reasonable to start deprecating this and eventually remove
it from the language?
--
--Guido van Rossum (python.org/~guido)
Hi,
I was re-reading some old threads about ordereddict literals to see if
any of them had gotten anywhere. Amongst them, I came across a post by
Tim Delaney:
https://mail.python.org/pipermail/python-ideas/2011-January/009049.html
that mentioned an odict literal of ['key': 'value', 'key2': 'value2']
could be confused with slice notation.
From a syntax point-of-view, that doesn't seem to be true (as
mentioned in some of the replies to that thread), but it seems like
you can abuse the similarity to make it a little easier to declare
ordereddicts:
from collections import OrderedDict
class ODSlicer(object):
    def __getitem__(self, key):
        # od[a:b] arrives as a single slice; od[a:b, c:d] as a tuple.
        if type(key) is slice:
            key = [key]
        od = OrderedDict()
        for k in key:
            if type(k) is slice:
                od[k.start] = k.stop
            else:
                od[k] = k
        return od
od = ODSlicer()
print(od[1:2])
print(od["a":"b", "c":5])
print(od['a':'b', 'c':'d', ..., 'a':10, 'e':'f'])
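For reference, those three prints should produce:

OrderedDict([(1, 2)])
OrderedDict([('a', 'b'), ('c', 5)])
OrderedDict([('a', 10), ('c', 'd'), (Ellipsis, Ellipsis), ('e', 'f')])

(the duplicated 'a' key keeps its original position but takes the later
value).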
You could then replace:
mydict = {
    'foo': 'bar',
    'baz': 'quux',
}
with:
mydict = od[
    'foo': 'bar',
    'baz': 'quux',
]
if you need to convert a hardcoded dict into a hardcoded ordereddict.
Works fine in python2.7 and python3.4.
At this point, I'd like to note in my defence that this isn't called
the python-good-ideas list :)
What's the actual objection to supporting ['foo': 'bar'] odict
literals? I saw Guido gave a -100, way back in the day, but no actual
explanation for why it was distasteful?
https://mail.python.org/pipermail/python-ideas/2009-June/004924.html
Cheers,
aj
--
Anthony Towns <aj(a)erisian.com.au>
With the CPython sprint in mind, I was thinking about a lib to mark a
function/class as deprecated.
Example:
#!/usr/bin/env python
import inspect
import sys

import deprecation  # from an external lib or from a stdlib module

__version__ = (1, 0)

@deprecation.deprecated(python=(3, 7), msg='use foo_v2')
def foo():
    pass

@deprecation.deprecated(program=(1, 2), msg='use bar_v2')
def bar():
    pass

# two possible spellings of the same idea (msg= or to_use=):
@deprecation.deprecated(python=(3, 7), msg='use inspect.signature()')
@deprecation.deprecated(python=(3, 7), to_use=inspect.signature)
def getfullargspec(*args, **kwargs):
    pass
The deprecated decorator would check the version of the program and,
if asked via the arguments, the version of Python. It would call
warnings.warn with PendingDeprecationWarning or DeprecationWarning,
and could be used in the documentation and via introspection.
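To make that concrete, a minimal sketch of what the decorator could look
like; every name and argument here is an assumption, since no real code
exists yet (the program= check is omitted for brevity):

import functools
import sys
import warnings

def deprecated(python=None, msg=''):
    """Sketch only: warn when the decorated callable is used.

    Before the given Python version is reached this emits a
    PendingDeprecationWarning; from that version on, a
    DeprecationWarning.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            category = PendingDeprecationWarning
            if python is not None and sys.version_info[:2] >= tuple(python)[:2]:
                category = DeprecationWarning
            warnings.warn('%s is deprecated: %s' % (func.__name__, msg),
                          category, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator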
It's just an idea; beyond the sketch above there is no real code and no
specs, but if you are interested I can try to propose a real solution
with an external library, and if this idea seems to be interesting, I
will propose a PEP for 3.5 or 3.6.
The benefit of this lib is that we could parse the source code and find
the deprecated functions/classes with a small tool (see the sketch
below), and the maintenance of the code should be improved.
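A sketch of such a small tool, using only the stdlib ast module (the
decorator name it searches for is an assumption):

import ast
import sys

def find_deprecated(path):
    """Print functions/classes decorated with @...deprecated(...)."""
    with open(path) as f:
        tree = ast.parse(f.read(), path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            for dec in node.decorator_list:
                # handle both @deprecated and @deprecation.deprecated(...)
                target = dec.func if isinstance(dec, ast.Call) else dec
                name = getattr(target, 'attr', None) or getattr(target, 'id', None)
                if name == 'deprecated':
                    print('%s:%d: %s' % (path, node.lineno, node.name))

if __name__ == '__main__':
    for path in sys.argv[1:]:
        find_deprecated(path)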
Please, could you give me your feedback?
Thank you,
Stephane
--
Stéphane Wirtel - http://wirtel.be - @matrixise
On 18 April 2014 16:58, Ed Kellett <edk141@gmail.com> wrote:
> case foo():
>
> would have to become
>
> case (foo(),):
>
> to work as expected when foo() returned a tuple, which would mean
> wrapping things in 1-tuples whenever you wanted to reliably match a
> case that is determined dynamically.
>
To obviate this, case could check membership against a CaseTuple instead of
a plain tuple. CaseTuple would be a class that extends tuple, otherwise
identical to it, and a case expression list written with commas would yield
a CaseTuple. To use other iterables as a case expression list you would have
to unpack them; otherwise they would be matched for equality.
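A rough sketch of the matching rule I have in mind (CaseTuple and
case_matches are hypothetical names here; in reality the compiler would
build the CaseTuple from the comma-separated case expression list):

class CaseTuple(tuple):
    """Identical to tuple; only marks a comma-separated case list."""

def case_matches(switch_value, case_value):
    if isinstance(case_value, CaseTuple):
        return switch_value in case_value  # membership test
    return switch_value == case_value      # plain equality, even for tuples

point = (1, 2)
print(case_matches(point, CaseTuple([(1, 2), (3, 4)])))  # True: membership
print(case_matches(point, (1, 2)))                       # True: equality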
On 18 April 2014 17:03, Joao S. O. Bueno <jsbueno@python.org.br> wrote:
> It may be just me, but I fail - in a complete manner - to see how this
> syntax can offer any
> improvement on current if/elif chains. It seems to differ from that
> only by reducing
> the explicitness, and the flexibility of the test condition that can
> be used in any
> of the subconditions
>
It's simpler to use when it applies, like the switch statement in other
languages. And it somewhat adheres to the DRY principle: why repeat the
subject? If I'm checking what type of tarot card I have, why should I repeat
it every time I try to identify the card?
I also thought about a syntax like this:
"case" comparator case_expr ":"
but IMHO it's too verbose for the typical uses of switch. If you want
flexibility, you can always use if-elif.
@Skip Montanaro: yes, the switch statement is used in C also for code
optimization. Frankly, I think this aspect is unimportant for CPython at
the present time.
On 18 April 2014 18:51, Andrew Barnert <abarnert@yahoo.com> wrote:
> No it isn't. First, the "for ... in" keywords are not the same as just
> "for" and a bunch of parens and semicolons.
>
My proposal has elcase, which is not present in other languages.
> Also, notice that if you try to read the switch statement, or your Python
> version, as English, it's nonsense.
>
Yes, the fact that a case behaves differently for tuples and non-tuples is
difficult to translate into English. I think that with the CaseTuple proposal
it will be more understandable, since you have to explicitly unpack an
iterable, or use elements separated by commas.
> Bash doesn't have separate "case" and "elcase" cases. After one case is
> done, the rest are skipped, just as in most other languages.
>
But it has ;& and ;;&, similar to break and continue in C, which are
equivalent to the case and elcase of my proposal.
> I don't see how skipping over any elcase but falling through to the next
> case is in any way simpler than C.
Well, because it's coherent with if-elif. See the last example in my first
message.
> Well, then at least look at the limited form of pattern matching Python has
> in the for and assignment statements and parameter matching, and maybe look
> at how pattern matching is used with case statements in other languages;
> don't try to suggest language designs based on guesses.
>
Excuse me? I know list comprehensions, lambdas and argument unpacking. And
I do not think you can see what I do before I post a message. If you could,
you would see me googling before writing about something that I don't know
or don't remember very well. So don't guess about what I do or don't do,
or know and don't know, thank you.
As for pattern matching in the for statement, I really don't know what
that is.
> .... and? Are you suggesting that if the switch expression is a string and
> the case expression a compiled regex you could automatically call match
> instead of testing for equality? If not, how is having regexp even relevant
> here? And how are recursive functions relevant?
>
I'm suggesting using if-elif with the re module if you want regular
expressions, and using recursive functions if you want... recursive
functions. To be more clear: IMHO switch-case is useful if it's kept simple.
> A generator expression is equal to anything except itself, and doesn't
> contain anything.
>
You can convert it to an iterable. Probably overkill, but you can do it.
> I don't know what you mean by "symbolic pattern" here.
>
From what I know (not too much), in Mathematica pattern matching can be used
with symbols, and symbols can be used as identifiers:
https://reference.wolfram.com/mathematica/guide/Patterns.html
Hello everybody. I first proposed this syntax on python-list. As I already
wrote there, I read PEP 3103 and I'm not completely satisfied by any of the
proposed solutions.
My first idea was a little different; this is my final proposal after a
short but good brainstorm:

switch_stmt ::= "switch" switch_expr "case" case_expr ":" suite
                ("case" | "elcase" case_expr ":" suite)*
                ["else" ":" suite]
switch_expr ::= expression
case_expr ::= expression_list

- if case_expr is a tuple, the case suite will be executed if switch_expr is a member of the tuple
- if case_expr is not a tuple, the case suite will be executed if switch_expr == case_expr
- once a case suite is executed, any subsequent elcase statements are skipped, and the next case statement is tested, if there's one. This is completely identical to if - elif.
Example:
briefing_days = ("Tue", "Thu")
normal_days = ("Mon", "Wed", "Fri")

switch day case normal_days + briefing_days:
    go_to_work = True
    day_type = "weekday"
case normal_days:
    lunch_time = datetime.time(12)
    meeting_time = datetime.time(14)
elcase briefing_days:
    lunch_time = datetime.time(11, 30)
    meeting_time = datetime.time(12, 30)
else:
    go_to_work = False
    day_type = "festive"
    lunch_time = None
    meeting_time = None
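In terms of today's Python, my reading of the example above is roughly
(assuming datetime is imported):

if day in normal_days + briefing_days:
    go_to_work = True
    day_type = "weekday"
if day in normal_days:
    lunch_time = datetime.time(12)
    meeting_time = datetime.time(14)
elif day in briefing_days:
    lunch_time = datetime.time(11, 30)
    meeting_time = datetime.time(12, 30)
else:
    # the else lines up here because the first case covers exactly
    # the union of the two day tuples
    go_to_work = False
    day_type = "festive"
    lunch_time = None
    meeting_time = None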
A simpler example:
switch tarot case 0:
    card = "Fool"
elcase 1:
    card = "Alan Moore"
elcase 2:
    card = "High Priestess"
<etc....>
Some remarks:
1. switch is on the same line as the first case. This is because the alternatives in PEP 3103 seem unpythonic to me
2. I decided not to reuse existing keywords like "if" or "in", since this would be misleading
3. I preferred case / elcase instead of using the break or continue keyword because:
a. break / continue can confuse, since they are used in loop statements, and this is not a loop statement, as Ethan Furman pointed out in the other mailing list
b. break / continue is useful only if it can be used not just as the final command, so that you can skip the rest of the current case suite. If it must be the final command, it's the same as case - elcase. IMHO there's not much real need to allow this
c. the case - elcase syntax is much more readable and less confusing than break / continue
d. the case - elcase syntax is identical to if - elif, so people will be more familiar with it
4. notice that you can put a "case" statement after an "elcase" one. For example:
switch cpu case "A6-6400K", "A8-6600K":
    clock = 3.9
elcase "A10-6790K", "A10-6800B":
    clock = 4.1
case "A6-6400K":
    tdp = 65
elcase "A8-6600K", "A10-6790K", "A10-6800B":
    tdp = 100

is equivalent to

if cpu in ("A6-6400K", "A8-6600K"):
    clock = 3.9
elif cpu in ("A10-6790K", "A10-6800B"):
    clock = 4.1
if cpu == "A6-6400K":
    tdp = 65
elif cpu in ("A8-6600K", "A10-6790K", "A10-6800B"):
    tdp = 100
Hi there,
In my travails to get py-lmdb (github.com/dw/py-lmdb) into a state I can
depend on, I've been exploring continuous integration options since
sometime mid last year, in the hopes of finding the simplest option that
allows the package to be tested on all sensible versions of Python that
I might bump into in a commercial environment.
It seems this simple task currently yields one solution - Travis CI -
except that by default Travis supports neither Windows, nor any version
of Python older than about 3 months, without serious manual
configuration and maintenance.
The library just about has a working Travis CI setup at this point –
although it is incredibly ugly. Testing against all recent versions of
Python (by "recent" I mean "anything I've seen and might reasonably expect
to bump into while consulting", i.e. since 2.5) requires around 14 Debian
packages to be installed, followed by installation of compatible
versions of easy_install and pip for each interesting Python version.
The result just about works, but this week I'm again finding that it
doesn't quite go far enough, as I begin working on a Windows binary
distribution for my package.
Windows is almost the same problem all over again - testing
supported/sensible extension module builds requires 14 binary
installations of Python (one each for 32 bit and 64 bit), 3 distinct
versions of Microsoft Visual Studio (that I'm aware of so far), and a
compatible version of Windows to run it all on.
So the work done to support CI on Travis more or less isn't enough. It
seems the best option now is a custom Jenkins setup, and acquiring
licenses for all the Microsoft software necessary to build the package.
You may wonder why one might care about so many old Python versions in
weird and exotic environments - to which I can't really give a better
answer than that I've lost so much time working with packages that
suffer from platform/version compatibility issues, that I'd like to
avoid it in my own code.
While it isn't so true in web development, there are pretty huge
installations of Python around the world, many of which aren't quite
near migrating to 2.6, never mind 3.4. This can happen for different
reasons, from maintaining private forks of Python (perhaps in a
post-audited state, where another audit simply isn't in the budget),
through to having thousands of customers who have written automation
scripts for their applications using an older dialect.
In any case, it's enough to note that these installations do and will
continue to exist, rather than to question why they exist.
So in the coming months I'll probably end up configuring Jenkins,
porting the Travis CI scripts over to a private Ubuntu VM
configured as a Jenkins runner, and setting up a new Windows VM as a
second runner; but it seems entirely wasteful that so much effort should
be made for just one project.
And that leads to thinking about a Python-specific CI service, that
could support builds similar to Travis CI, except for all configurations
of Python still in active use. One aspect of such a service that is
particularly interesting, is the ability to centralize management of the
required licenses – the PSF has a much greater reputation, and is much
more likely to be granted a blanket license covering a CI service for
all open source Python code, than each individual project is by applying
individually.
Licensing aside, having a 0-cost option for any Python project to be
tested would improve the quality of our ecosystem overall, especially in
the middle of the largest transition to date - the 3.x series.
Following on from the ecosystem improvement angle, there might be no reason
a Python-specific CI service could not automatically download, build and
py.test every new package entering PyPI, producing as a side effect
scoreboards similar to the Python 3 Wall of Superpowers
(http://python3wos.appspot.com/). This would make a sound base for
building future improvements to PyPI - for example a lint service,
verifying "setup.py" is not writing to /etc/passwd, or that uploaded eggs
aren't installing a new SVCHOST.EXE to %SystemDir%.
It could even make sense to have the hypothetical service produce
signed, "golden" artefacts for upload to PyPI - binaries built from
source in a trusted environment, free of any Bitcoin ransom-ware
surreptitiously inserting itself into the bdist_egg ZIP file.
There are many problems with providing a CI service, for example, the
handling of third party commercial libraries required during the build
process. An example of this might be producing a build of cx_Oracle,
where Oracle's proprietary SDK is a prerequisite.
Still, it would not require many resources, or even very much code, to
provide a service that could minimally cover at least basic needs for
purely open source software, and so the idea appeals to me.
Any thoughts?
David
In http://bugs.python.org/issue17552 a bunch of other devs and I are
currently discussing what kind of API socket.sendfile() should have, and
because there seem to be different opinions about it I thought I should
bring this up here for further discussion.
With issue 17552 I'm proposing to provide a wrapper on top of
os.sendfile(), socket.send() and possibly TransmitFile on Windows. AFAIK
the modules which could immediately benefit from this addition are ftplib (
http://bugs.python.org/issue13564) and httplib (
http://bugs.python.org/issue13559).
This is the current function signature as of socket-sendfile5.patch:
def sendfile(self, file, blocksize=262144, offset=0, use_fallback=True):
    """sendfile(file[, blocksize[, offset[, use_fallback]]]) -> sent

    Send a file until EOF is reached, attempting to use the
    high-performance os.sendfile(), in which case *file* must be a
    regular file object opened in binary mode; if that is not possible
    and *use_fallback* is True, send() will be used instead.

    The file position is updated on return, also in case of error, in
    which case file.tell() can be used to figure out the number of
    bytes which were transmitted.

    *blocksize* is the maximum number of bytes to transmit at one
    time; *offset* tells from where to start reading the file.

    The socket must be of SOCK_STREAM type.
    Non-blocking sockets are not supported.

    Return the total number of bytes which were transmitted.
    """
Debatable questions which were raised during the discussion on the bug
tracker are:
1 - whether to provide a "use_fallback" argument which, when False, raises an
exception in case os.sendfile() could not be used
2 - whether to provide a custom exception in order to signal the number of
bytes transmitted, as opposed to using file.tell() afterwards; for the record,
I'm -1 on this.
3 - whether to commit the patch as-is, without including Windows support via
TransmitFile(), and postpone that for a later time in a separate issue
4 - (extra) whether to provide a "callback" argument which gets called
for each block of data before it is sent. This would be useful for applying
data transformations or for implementing a progress bar or something.
Note that if this gets added it would sort of conflict with the
"use_fallback=False" parameter because, in order to apply data
transformations, socket.send() must be used instead of os.sendfile().
So far #1 appears to be the most debatable question.
--
Giampaolo - http://grodola.blogspot.com