Currently these functions fail if the supplied object has no len().
There are algorithms for this task that can work on any finite iterator
(for example, files as a stream of lines), and the functions could fall
back to these if there is no len().
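Whatever the concrete functions are, the fallback pattern could look like this hypothetical helper for picking a random element (the name, the dispatch on len(), and the assumption that len()-able objects are also indexable are all mine, not an existing API):

```python
import random

def choice_any(seq_or_iter):
    """Return a uniformly random element.

    Uses len() and indexing when available (assumed to go together here);
    otherwise falls back to one-pass reservoir sampling, which works on
    any finite iterator, e.g. a file as a stream of lines.
    """
    try:
        n = len(seq_or_iter)
    except TypeError:
        pass  # no len(): fall through to the streaming algorithm
    else:
        return seq_or_iter[random.randrange(n)]
    result = None
    seen = 0
    for item in seq_or_iter:
        seen += 1
        if random.randrange(seen) == 0:  # keep item with probability 1/seen
            result = item
    if seen == 0:
        raise IndexError('cannot choose from an empty iterable')
    return result
```

The streaming branch still consumes the whole iterator, but only one item at a time, so it never needs the data in memory at once.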
I like how easy Python makes it to extend classes and reuse code, but sometimes I involuntarily override a function or variable from a parent class.
I'm thinking about a decorator to confirm that an override is intended, and a warning when the decorator is not present:
Warning: overriding hello method...
I don't know if this should be added to the language in this way or another, or if this should be in pylint only...
Perhaps someone has a better way to achieve this, or something like it already exists?
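A rough sketch of how this could be prototyped in pure Python today, using __init_subclass__ (every name here, including @overrides, is made up for illustration):

```python
import warnings

def overrides(method):
    """Mark a method as an intentional override (hypothetical decorator)."""
    method.__is_override__ = True
    return method

class WarnOnOverride:
    """Base class that warns when a subclass silently shadows an
    inherited attribute without the @overrides marker."""
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, attr in cls.__dict__.items():
            if name.startswith('__'):
                continue  # ignore dunders
            inherited = any(name in base.__dict__ for base in cls.__mro__[1:])
            if inherited and not getattr(attr, '__is_override__', False):
                warnings.warn(f'overriding {name!r} without @overrides')

class Base(WarnOnOverride):
    def hello(self):
        return 'hello'

class Loud(Base):
    @overrides            # explicit, so no warning is emitted
    def hello(self):
        return 'HELLO'
```

Defining a subclass that redefines hello without the decorator would emit the warning at class-creation time, which is roughly what the proposal asks the language (or pylint) to do.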
May the python be with you,
I have a case of multithreaded program where under some condition, I
would want to test the flag of the Event object and clear it atomically.
This would allow me to know which thread has actually cleared the flag
and thus should perform some more work before setting it again.
The isSet method is of course not good enough because it's subject to
race conditions. (Not even sure the GIL would make it safe.)
The changes to threading.py would be pretty trivial:
@@ -515,8 +515,10 @@
         """
         with self._cond:
+            prevflag = self._flag
             self._flag = True
             self._cond.notify_all()
+            return prevflag

@@ -526,7 +528,9 @@
         """Reset the internal flag to false.
         ...
         """
         with self._cond:
+            prevflag = self._flag
             self._flag = False
+            return prevflag

     def wait(self, timeout=None):
         """Block until the internal flag is true.
What do you think about it?
Hi all !
I've heard some people saying it was rude to post on a mailing list without
introducing yourself, so here goes something: my name is James Pic and I've
been developing and deploying a wide variety of Python projects for
the last 8 years. I love to learn and share and write documentation,
amongst other things such as selling liquor.
The way I've been deploying Python projects so far is probably similar to
what a lot of people do and it almost always includes building runtime
dependencies on the production server. So, nobody is going to congratulate
me for that for sure but I think a lot of us have been doing so.
Now I'm fully aware of distribution specific packaging solutions like
dh-virtualenv shared by Spotify but here's my mental problem: I love to
learn and to hack. I'm always trying new distributions and I rarely run the
one that's in production at my company, and when I'm deploying personal
projects I like fun distributions like Arch or Alpine Linux, or
interesting PaaS solutions such as Cloud Foundry, OpenShift, Rancher and so on.
And I'm always facing the same problem: I have to either build runtime
dependencies on the server, or package my thing in the platform-specific
way. I feel like I've spent a really huge amount of time doing
this kind of thing. But the Java people, they have jars, and they have
smooth deployments no matter where they deploy.
So that's the idea I'm trying to share: I'd like to be able to build a file
with my dependencies and my project in it. I'm not sure packaging only
Python bytecode would work here because of C modules. Also, I'm always
developing against a different Python version, because using different
distributions is part of my passions in life. As ridiculous as that may
sound to most people, I'm expecting at least some understanding from
this list :)
So I wonder, do you think the best solution for me would be to build an ELF
binary with my Python and dependencies that I could just run on any
distribution, given it's on the right architecture? Note that I like to use
ARM too, so I know I'd need to be able to cross-compile too.
Thanks a lot for reading, and if you can take some time to share your
thoughts, even better: point me in a direction. If that idea is the
right solution and I'm going to be the only one interested, I don't care if
it's going to take years for me to achieve this.
Thanks a heap!
PS: I'm currently at the openstack summit in Barcelona if anybody there
would like to talk about it in person, in which case I'll buy you the
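For whatever it's worth, the stdlib's zipapp module already covers the pure-Python part of this wish (one runnable file), though not the C-module or cross-version part raised above. A self-contained sketch, building a throwaway project and bundling it:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

# Build a tiny throwaway project directory, then bundle it into a
# single .pyz archive that any compatible Python interpreter can run.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp, 'myapp')
    src.mkdir()
    (src / '__main__.py').write_text('print("hello from a single file")\n')

    target = pathlib.Path(tmp, 'myapp.pyz')
    zipapp.create_archive(src, target, interpreter='/usr/bin/env python3')

    # Run the archive exactly as it would run on a production box.
    result = subprocess.run([sys.executable, str(target)],
                            capture_output=True, text=True)
    print(result.stdout, end='')
```

Pure-Python dependencies can be pip-installed into the source directory before archiving; compiled extensions are exactly where this approach stops working, which is the gap the post is asking about.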
I was thinking that a reward system for contributions might add motivation
for those who want to contribute.
Something along the lines of a reputation system.
This has been done before with success.
On Thu, Nov 17, 2016 at 07:37:36AM +0100, Mikhail V wrote:
> On 17 November 2016 at 01:06, Steven D'Aprano <steve(a)pearwood.info> wrote:
> > It doesn't matter what keyword you come up with, you still have the
> > problem that this idea introduces a new keyword. New keywords always
> > break backwards compatibility, which means there has to be a good reason
> > to do it.
> > Any meaningful name you pick as a keyword will probably clash with
> > somebody's code, which means you will break their working program by
> > introducing a new keyword.
> I appreciate how you explain things, you point to the root of problems and
> it always makes my logic better.
Thank you :-)
> > But the first question
> > we'd ask is, what do other languages do? Python rarely invents new
> > programming ideas itself, but it does copy good ideas from other
> > languages.
> I don't know why, but this sounds somehow pessimistic.
In some ways it is. But Python is a conservative language (despite the
"radical" feature of significant indentation; even that was a proven
concept from at least one other language, ABC, before Guido used the
idea). Things may have been different two decades ago when Python was a
brand-new language with a handful of users. Python is a mature language
now with a huge user-base. It is probably one of the top five most
popular languages in the world, and certainly one of the top ten.
We have a responsibility to our users:
- to minimise the amount of disruption to their code when they upgrade;
- that means avoiding breaking backwards compatibility unless there is
significant benefit to make up for the pain;
- and long deprecation periods before removing obsolete features;
- since it typically takes five years, or even ten, to remove a
deprecated feature, we should avoid wasting users' time by adding new
and exciting experimental features that turn out to be a bad idea.
Nick Coghlan has a good blog post that explains the tension in the
Python community between those who want the language to evolve faster,
and those who want it to evolve slower:
For people used to the rapid change in the application space (new
versions of Chrome virtually daily; Ubuntu brings out new operating
system versions every six months) Python seems to move really, really
slowly. But for a programming language, Python moves radically fast:
most C code written in the 1970s probably works correctly today, and it
can easily be a decade between C versions.
> First main question then would be:
> Are augmented assignment operators needed *much*?
They are. They were one of the most often requested features before
Python 2.0 introduced them. Most people consider that they improve the
quality of code.
> In this case the work on thinking out a better
> syntax for in-place stuff is not so useless as it
> seems to be now.
Mikhail, you are missing the point that people have already spent
decades thinking about "a better syntax for in-place stuff", and for
Python that syntax is augmented assignment.
I'm sorry that *you personally* don't like this syntax. That puts you in
a very small minority. Not everybody likes every part of Python syntax.
But I think it is time for you to give up: you cannot win this battle.
> If Python will *someday* be able to optimise
> the simple A = A + 1 to in-place without
> need of special syntax (by type annotation probably?),
I really, really hope not.
Type annotations are irrelevant here. Python doesn't need type
annotations to know the type of A at runtime.
The advantage of augmented assignment is that it allows me, the
programmer, to decide whether or not I want in-place array operations.
It is *my* choice, not the compiler's, whether I create a new list:
A = B = [1, 2, 3]
A = A + [4]
assert B == [1, 2, 3]
or make the change in-place:
A = B = [1, 2, 3]
A += [4]
assert B == [1, 2, 3, 4]
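The difference is directly observable with `is`:

```python
A = B = [1, 2, 3]
A = A + [4]          # builds a new list and rebinds only A
assert A is not B
assert B == [1, 2, 3]

A = B = [1, 2, 3]
A += [4]             # list.__iadd__ mutates the shared list in place
assert A is B
assert B == [1, 2, 3, 4]
```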
> Binary_mask += numpy.sum(B, C)
> 1). prefix keyword approach examples:
> incal Binary_mask + numpy.sum(B, C)
> inc Binary_mask + numpy.sum(B, C)
> calc Binary_mask + numpy.sum(B, C)
> exec Binary_mask + numpy.sum(B, C)
Those suggestions waste perfectly good and useful variable names as
keywords (I have code that uses inc and calc as names, and exec is the
name of a built-in function). And not one of those examples makes it
clear that this is an assignment.
Augmented assignment does make it clear: it uses a variation on the =
binding operator. (It's not actually an operator, but for lack of a
better name, I'll call it one.) It follows the same basic syntax as
target = expression
except that the augmented operator is inserted before the equals sign:
target -= expression
target += expression
target *= expression
At this point, I think it is a waste of time to continue discussing
alternatives to augmented assignment syntax. I'm happy to discuss the
culture of Python and how we decide on making changes to the language,
but I am not interested in discussing augmented assignment itself.
Here is a simplification of a problem that's been happening in my code:
g = f()
This code prints 1 and then 2, not just 1 like you might expect. This is
because when the generator is garbage-collected, it gets `GeneratorExit`
sent to it.
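The body of the snippet was lost above (only `g = f()` survives); a guess at its shape, producing the same 1-then-2 output:

```python
def f():
    try:
        print(1)
        yield
    finally:
        print(2)   # cleanup that was only meant for a "real" exit

g = f()
next(g)    # prints 1, then suspends at the yield
g.close()  # throws GeneratorExit into f, so the finally block prints 2
```

Garbage collection of `g` triggers the same close(), which is why simply letting the generator go out of scope also runs the cleanup.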
This has been a problem in my code since in some instances, I tell a
context manager not to do its `__exit__` function. (I do this by using
`ExitStack.pop_all()`.) However, the `__exit__` is still called here.
I worked around this problem by adding `except GeneratorExit: raise` in my
context manager, but that's an ugly solution.
Do you think that something could be done so I won't have to add `except
GeneratorExit: raise` to each context manager to get the desired behavior?
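Concretely, the workaround looks something like this sketch (names invented; for simplicity the cleanup here runs only on a normal exit from the with block):

```python
from contextlib import contextmanager

events = []

@contextmanager
def my_cm():
    events.append('acquire')
    try:
        yield
    except GeneratorExit:
        events.append('exit skipped')  # enclosing generator is being closed
        raise                          # the ugly part: must re-raise by hand
    events.append('cleanup')           # reached only on a normal exit

def consumer():
    with my_cm():
        yield 1

g = consumer()
next(g)    # 'acquire' runs; consumer suspends inside the with block
g.close()  # GeneratorExit hits the yield; my_cm re-raises instead of cleaning up
```

Without the except clause, closing (or garbage-collecting) `consumer` would run the cleanup anyway, which is exactly the unwanted behaviour described above.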