I am sure this has been discussed before, and this might not even be the
best place for this discussion, but I just wanted to make sure this has
been thought about.
What if pypi.org supported private repos at a cost, similar to npm?
This could help cover the costs of running PyPI, and hopefully make it
better and more reliable, in turn improving the Python community.
If this discussion should happen somewhere else, let me know.
On 5 April 2018 at 07:58, Jannis Gebauer <ja.geb(a)me.com> wrote:
> What if there was some kind of “blessed” entity that runs these services and puts the majority of the revenue into a fund that funds development on PyPi (maybe trough the PSF)?
Having a wholly owned for-profit subsidiary that provides commercial
services as a revenue raising mechanism is certainly one way to
approach something like this without alienating sponsors or tax
authorities (although it may still alienate the vendors of now
competing services). It would require a big time commitment on the PSF
side to get everything set up though, as well as interest from key
folks in joining what would essentially be a single-language-focused
start up in an already crowded cross-language developer tools
marketplace. When the PSF as a whole is still operating with only a
handful of full or part time employees, it's far from clear that
setting something like that up would be the most effective possible
use of their time and energy.
At a more basic level, that kind of arrangement technically doesn't
require anyone's blessing, it could be as straightforward as
downstream tooling vendors signing up as PSF sponsors and saying
"please allocate our sponsorship contribution to the Packaging WG's
budget so that PyPI keeps operating well and the PyPA tooling keeps
improving, increasing the level of demand for our commercial Python
repository management services".
Historically that wouldn't have helped much, since the PSF itself has
struggled with effective project management (for a variety of
reasons), but one of the things I think the success of the MOSS grant
has shown is the significant strides that the PSF has made in budget
management in recent years, such that if funding is made available, it
can and will be spent effectively.
P.S. PyPA contributors are also free agents in their own right, so
folks offering Python-centric developer workflow management tools or
features may decide that it's worth their while to invest more
directly in smoothing out some of the rough edges that currently still
exist. It's a mercenary way of looking at things, but in many cases,
it is *absolutely* possible to pay for the time and attention of
existing contributors, and if you can persuade them that your
proposals are reasonable, they'll often have an easier time than most
convincing other community contributors that it's a good way to go :)
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I normally wouldn't bring something like this up here, except I think
that there is a possibility of something to be done--a language
documentation clarification if nothing else, though possibly an actual
code change as well.
I've been having an argument with a colleague over the last couple
days over the proper order of statements when setting up a
try/finally to perform cleanup of some action. On some level we're
both being stubborn I think, and I'm not looking for resolution as to
who's right/wrong or I wouldn't bring it to this list in the first
place. The original argument was over setting and later restoring
os.environ, but we ended up arguing over
threading.Lock.acquire/release which I think is a more interesting
example of the problem, and he did raise a good point that I do want
to bring up.
My colleague's contention is that given
lock = threading.Lock()
this is simply *wrong*:

    lock.acquire()
    try:
        do_something()
    finally:
        lock.release()

whereas this is okay:

    with lock:
        do_something()

Ignoring other details of how threading.Lock is actually implemented,
assuming that Lock.__enter__ calls acquire() and Lock.__exit__ calls
release() then as far as I've known ever since Python 2.5 first came
out these two examples are semantically *equivalent*, and I can't find
any way of reading PEP 343 or the Python language reference that would
suggest otherwise.
However, there *is* a difference, and it has to do with how signals are
handled, particularly w.r.t. context managers implemented in C (hence
we are talking CPython specifically):
If Lock.__enter__ is a pure Python method (even if it maybe calls some
C methods), and a SIGINT is handled during execution of that method,
then in almost all cases a KeyboardInterrupt exception will be raised
from within Lock.__enter__--this means the suite under the with:
statement is never evaluated, and Lock.__exit__ is never called. You
can be fairly sure the KeyboardInterrupt will be raised from somewhere
within a pure Python Lock.__enter__ because there will usually be at
least one remaining opcode to be evaluated, such as RETURN_VALUE.
Because of how delayed execution of signal handlers is implemented in
the pyeval main loop, this means the signal handler for SIGINT will be
called *before* RETURN_VALUE, resulting in the KeyboardInterrupt
exception being raised. Standard stuff.
However, if Lock.__enter__ is a PyCFunction things are quite
different. If you look at how the SETUP_WITH opcode is implemented,
it first calls the __enter__ method with _PyObject_CallNoArg.  If this
returns NULL (i.e. an exception occurred in __enter__) then "goto
error" is executed and the exception is raised. However if it returns
non-NULL the finally block is set up with PyFrame_BlockSetup and
execution proceeds to the next opcode. At this point a potentially
waiting SIGINT is handled, resulting in KeyboardInterrupt being raised
while inside the with statement's suite; the finally block, and hence
Lock.__exit__, is still entered.
Long story short, because Lock.__enter__ is a C function, assuming
that it succeeds normally then

    with lock:
        do_something()

always guarantees that Lock.__exit__ will be called if a SIGINT was
handled inside Lock.__enter__, whereas with

    lock.acquire()
    try:
        do_something()
    finally:
        lock.release()

there is at least a small possibility that the SIGINT handler is called
after the CALL_FUNCTION op but before the try/finally block is entered
(e.g. before executing POP_TOP or SETUP_FINALLY).  So the end result
is that the lock is held and never released after the
KeyboardInterrupt (whether or not it's handled somehow).
Whereas, again, if Lock.__enter__ is a pure Python function there's
less likely to be any difference (though I don't think the possibility
can be ruled out entirely).
At the very least I think this quirk of CPython should be mentioned
somewhere (since in all other cases the semantic meaning of the
"with:" statement is clear). However, I think it might be possible to
gain more consistency between these cases if pending signals are
checked/handled after any direct call to a PyCFunction from within the
pyeval main loop.
Sorry for the tl;dr; any thoughts?
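For reference, the semantic equivalence the argument rests on comes straight from PEP 343; its expansion of the with statement can be sketched roughly like this (simplified, names illustrative):

```python
import threading

lock = threading.Lock()

# Simplified PEP 343 expansion of "with lock: pass".
mgr = lock
exit_method = type(mgr).__exit__
type(mgr).__enter__(mgr)                  # lock.acquire() happens here
try:
    pass                                  # body of the with statement
finally:
    exit_method(mgr, None, None, None)    # lock.release() happens here

assert not lock.locked()                  # the lock was released
```

The whole question above is about where, relative to this expansion, a delayed KeyboardInterrupt can land when __enter__ is implemented in C.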
I would be very interested to bring design-by-contract into Python 3. I
find design-by-contract particularly interesting and indispensable for
larger projects and for the automatic generation of unit tests.
I looked at some of the packages found on PyPI, and we also rolled our own
solution (https://github.com/Parquery/icontract/). I also looked into
However, all the current solutions seem quite clunky to me. The decorators
involve an unnecessary computational overhead and the implementation of
icontract became quite tricky once we wanted to get the default values of
the decorated function.
Could somebody update me on the state of the discussion on this matter?
I'm very grateful for any feedback on this!
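For context, the decorator-based approach (and the per-call overhead it implies) can be sketched like this; `require` is a hypothetical name, not icontract's actual API:

```python
import functools

def require(predicate, description=""):
    """Hypothetical precondition decorator: re-checks arguments on every call."""
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise AssertionError(f"precondition failed: {description}")
            return func(*args, **kwargs)
        return wrapper
    return deco

@require(lambda x: x > 0, "x must be positive")
def invert(x):
    return 1 / x

invert(4)      # ok
# invert(0)    # would raise AssertionError before the body runs
```

Note that the predicate never sees default values the wrapper didn't receive, which is exactly the kind of trickiness mentioned above.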
PEP 420 (Implicit Namespace Packages) is quite descriptive about the
problem and the solution implemented back in Python 3.3, but I feel there
may be a part missing (though maybe this is categorized as an
implementation detail).
As I understand, for a package to allow being extended in this way, it
must be a namespace package and not contain an __init__.py marker file.
In fact, no package from the sub-package up to the top-level package can
have a marker file:
However, what is not discussed is an "implicit namespace sub-package". In
Python 3.6 (I guess since the first implementation), if you have this
layout:

    parent/          # Regular package (has __init__.py)
        child/       # Namespace package (no __init__.py)

you get "parent" as a regular package and "parent.child" as a namespace
package, and it works (although now every package data directory becomes
a namespace package and is importable, which may or may not be
desirable). The point is: does that add any value? I wasn't able to find
any discussion about this and, as far as I can see, there is actually no
use case for this as there is no possible way to contribute to the
"parent.child" namespace. Is that an intended behavior of PEP 420?
Again, I may have missed something or misinterpreted PEP 420 but this
contributes to the "Implicit package directories introduce ambiguity
into file system layouts." point by Nick Coghlan that was supposed to be
addressed in PEP 395.
Wouldn't it be more appropriate to enforce a sub-package to be a regular
package if the parent package is a regular package?
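The behaviour described above is easy to reproduce; this sketch builds the layout in a temporary directory (all file names here are illustrative):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
# parent is a regular package (has __init__.py); child has no marker file.
os.makedirs(os.path.join(root, "parent", "child"))
open(os.path.join(root, "parent", "__init__.py"), "w").close()
with open(os.path.join(root, "parent", "child", "mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
import parent.child.mod   # child is imported as a namespace package

print(parent.child.__path__)     # a _NamespacePath, not a plain list
print(parent.child.mod.VALUE)    # 42
```

As the message says, the import succeeds even though nothing else can contribute to the "parent.child" namespace, since its location is fixed by the regular "parent" package.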
Gallian Colombeau
Software engineer
Centre INRA PACA - UMR EMMAH
228, route de l'aérodrome - CS 40509
84914 Avignon - Cedex 9 - France
For technical reasons, many functions of the Python standard libraries
implemented in C have positional-only parameters. Example:
    Python 3.7.0a0 (default, Feb 25 2017, 04:30:32)
    >>> help(str.replace)
    replace(self, old, new, count=-1, /)   # <== notice "/" at the end
    >>> "a".replace("x", "y")   # ok
    'a'
    >>> "a".replace(old="x", new="y")   # ERR!
    TypeError: replace() takes at least 2 arguments (0 given)
When converting the methods of the builtin str type to the internal
"Argument Clinic" tool (tool to generate the function signature,
function docstring and the code to parse arguments in C), I asked if
we should add support for keyword arguments in str.replace(). The
answer was quick: no! It's a deliberate design choice.
Quote of Yury Selivanov's message:
I think Guido explicitly stated that he doesn't like the idea to
always allow keyword arguments for all methods. I.e. `str.find('aaa')`
just reads better than `str.find(needle='aaa')`. Essentially, the idea
is that for most of the builtins that accept one or two arguments,
positional-only parameters are better.
I just noticed a module on PyPI to implement this behaviour on Python functions:
My question is: would it make sense to implement this feature in
Python directly? If yes, what should be the syntax? Use "/" marker?
Use the @positional() decorator?
Do you see concrete cases where it's a deliberate choice to deny
passing arguments as keywords?
Don't you like writing int(x="123") instead of int("123")? :-) (I know
that Serhiy Storchaka hates the name of the "x" parameter of the int
constructor.)
By the way, I read that the "/" marker is unknown to almost all Python
developers, and [...] syntax should be preferred, but
inspect.signature() doesn't support this syntax. Maybe we should fix
signature() and use [...] format instead?
Replace "replace(self, old, new, count=-1, /)" with "replace(self,
old, new[, count=-1])" (or maybe even not document the default
value).
Python 3.5 help (docstring) uses "S.replace(old, new[, count])".
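Pending any syntax, the @positional() decorator idea can be emulated in pure Python; this is a hypothetical sketch, not the API of the PyPI module mentioned above:

```python
import functools
import inspect

def positional_only(n):
    """Hypothetical decorator: reject keyword use of the first n parameters."""
    def deco(func):
        names = list(inspect.signature(func).parameters)[:n]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for name in names:
                if name in kwargs:
                    raise TypeError(f"{name!r} is a positional-only parameter")
            return func(*args, **kwargs)
        return wrapper
    return deco

@positional_only(2)
def replace(old, new, count=-1):
    return (old, new, count)

replace("x", "y")            # ok
# replace(old="x", new="y")  # would raise TypeError, like the C builtins
```

The obvious downside, compared to real interpreter support, is the extra wrapper call on every invocation.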
I'd rather have functools.partial() added as a new method on function
objects:
> from functools import partial
>
> def add(x: int, y: int) -> int:
>     return x + y
>
> add_2 = partial(add, 2)
add_2 = add.partial(2)
Nothing to change in the parser, no obscure syntax for future readers,
and we get the opportunity to rewrite partial() in C, as right now it
is dramatically slower than a lambda.
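The proposed spelling can already be approximated with a small decorator; `with_partial` here is a made-up helper, not anything in the stdlib:

```python
import functools

def with_partial(func):
    """Hypothetical helper giving a function the proposed .partial() method."""
    func.partial = lambda *args, **kwargs: functools.partial(func, *args, **kwargs)
    return func

@with_partial
def add(x: int, y: int) -> int:
    return x + y

add_2 = add.partial(2)     # same spirit as the proposed add.partial(2)
print(add_2(3))            # 5
```

It works because plain Python functions accept arbitrary attributes; builtins and C functions do not, which is part of why the proposal would need interpreter support.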
> As Matthew points out, you could use numpy.array. Or code your own
> class, by providing __add__ and __iadd__ methods.
> >>> import numpy
> >>> a = numpy.array([1, 2])
> >>> b = numpy.array([3, 4])
> >>> a + b
> array([4, 6])
> >>> a += b
> >>> a
> array([4, 6])
I could, but I don't think that justifies not having this functionality
in the standard library. From a language-experience perspective, numpy
is often a pain to install on most systems. If I'm designing card games
and I
just want to run a quick monte carlo simulation, the experience should be
as smooth as possible.
This is something I think most students will expect while learning python,
especially if they're implementing algorithms.
On Mon, Aug 27, 2018 at 4:24 AM <python-ideas-request(a)python.org> wrote:
> Today's Topics:
> 1. Re: Unpacking iterables for augmented assignment (Matthew Einhorn)
> 2. Re: Unpacking iterables for augmented assignment (Jonathan Fine)
> 3. Re: Pre-conditions and post-conditions (Jacco van Dorp)
> 4. Re: Pre-conditions and post-conditions (Ivan Levkivskyi)
> Message: 1
> Date: Mon, 27 Aug 2018 01:29:14 -0400
> From: Matthew Einhorn <moiein2000(a)gmail.com>
> To: python-ideas(a)python.org
> Subject: Re: [Python-ideas] Unpacking iterables for augmented assignment
> On Sun, Aug 26, 2018, 9:24 PM James Lu <jamtlu(a)gmail.com> wrote:
> > Hi Johnathan
> > I echo your points. Indeed, the PEP referenced to refers to a "tuple
> > expression" in the grammatical and not the programmatic sense.
> > Finally, here's something that surprised me a little bit
> > >>> x = [1, 2]; id(x)
> > 140161160364616
> > >>> x += [3, 4]; id(x)
> > 140161160364616
> > >>> x = (1, 2); id(x)
> > 140161159928520
> > >>> x += (3, 4); id(x)
> > 140161225906440
> > Notice that '+=' reuses the same object when the object is a list,
> > but creates a new object when it is a tuple. This raises the
> > question: why and how does Python behave in this way?
> > It's because lists are mutable and tuples are immutable.
> > There's a dunder iadd method and a dunder add method.
> > iadd magic methods, operating on the left-hand side, modify the
> > object in place and return it. add magic methods return the result
> > and don't modify the object they're called on.
> > iadd is mutable add, whereas add is "return a copy with the result
> > added"
> > >>> tuple.__iadd__
> > Traceback (most recent call last):
> > File "<stdin>", line 1, in <module>
> > AttributeError: type object 'tuple' has no attribute '__iadd__'
> > type object 'tuple' has no attribute '__iadd__'
> > >>> tuple.__add__
> > <slot wrapper '__add__' of 'tuple' objects>
> > >>> list.__iadd__
> > <slot wrapper '__iadd__' of 'list' objects>
> > >>> list.__add__
> > <slot wrapper '__add__' of 'list' objects>
> > tuple1 = tuple1.__add__(tuple2)
> > list1.__iadd__(list2)
> > > Does it IN PRACTICE bring sufficient benefits to users?
> > I found myself needing this when I was writing a monte-carlo
> > simulation in python that required incrementing a tallying counter
> > from a subroutine.
> Wouldn't a numpy array be very suited for this kind of task?
> Date: Thu, 30 Aug 2018 00:07:04 +0200
> From: Marko Ristin-Kaufmann <marko.ristin(a)gmail.com>
> I think we got entangled in a discussion about whether design-by-contract
> is useful or not. IMO, the personal experience ("I never used/needed this
> feature") is quite an inappropriate criterion for whether something needs to be
> introduced into the language or not.
> There seems to be evidence that design-by-contract is useful. Let me cite
> Bertrand Meyer from his article "Why not program right?" that I already
> mentioned before:
I don't think that being useful by itself should be enough. I think new
features should also be "Pythonic", and I don't see design-by-contract
notation as a good fit.
For example, C has the useful & operator which lets you pass &foo as
a pointer/array argument despite foo being a scalar, so assignment to
bar in the called function actually sets the value of foo. It might be
possible to create some kind of aliasing operator for Python so that two
or more variables were bound to the same location, but would we want
it? No, because Python is not intended for that style of programming.
For another example, GPU shading languages have the special keywords
uniform and varying for distinguishing definitions that won't change across
parallel invocations and definitions that will. Demonstrably very useful in
computer games and supercomputer number crunching, so why doesn't
Python have those keywords? Because it's not designed to be used for
that kind of programming.
For design by contract, as others have noted Python assert statements
work fine for simple preconditions and postconditions. I don't see any
significant difference in readability between the existing style

    def foo(x, y):
        assert x > 0
        # Do stuff
        assert x == y

and new style
    def foo(x, y):
        x > 0      # pre-condition
        # Do stuff
        x == y     # post-condition
Yes there's more to design by contract than simple assertions, but it's not
just adding syntax. Meyer often uses the special "old" construct in his
postcondition examples, a trivial example being

    ensure count = old.count + 1

How do we do that in Python? And another part of design by contract (at
least according to Meyer) is that it's not enough to just raise an exception,
but there must be a guarantee that it is handled and the post conditions
and/or invariants restored. So there's still more syntax for "rescue"
and "retry" to account for.
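For what it's worth, Meyer's "old" construct can be approximated with a decorator that snapshots state before the call; all names here (`ensure`, `Counter`) are illustrative, not any existing library's API:

```python
import copy
import functools

def ensure(postcondition):
    """Hypothetical decorator: postcondition(old, self) is checked against
    a deep-copied snapshot of self taken before the method runs."""
    def deco(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            old = copy.deepcopy(self)          # the "old" snapshot
            result = method(self, *args, **kwargs)
            assert postcondition(old, self), "postcondition failed"
            return result
        return wrapper
    return deco

class Counter:
    def __init__(self):
        self.count = 0

    @ensure(lambda old, new: new.count == old.count + 1)
    def increment(self):
        self.count += 1

c = Counter()
c.increment()
print(c.count)   # 1
```

The deepcopy on every call illustrates the cost argument: a faithful "old" is expensive in a language without compiler support for contracts.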
If you want to do simple pre and post conditions, Python already has assert.
If you want to go full design by contract, there's no law saying that Python
is the only programming language allowed. Instead of trying to graft new
and IMHO alien concepts onto Python, what's wrong with Eiffel?
Why shouldn't Python be better at implementing Domain Specific Languages?
From Jonathan Fine:
> I really do wish we could have language that had all of Ruby's
> strengths, and also all of Python's. That would be really nice. Quite
> something indeed.
> Languages do influence each other. Ruby is good at internal Domain
> Specific Languages (DSLs). And there's Perrotta's influential book on
> Ruby Metaprogramming. That's something I think Python could learn from.
> But I don't see any need (or even benefit) in adding new language
> features to Python, so it can do better at DSLs.
It would be nice if there was a DSL for describing neural networks (Keras).
The current syntax looks like this:
    model.add(Dense(units=64, activation='relu', input_dim=100))
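Python's operator overloading already permits a more declarative spelling without any new language features; a toy sketch (none of this is the Keras API):

```python
class Layer:
    """Toy declarative layer description (illustrative, not Keras)."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def __rshift__(self, other):
        # Chain two layers with >> into a Model
        return Model([self, other])

class Model:
    def __init__(self, layers):
        self.layers = layers

    def __rshift__(self, other):
        # Append further layers to an existing chain
        return Model(self.layers + [other])

net = (Layer("Dense", units=64, activation="relu", input_dim=100)
       >> Layer("Dense", units=10)
       >> Layer("Softmax"))
print([layer.name for layer in net.layers])   # ['Dense', 'Dense', 'Softmax']
```

Whether this is clearer than method-call chaining is exactly the kind of internal-DSL question the Ruby comparison raises.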