Frequently, while globbing, one needs to work with multiple extensions. I'd
like to propose that fnmatch.filter handle a tuple of patterns (while
preserving the single-str argument functionality, à la str.endswith), as a
first step toward glob.i?glob accepting multiple patterns as well.
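For illustration, the proposed behaviour can be approximated today with a small helper (filter_multi is a hypothetical name, not part of the proposal's patch; fnmatchcase is used so the demo behaves the same on all platforms):

```python
import fnmatch

# Hypothetical helper approximating the proposal: accept either a single
# pattern or a tuple of patterns, mirroring how str.endswith takes a tuple.
def filter_multi(names, patterns):
    if isinstance(patterns, str):
        patterns = (patterns,)
    return [n for n in names
            if any(fnmatch.fnmatchcase(n, p) for p in patterns)]

names = ['a.py', 'b.txt', 'c.md']
print(filter_multi(names, ('*.py', '*.txt')))  # ['a.py', 'b.txt']
```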
Here is the implementation I came up with:
https://github.com/python/cpython/compare/master...andresdelfino:fnmatch-mu…
If this is deemed reasonable, I’ll write tests and documentation updates.
Any opinion?
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int

This results in the error:
      5 @dataclass
----> 6 class Bar(Foo):
      7     other_field: int
      8

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in dataclass(_cls, init, repr, eq, order, hash, frozen)
    751
    752     # We're called as @dataclass, with a class.
--> 753     return wrap(_cls)
    754
    755

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in wrap(cls)
    743
    744     def wrap(cls):
--> 745         return _process_class(cls, repr, eq, order, hash, init, frozen)
    746
    747     # See if we're being called as @dataclass or @dataclass().

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _process_class(cls, repr, eq, order, hash, init, frozen)
    675                          # in __init__. Use "self" if possible.
    676                          '__dataclass_self__' if 'self' in fields
--> 677                          else 'self',
    678         ))
    679     if repr:

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _init_fn(fields, frozen, has_post_init, self_name)
    422             seen_default = True
    423         elif seen_default:
--> 424             raise TypeError(f'non-default argument {f.name!r} '
    425                             'follows default argument')
    426

TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword only arguments allow an entirely reasonable solution to
this problem:
def __init__(self, *, some_default: dict = <something>, other_field: int):
And they have the added benefit of making the fields in the __init__ call
entirely explicit.
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword only arguments:
@dataclass(keyword_only=True)
class Bar(Foo):
other_field: int
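To make the idea concrete, here is a hand-written sketch of roughly what a keyword_only=True dataclass could generate. The __init__ below is hypothetical, written by hand for illustration; it is not what dataclasses emits today:

```python
from dataclasses import dataclass, field

@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

# Hypothetical: roughly what @dataclass(keyword_only=True) could generate.
# Every field becomes keyword-only, so the "non-default argument follows
# default argument" restriction disappears.
class Bar(Foo):
    def __init__(self, *, some_default=None, other_field):
        super().__init__(some_default if some_default is not None else {})
        self.other_field = other_field

b = Bar(other_field=3)
print(b.some_default, b.other_field)  # {} 3
```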
--George Leslie-Waksman
Forgive me if this idea has been discussed before, I searched the mailing lists, the CPython repo, and the issue tracker and was unable to find anything.
I have found myself a few times in a position where I have a repeated argument that uses the `append` action, along with some convenience arguments that append a specific const to that same dest (eg: `--filter-x` being made equivalent to `--filter x` via `append_const`). This is particularly useful in cli apps that expose some kind of powerful-but-verbose filtering capability, while also providing shorter aliases for common invocations. I'm sure there are other use cases, but this is the one I'm most familiar with.
The natural extension to this filtering idea are convenience args that set two const values (eg: `--filter x --filter y` being equivalent to `--filter-x-y`), but there is no `extend_const` action to enable this.
While this is possible (and rather straightforward) to add via a custom action, I feel like this should be a built-in action instead. `append` has `append_const`, so it seems intuitive and reasonable to expect `extend` to have `extend_const` too (my anecdotal experience the first time I came across this need was that I simply tried using `extend_const` without checking the docs, assuming it already existed).
Please see this gist for a working example that may help explain the idea and intended use case more clearly: https://gist.github.com/roganartu/7c2ec129d868ecda95acfbd655ef0ab2
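For reference, here is a sketch of the custom-action approach mentioned above (ExtendConstAction is a hypothetical name; under the proposal this would simply be action='extend_const'):

```python
import argparse

# Hypothetical custom action: like append_const, but extends the dest list
# with every item of const instead of appending const as a single value.
class ExtendConstAction(argparse.Action):
    def __init__(self, option_strings, dest, const=None, **kwargs):
        super().__init__(option_strings, dest, nargs=0, const=const, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        items = list(getattr(namespace, self.dest, None) or [])
        items.extend(self.const)
        setattr(namespace, self.dest, items)

parser = argparse.ArgumentParser()
parser.add_argument('--filter', action='append', default=[])
# convenience alias: --filter-x-y behaves like --filter x --filter y
parser.add_argument('--filter-x-y', dest='filter', action=ExtendConstAction,
                    const=['x', 'y'])

print(parser.parse_args(['--filter-x-y']).filter)  # ['x', 'y']
```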
Hi Ilya,
I'm not sure that this mailing list (Python-Dev) is the right place for
this discussion, I think that Python-Ideas (CCed) is the correct place.
For the benefit of Python-Ideas, I have left your entire post below, to
establish context.
[Ilya]
> I needed reversed(enumerate(x: list)) in my code, and have discovered
> that it wouldn't work. This is disappointing because the operation is
> well defined.
It isn't really well-defined, since enumerate can operate on infinite
iterators, and you cannot reverse an infinite stream. Consider:
import random

def values():
    while True:
        yield random.random()

a, b = next(reversed(enumerate(values())))
What should the first pair of (a, b) be?
However, having said that, I think that your idea is not unreasonable.
`enumerate(it)` in the most general case isn't reversible, but if `it`
is reversible and sized, there's no reason why `enumerate(it)` shouldn't
be too.
My personal opinion is that this is a fairly obvious and straightforward
enhancement, one which (hopefully!) shouldn't require much, if any,
debate. I don't think we need a new class for this; I think enhancing
enumerate to be reversible when its underlying iterator is reversible
makes good sense.
But if you can show some concrete use-cases, especially one or two from
the standard library, that would help your case. Or some other languages
which offer this functionality as standard.
On the other hand, I think that there is a fairly lightweight work
around. Define a helper function:
def countdown(n):
    while True:
        yield n
        n -= 1

then call it like this:

# reversed(enumerate(seq))
zip(countdown(len(seq)-1), reversed(seq))
So it isn't terribly hard to work around this. But I agree that it would
be nice if enumerate encapsulated this for the caller.
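Concretely, the workaround gives the expected pairs (a self-contained demo of the helper above):

```python
def countdown(n):
    # yields n, n-1, n-2, ... forever; zip() stops at the shorter iterator
    while True:
        yield n
        n -= 1

seq = 'abc'
pairs = list(zip(countdown(len(seq) - 1), reversed(seq)))
print(pairs)  # [(2, 'c'), (1, 'b'), (0, 'a')]
```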
One potentially serious question: what should `enumerate.__reversed__`
do when given a starting value?
reversed(enumerate('abc', 1))
Should that yield...?
# treat the start value as a start value
(1, 'c'), (0, 'b'), (-1, 'a')
# treat the start value as an end value
(3, 'c'), (2, 'b'), (1, 'a')
Something else?
My preference would be to treat the starting value as an ending value.
Steven
On Wed, Apr 01, 2020 at 08:45:34PM +0200, Ilya Kamenshchikov wrote:
> Hi,
>
> I needed reversed(enumerate(x: list)) in my code, and have discovered that
> it wouldn't work. This is disappointing because the operation is well defined.
> It is also well defined for str type, range, and - in principle, but not
> yet in practice - on dictionary iterators - keys(), values(), items() as
> dictionaries are ordered now.
> It would also be well defined on any user type implementing __iter__,
> __len__, __reversed__ - think numpy arrays, some pandas dataframes, tensors.
>
> That's plenty of usecases, therefore I guess it would be quite useful to
> avoid hacky / inefficient solutions like described here:
> https://code.activestate.com/lists/python-list/706205/.
>
> If deemed useful, I would be interested in implementing this, maybe
> together with __reversed__ on dict keys, values, items.
>
> Best Regards,
> --
> Ilya Kamen
>
> -----------
> p.s.
>
> *Sketch* of what I am proposing:
>
> class reversible_enumerate:
>
>     def __init__(self, iterable):
>         self.iterable = iterable
>         self.ctr = 0
>
>     def __iter__(self):
>         for e in self.iterable:
>             yield self.ctr, e
>             self.ctr += 1
>
>     def __reversed__(self):
>         try:
>             ri = reversed(self.iterable)
>         except Exception as e:
>             raise Exception(
>                 "enumerate can only be reversed if iterable to "
>                 "enumerate can be reversed and has defined length."
>             ) from e
>
>         try:
>             l = len(self.iterable)
>         except Exception as e:
>             raise Exception(
>                 "enumerate can only be reversed if iterable to "
>                 "enumerate can be reversed and has defined length."
>             ) from e
>
>         indexes = range(l-1, -1, -1)
>         for i, e in zip(indexes, ri):
>             yield i, e
>
> for i, c in reversed(reversible_enumerate("Hello World")):
>     print(i, c)
>
> for i, c in reversed(reversible_enumerate([11, 22, 33])):
>     print(i, c)
> _______________________________________________
> Python-Dev mailing list -- python-dev(a)python.org
> To unsubscribe send an email to python-dev-leave(a)python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/NDDKDUD…
> Code of Conduct: http://python.org/psf/codeofconduct/
Consider the following example:
import unittest

def foo():
    for x in [1, 2, 'oops', 4]:
        print(x + 100)

class TestFoo(unittest.TestCase):
    def test_foo(self):
        self.assertIs(foo(), None)

if __name__ == '__main__':
    unittest.main()
If we were calling `foo` directly we could enter post-mortem debugging via `python -m pdb test.py`.
However since `foo` is wrapped in a test case, `unittest` eats the exception and thus prevents post-mortem debugging. `--failfast` doesn't help, the exception is still swallowed.
Since I am not aware of a solution that enables post-mortem debugging in such a case (without modifying the test scripts; please correct me if one exists), I propose adding a command-line option to `unittest` for [running test cases in debug mode](https://docs.python.org/3/library/unittest.html#unittest.TestCase.deb…) so that post-mortem debugging can be used.
P.S.: There is also [this SO question](https://stackoverflow.com/q/4398967/3767239) on a similar topic.
There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract} x {'normal'|static|class} x {method|property|non-callable attribute}.
concreteness | implicit first arg | type                   | name                                          | comments
{unadorned}  | {unadorned}        | method                 | def foo():                                    | exists now
{unadorned}  | {unadorned}        | property               | @property                                     | exists now
{unadorned}  | {unadorned}        | non-callable attribute | x = 2                                         | exists now
{unadorned}  | static             | method                 | @staticmethod                                 | exists now
{unadorned}  | static             | property               | @staticproperty                               | proposing
{unadorned}  | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
{unadorned}  | class              | method                 | @classmethod                                  | exists now
{unadorned}  | class              | property               | @classproperty or @classmethod;@property      | proposing
{unadorned}  | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | {unadorned}        | method                 | @abc.abstractmethod                           | exists now
abc.abstract | {unadorned}        | property               | @abc.abstractproperty                         | exists now
abc.abstract | {unadorned}        | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing
abc.abstract | static             | method                 | @abc.abstractstaticmethod                     | exists now
abc.abstract | static             | property               | @abc.abstractstaticproperty                   | proposing
abc.abstract | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | class              | method                 | @abc.abstractclassmethod                      | exists now
abc.abstract | class              | property               | @abc.abstractclassproperty                    | proposing
abc.abstract | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
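As a sketch, @classproperty could be implemented as a simple descriptor along these lines (the name and semantics are this proposal's, not the stdlib's; chaining with abc would need more care):

```python
# Minimal sketch of the proposed @classproperty as a descriptor:
# __get__ ignores the instance and passes the class to the wrapped function.
class classproperty:
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        return self.fget(objtype)

class Config:
    _defaults = {'retries': 3}

    @classproperty
    def retries(cls):
        return cls._defaults['retries']

print(Config.retries)    # 3 -- accessible on the class, no instance needed
print(Config().retries)  # 3 -- also works on instances
```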
--rich
Hi,
In Python, there are multiple [compound
statements](https://docs.python.org/3/reference/compound_stmts.html)
with the `else` keyword.
For example:
```
for x in iterable:
if x == sentinel:
break
else:
print("Sentinel not found.")
```
or:
```
try:
do_something_sensitive()
except MyError:
print("Oops!")
else:
print("We're all safe.")
```
In my situation, I would like to mix the `with` statement with `else`.
In this case I would like that if no exception is raised within the
`with` to run the `else` part.
For example:
```
with my_context():
do_something_sensitive()
else:
print("We're all safe.")
```
Now imagine that in my `try .. except` block I have some heavy setup
to do before `do_something_sensitive()` and some heavy cleanup when
the exception occurs.
I'd like my context manager to do the preparation work, execute the
body, and cleanup. Or execute my else block only if there is no
exception.
Is there already a way to accomplish this in Python or can this be a
nice to have?
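For what it's worth, the closest approximation I can think of today is passing the "else" action into the context manager itself (a sketch; my_context, on_success, and MyError are placeholder names for this example):

```python
from contextlib import contextmanager

class MyError(Exception):
    pass

# Sketch of a workaround: the context manager runs the "else" action only
# when the body finishes without raising.
@contextmanager
def my_context(on_success):
    # heavy setup would go here
    try:
        yield
    except MyError:
        print("Oops!")  # heavy cleanup would go here
    else:
        on_success()

with my_context(lambda: print("We're all safe.")):
    pass  # do_something_sensitive()
```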
Regards,
Jimmy
As an aside, I have a perfect example to back up what Paul is saying below. I work for a large corporation where developers are permitted to install Python modules on their development machines (subject to some licence restrictions), but the proxy settings required to get to PyPI vary from user to user, site to site, day to day (sometimes hour to hour), and connection type to connection type. For example, if you are connected via WiFi you need a different proxy than on a wired connection, and on a VPN (we have more than one in use) different again.
To help with this I have put together a script that will try pip; if that fails, it will check the current location for pac.pac, fetch and parse it for proxies, correct a common incompatibility, and then try each discovered proxy; if still none work, it falls back to a list of common proxies. Once it finds a working setting it advises the user what to use in the current situation.
Obviously, since the script is intended to work out how to get to PyPI, I needed to stick 100% to the standard libraries.
-----Original Message-----
From: Paul Moore <p.f.moore(a)gmail.com>
Sent: 06 April 2020 09:11
To: Stephen J. Turnbull <turnbull.stephen.fw(a)u.tsukuba.ac.jp>
Cc: Pete Wicken <petewicken(a)gmail.com>; Python-Ideas <python-ideas(a)python.org>
Subject: [Python-ideas] Re: macaddress or networkaddress module
On Mon, 6 Apr 2020 at 04:14, Stephen J. Turnbull <turnbull.stephen.fw(a)u.tsukuba.ac.jp> wrote:
> That's OK for developers on their "own" machines, but I think there
> are still a lot of folks who need approval from corporate (though I
> haven't heard from Paul Moore on that theme in quite a while, maybe
> it's gotten better?)
Howdy :-)
It's actually got a bit worse, to the point where I don't bother trying to fight the system as much any more, so things have gone quieter because of that :-) But yes, I still believe that there are important use cases for code that only uses the stdlib, and "not being able to get to the internet" is a valid use case that we shouldn't just dismiss.
On the other hand, with regard to the comment you were replying to:
> Up to around a decade ago, installing third-party libraries was a huge
> mess, but nowadays, PyPI works, Python comes with pip pre-installed,
> and telling developers they need internet access when they first start
> a new project isn't considered onerous.
I think all of that is true. What I think is more important, though, is around things that aren't so much "projects" as "scripts", and that *distributing* such scripts that depend on 3rd party libraries is still problematic. Here, I'm not talking about making full-fledged packages, or py2exe style fully bundled apps, but "here's this monitoring script I wrote, let's ship it across 10 production systems and why don't you try it out on your dev boxes?" There's a *significant* step change in complexity between doing that if the script depends on just the stdlib, versus if it depends on a 3rd party module. And we don't have a really good answer to that yet (zipapps, and tools like shiv that wrap zipapps, are pretty good in practice, but they are not well known or commonly used).
Paul
_______________________________________________
Python-ideas mailing list -- python-ideas(a)python.org To unsubscribe send an email to python-ideas-leave(a)python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/5BN5D…
Code of Conduct: http://python.org/psf/codeofconduct/
Hi,
Is there any interest in (or has anyone done this before) building the Python
interpreter using Docker?
The basic idea is building the toolchain (gcc) and, on top of that, the
Python interpreter. On Mac/Linux it can build natively, but Docker can be
used to target Linux from Mac/Windows.
Thanks
TL;DR: should we make `del x` an expression that returns the value of `x`?
## Motivation
I noticed yesterday that `itertools.combinations` has an optimization for when the returned tuple has no remaining ref-counts, and reuses it - namely, the following code:
>>> for v in itertools.combinations([1, 2, 3], 1):
...     print(id(v))
...     del v  # without this, the optimization can't take place
...
2500926199840
2500926199840
2500926199840
will print the same id three times. However, when used in a list comprehension, the optimization can't step in, and I have no way of using the `del` keyword:
>>> [id(v) for v in itertools.combinations([1, 2, 3], 1)]
[2500926200992, 2500926199072, 2500926200992]
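As a partial workaround today, a generator function can use `del` where a comprehension cannot (a sketch; whether the ids actually repeat is a CPython implementation detail, so I don't rely on it below):

```python
import itertools

# Inside a generator function `del` is legal, so the tuple's reference can
# be dropped before the next iteration, letting CPython reuse it.
def ids():
    for v in itertools.combinations([1, 2, 3], 1):
        i = id(v)
        del v  # release the reference so the result tuple may be reused
        yield i

print(list(ids()))  # on CPython, typically the same id three times
```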
`itertools.combinations` is not the only place to make this optimization - parts of numpy use it too, allowing
a = (b * c) + d
to elide the temporary `b*c`. This elision can't happen with the spelling
bc = b * c
a = bc + d
My suggestion would be to make `del x` an expression, with semantics "unbind the name `x`, and evaluate to its value".
This would allow:
>>> [id(del v) for v in itertools.combinations([1, 2, 3], 1)]
[2500926200992, 2500926200992, 2500926200992]
and
bc = b * c
a = (del bc) + d # in C++, this would be `std::move(bc) + d`
## Why a keyword
Unbinding a name is not something that a user would expect a function to do. Functions operate on values, not names, so `move(bc)` would be surprising.
## Why `del`
`del` already has "unbind this name" semantics, and allowing it to be used as an expression would break no existing code.
`move x` might be more understandable, but adding a new keyword is expensive.
## Optional extension
For consistency, `x = (del foo.attr)` and `x = (del foo[i])` could also become legal expressions, and `__delete__`, `__delattr__`, and `__delitem__` would now have return values. Existing types would be free to continue to return `None`.