Frequently, while globbing, one needs to work with multiple extensions. I’d
like to propose that fnmatch.filter handle a tuple of patterns (while
preserving the single-str argument functionality, à la str.endswith), as a
first step toward glob.i?glob accepting multiple patterns as well.
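For illustration, here is a usage sketch of the proposed behavior (the tuple form is hypothetical; today's fnmatch.filter accepts only a single pattern):

```python
import fnmatch

names = ['a.py', 'b.txt', 'c.pyc', 'd.rst']

# Today: one pattern per call, results merged by hand.
matches = fnmatch.filter(names, '*.py') + fnmatch.filter(names, '*.rst')

# Proposed (hypothetical): a tuple of patterns, mirroring str.endswith.
# matches = fnmatch.filter(names, ('*.py', '*.rst'))
```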
Here is the implementation I came up with:
https://github.com/python/cpython/compare/master...andresdelfino:fnmatch-mu…
If this is deemed reasonable, I’ll write tests and documentation updates.
Any opinion?
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int
This results in the error:

      5 @dataclass
----> 6 class Bar(Foo):
      7     other_field: int
      8

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in dataclass(_cls, init, repr, eq, order, hash, frozen)
    751
    752     # We're called as @dataclass, with a class.
--> 753     return wrap(_cls)
    754
    755

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in wrap(cls)
    743
    744     def wrap(cls):
--> 745         return _process_class(cls, repr, eq, order, hash, init, frozen)
    746
    747     # See if we're being called as @dataclass or @dataclass().

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _process_class(cls, repr, eq, order, hash, init, frozen)
    675                                   # in __init__. Use "self" if possible.
    676                                   '__dataclass_self__' if 'self' in fields
--> 677                                   else 'self',
    678         ))
    679     if repr:

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _init_fn(fields, frozen, has_post_init, self_name)
    422             seen_default = True
    423         elif seen_default:
--> 424             raise TypeError(f'non-default argument {f.name!r} '
    425                             'follows default argument')
    426

TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword-only arguments allow an entirely reasonable solution to
this problem:
def __init__(self, *, some_default: dict = <something>, other_field: int):
And they have the added benefit of making the fields in the __init__ call
entirely explicit.
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword-only arguments:
@dataclass(keyword_only=True)
class Bar(Foo):
    other_field: int
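In the meantime, a minimal hand-written emulation of the behavior the flag would produce (keyword_only itself is hypothetical; the bare * in the signature is what forces explicit field names at the call site):

```python
from dataclasses import dataclass, field

@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass(init=False)
class Bar(Foo):
    other_field: int = 0  # default only to satisfy the current field-ordering rules

    def __init__(self, *, some_default: dict = None, other_field: int) -> None:
        self.some_default = {} if some_default is None else some_default
        self.other_field = other_field

bar = Bar(other_field=1)   # fields must be named explicitly
# Bar(1)                   # TypeError: positional arguments are not accepted
```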
--George Leslie-Waksman
Consider the following example:
import unittest

def foo():
    for x in [1, 2, 'oops', 4]:
        print(x + 100)

class TestFoo(unittest.TestCase):
    def test_foo(self):
        self.assertIs(foo(), None)

if __name__ == '__main__':
    unittest.main()
If we were calling `foo` directly we could enter post-mortem debugging via `python -m pdb test.py`.
However, since `foo` is wrapped in a test case, `unittest` eats the exception and thus prevents post-mortem debugging. `--failfast` doesn't help; the exception is still swallowed.
Since I am not aware of a solution that enables post-mortem debugging in such a case (without modifying the test scripts, please correct me if one exists), I propose adding a command-line option to `unittest` for [running test cases in debug mode](https://docs.python.org/3/library/unittest.html#unittest.TestCase.deb… so that post-mortem debugging can be used.
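For reference, a minimal sketch of the kind of workaround currently needed for the TestFoo example above: it drives TestCase.debug() by hand so the exception propagates, then drops into pdb (the loader call and loop are just one way to wire this up):

```python
import pdb
import unittest

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFoo)
for test in suite:
    try:
        test.debug()       # run the test without a result object, so exceptions propagate
    except Exception:
        pdb.post_mortem()  # enter post-mortem debugging at the point of failure
```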
P.S.: There is also [this SO question](https://stackoverflow.com/q/4398967/3767239) on a similar topic.
There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract} x {'normal'|static|class} x {method|property|non-callable attribute}.
| concreteness | implicit first arg | type | name | comments |
|---|---|---|---|---|
| {unadorned} | {unadorned} | method | def foo(): | exists now |
| {unadorned} | {unadorned} | property | @property | exists now |
| {unadorned} | {unadorned} | non-callable attribute | x = 2 | exists now |
| {unadorned} | static | method | @staticmethod | exists now |
| {unadorned} | static | property | @staticproperty | proposing |
| {unadorned} | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
| {unadorned} | class | method | @classmethod | exists now |
| {unadorned} | class | property | @classproperty or @classmethod;@property | proposing |
| {unadorned} | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
| abc.abstract | {unadorned} | method | @abc.abstractmethod | exists now |
| abc.abstract | {unadorned} | property | @abc.abstractproperty | exists now |
| abc.abstract | {unadorned} | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing |
| abc.abstract | static | method | @abc.abstractstaticmethod | exists now |
| abc.abstract | static | property | @abc.abstractstaticproperty | proposing |
| abc.abstract | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
| abc.abstract | class | method | @abc.abstractclassmethod | exists now |
| abc.abstract | class | property | @abc.abstractclassproperty | proposing |
| abc.abstract | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses.
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
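For what it's worth, a minimal sketch of how @classproperty could be written as a descriptor today (read-only, no abc integration; classproperty is not a stdlib name):

```python
class classproperty:
    """Read-only property whose getter receives the class rather than an instance."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, owner=None):
        # Invoked for both Foo.bar and Foo().bar; always hand the class to the getter.
        return self.fget(owner if owner is not None else type(obj))

class Foo:
    @classproperty
    def bar(cls):
        return cls.__name__.lower()

print(Foo.bar)    # 'foo' -- no throw-away instance needed
print(Foo().bar)  # 'foo'
```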
--rich
Another idea I've had that may be of use:
PYTHONLOGGING environment variable.
Setting PYTHONLOGGING to any log level or level name would initialize
logging.basicConfig() with the appropriate level.
Another option would be for -X dev, or a separate -X logging, to
initialize the basic config.
It would be useful mostly for debugging, instead of temporarily
modifying the code.
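Roughly the proposed behavior, as it could be emulated today near the top of a script or in a sitecustomize module (PYTHONLOGGING is the hypothetical variable name; the interpreter does not currently honor it):

```python
import logging
import os

level = os.environ.get("PYTHONLOGGING")
if level:
    # Accept either a numeric level ("10") or a level name ("DEBUG").
    logging.basicConfig(level=int(level) if level.isdigit() else level.upper())
```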
Kinda surprised it doesn't exist tbh.
Bar Harel
I'd like to propose an improvement to `concurrent.futures`. The library's
ThreadPoolExecutor and ProcessPoolExecutor are excellent tools, but there
is currently no mechanism for configuring which type of executor you want.
Also, there is no duck-typed class that behaves like an executor but does
its processing serially. Oftentimes a developer will want to run a task in
parallel, but depending on the environment they may want to disable
threading or process execution. To address this I use a utility called a
`SerialExecutor` which shares an API with
ThreadPoolExecutor/ProcessPoolExecutor but executes tasks sequentially
in the same Python thread:
```python
import concurrent.futures


class SerialFuture(concurrent.futures.Future):
    """
    Non-threading / multiprocessing version of future for drop-in
    compatibility with concurrent.futures.
    """
    def __init__(self, func, *args, **kw):
        super(SerialFuture, self).__init__()
        self.func = func
        self.args = args
        self.kw = kw
        # self._condition = FakeCondition()
        self._run_count = 0
        # fake being finished to cause __get_result to be called
        self._state = concurrent.futures._base.FINISHED

    def _run(self):
        result = self.func(*self.args, **self.kw)
        self.set_result(result)
        self._run_count += 1

    def set_result(self, result):
        """
        Overrides the implementation to revert to pre-Python-3.8 behavior
        """
        with self._condition:
            self._result = result
            self._state = concurrent.futures._base.FINISHED
            for waiter in self._waiters:
                waiter.add_result(self)
            self._condition.notify_all()
        self._invoke_callbacks()

    def _Future__get_result(self):
        # overrides the private __get_result method
        if not self._run_count:
            self._run()
        return self._result


class SerialExecutor(object):
    """
    Implements the concurrent.futures API around a single-threaded backend

    Example:
        >>> with SerialExecutor() as executor:
        >>>     futures = []
        >>>     for i in range(100):
        >>>         f = executor.submit(lambda x: x + 1, i)
        >>>         futures.append(f)
        >>>     for f in concurrent.futures.as_completed(futures):
        >>>         assert f.result() > 0
        >>>     for i, f in enumerate(futures):
        >>>         assert i + 1 == f.result()
    """
    def __enter__(self):
        return self

    def __exit__(self, ex_type, ex_value, tb):
        pass

    def submit(self, func, *args, **kw):
        return SerialFuture(func, *args, **kw)

    def shutdown(self):
        pass
```
In order to make it easy to choose the type of parallel (or serial) backend
with minimal code changes, I use the following "Executor" wrapper class
(although if this were integrated into concurrent.futures the name would
need to change to something better):
```python
class Executor(object):
    """
    Wrapper around a specific executor.

    Abstracts Serial, Thread, and Process Executor via arguments.

    Args:
        mode (str, default='thread'): either thread, serial, or process
        max_workers (int, default=0): number of workers. If 0, serial is forced.
    """
    def __init__(self, mode='thread', max_workers=0):
        from concurrent import futures
        if mode == 'serial' or max_workers == 0:
            backend = SerialExecutor()
        elif mode == 'thread':
            backend = futures.ThreadPoolExecutor(max_workers=max_workers)
        elif mode == 'process':
            backend = futures.ProcessPoolExecutor(max_workers=max_workers)
        else:
            raise KeyError(mode)
        self.backend = backend

    def __enter__(self):
        return self.backend.__enter__()

    def __exit__(self, ex_type, ex_value, tb):
        return self.backend.__exit__(ex_type, ex_value, tb)

    def submit(self, func, *args, **kw):
        return self.backend.submit(func, *args, **kw)

    def shutdown(self):
        return self.backend.shutdown()
```
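A short usage sketch of the wrapper above (the mode strings and the zero-worker serial fallback come straight from the class as written):

```python
# Switch backends with a single argument; the calling code stays the same.
with Executor(mode='thread', max_workers=4) as executor:
    futures = [executor.submit(pow, i, 2) for i in range(10)]
    results = [f.result() for f in futures]

# max_workers=0 (the default) falls back to the SerialExecutor, which is
# handy for debugging or for environments where threads/processes are unwanted.
with Executor(mode='serial') as executor:
    assert executor.submit(sum, [1, 2, 3]).result() == 6
```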
So in summary, I'm proposing to add a SerialExecutor and SerialFuture class
as an alternative to the ThreadPool / ProcessPool executors, and I'm also
advocating for some sort of "ParametrizedExecutor", where the user can
construct it in "thread", "process", or "serial" mode.
--
-Jon
What are your thoughts about adding a new imath module for integer
mathematics? It could contain the following functions:
* factorial(n)
Is just moved from the math module, but non-integer types are rejected.
Currently math.factorial() also accepts integral floats like 3.0. It
looks to me like the rationale was that, at the time math.factorial()
was added, all functions in the math module worked with floats. But now
we can revise this decision.
* gcd(n, m)
Is just moved from the math module.
* as_integer_ratio(x)
Equivalent to:
    def as_integer_ratio(x):
        if hasattr(x, 'as_integer_ratio'):
            return x.as_integer_ratio()
        else:
            return (x.numerator, x.denominator)
* binom(n, k)
Returns factorial(n) // (factorial(k) * factorial(n-k)), but uses a more
efficient algorithm.
* sqrt(n)
Returns the largest integer r such that r**2 <= n and (r+1)**2 > n (a
sketch follows this list).
* isprime(n)
Tests if n is a prime number.
* primes()
Returns an iterator of prime numbers: 2, 3, 5, 7, 11, 13,...
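For the sqrt(n) entry above, a minimal sketch of an integer square root using Newton's method in pure integer arithmetic (an illustration of the intended semantics, not a proposed implementation):

```python
def isqrt(n):
    """Largest integer r with r**2 <= n, for non-negative integer n."""
    if n < 0:
        raise ValueError("isqrt() argument must be non-negative")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)  # initial guess, always >= the true root
    while True:
        y = (x + n // x) // 2             # Newton step, integer division only
        if y >= x:
            return x
        x = y
```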
Are there more ideas?
Following the discussion here (https://link.getmailspring.com/link/7D84D131-65B6-4EF7-9C43-51957F9DFAA9@ge…) I propose to add 3 new string methods: str.trim, str.ltrim, str.rtrim
Another option would be to change the API of the str.split method to work correctly with sequences.
In [1]: def ltrim(s, seq):
...: return s[len(seq):] if s.startswith(seq) else s
...:
In [2]: def rtrim(s, seq):
...: return s[:-len(seq)] if s.endswith(seq) else s
...:
In [3]: def trim(s, seq):
...: return ltrim(rtrim(s, seq), seq)
...:
In [4]: s = 'mailto:maria@gmail.com'
In [5]: ltrim(s, 'mailto:')
Out[5]: 'maria@gmail.com'
In [6]: rtrim(s, 'com')
Out[6]: 'mailto:maria@gmail.'
In [7]: trim(s, 'm')
Out[7]: 'ailto:maria@gmail.co'
The idea is to add a new string prefix 's' for SQL strings. This prefix wouldn't do anything in Python itself, unlike b"" or f"" strings, but interactive Python shells like IPython or Jupyter could parse the characters that follow as SQL rather than Python, giving SQL syntax highlighting and autocompletion, and, if configured correctly, even column-name autocompletion. Unfortunately, when I try to type s"select * from table" I get a syntax error, so I think this needs to be implemented in the Python language itself rather than in a module.
I didn't think of this when we were discussing PEP 448. I ran into this today,
so I agree with you that it would be nice to have this.
Best,
Neil
On Monday, December 4, 2017 at 1:02:09 AM UTC-5, Eric Wieser wrote:
>
> Hi,
>
> I've been thinking about the * unpacking operator while writing some numpy
> code. PEP 448 allows the following:
>
> values = 1, *some_tuple, 2
> object[(1, *some_tuple, 2)]
>
> It seems reasonable to me that it should be extended to allow
>
> item = object[1, *some_tuple, 2]
> item = object[1, *some_tuple, :]
>
> Was this overlooked in the original proposal, or deliberately rejected?
>
> Eric
>