Consider the following example:
import unittest

def foo():
    for x in [1, 2, 'oops', 4]:
        print(x + 100)

class TestFoo(unittest.TestCase):
    def test_foo(self):
        self.assertIs(foo(), None)

if __name__ == '__main__':
    unittest.main()
If we were calling `foo` directly, we could enter post-mortem debugging via `python -m pdb test.py`.
However, since `foo` is wrapped in a test case, `unittest` eats the exception and thus prevents post-mortem debugging. `--failfast` doesn't help; the exception is still swallowed.
Since I am not aware of a solution that enables post-mortem debugging in such a case (without modifying the test scripts; please correct me if one exists), I propose adding a command-line option to `unittest` for [running test cases in debug mode](https://docs.python.org/3/library/unittest.html#unittest.TestCase.deb… so that post-mortem debugging can be used.
P.S.: There is also [this SO question](https://stackoverflow.com/q/4398967/3767239) on a similar topic.
There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:

class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...

Essentially the permutations are, I think:
{'unadorned'|abc.abstract} {'normal'|static|class} {method|property|non-callable attribute}.
| concreteness | implicit first arg | type | name | comments |
| --- | --- | --- | --- | --- |
| {unadorned} | {unadorned} | method | `def foo():` | exists now |
| {unadorned} | {unadorned} | property | `@property` | exists now |
| {unadorned} | {unadorned} | non-callable attribute | `x = 2` | exists now |
| {unadorned} | static | method | `@staticmethod` | exists now |
| {unadorned} | static | property | `@staticproperty` | proposing |
| {unadorned} | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
| {unadorned} | class | method | `@classmethod` | exists now |
| {unadorned} | class | property | `@classproperty` or `@classmethod;@property` | proposing |
| {unadorned} | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
| abc.abstract | {unadorned} | method | `@abc.abstractmethod` | exists now |
| abc.abstract | {unadorned} | property | `@abc.abstractproperty` | exists now |
| abc.abstract | {unadorned} | non-callable attribute | `@abc.abstractattribute` or `@abc.abstract;@attribute` | proposing |
| abc.abstract | static | method | `@abc.abstractstaticmethod` | exists now |
| abc.abstract | static | property | `@abc.abstractstaticproperty` | proposing |
| abc.abstract | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
| abc.abstract | class | method | `@abc.abstractclassmethod` | exists now |
| abc.abstract | class | property | `@abc.abstractclassproperty` | proposing |
| abc.abstract | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary |
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
--rich
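For concreteness, here is a minimal sketch of what `@classproperty` could look like as a plain descriptor (the name and semantics are the proposal's; this is not an existing stdlib API):

```python
class classproperty:
    """Read-only property whose getter receives the class, not an instance."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, owner=None):
        if owner is None:
            owner = type(obj)
        return self.fget(owner)

class Circle:
    _tau = 6.283185307179586

    @classproperty
    def tau(cls):
        return cls._tau

# Works on the class itself -- no throw-away instance required.
print(Circle.tau)
```

The abstract variants would additionally set `__isabstractmethod__` so that `abc.ABCMeta` refuses to instantiate classes that do not override them.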
What are your thoughts about adding a new imath module for integer
mathematics? It could contain the following functions:
* factorial(n)
It is just moved from the math module, but non-integer types are rejected.
Currently math.factorial() also accepts integral floats like 3.0. It
looks to me like the rationale was that at the time math.factorial()
was added, all functions in the math module worked with floats. But now
we can revise this decision.
* gcd(n, m)
Is just moved from the math module.
* as_integer_ratio(x)
Equivalent to:

    def as_integer_ratio(x):
        if hasattr(x, 'as_integer_ratio'):
            return x.as_integer_ratio()
        else:
            return (x.numerator, x.denominator)
* binom(n, k)
Returns factorial(n) // (factorial(k) * factorial(n-k)), but uses a more
efficient algorithm.
* sqrt(n)
Returns the largest integer r such that r**2 <= n and (r+1)**2 > n.
* isprime(n)
Tests if n is a prime number.
* primes()
Returns an iterator of prime numbers: 2, 3, 5, 7, 11, 13,...
Are there more ideas?
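For the integer square root in particular, a simple Newton iteration on integers is exact and avoids floats entirely; a sketch (the function name follows the proposal above):

```python
def sqrt(n):
    """Largest integer r such that r**2 <= n, via Newton's method."""
    if n < 0:
        raise ValueError("integer square root of a negative number")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)  # guaranteed overestimate of the root
    while True:
        y = (x + n // x) // 2
        if y >= x:       # iteration stopped decreasing: x is the floor root
            return x
        x = y
```

Because it works on arbitrary-precision ints, it stays exact where `math.sqrt` would lose precision (e.g. around 10**18).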
Following the discussion here (https://link.getmailspring.com/link/7D84D131-65B6-4EF7-9C43-51957F9DFAA9@ge…) I propose to add 3 new string methods: str.trim, str.ltrim, str.rtrim
Another option would be to change the API of the str.split method to work correctly with sequences.
In [1]: def ltrim(s, seq):
   ...:     return s[len(seq):] if s.startswith(seq) else s
   ...:

In [2]: def rtrim(s, seq):
   ...:     return s[:-len(seq)] if s.endswith(seq) else s
   ...:

In [3]: def trim(s, seq):
   ...:     return ltrim(rtrim(s, seq), seq)
   ...:
In [4]: s = 'mailto:maria@gmail.com'
In [5]: ltrim(s, 'mailto:')
Out[5]: 'maria@gmail.com'
In [6]: rtrim(s, 'com')
Out[6]: 'mailto:maria@gmail.'
In [7]: trim(s, 'm')
Out[7]: 'ailto:maria@gmail.co'
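One edge case a real implementation should handle: with an empty `seq`, `s[:-len(seq)]` is `s[:-0]`, which is the empty string rather than `s`. A guarded sketch:

```python
def ltrim(s, prefix):
    return s[len(prefix):] if s.startswith(prefix) else s

def rtrim(s, suffix):
    # Guard against suffix == '': s[:-0] would wrongly return ''.
    return s[:-len(suffix)] if suffix and s.endswith(suffix) else s

def trim(s, seq):
    return ltrim(rtrim(s, seq), seq)
```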
I didn't think of this when we were discussing 448. I ran into this today,
so I agree with you that it would be nice to have this.
Best,
Neil
On Monday, December 4, 2017 at 1:02:09 AM UTC-5, Eric Wieser wrote:
>
> Hi,
>
> I've been thinking about the * unpacking operator while writing some numpy
> code. PEP 448 allows the following:
>
> values = 1, *some_tuple, 2
> object[(1, *some_tuple, 2)]
>
> It seems reasonable to me that it should be extended to allow
>
> item = object[1, *some_tuple, 2]
> item = object[1, *some_tuple, :]
>
> Was this overlooked in the original proposal, or deliberately rejected?
>
> Eric
>
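To make the asymmetry concrete: wrapping the subscript in an explicit tuple display already works, while the bare starred form was a SyntaxError in parsers of that era (the dict below is just a stand-in for any subscriptable object):

```python
some_tuple = (3, 4)

# Unpacking in a tuple display is fine (PEP 448)...
values = 1, *some_tuple, 2

# ...and that tuple can be used as a subscript if parenthesized.
d = {(1, 3, 4, 2): 'hit'}
item = d[(1, *some_tuple, 2)]

# item = d[1, *some_tuple, 2]   # SyntaxError in Python 3.6-era parsers
```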
At the moment, the array module of the standard library allows one to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items), especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB of memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow one
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array creation in such a way that you
could pass an iterable (as now) as the second argument, but if you pass a
single integer value, it should be treated as the number of items to
allocate.
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    with n entries, initialized to the given value (default 0).
    """
    a = array.array(typecode, [value] * bsize)
    x = array.array(typecode)
    r = n
    while r >= bsize:
        x.extend(a)
        r -= bsize
    x.extend([value] * r)
    return x
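For what it's worth, a considerably faster workaround than the chunked `extend()` already exists: sequence repetition on a one-element array allocates and fills the whole buffer at the C level. A sketch for comparison (this is not the proposed constructor change, just a baseline):

```python
import array

def filled_array(typecode, n, value=0):
    # One C-level allocation and fill; no Python-level chunking loop.
    return array.array(typecode, [value]) * n

a = filled_array('l', 10**6)
```

This still briefly materializes a second buffer during the multiply, so a size argument on the constructor would remain the cleaner solution for 48 GB arrays.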
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
from dataclasses import dataclass, field

@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int
this results in the error:
      5 @dataclass
----> 6 class Bar(Foo):
      7     other_field: int
      8

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in dataclass(_cls, init, repr, eq, order, hash, frozen)
    751
    752     # We're called as @dataclass, with a class.
--> 753     return wrap(_cls)
    754
    755

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in wrap(cls)
    743
    744     def wrap(cls):
--> 745         return _process_class(cls, repr, eq, order, hash, init, frozen)
    746
    747     # See if we're being called as @dataclass or @dataclass().

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _process_class(cls, repr, eq, order, hash, init, frozen)
    675                               # in __init__.  Use "self" if possible.
    676                               '__dataclass_self__' if 'self' in fields
--> 677                               else 'self',
    678                           ))
    679     if repr:

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _init_fn(fields, frozen, has_post_init, self_name)
    422             seen_default = True
    423         elif seen_default:
--> 424             raise TypeError(f'non-default argument {f.name!r} '
    425                             'follows default argument')
    426

TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword only arguments allow an entirely reasonable solution to
this problem:
def __init__(self, *, some_default: dict = <something>, other_field: int):
And have the added benefit of making the fields in the __init__ call
entirely explicit.
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword only arguments:
@dataclass(keyword_only=True)
class Bar(Foo):
    other_field: int
--George Leslie-Waksman
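To illustrate the proposed semantics, here is a hand-written equivalent of the `__init__` that `@dataclass(keyword_only=True)` would generate (`keyword_only` is the proposed flag, not an existing one; the placeholder default on `other_field` exists only to satisfy the current ordering rule):

```python
from dataclasses import dataclass, field

@dataclass(init=False)
class Bar:
    some_default: dict = field(default_factory=dict)
    other_field: int = 0  # placeholder; the real default lives in __init__

    # What @dataclass(keyword_only=True) would generate under the proposal:
    # all fields become keyword-only, so ordering no longer matters.
    def __init__(self, *, some_default=None, other_field):
        self.some_default = {} if some_default is None else some_default
        self.other_field = other_field

b = Bar(other_field=3)
```

Because every field is keyword-only, a default-bearing field inherited from a base class can freely precede a required field in a subclass.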
Pickling uses an extensible protocol that lets any class determine how its instances can be deconstructed and reconstructed. Both `pickle` and `copy` use this protocol, but it could be useful more generally. Unfortunately, to use it more generally requires relying on undocumented details. I think we should expose a couple of helpers to fix that:
# Return the same (shallow) reduction tuple that pickle.py, copy.py,
# and _pickle.c would use
pickle.reduce(obj) -> (callable, args[, state[, litems[, ditem[, statefunc]]]])

# Return a callable and arguments to construct a (shallow) equivalent object
# Raise a TypeError when that isn't possible
pickle.deconstruct(obj) -> callable, args, kw
So, why do you want these?
There are many cases where you want to "deconstruct" an object if possible. Pattern matching depends on being able to deconstruct objects like this. Auto-generating a `__repr__` as suggested in Chris's thread. Quick&dirty REPL stuff, and deeper reflection stuff using `inspect.Signature` and friends.
Of course not every type tells `pickle` what to do in an appropriate way that we can use, but a pretty broad range of types do, including (I think; I haven't double-checked all of them) `@dataclass`, `namedtuple`, `@attr.s`, many builtin and extension types, almost all reasonable types that use `copyreg`, and any class that pickles via the simplest customization hook `__getnewargs[_ex]__`. That's more than enough to be useful. And, just as important, it won't (except in intentionally pathological cases) give us a false positive, where a type is correctly pickleable and we think we can deconstruct it but the deconstruction is wrong. (For some uses, you are going to want to fall back to heuristics that are often right but sometimes misleadingly wrong, but I don't think the `pickle` module should offer anything like that. Maybe `inspect` should.)
The way to get the necessary information isn't fully documented, and neither is the way to interpret it. And I don't think it _should_ be documented, because it changes every so often, and for good reasons; we don't want anyone writing third-party code that relies on those details. Plus, a different Python implementation might conceivably do it differently. Public helpers exposed from `pickle` itself won't have those problems.
Here's a first take at the code.
import copyreg
import pickle

def reduce(obj, proto=pickle.DEFAULT_PROTOCOL):
    """reduce(obj) -> (callable, args[, state[, litems[, ditem[, statefunc]]]])

    Return the same reduction tuple that the pickle and copy modules use.
    """
    cls = type(obj)
    if reductor := copyreg.dispatch_table.get(cls):
        return reductor(obj)
    # Note that this is not a special method call (not looked up on the type)
    if reductor := getattr(obj, "__reduce_ex__"):
        return reductor(proto)
    if reductor := getattr(obj, "__reduce__"):
        return reductor()
    raise TypeError(f"{cls.__name__} objects are not reducible")
def deconstruct(obj):
    """deconstruct(obj) -> callable, args, kw

    callable(*args, **kw) will construct an equivalent object.
    """
    reduction = reduce(obj)
    # If any of the optional members are included, pickle/copy has to
    # modify the object after construction, so there is no useful single
    # call we can deconstruct to.
    if any(reduction[2:]):
        raise TypeError(f"{type(obj).__name__} objects are not deconstructible")
    func, args, *_ = reduction
    # Most types (including @dataclass, namedtuple, and many builtins)
    # use copyreg.__newobj__ as the constructor func. The args tuple is
    # the type (or, when appropriate, some other registered constructor)
    # followed by the actual args. However, any function with the same
    # name will be treated the same way (because under the covers, this
    # is optimized to a special opcode).
    if func.__name__ == "__newobj__":
        return args[0], args[1:], {}
    # Types that implement __getnewargs_ex__ mainly use
    # copyreg.__newobj_ex__ as the constructor func. The args tuple
    # holds the type, *args tuple, and **kwargs dict. Again, this is
    # special-cased by name.
    if func.__name__ == "__newobj_ex__":
        return args
    # If any other special copyreg functions are added in the future,
    # this code won't know how to handle them, so bail.
    if func.__module__ == 'copyreg':
        raise TypeError(f"{type(obj).__name__} objects are not deconstructible")
    # Otherwise, the type implements a custom __reduce__ or __reduce_ex__,
    # and whatever it specifies as the constructor is the real constructor.
    return func, args, {}
Actually looking at that code, I think it makes a better argument for why we don't want to make all the internal details public. :)
Here are some quick (completely untested) examples of other things we could build on it.
# in inspect
def deconstruct(obj):
    """deconstruct(obj) -> callable, bound_args

    Calling the callable on the bound_args would construct an equivalent object.
    """
    func, args, kw = pickle.deconstruct(obj)
    sig = inspect.signature(func)
    return func, sig.bind(*args, **kw)
# in reprlib, for your __repr__ to delegate to
def auto_repr(obj):
    func, bound_args = inspect.deconstruct(obj)
    args = itertools.chain(
        map(repr, bound_args.args),
        (f"{key!r}={value!r}" for key, value in bound_args.kwargs.items()))
    return f"{func.__name__}({', '.join(args)})"
# or maybe as a class decorator
def auto_repr(cls):
    def __repr__(self):
        func, bound_args = inspect.deconstruct(self)
        args = itertools.chain(
            map(repr, bound_args.args),
            (f"{key!r}={value!r}" for key, value in bound_args.kwargs.items()))
        return f"{func.__name__}({', '.join(args)})"
    cls.__repr__ = __repr__
    return cls
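As a quick sanity check of the premise, the existing (undocumented) protocol really does round-trip simple types; for instance, a namedtuple reduces to a constructor-plus-args pair we can call directly:

```python
import collections

Point = collections.namedtuple('Point', 'x y')
p = Point(1, 2)

# With protocol 2+, a tuple subclass with __getnewargs__ reduces to
# (copyreg.__newobj__, (Point, 1, 2), ...); calling the constructor on
# the args rebuilds an equivalent object.
func, args, *rest = p.__reduce_ex__(2)
q = func(*args)
```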
I have worked with both C# and Python for a while now and there is one feature of C# I'm missing in the Python language.
This feature is the "nameof" operator. (https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators…).
The place I use this the most in C# is in `ToString()` methods or logging messages.
This makes sure the developer cannot forget to update the name of a member.
As an example I created this `Person` class.
```
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __repr__(self):
        return f"Person(name: {self.name}, age: {self.age})"
```
With the `nameof` operator this would look like the following:
```
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __repr__(self):
        return f"{nameof(Person)}({nameof(self.name)}: {self.name}, {nameof(self.age)}: {self.age})"
```
What do you think about this?
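For comparison, the closest current approximation uses `type(self).__name__`, which at least tracks class renames automatically; the attribute names remain unchecked strings, which is exactly the gap `nameof` would close. A sketch:

```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __repr__(self):
        # type(self).__name__ follows a rename of the class for free;
        # the attribute names 'name'/'age' are still plain strings that
        # nothing verifies at compile time.
        return f"{type(self).__name__}(name: {self.name}, age: {self.age})"

print(repr(Person('Ada', 36)))
```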
Hi,
I think it would be very helpful to have an additional argument (`cancel`, for example) added to Executor.shutdown that cancels all pending futures submitted to the executor. The context manager would then gain the ability to abort all futures in case of an exception. Additionally, this would implement the missing counterpart of the multiprocessing module's terminate(); currently we only have the equivalent of close().
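The proposed behaviour can be emulated today by cancelling pending futures by hand before shutting down (`cancel` above names the proposed argument, not an existing one):

```python
import concurrent.futures
import time

ex = concurrent.futures.ThreadPoolExecutor(max_workers=1)
futures = [ex.submit(time.sleep, 0.2) for _ in range(4)]

# Emulating the proposed shutdown(cancel=True): Future.cancel() succeeds
# only for futures that have not started running, i.e. the pending backlog.
cancelled = [f for f in futures if f.cancel()]
ex.shutdown(wait=True)
```

The executor would do the same internally, but atomically with the shutdown, avoiding the race between new submissions and the hand-rolled cancel loop.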