AFAICT, there is no way to filter warnings based on anything but their most immediate caller (please correct me if I'm wrong). This can lead to situations where it's impossible to write a warnings filter that's not brittle. For example:
Say I want to filter certain warnings raised by `sphinx.ext.autodoc` during my build process (e.g. to ignore deprecation warnings when autodoc imports deprecated modules that I need to keep documented). I can filter:
```python
warnings.filterwarnings('ignore', category=MatplotlibDeprecationWarning,
                        message=r'(\n|.)*module was deprecated.*')
```
but then when Sphinx builds my gallery of examples, this filter also hides the `DeprecationWarning`s that I *do* want to see!
What I'd like to do is write
```python
warnings.filterwarnings('ignore', category=MatplotlibDeprecationWarning,
                        module='sphinx.ext.autodoc',
                        message=r'(\n|.)*module was deprecated.*')
```
However, the warning is not actually triggered by the `autodoc` module itself; let's say, hypothetically, that it's triggered by `autodoc` using `imp` internally to load the module. Then I would instead have to do something like the following:
```python
warnings.filterwarnings('ignore', category=MatplotlibDeprecationWarning,
                        module='imp',
                        message=r'(\n|.)*module was deprecated.*')
```
But now suppose that a couple of years from now, Sphinx, being the well-maintained library that it is, moves to using `importlib.import_module`. Suddenly the warnings reappear and I have to edit this filter to fix CI:
```python
warnings.filterwarnings('ignore', category=MatplotlibDeprecationWarning,
                        module='importlib',
                        message=r'(\n|.)*module was deprecated.*')
```
This is at worst a minor inconvenience in this silly example, but more generally: if you want to use an upstream library internally in a way that raises a warning you want to ignore, it's currently impossible to filter that warning without relying on what *should be* implementation details of the library you're using.
One obvious solution, to my mind, is to allow `filterwarnings` to inspect the full call stack at the time a warning is raised, since in principle the stack contains all of the information needed for the user to decide whether or not to ignore a warning. Has this kind of thing been discussed before? I tried my best to search but couldn't find anything, and while I'm a decades-long *user* of Python, I don't tend to keep up with the development side of things.
An indexed heap is essential for efficient implementations of Dijkstra's algorithm, Prim's algorithm, and related variants. I have implemented a concise version at https://github.com/yutao-li/libheap , and I think it would be useful to have it in the stdlib.
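For readers unfamiliar with the data structure, here is a minimal sketch of an indexed min-heap supporting the `decrease_key` operation that plain `heapq` lacks (illustrative only; the linked libheap project's API may differ):

```python
class IndexedMinHeap:
    """Indexed min-heap: a position map makes decrease_key O(log n)."""

    def __init__(self):
        self._heap = []  # list of (key, item) pairs
        self._pos = {}   # item -> index in self._heap

    def push(self, item, key):
        self._heap.append((key, item))
        self._pos[item] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def pop(self):
        """Remove and return (item, key) with the smallest key."""
        key, item = self._heap[0]
        last = self._heap.pop()
        del self._pos[item]
        if self._heap:
            self._heap[0] = last
            self._pos[last[1]] = 0
            self._sift_down(0)
        return item, key

    def decrease_key(self, item, new_key):
        """Lower an item's key in place -- the operation Dijkstra needs."""
        i = self._pos[item]
        self._heap[i] = (new_key, item)
        self._sift_up(i)

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self._heap[i][0] < self._heap[parent][0]:
                self._swap(i, parent)
                i = parent
            else:
                break

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self._heap[child][0] < self._heap[smallest][0]:
                    smallest = child
            if smallest == i:
                break
            self._swap(i, smallest)
            i = smallest

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._pos[self._heap[i][1]] = i
        self._pos[self._heap[j][1]] = j
```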
TL;DR Changes may be coming to Enum str() and repr() -- your (informed) opinion requested. :-)
Python-Dev thread [0], summary below:
> As you may have noticed, Enums are starting to pop up all over the stdlib [1].
>
> To facilitate transforming existing module constants to IntEnums there is
> `IntEnum._convert_`. In Issue36548 [2] Serhiy modified the __repr__ of RegexFlag:
>
> >>> import re
> >>> re.I
> re.IGNORECASE
>
> I think that for converted constants that looks nice. For anyone who wants
> the actual value, it is of course available as the `.value` attribute:
>
> >>> re.I.value
> 2
>
> I'm looking for arguments relating to:
>
> - should _convert_ make the default __repr__ be module_name.member_name?
>
> - should _convert_ make the default __str__ be the same, or be the
> numeric value?
After discussions with Guido I made a (largely done) PR [3] which:
for stdlib global constants (such as those in `re`):
- repr() -> uses `module.member_name`
- str() -> uses `member_name`
for stdlib non-global constants, and enums in general:
- repr() -> uses `class.member_name`
- str() -> uses `member_name`
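The non-global behaviour can be sketched on an ordinary Enum by overriding `__repr__` and `__str__` (the actual PR changes the defaults inside the enum module itself; this is just an illustration of the intended output):

```python
from enum import Enum

class Color(Enum):
    RED = 1

    # Sketch of the proposed default for non-global enums:
    def __repr__(self):
        return f'{self.__class__.__name__}.{self.name}'  # class.member_name

    # Sketch of the proposed default str():
    def __str__(self):
        return self.name  # member_name only
```

Global constants converted with `_convert_` would use the module name instead of the class name, e.g. `re.IGNORECASE`.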
The questions I would most appreciate answers to at this point:
- do you think the change has merit?
- why /shouldn't/ we make the change?
As a reminder, the underlying issue is trying to keep at least the stdlib Enum
representations the same for those that are replacing preexisting constants.
--
~Ethan~
[0] https://mail.python.org/archives/list/python-dev@python.org/message/CHQW6TH…
[1] I'm working on making their creation faster. If anyone wanted to convert EnumMeta to C I would be grateful.
[2] https://bugs.python.org/issue36548
[3] https://github.com/python/cpython/pull/22392
The heapq module contains all the functions needed to implement a heap structure.
The main functions required to implement a heap data structure are:
function heappush - to push an element onto the heap
function heappop - to pop an element from the heap
For implementing a min-heap, both of these functions are present in the module:
heappush - to add an element to the min-heap
heappop - to pop the smallest element from the min-heap
For implementing a max-heap, only one of these two required functions is present:
_heappop_max - to pop the largest element from the max-heap
I suggest adding a max-heap version of heappush to the heapq module:
_heappush_max - to add an element to the max-heap
Comments are welcome.
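A sketch of what the proposed function could look like, mirroring the public `heappush` but sifting with the private `heapq._siftdown_max` helper (illustrative only; a real stdlib addition would live inside `heapq` itself, and the private helpers it uses here are implementation details):

```python
import heapq

def heappush_max(heap, item):
    """Push item onto a max-heap, keeping the max-heap invariant.

    Mirrors heapq.heappush, but uses the private max-heap sift helper
    that heapq already ships alongside _heappop_max."""
    heap.append(item)
    heapq._siftdown_max(heap, 0, len(heap) - 1)
```

With this in place, a max-heap can be driven symmetrically: `heappush_max` to insert, `heapq._heappop_max` to remove the largest element.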
I would like to propose dict (or mapping) unpacking assignment. This is
inspired in part by the Python-Ideas thread "f-strings as assignment
targets", dict unpacking in function calls, and iterable unpacking
assignment.
Comments welcome.
Background
----------
Iterable unpacking assignment:
values = (1, 2, 3)
a, b, c = values
is a very successful and powerful technique in Python. Likewise dict
unpacking in function calls and dict displays:
kwargs = {'a': 1, 'b': 2}
func(**kwargs)
d = {**kwargs, 'c': 3}
There have been various requests for allowing dict unpacking on the
right-hand side of an assignment [citation required] but in my opinion
none of them have had a good justification.
Motivated by the idea of scanning text strings with a scanf-style
function, I propose the following behaviour for dict unpacking
assignment:
items = {'eggs': 2, 'cheese': 3, 'spam': 1}
spam, eggs, cheese = **items
assert spam == 1 and eggs == 2 and cheese == 3
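For comparison, the closest spelling available today requires repeating each target name as a string, e.g. via `operator.itemgetter`:

```python
from operator import itemgetter

items = {'eggs': 2, 'cheese': 3, 'spam': 1}
# Each name must be written twice: once as a string key, once as a target.
spam, eggs, cheese = itemgetter('spam', 'eggs', 'cheese')(items)
```

The proposed syntax removes that duplication by using the target names themselves as the keys.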
Syntax
------
target_list [, **target] = **expression
`target_list` is a comma-separated list of targets. Targets may be:
- simple names, e.g. `spam` and `eggs`
- dotted names, e.g. `spam.eggs`
- numbered subscripts, e.g. `spam[1]`
but is not required to support arbitrarily complex targets such as:
spam(*args).eggs(2*x + y)[1].cheese # not supported
Likewise only int literals are supported for subscripts. (These
restrictions may be lifted.)
This is similar to the limited range of fields accepted by the string
format mini-language. The same restrictions apply to `**target`.
Each target must be unique.
`expression` must evaluate to a dict or other mapping.
Assignment proceeds by matching up targets from the left to keys on the
right:
1. Every target must be matched exactly by a key. If there is a target
without a corresponding key, that is an error.
2. Any key which does not match up to a target is an error, unless a
`**target` is given.
3. If `**target` is given, it will collect any excess key:value pairs
remaining into a dict.
4. If the targets and keys match up, then the bindings are applied from
left to right, binding the target to the value associated with that key.
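The four rules above can be sketched as an ordinary helper function (hypothetical name `unpack_mapping`; the real proposal is syntax rather than a function call, and target uniqueness would be enforced at compile time):

```python
def unpack_mapping(mapping, names, collect_extras=False):
    """Return the values for `names` in order, following the proposed
    matching rules; optionally append a dict of leftover pairs."""
    missing = [n for n in names if n not in mapping]
    if missing:
        # Rule 1: every target must be matched exactly by a key.
        raise KeyError(f'no key for target(s): {missing}')
    extras = {k: v for k, v in mapping.items() if k not in names}
    if extras and not collect_extras:
        # Rule 2: unmatched keys are an error without a **target.
        raise ValueError(f'unmatched keys: {sorted(extras)}')
    # Rule 4: bind targets left to right to their keys' values.
    values = [mapping[n] for n in names]
    # Rule 3: a **target collects the excess key:value pairs.
    return values + [extras] if collect_extras else values
```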
Examples:
# Targets are not unique
a, b, a = **items
=> SyntaxError
# Too many targets
a, b, c = **{'a': 1, 'b': 2}
=> raises a runtime exception
# Too few targets
a = **{'a': 1, 'b': 2}
=> raises a runtime exception
a, **extras = **{'a': 1, 'b': 2}
assert a == 1
assert extras == {'b': 2}
# Equal targets and keys
a, b, **extras = **{'a': 1, 'b': 2}
assert a == 1
assert b == 2
assert extras == {}
# Dotted names
from types import SimpleNamespace
obj = SimpleNamespace()
obj.spam = **{'obj.spam': 1}
assert obj.spam == 1
# Subscripts
arr = [None]*5
arr[1], arr[3] = **{'arr[3]': 33, 'arr[1]': 11}
assert arr == [None, 11, None, 33, None]
Assignments to dotted names or subscripts may fail, in which case the
assignment may only partially succeed:
spam = 'something'
eggs = None
spam, eggs.attr = **{'spam': 1, 'eggs.attr': 2}
# raises AttributeError: 'NoneType' object has no attribute 'attr'
# but `spam` may have already been bound to 1
(I think that this is undesirable but unavoidable.)
Motivating use-cases
--------------------
The motivation comes from the discussion for scanf-like functionality.
The addition of dict unpacking assignment would allow something like
this:
pattern = "I'll have {main} and {extra} with {colour} coffee."
string = "I'll have spam and eggs with black coffee."
main, extra, colour = **scanf(pattern, string)
assert main == 'spam'
assert extra == 'eggs'
assert colour == 'black'
But the possibilities are not restricted to string scanning. This will
allow functions that return multiple values to choose between returning
them by position or by name:
height, width = get_dimensions(window) # returns a tuple
height, width = **get_dimensions(window) # returns a mapping
Developers can choose whichever model best suits their API.
Another use-case is dealing with kwargs inside functions and methods:
def method(self, **kwargs):
spam, eggs, **kw = **kwargs
process(spam, eggs)
super().method(**kw)
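With today's syntax, that pattern is usually spelled with `dict.pop`, which also mimics the error behaviour of rule 1 (the `Base`/`Child` classes below are hypothetical stand-ins):

```python
class Base:
    def method(self, **kwargs):
        self.received = kwargs  # stand-in for the superclass's work

class Child(Base):
    def method(self, **kwargs):
        # Today's closest equivalent of `spam, eggs, **kw = **kwargs`:
        spam = kwargs.pop('spam')  # raises KeyError if missing (rule 1)
        eggs = kwargs.pop('eggs')
        self.processed = (spam, eggs)
        super().method(**kwargs)   # the leftovers play the role of **kw
```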
--
Steve