When using a custom classdict (the namespace returned by a metaclass's `__prepare__`) to implement a DSL in the body of a class definition, from what I can tell by experiment, the classdict takes priority, and the surrounding context is only consulted if getting an item from the classdict raises `KeyError`.
There is at least one case in which I would like to do the reverse, and have the classdict be secondary to (masked by) any variables defined in the context surrounding the execution of the class body.
I have been able to at least unmask builtins by having the classdict object first try to get a result from `__builtins__` and then fall back to itself. In the class body itself, declaring a variable as `global` works around the problem for module-level names, but that has to be done explicitly in the body of every subclass that needs it, and it does not help with non-global names in the surrounding context.
I'm not sure what the best way to deal with this would be, but my first thought is to have a property that can be set on the metaclass type, such as `metacls.relictant_classdict = True`.
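For concreteness, here is a rough sketch of how the desired "context wins" behaviour can be approximated today with a custom mapping from `__prepare__`. All names here (`DeferringDict`, `DSLMeta`, the injected `table` entry) are made up for illustration, and `sys._getframe` is a CPython-specific shortcut:

```python
import builtins
import sys

class DeferringDict(dict):
    """Classdict whose DSL-injected names yield to the surrounding scope.

    Names the metaclass injects are only returned when the name is NOT
    found in the calling module's globals or in builtins -- raising
    KeyError makes the interpreter fall back to those scopes.
    """
    def __init__(self, outer, dsl_names):
        super().__init__(dsl_names)
        self._outer = outer
        self._dsl = set(dsl_names)

    def __getitem__(self, key):
        if key in self._dsl:
            # Prefer the surrounding context for DSL names.
            if key in self._outer or hasattr(builtins, key):
                raise KeyError(key)
        return super().__getitem__(key)

class DSLMeta(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        # CPython-specific: the caller's frame is the class statement.
        caller_globals = sys._getframe(1).f_globals
        return DeferringDict(caller_globals, {"table": "<dsl table>"})

table = "module-level table"   # masks the injected DSL name

class Demo(metaclass=DSLMeta):
    captured = table           # resolves to the module global

print(Demo.captured)           # -> module-level table
```

Raising `KeyError` from the classdict is what makes the interpreter fall back to the enclosing globals and builtins; this is the mechanism a `relictant_classdict` flag would presumably automate.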
Good day all,
as a continuation of thread "OS related file operations (copy, move,
delete, rename...) should be placed into one module"
https://mail.python.org/pipermail/python-ideas/2017-January/044217.html
please consider making pathlib the central file-system module by moving
file operations (copy, move, delete, rmtree, etc.) into pathlib.
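For context, a small sketch of the status quo this would unify: pathlib already handles rename/delete, but copying requires switching to shutil (the `.copy()` method in the comment is hypothetical):

```python
# Path already covers rename/unlink/mkdir, but copy (and move, rmtree)
# require reaching into shutil.
import shutil
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "a.txt"
    src.write_text("hello")

    renamed = Path(tmp) / "b.txt"
    src.rename(renamed)              # already a Path method

    # Copying means switching modules; under the proposal something
    # like renamed.copy(...) would work instead (hypothetical API).
    shutil.copy(renamed, Path(tmp) / "c.txt")

    names = sorted(p.name for p in Path(tmp).iterdir())
    print(names)   # ['b.txt', 'c.txt']
```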
BR,
George
Per Guido's suggestion, I am starting a new thread on this.
The itertools module documentation has a bunch of recipes that give various
ways to combine the existing functions for useful tasks. [1]
The issue is that these recipes are part of the documentation, and although
IANAL, as far as I can tell this means they fall under the Python-2.0
license. [2] Normally using the Python-2.0 license is not a big deal
because it is non-copyleft. You have to include the license text and
explain what changes you made, which isn't a problem for any sizable use of
Python code.
The problem occurs for such small code snippets. Again IANAL, but it seems
you have to add the license text to the project, identify which parts of
the code fall under that license, and document any changes you made to it.
This is a lot of work to use, in many cases, just one or two lines of
code.
I personally use one of the projects, like more-itertools, that implement
these recipes together under the Python-2.0 license and thus segregate the
license issues from the rest of my code base, but at least in my opinion
this mostly defeats the purpose of making code snippets like these available.
As I am not a lawyer, I don't know the best approach to deal with this
issue (if it is even desirable to deal with it). And I know that there are
other modules with recipes and other sorts of documentation with useful
code, although the itertools ones are the ones I see mentioned the most.
[1] https://docs.python.org/library/itertools.html#itertools-recipes
[2]
https://docs.python.org/license.html#psf-license-agreement-for-python-relea…
There are many cases in which it is awkward that testing whether an object is a sequence returns `True` for instances of `str`, `bytes`, etc.
This proposal is a serious breakage of backward compatibility, so would be something for Python 4.x, not 3.x.
Instead of those objects _being_ sequences, have them provide views that are sequences using a method named something like `members` or `items`.
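A sketch of the kind of awkwardness meant here: any generic code walking nested sequences has to special-case `str`/`bytes` to avoid infinite recursion (the `members()` view in the closing comment is the hypothetical future API, not anything that exists):

```python
from collections.abc import Sequence

def flatten(obj):
    """Recursively flatten nested sequences.

    Because str and bytes are themselves sequences (of length-1
    pieces), they must be special-cased here or the recursion never
    bottoms out -- the awkwardness the proposal aims to remove.
    """
    if isinstance(obj, Sequence) and not isinstance(obj, (str, bytes)):
        for item in obj:
            yield from flatten(item)
    else:
        yield obj

print(list(flatten([["ab", "c"], ["d"]])))   # ['ab', 'c', 'd']

# Under the proposal, strings would no longer pass the Sequence test
# themselves; an explicit view such as "abc".members() (hypothetical
# name) would be required to iterate characters.
```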
Hello everyone,
I hope this e-mail reaches someone. Since it's the first time I am using
a thing like a mailing list, I am saying sorry in advance for any
inconvenience caused ;-)
However, I am writing to you because of a small idea / feature request
for the Python import mechanism, which caused me some (in my opinion
unnecessary) trouble yesterday:
I noticed that when using an import statement, the following seems to be
true:
Current state:
* Python will search for the first TOP-LEVEL hit when resolving an
import statement and search inside there for the remainder part of
the import. If it cannot find the symbols it will fail. (Tested on
Python 3.8)
Proposed Change:
* If the import fails at some point after finding the first-level
   match: the path is evaluated further until it eventually may be able
   to resolve the statement completely.
   o --> Fail later
My use case scenario:
* I have a bunch of different projects built using Python
* I want to use parts of it within a new project
* I place them within a sub-folder (for example libs) within the new
   project using git submodule, or just copy / link them there, whatever
* I append the libs to path (sys.path.append)
* Python WILL find the packages and basically import everything right
* Problem:
o if the main package does actually contain a top-level folder that
       is named the same as one within the other modules (for example
       a "ui" submodule), Python will search within one and only one of
       these ui modules, within exactly one project
o Name clashes can only be avoided by renaming
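The clash described in this bullet can be reproduced in a few lines (the project and module names below are invented for illustration):

```python
# Two project roots on sys.path, each with its own top-level "ui"
# package; only the first root's "ui" is ever consulted.
import importlib.util
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    for project, module in [("proj_a", "widgets"), ("proj_b", "dialogs")]:
        pkg = Path(tmp) / project / "ui"
        pkg.mkdir(parents=True)
        (pkg / "__init__.py").write_text("")
        (pkg / (module + ".py")).write_text("")
        sys.path.append(str(Path(tmp) / project))

    # "ui" resolves to proj_a's copy, so its submodule is found...
    found_first = importlib.util.find_spec("ui.widgets") is not None
    # ...while proj_b's ui.dialogs is invisible even though it exists
    # on disk.  The proposed change would keep searching past proj_a.
    found_second = importlib.util.find_spec("ui.dialogs") is not None

print(found_first, found_second)   # True False
```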
I know that this is probably not the suggested and best way to reuse
existing code. But it's the most straight-forward and keeps the fastest
development cycle, which I think is a reason Python grew so fast. At
the time of writing I cannot think of any place where this change would
destroy or change any already-working code, and I don't see a reason why
the import should completely fail under these circumstances, showing
different behaviour for top-level failures and nth-level failures.
What do you think about that hopefully very small change of the import
behaviour?
Thanks for your time reading, and best wishes,
Richard
This is obviously a small thing, but it would be nice to be able to say:
>>> Path(r"C:\x.txt").with_stem("y")
WindowsPath('C:/y.txt')
Rather than having to do some variation of:
>>> old_path = Path(r"C:\x.txt")
>>> old_path.with_name("y"+old_path.suffix)
WindowsPath('C:/y.txt')
Or (god forbid):
>>> (old_path := Path(r"C:\x.txt")).with_name("y"+old_path.suffix)
WindowsPath('C:/y.txt')
There are already .with_suffix() and .with_name() methods. Including
.with_stem() seems obvious, no?
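For reference, the proposed method is a thin composition of existing APIs; a sketch, not necessarily how the stdlib would implement it:

```python
from pathlib import PureWindowsPath

def with_stem(path, stem):
    # Sketch of the proposed .with_stem(), expressed via the existing
    # with_name()/suffix machinery.
    return path.with_name(stem + path.suffix)

p = with_stem(PureWindowsPath(r"C:\x.txt"), "y")
print(p)   # C:\y.txt
```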
---
Ricky.
"I've never met a Kentucky man who wasn't either thinking about going home
or actually going home." - Happy Chandler
After messing around with `Enum` for a while, there's one small thing that I'd like to see improved. It seems limiting to me that the only way to trigger `_generate_next_value` is to pass `auto()`.
What if, for a particular `Enum`, I would like to be able to use `()` as a shorthand for `auto()`? How about a more complex auto-generation that determines the final value based on both the given value and the name/key?
As an example of the second case, start with an `Enum` subclass in which each item, by default, has the name/key as its value and a prettified version of the name as its label, but one can also specify the value and label using a 2-item tuple for the value. Now, let's say we want to be able to specify a custom label and still use the name/key as the value, either as a None/label tuple or as a string starting with a comma.
Using a custom test for auto, one could identify those cases. Passing the assigned value to the `_generate_next_value_` function would allow it to make use of that information. For backward compatibility, the signature of the `_generate_next_value_` function can be checked to make sure it accepts the extra argument before passing it.
Here's a working example of what I'm talking about. In this example, `EnumX` is a tweak of `Enum` that has the capability I'm talking about; the code for `EnumX` is pasted later in this email.
Please note that the important thing in this example is NOT what this `ChoiceEnum` class does, but the ability to create custom enums that can do custom processing of the assigned value/key, such as what `ChoiceEnum` does, using a documented feature of `Enum`, without having to either write and debug complicated hacks (that might not work with future updates to `Enum`) or replace hunks of `Enum` (as my `EnumX` shown here does) in ways that might not work or might not be feature-complete with future `Enum` updates.
# Example usage of enhanced Enum with _preprocess_value_ support.

def _label_from_key(key):
    return key.replace('_', ' ').title()


class _ChoiceValue(str):
    """Easy way to have an object with immutable string content that is
    identified for equality by its content, with a label attribute that is
    not significant for equality.
    """
    def __new__(cls, content, label):
        obj = str.__new__(cls, content)
        obj.label = label
        return obj

    def __repr__(self):
        content_repr = str.__repr__(self)
        return '%s(%r, %r)' % (
            self.__class__.__name__, content_repr, self.label)

    def get_value_label_pair(self):
        return (f'{self}', self.label)


class ChoiceEnum(EnumX):
    def _preprocess_value_(key, src):
        value = None
        label = None
        if src == ():
            pass
        elif isinstance(src, tuple) and src != ():
            value, label = src
        elif isinstance(src, str) and src.startswith(','):
            label = src[1:]
        else:
            value = src
        if value is None:
            value = key
        label = label or _label_from_key(key)
        return _ChoiceValue(value, label)

    def __getitem__(self, key):
        return (f'{self._value_}', self._value_.label).__getitem__(key)

    def __len__(self):
        return 2

    @property
    def value(self):
        return self._value_.get_value_label_pair()

    @property
    def label(self):
        return self._value_.label


class Food(ChoiceEnum):
    APPLE = ()
    CHEESE = ()
    HAMBURGER = 'BURGER'
    SOUFFLE = ',Soufflé'
    CHICKEN_MCNUGGETS = ('CHX_MCNUG', 'Chicken McNuggets')
    DEFAULT = 'APPLE'


for food in Food:
    print(repr(food))
# Prints...
# <Food.APPLE: _ChoiceValue("'APPLE'", 'Apple')>
# <Food.CHEESE: _ChoiceValue("'CHEESE'", 'Cheese')>
# <Food.HAMBURGER: _ChoiceValue("'BURGER'", 'Hamburger')>
# <Food.SOUFFLE: _ChoiceValue("'SOUFFLE'", 'Soufflé')>
# <Food.CHICKEN_MCNUGGETS: _ChoiceValue("'CHX_MCNUG'", 'Chicken McNuggets')>

print(f'Food.DEFAULT is Food.APPLE: {Food.DEFAULT is Food.APPLE}')
# Prints...
# Food.DEFAULT is Food.APPLE: True
Here's the implementation of `EnumX` that is being used for the above. There are just a handful of lines that represent changes to the implementation at https://github.com/python/cpython/blob/3.8/Lib/enum.py, and there are inline comments to draw attention to those.
from enum import (
    Enum, EnumMeta, auto, _is_sunder, _is_dunder, _is_descriptor, _auto_null)


# Copy of _EnumDict with tweaks to support _preprocess_value_
class _EnumDictX(dict):
    def __init__(self):
        super().__init__()
        self._member_names = []
        self._last_values = []
        self._ignore = []

    def __setitem__(self, key, value):
        """Duplicate all of _EnumDict.__setitem__ in order to insert a hook
        """
        if _is_sunder(key):
            if key not in (
                    '_order_', '_create_pseudo_member_',
                    '_generate_next_value_', '_missing_', '_ignore_',
                    '_preprocess_value_',  # <--
                    ):
                raise ValueError('_names_ are reserved for future Enum use')
            if key == '_generate_next_value_':
                setattr(self, '_generate_next_value', value)
            # ====================
            if key == '_preprocess_value_':
                setattr(self, '_preprocess_value', value)
            # ====================
            elif key == '_ignore_':
                if isinstance(value, str):
                    value = value.replace(',', ' ').split()
                else:
                    value = list(value)
                self._ignore = value
                already = set(value) & set(self._member_names)
                if already:
                    raise ValueError(
                        '_ignore_ cannot specify already set names: %r' % (
                            already, ))
        elif _is_dunder(key):
            if key == '__order__':
                key = '_order_'
        elif key in self._member_names:
            # descriptor overwriting an enum?
            raise TypeError('Attempted to reuse key: %r' % key)
        elif key in self._ignore:
            pass
        elif not _is_descriptor(value):
            if key in self:
                # enum overwriting a descriptor?
                raise TypeError('%r already defined as: %r' % (key, self[key]))
            # ====================
            value = self._preprocess_value(key, value)
            # ====================
            if isinstance(value, auto):
                if value.value == _auto_null:
                    value.value = self._generate_next_value(
                        key, 1, len(self._member_names), self._last_values[:])
                value = value.value
            self._member_names.append(key)
            self._last_values.append(value)
        dict.__setitem__(self, key, value)


# Subclass of EnumMeta with tweak to support _preprocess_value_
class EnumMetaX(EnumMeta):
    # Copy of EnumMeta.__prepare__ with tweak to support _preprocess_value_
    @classmethod
    def __prepare__(metacls, cls, bases):
        # create the namespace dict
        enum_dict = _EnumDictX()
        # inherit previous flags and _generate_next_value_ function
        member_type, first_enum = metacls._get_mixins_(bases)
        if first_enum is not None:
            # ====================
            enum_dict['_preprocess_value_'] = getattr(
                first_enum, '_preprocess_value_', None)
            # ====================
            enum_dict['_generate_next_value_'] = getattr(
                first_enum, '_generate_next_value_', None)
        return enum_dict


# Subclass of Enum using EnumMetaX as metaclass and with default
# implementation of _preprocess_value_.
class EnumX(Enum, metaclass=EnumMetaX):
    def _preprocess_value_(key, value):
        return value
At long last, Steven D'Aprano and I have pushed a second draft of PEP 584 (dictionary addition):
https://www.python.org/dev/peps/pep-0584/
The accompanying reference implementation is on GitHub:
https://github.com/brandtbucher/cpython/tree/addiction
This new draft incorporates much of the feedback that we received during the first round of debate here on python-ideas. Most notably, the difference operators (-/-=) have been dropped from the proposal, and the implementations have been updated to use "new = self.copy(); new.update(other)" semantics, rather than "new = type(self)(); new.update(self); new.update(other)" as proposed before. It also includes more background information and summaries of major objections (with rebuttals).
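For readers skimming, here are the "copy then update" semantics spelled out with today's syntax; under the PEP this is what `d1 | d2` would do for plain dicts:

```python
d1 = {"spam": 1, "eggs": 2}
d2 = {"eggs": 3, "cheese": 4}

merged = d1.copy()     # new = self.copy()
merged.update(d2)      # new.update(other): right operand wins on collisions

print(merged)          # {'spam': 1, 'eggs': 3, 'cheese': 4}
print(d1)              # unchanged: {'spam': 1, 'eggs': 2}
```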
Please let us know what you think – we'd love to hear any *new* feedback that hasn't yet been addressed in the PEP or the related discussions it links to! We plan on updating the PEP at least once more before review.
Thanks!
Brandt