Thanks for the feedback, Xavier.
Indeed, this “last resort” strategy only works if your project source is a git repository. I rely on the excellent setuptools_scm to handle that special case. By the way, if someone knows an equivalent for other popular SCM systems, please let me know (or better, suggest it here: https://github.com/smarie/python-getversion/issues ).
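For reference, a minimal sketch of that git-based fallback, using the documented setuptools_scm entry point (the root/relative_to arguments are illustrative and depend on your project layout):

    from setuptools_scm import get_version

    # Derive a PEP 440 version from the git metadata of the enclosing repository
    version = get_version(root='.', relative_to=__file__)
    print(version)  # e.g. "1.2.4.dev3+g1a2b3c4" for 3 commits after the 1.2.3 tag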
I forgot to add something in my previous email: the original reason for developing this library was to provide a version-aware persistence layer based on JSON. Basically, when you deserialize an object, your code receives the associated version, which allows for legacy-aware deserialization (typically, storing a machine learning model for 6 months and needing to upgrade your code while still being able to deserialize the stored model).
I guess that this use case could also apply to pickle: from what I remember, pickle does not cope well when the object being deserialized does not correspond to the same versions of the classes it was pickled with.
Best
Sylvain
From: Xavier Combelle <xavier.combelle(a)gmail.com>
Sent: Saturday, July 6, 2019 11:34
To: Sylvain MARIE <sylvain.marie(a)se.com>
Subject: Re: [Python-ideas] Getting the version number for a package or module, reliably
I may well be wrong, but the version of a development copy of a typical library is probably not reliable, as the version number is typically bumped only when a new version is shipped, so the code can be very different from what the version number suggests.
On Sat, Jul 6, 2019 at 11:13, Sylvain MARIE via Python-ideas <python-ideas(a)python.org> wrote:
Dear python enthusiasts,
For an industrial project a few years ago, we needed a reliable way to get the version number for a package or module.
I found out that there was actually no method that worked in all edge cases, including:
* Built-in modules
* Unzipped wheels and eggs added to the python path
* Non-installed project under development with version control information available
* Packages both installed and added to the python path (typically a developer working on a new version)
So I created one, and finally found the time to publish it.
No rocket science here, but you may find this new package useful: https://smarie.github.io/python-getversion/
It works with any imported module for now, including submodules.
Along with the version, you get details about why a given version number was returned (is it because of the __version__ attribute that was found, or because of the Version metadata in the installed package, etc.).
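For example, a usage sketch (the entry-point name get_module_version is illustrative here; see the docs page above for the exact API):

    from getversion import get_module_version  # illustrative entry point
    import sys

    # Returns the best version found, plus details explaining which strategy produced it
    version, details = get_module_version(sys)
    print(version)
    print(details)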
Also, if one edge case is missing, it is fairly easy to add it.
If I missed something in the stdlib (I came across the importlib.metadata initiative, but as of now it does not seem to cover all of the above cases), please let me know so that I can cite it in the documentation, and even redirect to it if it happens to already cover all the cases.
Happy summer to all!
--
Sylvain
Currently, when creating a virtualenv with the PEP-405 `venv` module, the python-config executable will not be copied/symlinked to the virtualenv.
That means for projects that link against the python interpreter, *and* which are built in a venv virtual environment, some custom "magic" has to be done to find the correct `python-config` executable in order to get the full name of libpython.
I have described a concrete case in this issue of the cocotb project: https://github.com/potentialventures/cocotb/issues/978
Note that finding the correct `python-config` is not trivial because of version and/or ABI version tags that are appended.
While it seems we have found a workaround for now, I think it makes sense to make `python-config` itself available in the virtual environment.
My "wish" is to include this feature in the standard library venv module PEP-405. But for reference I'll also link this issue, of the `virtualenv` tool, which discusses a similar wish. Unfortunatly this was closed without progress: https://github.com/pypa/virtualenv/issues/169
Now that Python is beginning to embrace type annotations, is it worth
revisiting the idea of having extended integers and an integer infinity?
I found myself trying to annotate this line:
events_to_do: Union[int, float] = math.inf
where I am only including float in the union to accommodate math.inf.
I'm interested in exploring this concrete proposal:
Add a class to the numeric hierarchy
(https://www.python.org/dev/peps/pep-3141/) ExtendedIntegral whereby Real
:> ExtendedIntegral :> Integral.
Add a sentinel math.int_inf that obeys all of the same kinds of rules as
math.inf does.
Then, I could annotate more simply:
events_to_do: ExtendedIntegral = math.int_inf
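For illustration, a rough sketch of the sentinel semantics I have in mind (purely hypothetical; neither IntInfinity nor math.int_inf exists today):

    import functools
    import math
    import numbers

    @functools.total_ordering
    class IntInfinity:
        """Hypothetical integer infinity; compares like math.inf."""
        def __eq__(self, other):
            return isinstance(other, IntInfinity) or other == math.inf
        def __gt__(self, other):
            # Greater than every finite Real
            return isinstance(other, numbers.Real) and other != math.inf
        def __repr__(self):
            return "int_inf"

    int_inf = IntInfinity()
    assert int_inf > 10**100
    assert int_inf == math.inf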
With respect to Python, this is discussed somewhat
here https://stackoverflow.com/questions/24587994/infinite-integer-in-python/357….
The name "extended integer" is discussed somewhat
here https://math.stackexchange.com/questions/1442961/extended-integers. A
quick search of papers shows that it is sometimes used in this
sense: https://scholar.google.com/scholar?q=%22extended+integer
Best,
Neil
>
> This was quite extensively discussed on python-ideas recently:
>
>
> https://mail.python.org/archives/list/python-ideas@python.org/thread/RJARZS…
>
> (I'm finding it hard to find a good thread view in the new interface --
> but that will get you started)
>
> My memory of that thread is that there was a lot of bike shedding, and
> quite a lot of resistance to adding a couple new methods, which I
> personally never understood (Not why we don't want to add methods
> willy-nilly, but why there was this much resistance to what seems like a
> low-disruption, low-maintenance, and helpful addition)
>
> I think it just kind of petered out, rather than being rejected, so if
> someone wants to take up the mantle, that would be great -- and some
> support from a core dev or two would probably help.
>
> -CHB
>
>
>
>
> On Fri, Jun 28, 2019 at 10:44 AM Brett Cannon <brett(a)python.org> wrote:
>
>> Glenn Linderman wrote:
>> > On 6/27/2019 3:09 PM, Brett Cannon wrote:
>> > > My guess is that without Guido to just ask this will
>> > > have to go to a PEP as it changes a built-in.
>> > How does adding two new methods change a built-in?
>> > Now if an extra parameter were added to modify lstrip, rstrip, and
>> > strip to make them do something different, yes.
>> > But adding new methods doesn't change anything, unless someone is
>> > checking for their existence.
>>
>> Sure, but the built-ins are so widely used that we don't want to blindly
>> add every method idea that someone comes up with either. We all very much
>> share ownership of the built-ins, so we should all agree to changes to
>> them, and getting agreement means either clear consensus and/or a PEP.
>>
>> -Brett
>>
>> > My preferred color is pstrip and sstrip (prefix and suffix strip),
>> > since lstrip and rstrip mean left and right.
>> > And maybe there should be a psstrip, that takes two parameters, the
>> > prefix and the suffix to strip.
>> > Such functions would certainly reduce code in a lot of places where I do
>> > if string.startswith('foo'):
>> >     string = string[3:]
>> > as well as making it more robust, because the prefix string and its
>> > length have to stay synchronized when changes are made.
>>
>
I think this type of discussion fits better here. Quickly looking through the aforementioned thread, I want to say that it died due to bike-shedding :)
I'm sure this has already been proposed, but it seems to me that the most natural approach would be to expand the functionality of the existing `.lstrip`, `.rstrip`, and maybe `.strip` methods. Probably many do not like the _names_ that were given to these methods, but nothing can be done about that: they are what they are and will not go anywhere. Adding two more methods with similar functionality would only add confusion. Moreover, we already have a precedent with other string methods, `.startswith` and `.endswith`, which accept both a string and a tuple of strings as an argument.
On the other hand, this suggests that the new functionality makes sense only for the `.lstrip` and `.rstrip` variants. Is such a discrepancy acceptable?
If it is, I will suggest allowing `.lstrip` and `.rstrip` to accept a tuple of strings. The _stripping_ would probe the given prefixes in order (first match wins), and only one strip would be performed at a time:
>>> s = 'paparazzi'
>>> s.lstrip(('pa', 'ra'))
'parazzi'
This also deviates from the existing semantics of strinst.lstrip(strinst), which treats its argument as a set of characters and strips repeatedly. Is this also permissible?
Perhaps all these discrepancies lead to the idea of having new names for
these methods. But for me personally, all these deviations are an
inevitable price for practicality and, of course, echoes of past decisions.
I would like to have such functionality and would find it useful. Just today, I needed to remove the “model_” prefix from a set of filenames, but `.lstrip("model_")` does not work, because some of the files have names like "model_ma...", "model_me...", "model_or...", and so on, and the character-set semantics strip too much.
with kind regards,
-gdg
The thread on operators as first-class citizens keeps getting vague ideas about assignment overloading that wouldn't actually work, or don't even make sense. I think it's worth writing down the simplest design that would actually work, so people can see why it's not a good idea (or explain why they think it would be anyway).
In pseudocode, just as x += y means this:

    xval = globals()['x']
    try:
        result = xval.__iadd__(y)
    except AttributeError:
        result = xval.__add__(y)
    globals()['x'] = result
… x = y would mean this:

    try:
        xval = globals()['x']
        result = xval.__iassign__(y)
    except (LookupError, AttributeError):
        result = y
    globals()['x'] = result
If you don't understand why this would work or why it wouldn't be a great idea (or want to nitpick details), read on; otherwise, you can skip the rest of this message.
---
First, why is there even a problem? Because Python doesn't even have "variables" in the same sense that languages like C++ that allow assignment overloading do.
In C++, a variable is an "lvalue", a location with identity and type, and an object is just a value that lives in a location. So assignment is an operation on variables: x = 2 is the same as XClass::operator=(&x, 2).
In Python, an object is a value that lives wherever it wants, with identity and type, and a variable is just a name that can be bound to a value in a namespace. So assignment is an operation on namespaces, not on variables: x = 2 is the same as dict.__setitem__(globals(), 'x', 2).
The same thing is true for more complicated assignments. For example, a.x = 2 is just an operation on a's namespace instead of the global namespace: type(a).__setattr__(a, 'x', 2). Likewise, a.b['x'] = 2 is type(a.b).__setitem__(a.b, 'x', 2). And so on.
---
But Python allows overloading augmented assignment. How does that work? There's a perfectly normal namespace lookup at the start and namespace store at the end—but in between, the existing value of the target gets to specify the value being assigned.
Immutable types like int don't define __iadd__, and __add__ creates and returns a new object. So, x += y ends up the same as x = x + y.
But mutable types like list define an __iadd__ that mutates self in-place and then returns self, so x gets harmlessly rebound to the same object it was already bound to. So x += y ends up the same as x.extend(y); x = x.
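A quick demonstration of that rebinding (this is standard Python today, shown only for illustration):

    a = [1, 2]
    b = a
    a += [3]        # list.__iadd__ mutates in place and returns self,
                    # so a is rebound to the same object
    print(a is b)   # True; b sees [1, 2, 3]

    x = 1
    y = x
    x += 1          # int has no __iadd__, so this falls back to __add__,
                    # which creates a new object
    print(x is y)   # False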
The exact same technique would work for overloading normal assignment. The only difference is that x += y is illegal if x is unbound, while x = y obviously has to be legal (and mean there is no value to intercept the assignment). So, the fallback happens when xval doesn't define __iassign__, but also when x isn't bound at all.
So, for immutable types like int, and almost all mutable types like list—and when x is unbound—x = y does the same thing it always did.
But special types that want to act like transparent mutable handles define an __iassign__ that mutates self in place and returns self, so x gets harmlessly rebound to the same object. So x = y ends up the same as, say, x.set_target(y); x = x.
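Concretely, such a handle type would look something like this under the hypothetical protocol above (pure sketch; __iassign__ does not exist):

    class Handle:
        """Hypothetical transparent mutable handle."""
        def __init__(self, target):
            self.target = target
        def __iassign__(self, value):
            # Intercept "h = value": mutate in place, then return self
            # so the name gets harmlessly rebound to the same object.
            self.set_target(value)
            return self
        def set_target(self, value):
            self.target = value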
This all works the same if the variables are local rather than global, or for more complicated targets like attribution or subscription, and even for target lists; the intercept still happens the same way, between the (more complicated) lookup and storage steps.
---
Now, why is this a bad idea?
First, the benefit of __iassign__ is a lot smaller than __iadd__. A sizable fraction of "x += y" statements are for mutable "x" values, but only a rare handful of "x = y" statements would be for special handle "x" values. Even the same cost for a much smaller benefit would be a much harder sell.
But the runtime performance cost difference is huge. If augmented assignment weren't overloadable, it would still have to look up the value, look up and call a special method on it, and store the value. The only cost overloading adds is trying two special methods instead of one, which is tiny. But regular assignment doesn't have to do a value lookup or a special method call at all, only a store; adding those steps would roughly double the cost of every new variable assignment, and even more for every reassignment. And assignments are very common in Python, even within inner loops, so we're talking about a huge slowdown to almost every program out there.
Also, the fact that assignment always means assignment makes Python code easier both for humans to skim, and for automated programs to process. Consider, for example, a static type checker like mypy. Today, x = 2 means that x must now be an int, always. But if x could be a Signal object with an overloaded __iassign__, then, x = 2 might mean that x must now be an int, or it might mean that x must now be whatever type(x).__iassign__ returns.
Finally, the complexity of __iassign__ is at least a little higher than __iadd__. Notice that in my pseudocode above, I cheated—obviously the xval = and result = lines are not supposed to recursively call the same pseudocode, but to directly store a value in a new temporary local variable. In the real implementation, there wouldn't even be such a temporary variable (in CPython, the values would just be pushed on the stack), but for documenting the behavior, teaching it to students, etc., that doesn't matter. Being precise here wouldn't be hugely difficult, but it is a little more difficult than with __iadd__, where there's no similar potential confusion even possible.

On Wednesday, June 19, 2019, 10:54:04 AM PDT, Andrew Barnert via Python-ideas <python-ideas(a)python.org> wrote:
On Jun 18, 2019, at 12:43, nate lust <natelust(a)linux.com> wrote:
I have been following this discussion for a long time, and coincidentally I recently started working on a project that could make use of assignment overloading. (As an aside, it is a configuration system for an astronomical data analysis pipeline that makes heavy use of descriptors to work around historical decisions and backward compatibility.) Our system makes use of nested chains of objects, descriptors, and proxy objects to manage where state is actually stored. The whole system could collapse down nicely if there were assignment overloading. It works OK most of the time, but sometimes at the end of the chain things can become quite complicated. I was new to this code base and tasked with making some additions to it, and wished for an assignment operator, but knew the data binding model of Python was incompatible with it.
This got me thinking. I didn't actually need to overload assignment per se; data binding could stay just how it is. But if there were a magic method that worked similarly to how __get__ works for descriptors, but was called on any variable lookup (if the method was defined), it would allow for something akin to assignment overloading.
What counts as “variable lookup”? In particular:
For example:
    import weakref

    class Foo:
        def __init__(self):
            self.value = 6
            self.myself = weakref.ref(self)

        def important_work(self):
            print(self.value)
… why doesn’t every one of those “self” lookups call self.__get_self__()? It’s a local variable being looked up by name, just like your “foo” below, and it finds the same value, which has the same __get_self__ method on its type.
The only viable answer seems to that it does. So, to avoid infinite circularity, your class needs to use the same kind of workaround used for attribute lookup in classes that define __getattribute__ and/or __setattr__:
    def important_work(self):
        print(object.__get_self__(self).value)
    def __get_self__(self):
        return object.__get_self__(self).myself
But even that won’t work here, because you still have to look up self to call the superclass method on it. I think it would require some new syntax, or at least something horrible involving locals(), to allow you to write the appropriate methods.
    def __get_self__(self):
        return self.myself
Besides recursively calling itself for that “self” lookup, why doesn’t this also call weakref.ref.__get_self__ for that “myself” lookup? It’s an attribute lookup rather than a local namespace lookup, but surely you need that to work too, or as soon as you store a Foo instance in another object it stops overloading.
For this case there’s at least an obvious answer: because weakref.ref doesn’t override that method, the variable lookup doesn’t get intercepted. But notice that this means every single value access in Python now has to do an extra special-method lookup that almost always does nothing, which is going to be very expensive.
    def __setattr__(self, name, value):
        self.value = value
You can’t write __setattr__ methods this way. That assignment statement just calls self.__setattr__(‘value’, value), which will endlessly recurse. That’s why you need something like the object method call to break the circularity.
Also, this will take over the attribute assignments in your __init__ method. And, because it ignores the name and always sets the value attribute, it means that self.myself = is just going to override value rather than setting myself.
To solve both of these problems, you want a standard __setattr__ body here:
    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
But that immediately makes it obvious that your __setattr__ isn’t actually doing anything, and could just be left out entirely.
    foo = Foo()   # Create an instance
    foo           # The interpreter would return foo.myself
    foo.value     # The interpreter would return foo.myself.value

    foo = 19      # The interpreter would run foo.myself = 19, which would
                  # invoke foo.__setattr__('myself', 19)
For this last one, why would it do that? There’s no lookup here at all, only an assignment.
The only way to make this work would be for the interpreter to lookup the current value of the target on every assignment before assigning to it, so that lookup could be overloaded. If that were doable, then assignment would already be overloadable, and this whole discussion wouldn’t exist.
But, even if you did add that, __get_self__ is just returning the value self.myself, not some kind of reference to it. How can the interpreter figure out that the weakref.ref value it got came from looking up the name “myself” on the Foo instance? (This is the same reason __getattr__ can’t help you override attribute setting, and a separate method __setattr__ is needed.) To make this work, you’d need a __set_self__ to go along with __get_self__. Otherwise, your changes not only don’t provide a way to do assignment overloading, they’d break assignment overloading if it existed.
Also, all of the extra stuff you’re trying to add on top of assignment overloading can already be done today. You just want a transparent proxy: a class whose instances act like a reference to some other object, and delegate all methods (and maybe attribute lookups and assignments) to it. This is already pretty easy; you can define __getattr__ (and __setattr__) to do it dynamically, or you can do some clever stuff to create static delegating methods (and properties) explicitly at object-creation or class-creation time. Then foo.value returns foo.myself.value, foo.important_work() calls the Foo method but foo.__str__() calls foo.myself.__str__(), you can even make it pass isinstance checks if you want. The only thing it can’t do is overload assignment.
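For example, a minimal dynamic version of such a proxy (a sketch; only attribute gets and sets are delegated here):

    class Proxy:
        """Delegates attribute access to a wrapped target object."""
        def __init__(self, target):
            object.__setattr__(self, '_target', target)
        def __getattr__(self, name):
            return getattr(object.__getattribute__(self, '_target'), name)
        def __setattr__(self, name, value):
            setattr(object.__getattribute__(self, '_target'), name, value)

    p = Proxy([1, 2, 3])
    p.append(4)   # delegated to the underlying list
    # But "p = something" still just rebinds the name; it cannot be overloaded.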
I think the real problem here is that you’re thinking about references to variables rather than values, and overloading operators on variables rather than values, and neither of those makes sense in Python. Looking up, or assigning to, a local variable named “foo” is not an operation on “the foo variable”, because there is no such thing; it’s an operation on the locals namespace.