There was the discussion about vectors, etc.
I have a frustration about how hard it is to chain things in the Python
stdlib, where many libraries (such as ORMs) do it well.
Here is an example (the code is useless, just to show the idea):
>>> a = [1,2,3]
>>> c = max(a) + 1
I would be happy to have
>>> [1,2,3].append(4)::sort()::max() +1
It makes things very easy to read: first create list, then append 4,
then sort, then get the max.
To sum up, the idea is to apply, via a new operator (::, .., etc.), the
following callable to the previous object. It is clearly for a standalone
object, or after a method call whose return value is None (fluent
`.` chaining already covers the case where there is a return value):
>>> object::callable() == callable(object)
>>> object(arg)::callable == callable(object(arg))
>>> object::callable(arg) == callable(object, arg)
The idea is to allow almost anything to be used as the first argument of any callable.
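Until such an operator exists, the semantics above can be emulated with a tiny wrapper; ``Chain`` and ``appended`` below are hypothetical helper names invented for this sketch, not a concrete proposal:

```python
# Minimal sketch of emulating the proposed ``::`` chaining in today's Python.
class Chain:
    def __init__(self, value):
        self.value = value

    def __call__(self, func, *args):
        # Apply ``func`` with the wrapped value as its first argument,
        # mirroring ``object::callable(arg) == callable(object, arg)``.
        return Chain(func(self.value, *args))

def appended(lst, item):
    # list.append returns None, so wrap it to return the list.
    lst.append(item)
    return lst

# Equivalent of the proposed [1,2,3].append(4)::sort()::max() + 1
result = Chain([1, 2, 3])(appended, 4)(sorted)(max).value + 1
print(result)  # 5
```

The wrapper makes the left-to-right reading explicit, at the cost of wrapping and unwrapping the value.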
I do not know whether this was already discussed, and whether it would be
feasible. First attempt at this, so please be gentle. :)
I am very interested in the overall goal of not needing virtualenvs, but
I'm curious about the motivations behind pep-582.
Could someone help me understand if this has previously been discussed and
in that case why it was decided against?
1: Why look only in the CWD and not traverse toward the filesystem root?
2: It feels to me like there is a larger story here about sprucing up
About 2): I would prefer a mechanism that automatically looks for
.pythonpath.pth (or something with a better name, but to that effect). Then
we would not be limited to a specific directory or location. This would
streamline cross-project repos; for instance, you could have Docker images
that come preloaded with common libraries, and just add a layer with your
specifics in a different directory earlier in the paths specified by the .pth file.
Using pip to install into your directories of choice is already supported,
although perhaps we should have some way of telling it to use the first
entry in .pythonpath.pth.
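The lookup being proposed could be prototyped roughly as follows; ``.pythonpath.pth`` is the hypothetical file name from this thread, and ``find_pth``/``apply_pth`` are illustrative names, not an existing mechanism:

```python
# Sketch: walk from a starting directory toward the filesystem root,
# find the first .pythonpath.pth, and prepend its entries to sys.path.
import os
import sys

def find_pth(start, name=".pythonpath.pth"):
    """Return the first ``name`` file found walking up from ``start``,
    or None if we reach the root without finding one."""
    path = os.path.abspath(start)
    while True:
        candidate = os.path.join(path, name)
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return None
        path = parent

def apply_pth(pth_file):
    """Prepend each non-empty, non-comment line of the file to sys.path.
    Relative entries are resolved against the file's own directory."""
    base = os.path.dirname(pth_file)
    with open(pth_file) as f:
        for line in f:
            entry = line.strip()
            if entry and not entry.startswith("#"):
                sys.path.insert(0, os.path.join(base, entry))
```

Traversing toward the root (question 1 above) falls out naturally from the walk in ``find_pth``.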
I would be very grateful for some interaction on this. If there is general
interest I could submit some working code.
*"Without data you are just another person with an opinion" -- W. Edwards Deming*
The issue here is that statistics.mode, as initially designed, raises an
exception when there are multiple equally most-frequent data points. This
is true to the way mode is taught in schools, but it may not be the most
useful behaviour in practice.
Raymond has suggested:
- keep the status quo;
- change mode() to return "the first tie" instead of raising an
exception (with or without a deprecation warning for one release);
- add a flag to specify the behaviour.
I'm especially interested in opinions from those who use the
function. What would be useful for you? How do you use it:
interactively or in scripts?
(When I designed this, I mostly imagined that mode() would be used
interactively, using the interpreter as a calculator.)
Would changing the behaviour break your code?
Note that this question is separate from that of whether or not there
should be a multimode function.
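For concreteness, the behaviours under discussion can be sketched with collections.Counter; ``first_mode`` and ``multimode`` here are illustrative helpers written for this example, not the stdlib functions:

```python
# Sketch of "return the first tie" vs. returning all modes.
from collections import Counter

def first_mode(data):
    # "First tie": the most common value, breaking ties by first
    # occurrence in the data (dict/Counter preserve insertion order).
    counts = Counter(data)
    if not counts:
        raise ValueError("no mode for empty data")
    return max(counts, key=counts.get)

def multimode(data):
    # Return *all* equally most-frequent values.
    counts = Counter(data)
    if not counts:
        return []
    top = max(counts.values())
    return [value for value, count in counts.items() if count == top]

print(first_mode([1, 1, 2, 2, 3]))  # 1 (first of the tied values)
print(multimode([1, 1, 2, 2, 3]))   # [1, 2]
```

The "flag" option Raymond mentions would essentially select between these two behaviours (or the status-quo exception) inside one function.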
Python’s decline is in not growing.
Sent from my iPhone
> On Feb 3, 2019, at 11:20 AM, Ned Batchelder <ned(a)nedbatchelder.com> wrote:
> James, you say below, "This kind of readability issue, datetime.now, is an example of what’s contributing to Python’s decline."
> Do you have any evidence of Python's decline? Lots of metrics (albeit simplistic ones) point to Python growing in popularity:
> Are there indicators we are missing?
>> On 2/2/19 11:56 PM, James Lu wrote:
>>>> On Feb 2, 2019, at 3:41 AM, Steven D'Aprano <steve(a)pearwood.info> wrote:
>>>>> On Sat, Feb 02, 2019 at 12:06:47AM +0100, Anders Hovmöller wrote:
>>>>> - the status quo means "no change", so there is no hassle there;
>>>> Not quite true. There is a constant hassle of "do I need to write
>>>> datetime.datetime.now() or datetime.now()?"
>>> My point was that there is no hassle from *making a change* if you don't
>>> actually make a change. (There may, or may not, be other, unrelated hassles.)
>>> Besides, I'm not seeing that this is any worse than any other import. Do
>>> I call spam.Eggs.make() or Eggs.make()? If you don't remember what you
>>> imported, the names don't make much difference.
>>> I accept that datetime.datetime reads a bit funny and is a bit annoying.
>>> If we had the keys to the time machine and could go back a decade to
>>> version 3.0, or even further back to 1.5 or whenever the datetime module
>>> was first created, it would be nice to change it so that the class was
>>> DateTime. But changing it *now* is not free, it has real, serious costs
>>> which are probably greater than the benefit gained.
>> Why can’t we put “now” as a property of the module itself, recommend that, and formally deprecate but never actually remove datetime.datetime.now?
>>>> I solved this at work by changing all imports to follow the "from
>>>> datetime import datetime" pattern and hard banning the other
>>>> statically in CI. But before that people suffered for years.
>>> Oh how they must have suffered *wink*
>>> I'm surprised that you don't do this:
>>> from datetime import datetime as DateTime
>>>> I have a colleague who likes to point that the future is longer than
>>>> the past. It's important to keep that perspective.
>>> Actually, no, on average, the projected lifespan of technologies,
>>> companies and cultural memes is about the same as their current age. It
>>> might last less, or it might last more, but the statistical expectation
>>> is about the same as the current age. So on average, "the future" is
>>> about the same as "the past".
>>> Python has been around not quite 30 years now, so we can expect that it
>>> will probably last another 30 years. But chances are not good that it
>>> will be around in 300 years.
>> A big reason why projects last as long as you say they last is that the maintainers get un-ambitious, they get used to relaxing in the language they know so well, they are no longer keen on change.
>> This kind of readability issue, datetime.now, is an example of what’s contributing to Python’s decline.
>> Bottom line: if someone submits a PR for this, will anyone merge it?
>>> Python-ideas mailing list
>>> Code of Conduct: http://python.org/psf/codeofconduct/
reposting -- things go to heck when posts are forwarded through google
On Sun, Feb 17, 2019 at 8:32 AM Christopher Barker <pythonchb(a)gmail.com> wrote:
> On Sun, Feb 17, 2019 at 2:32 AM Neil Girdhar <mistersheik(a)gmail.com> wrote:
>> Alternatively, the need for an overriding implementation to call super
>> could be marked by a different decorator.
> Looking back on the old "Super considered [Harmful | Super]" discussions,
> it was clear that the fact that a class hierarchy uses super() is part of
> its API, and every occurrence of the method needs to use super().
> So +1 on having an explicit way to specify that super should be used in
> subclasses, rather than having to look in documentation or the source code
> to figure that out.
Christopher Barker, PhD
Python Language Consulting
- Scientific Software Development
- Desktop GUI and Web Development
- wxPython, numpy, scipy, Cython
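An explicit marker could be as small as a decorator that tags the method; ``must_call_super`` is a hypothetical name invented for this sketch, and real enforcement would need linter or metaclass support on top:

```python
# Sketch: mark a method as requiring cooperative super() calls in overrides.
def must_call_super(method):
    # The flag is discoverable by tools (linters, test helpers, docs).
    method.__must_call_super__ = True
    return method

class Base:
    @must_call_super
    def setup(self):
        # Overrides are expected to call super().setup().
        self.ready = True

print(Base.setup.__must_call_super__)  # True
```

This only documents the contract; checking that overrides actually call super would be a separate (static or runtime) check.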
Marking a method M declared in C with abstractmethod indicates that M needs
to be *implemented* in any subclass D of C for D to be instantiated.
We usually think of overriding a method N to mean replacing one
implementation in some class E with another in some subclass F of E.
Often, the subclass implementation calls super to add behavior rather than
replace it.
I think that this concept of *implementing* is different from *overriding*.
However, abstract methods can have reasonable definitions, and should
sometimes be overridden in the sense that subclasses should call super.
For example, when inheriting from AbstractContextManager, you need to
*override* the abstractmethod (!) __exit__, and if you want your class to
work polymorphically, you should call super.
This is extremely weird. Understandably, the pylint people are confused by
it (https://github.com/PyCQA/pylint/issues/1594) and raise bad warnings.
It also makes it impossible for me to raise warnings in my ipromise
(https://github.com/NeilGirdhar/ipromise) project. See, for example, the
classes Y and W, which ought to raise a warning, but doing so would also
raise on reasonable code.
My suggestion is to add a rarely used flag to abstractmethod:
    @abstractmethod(overrideable=True)
    def __exit__(self, exc_type, exc_value, traceback):
        ...
This would set a flag on the method like __abstractmethod_overrideable__,
which could be checked by ipromise's @overrides decorator, pylint's call
check, and anyone else that wants to know that a method should be
overridden cooperatively (i.e. with a call to super).
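The flag could be prototyped outside the stdlib along these lines; the ``overrideable=True`` spelling is invented here (the thread only names the resulting ``__abstractmethod_overrideable__`` flag), so this is a sketch, not a real abc API:

```python
# Sketch: an abstractmethod variant that records an "overrideable" flag.
from abc import ABCMeta, abstractmethod as _abstractmethod

def abstractmethod(func=None, *, overrideable=False):
    # Back-compat: plain ``@abstractmethod`` still works unchanged.
    if func is not None:
        return _abstractmethod(func)

    def decorator(f):
        f = _abstractmethod(f)
        f.__abstractmethod_overrideable__ = overrideable
        return f

    return decorator

class CM(metaclass=ABCMeta):
    @abstractmethod(overrideable=True)
    def __exit__(self, exc_type, exc_value, traceback):
        pass

print(CM.__exit__.__abstractmethod_overrideable__)  # True
```

Tools like an @overrides decorator could then distinguish "must implement" from "should override and call super".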
I have an idea for a new special method in a class, called __first__(). Its
main purpose would be to set up things a class needs before it is first
instantiated. This probably matters only for classes in modules: in your own
code you could simply put the setup before the class. However, if you are
calling something like 'from foo import Bar', where Bar is a class that
needs some specific imports, you would need something to run the first time
Bar is instantiated.
I have two possible ideas for how this could be done. To look at how this
is going to work, I will make an example class Bar, which would live in the
module foo:
    class Bar:
        def __first__():
            # one-time setup, e.g. imports
            global math
            import math

        def __init__(self, n):
            self.a = math.log(2, n)
The first possible way: if you just call Bar (without parentheses or
arguments), that is when __first__ runs, so you can do this right
after 'from foo import Bar'. However, I don't particularly like this
idea, because you could just create a staticmethod setup() (or similar)
instead of __first__, and then rather than running Bar you would just run
Bar.setup(), which is not much harder.
Personally I prefer my second idea: the first time Bar.__new__() runs,
it checks whether the class has a __first__() and, if so, runs
__first__ and then the normal __new__. If you then create another
instance (running __new__ again), __first__ is not run again, so it is
still an efficient place to run imports or other setup.
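The second idea can be prototyped today with a metaclass; ``__first__`` is the proposed (hypothetical) special method, and ``FirstMeta`` is a name invented for this sketch:

```python
# Sketch: run a class's __first__ exactly once, on first instantiation.
class FirstMeta(type):
    def __call__(cls, *args, **kwargs):
        # ``_first_ran`` is stored per class, so subclasses get their own run.
        if not cls.__dict__.get("_first_ran", False):
            first = getattr(cls, "__first__", None)
            if first is not None:
                first()  # one-time setup (imports, hardware init, ...)
            cls._first_ran = True
        return super().__call__(*args, **kwargs)

class Bar(metaclass=FirstMeta):
    calls = 0

    @classmethod
    def __first__(cls):
        cls.calls += 1  # stands in for imports / sensor setup

    def __init__(self, n):
        self.n = n

Bar(1)
Bar(2)
print(Bar.calls)  # 1 -- __first__ ran only once
```

A real language feature could bake this into type.__call__ so no metaclass is needed.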
As an example of where this could be useful: I made a module 'temperature'
to help me neatly read one-wire temperature sensors with a Raspberry Pi.
It includes a class Sensor, to which you give the code of the sensor; when
you ask an instance for its temp, it returns the temperature that sensor
is reading. In my other code I would like to be able to run 'from
temperature import Sensor', because then I could just call
Sensor('28-000648372f') to create a sensor. However, at the start of the
module it imports os and glob, and it also runs some setup for how it reads
the sensors, so I have to do 'import temperature' and then, every time I
create a sensor, call temperature.Sensor, which isn't as clean.
I was wondering what you thought of this idea.
During the last 10 years, Python has made steady progress in the convenience
of assembling strings. However, it seems to me that joining is still, when
possible, the cleanest way to write string assembly.
That said, I'm still sometimes confused between the different signatures used
by join methods:
0. os.path.join takes *args;
1. str.join takes an iterable argument; this inconsistency makes it easy to
confuse with the os.path.join signature.
Also, I still think that:
    ', '.join(things)
would be more readable as such:
    things.join(', ')
Not only would this fix both of my issues with the current status quo, it
would also be completely backward compatible, and probably not very
hard to implement: just add a join method to list.
Thanks in advance for your reply
Have a great day
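The proposal can be tried out today with a subclass; ``JList`` is a name invented for this sketch of what a list.join could look like, not a proposal for the exact stdlib spelling:

```python
# Sketch: a list subclass with a ``join`` method that delegates to str.join.
class JList(list):
    def join(self, sep):
        # str.join is where the work already happens; we just flip the
        # call direction and stringify items for convenience.
        return sep.join(str(item) for item in self)

print(JList(["usr", "local", "bin"]).join("/"))  # usr/local/bin
```

One design question a real list.join would have to answer: should non-string items be converted implicitly (as here), or raise TypeError as str.join does?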
Sometimes I see threads briefly go into topics that are unrelated to new features in Python. For example: talking about a writer’s use of “inhomogeneous” vs “heterogeneous” vs “anhomogenous.” We get what the original author meant; there is no need to fiddle with the little details of language at this point, even if it is fun.
These extra emails, though harmless, impose a visual, time, and navigational burden on the readers of the thread. It’s okay if the problem is little, but not if it’s big.
How often does off-topic discussion occur? Do we need to find ways to reduce the amount, and if so, how?
this is my very first approach to suggesting a Python improvement, I think.
At some point, maybe with Dart 2.0 or a little earlier, Dart started
supporting multiline strings with "proper" indentation (I tried, but I can't
find the relevant docs at the moment, probably due to the rather large
changes related to Dart 2.0 and outdated docs).
What I have in mind is probably best described with an example:
    text = """
        I am a
        multiline string
        """
the closing quotes define the "margin indentation" - so in this example all
lines would get reduced by the indentation of the closing quotes, resulting
in a "clean" and unindented string.
Anyway, Dart or not, it doesn't matter - I like the idea and I think
Python 3.x could benefit from it, if that's possible at all :)
I could also imagine that this "indentation cleanup" is only applied if the
closing quotes are on their own line? Might be too complicated though; I
can't estimate or judge this...
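For what it's worth, much of this "indentation cleanup" is already available in the stdlib via textwrap.dedent, which strips the longest common leading whitespace (it just isn't automatic at the syntax level):

```python
# Today's closest equivalent: dedent a triple-quoted string explicitly.
import textwrap

text = textwrap.dedent("""\
    I am a
    multiline string
""")
print(repr(text))  # 'I am a\nmultiline string\n'
```

The proposal would essentially make this behaviour implicit, keyed off the indentation of the closing quotes, instead of requiring the extra function call.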
thx for reading,