I'm wondering whether it would be possible and consistent to loosen the 'as' assignment target, for example:
>>> import psycopg2 as pg
>>> import psycopg2.extensions as pg.ex
Currently you can't assign to an attribute in an 'as' clause, but are there reasons for that?
To be honest, I'd be satisfied if the statement above became valid, but I'm also interested in the general design decisions about 'as' functionality. I mean, it could be applicable to any expression that can appear on the left side of '=', such as 'list[n]', and to statements other than 'import', such as 'with'.
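For comparison, here is a sketch of the workaround available today: since "import x.y as a.b" is a SyntaxError (an 'as' target must be a plain name), the attribute has to be bound by hand. Stdlib modules stand in for psycopg2 here.

```python
import importlib
import types

# "import json.decoder as pg.ex" is a SyntaxError today, so bind the
# attribute explicitly instead (pg is just a stand-in namespace object).
pg = types.SimpleNamespace()
pg.ex = importlib.import_module("json.decoder")

print(pg.ex.JSONDecoder().decode('{"a": 1}'))  # {'a': 1}
```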
Currently, when working with interactive sessions, the dir() or
dir(module) built-in is incredibly useful for exploring what
functionality is available in a module. (Especially for the regrettable
libraries or modules that add really valuable functionality but have
limited or no docstrings.)
However, I often find that when a module exposes a lot of names I need
to filter those entries to be able to find the one that I need, e.g.:
>>> import mpmath
>>> dir(mpmath)  # This produces 390+ lines of output, but
>>> for name in dir(mpmath):
...     if 'sin' in name:
...         print(name)  # gives me a mere 13 to consider as candidates
What I would really like to do is:
However, I know that the interpreter will hit problems with one or more
operators being embedded in the module name.
What I would like to suggest is extending the dir built-in with an
optional filter parameter that takes an fnmatch-style wildcard pattern.
Then I could use:
>>> dir(mpmath, "*sin*")
To narrow down the candidates.
Ideally, this could have a recursive variant that would also include
listing, (and filtering), any sub-packages.
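The suggested behaviour can be approximated today as a plain helper (today's dir() takes no filter argument; the fdir name is made up for this sketch):

```python
import math
from fnmatch import fnmatchcase

# Hypothetical helper approximating the proposed dir(obj, pattern):
# return only the names matching an fnmatch-style wildcard.
def fdir(obj, pattern="*"):
    return [name for name in dir(obj) if fnmatchcase(name, pattern)]

print(fdir(math, "*sin*"))  # the handful of names containing 'sin'
```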
Steve (Gadget) Barnes
Regexes are really useful in many places, and to me it's sad to see the
builtin "re" module requiring the source string as an argument. It would
be much more elegant to simply write "s.search(pattern)" than
"re.search(pattern, s)".
I suggest building all regex operations into the str class itself, as well
as a new syntax for regular expressions.
Thus a "findall" for any lowercase letter in a string would look like this:
['a', 'c', 'e', 'g', 'i']
A "findall" for any letter, case insensitive:
['A', 'c', 'E', 'g', 'I']
A substitution of any letter for the string " WOOF WOOF ":
>>> "1a3c5e7g9i".sub(!%[a-z]% WOOF WOOF %)
'1 WOOF WOOF 3 WOOF WOOF 5 WOOF WOOF 7 WOOF WOOF 9 WOOF WOOF '
A substitution of any letter, case insensitive, for the string "hovercraft":
You may wonder why I chose "!%" ... "%" [ ... "%" ] as the regex delimiters.
The choice of "%" was purely arbitrary; I just thought of it since there
seems to be a convention to use "%" in PHP regex patterns. The "!" is in
front to disambiguate it from the "%" modulo operator or the "%" string
formatting operator, and because "!" is currently not used in Python.
Another potential idea is to simply use "!" to denote the start of a regex,
and use the character immediately following it to delimit the regex. Thus
all of the following would be regexes matching a single lowercase letter:
And all of the following would be substitution regexes replacing a single
case-insensitive letter with "@":
Some examples of how to use this:
['eu', 'o', 'ou', 'a', 'i', 'o', 'o', 'i', 'i', 'i', 'o', 'o', 'a',
'o', 'o', 'io', 'i']
<regex_match; span=(11, 20); match='qIQNlQSLi'>
>>> "My name is Joanne.".findall(!%[A-Z][a-z]+%)
Just changing the subject line here, to keep things on topic
---------- Forwarded message ---------
Date: Thu, 14 Jun 2018 17:29:03 +1000
From: Steven D'Aprano <steve(a)pearwood.info>
Subject: Re: [Python-ideas] Give regex operations more sugar
On Thu, Jun 14, 2018 at 06:33:14PM +1200, Greg Ewing wrote:
> Steven D'Aprano wrote:
> >- should targets match longest first or shortest first? or a flag
> > to choose which you want?
> >- what if you have multiple targets and you need to give some longer
> > ones priority, and some shorter ones?
> I think the suggestion made earlier is reasonable: match
> them in the order they're given. Then the user gets
> complete control over the priorities.
"Explicit is better than implicit" -- the problem with having the order
be meaningful is that it opens us up to silent errors when we neglect to
consider the order.
replace((spam, eggs, cheese) ...)
*seems* like it simply means "replace any of spam, eggs or cheese" and
it is easy to forget that the order of replacement is *sometimes*
meaningful. But not always. So this is a bug magnet in waiting.
So I'd rather have to explicitly specify the order with a parameter
than have it implicit in how I happen to have built the sequence of
targets:
# remove duplicates
targets = tuple(set(targets))
newstring = mystring.replace(targets, replacement)
That's buggy, but it doesn't look buggy, and you could test it until the
cows come home and never notice the bug.
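The order-sensitivity being discussed can be demonstrated today by applying single-target str.replace sequentially (replace_all is a made-up helper for this sketch):

```python
from functools import reduce

def replace_all(s, targets, replacement):
    # Apply single-target str.replace sequentially; order matters.
    return reduce(lambda acc, t: acc.replace(t, replacement), targets, s)

text = "spam and spams"

# Longer target first: "spams" is replaced whole.
print(replace_all(text, ["spams", "spam"], "X"))  # X and X

# Shorter target first: "spams" is consumed as "spam" + trailing "s".
print(replace_all(text, ["spam", "spams"], "X"))  # X and Xs
```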
On 09/06/2018 at 12:33, Andrew Svetlov wrote:
> If we consistently apply the idea of hook for internal python structure
> modification too many things should be changed. Import
> machinery, tracemalloc, profilers/tracers, name it.
> If your code (I still don't see the real-life example) wants to check a
> policy change -- just do it.
> # on initialization
> policy = asyncio.get_event_loop_policy()
> # somewhere in code
> if policy is not asyncio.get_event_loop_policy():
>     raise RuntimeError("Policy was changed")
You need to repeat this check everywhere, because nothing guarantees
that code hasn't changed the policy after the check, while a callback
lets you catch every change. You can then raise, warn, monkey-patch,
disable features, etc.
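One way to sketch the callback idea today: asyncio has no such hook, so this wraps set_event_loop_policy() to notify registered callbacks. The on_policy_change() name and the monkey-patching approach are purely illustrative.

```python
import asyncio

# Hypothetical hook: notify callbacks whenever the event loop policy
# is replaced, by wrapping asyncio.set_event_loop_policy().
_callbacks = []

def on_policy_change(callback):
    _callbacks.append(callback)

_original_set = asyncio.set_event_loop_policy

def _notifying_set(policy):
    for callback in _callbacks:
        callback(policy)  # a callback may raise, warn, patch, etc.
    _original_set(policy)

asyncio.set_event_loop_policy = _notifying_set

seen = []
on_policy_change(seen.append)
policy = asyncio.DefaultEventLoopPolicy()
asyncio.set_event_loop_policy(policy)
print(seen[0] is policy)  # True
```

Note this only catches changes that go through set_event_loop_policy(); it is a sketch of the interface, not a watertight guard.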
I'm testing "Data Classes" for Python 3.7. Awesome new feature,
BTW. The PEP is the first search result when I look up "dataclass
python". Given that the PEP is not the best documentation for an
end user, I wonder if we should have a link in the header section of
the PEP that goes to better documentation. We could link accepted
PEPs to their section of the whatsnew. Or, link to the language
documentation that describes the feature.
One of the features I miss from languages such as C# is namespaces that
work across files - it makes it a lot easier to organize code IMO.
Here's an idea I had - it might not be the best idea, just throwing this
out there: a "within" keyword that lets you execute code inside a
namespace. For example:
cool_namespace = types.SimpleNamespace()

within cool_namespace:
    def foo():
        print("foo run")

cool_namespace.foo()  # prints "foo run"
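The effect of the proposed "within" block can be approximated today by exec()-ing the block's source and copying the resulting names onto the namespace object; this is just a sketch of the semantics, not a proposed implementation:

```python
import types

cool_namespace = types.SimpleNamespace()

# Approximate "within cool_namespace:" by executing the block in its
# own scope and binding the defined names onto the namespace.
source = """
def foo():
    return "foo run"
"""
scope = {}
exec(source, scope)
cool_namespace.foo = scope["foo"]

print(cool_namespace.foo())  # foo run
```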
Given two callables that check when a condition arises and return True:
To be equivalent to:
from itertools import dropwhile
list(dropwhile(lambda x: not starting_when(x), a_list))
from itertools import takewhile
list(takewhile(lambda x: not ending_when(x), a_list))
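Spelled out with concrete predicates standing in for starting_when and ending_when, the itertools forms above work like this:

```python
from itertools import dropwhile, takewhile

a_list = [1, 3, 5, 8, 10, 7, 2]
starting_when = lambda x: x % 2 == 0  # the first even value starts the slice
ending_when = lambda x: x == 7        # the value 7 ends it

# Drop elements until the start condition first holds.
print(list(dropwhile(lambda x: not starting_when(x), a_list)))
# [8, 10, 7, 2]

# Take elements until the end condition first holds.
print(list(takewhile(lambda x: not ending_when(x), a_list)))
# [1, 3, 5, 8, 10]
```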
Steven D'Aprano wrote:
> On Fri, Jun 08, 2018 at 11:11:09PM +0200, Adam Bartoš wrote:
>> But if there are both sin and dsin, and you ask about the difference
>> between them, the obvious answer would be that one takes radians and the
>> other takes degrees. The point that the degrees version is additionally
>> exact on special values is an extra benefit.
> No, that's not an extra benefit, it is the only benefit!
> If we can't make it exact for the obvious degree angles, there would be
> no point in doing this. We'd just tell people to write their own
> two-line functions:
> def sindeg(angle):
>     return math.sin(math.radians(angle))
> The only reason to even consider making this a standard library function
> is if we can do better than that.
I agree completely, I just think it doesn't look obvious.
>> It would be nice to also fix the original sin,
> The sin function is not broken and does not need fixing.
> (Modulo quirks of individual platform maths libraries.)
>> or more precisely to provide a way to give it a
>> fractional multiple of pi. How about a special class PiMultiple that would
>> represent a fractional multiple of pi?
> What is the point of that? When you pass it to math.sin, it still needs
> to be converted to a float before sin can operate on it.
> Unless you are proposing a series of dunder methods __sin__ __cos__ and
> __tan__ to allow arbitrary classes to be passed to sin, cos and tan, the
> following cannot work.
The idea was that the functions could handle the PiMultiple instances
in a special way and fall back to float only when a special value is
not detected. It would be like the proposed dsin functionality, but
with a magic class instead of a new set of functions, and without a
particular choice of granularity (360 degrees).
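A minimal sketch of that idea, with a made-up exact-value table and a sin() wrapper (the class name comes from the thread; everything else here is illustrative):

```python
import math
from fractions import Fraction

# Store an exact rational multiple of pi; a wrapper around sin() can
# then special-case known exact angles and fall back to float math.
class PiMultiple:
    def __init__(self, num, den=1):
        self.frac = Fraction(num, den)  # the angle is frac * pi

    def __float__(self):
        return float(self.frac) * math.pi

# Illustrative table of angles (as multiples of pi) with exact sines.
_EXACT_SIN = {Fraction(0): 0.0, Fraction(1, 6): 0.5, Fraction(1, 2): 1.0,
              Fraction(5, 6): 0.5, Fraction(1): 0.0, Fraction(3, 2): -1.0}

def sin(x):
    if isinstance(x, PiMultiple):
        reduced = x.frac % 2  # sin has period 2*pi
        if reduced in _EXACT_SIN:
            return _EXACT_SIN[reduced]
        return math.sin(float(x))
    return math.sin(x)

print(sin(PiMultiple(1)))     # 0.0 exactly, unlike math.sin(math.pi)
print(sin(PiMultiple(1, 6)))  # 0.5 exactly
```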
But maybe it isn't worth it. Also what about acos(0)? Should it return
PiMultiple(1, 2) and confuse people or just 1.5707963267948966 and