# First simple idea:
TL;DR: Let's show an inspect.Signature instance in the help message instead
of just the wrapper's actual "vague" signature [usually (*args, **kwargs)].
Suppose we have a decorator that returns a wrapper:
    def wrapper(*args, **kwargs):
        ... # some code
And suppose this decorator doesn't change the wrapped function's signature
(e.g. it just builds some context or checks some inputs). The code might have
something like this (before returning the wrapper):
    wrapper.__signature__ = inspect.signature(func)
It would help a lot if the list of valid arguments were kept in the
help message of the wrapped function.
If the wrapper function changes the signature, one could just create
another inspect.Signature object to properly document its behavior. Setting
the __signature__ would suffice for that. Actually, the same applies to
other functions, not just wrappers. This might not be possible beforehand
as the parameter names/kinds/defaults might be dynamic (e.g. the distinct
signatures of the wrapped functions).
# Second idea:
The partial objects might include automatic propagation of the
__signature__, if one is already set. That can be lazy (the partial
object's __signature__ is created only when requested).
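Worth noting: inspect.signature already computes a reduced signature for partial objects on demand; the idea here would go further by materializing it as a __signature__ attribute on the partial itself. A quick check of today's behavior:

```python
import functools
import inspect

def send(host, port, message, timeout=5.0):
    ...

# inspect.signature already follows partial objects lazily: the
# bound arguments are removed from the reported signature.
localhost_send = functools.partial(send, "127.0.0.1", 8080)
print(inspect.signature(localhost_send))  # (message, timeout=5.0)
```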
# Third idea:
Setting the function signature itself from an inspect.Signature object.
That might be a new functools.wraps keyword argument, like:
    def wrapper_func(*args, **kwargs):
        ... # some code
Or perhaps a new syntax, like:
    def wrapper_func(*args, **kwargs) with wrapped_func.__signature__:
        ... # some code
Where the signature object is an inspect.Signature instance, or perhaps a
callable to have its signature copied from.
Externally, this would behave like a function with the given (more
restrictive) signature (e.g. raising TypeError if a call doesn't match the
given wrapped_func.__signature__ in the example). Internally, the function
would just use the explicit args and kwargs (or whatever the explicit
signature happens to be).
As an example, suppose the wrapped_func has the (a, b, c=3, *, d=4)
signature; the wrapper_func of the proposed syntax would behave like this:

    def wrapper_func(a, b, c=3, *, d=4):
        def internal_wrapper_func(*args, **kwargs):
            ... # some code
        return internal_wrapper_func(a, b, c, d=d)
The most obvious use cases are the creation of simple function signature
validators (to avoid losing a parameter left untouched in the kwargs dict,
to "fail fast" and fix the argument name in code) and the propagation of
help message documentation (e.g. to avoid showing just a "**kwargs" when
the valid keyword arguments can be grabbed from the function that uses
them).
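For comparison, part of the third idea's goal is reachable today because functools.wraps sets __wrapped__, which inspect.signature follows (though the wrapper itself still accepts anything at call time):

```python
import functools
import inspect

def logged(func):
    @functools.wraps(func)  # copies metadata and sets __wrapped__
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def area(width, height=1):
    return width * height

# inspect.signature follows __wrapped__, so the original parameters
# are reported even without the proposed syntax:
print(inspect.signature(area))  # (width, height=1)
```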
Danilo J. S. Bellini
"*It is not our business to set up prohibitions, but to arrive at
conventions.*" (R. Carnap)
An ideal improvement to Python would be the introduction of fixed-point decimals. Maybe make them adjustable with os.system(): to adjust, you would call os.system with the parameters 'fixed', 3, 15 to dictate fixed-point math, three integer places, and 15 decimal places. Might be good for Python 4 if this doesn't gain enough traction before the 3.9 release. This would make them usable with Python's generic math operators.
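For comparison (not part of the proposal), the standard decimal module already supports fixed-point rounding via quantize(), though it is configured through a context object rather than a global switch:

```python
from decimal import Decimal, getcontext

# Today, precision is configurable per context rather than globally;
# quantize() pins a value to a fixed number of decimal places.
getcontext().prec = 28
places = Decimal("0.001")  # three decimal places

total = (Decimal("1.2345") + Decimal("2.0006")).quantize(places)
print(total)  # 3.235
```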
I hope that this hasn't been discussed to death in the past, but I quite often encounter cases where developers have unwittingly created a file or directory whose name clashes with either a system import or a pip-installed library import. This can be very confusing to new users! And, of course, as the number of core libraries and the installable ecosystem grow, this potential problem is growing all of the time.
I know that we can use absolute imports to force importing local packages, but I personally would like to have a notation that allows me to specify:
1. Import this name from the python system libraries only
2. Import this name from installed libraries only
3. Import this name from the current project, i.e. the local directory or those above it
4. Import this name from the local package only, i.e. only the current directory. I know that this is possible with the .name notation, but a lot of people seem to struggle with it
5. Import this name from wherever you can find it
In C/C++ projects that I have worked on it has been possible to do something similar, (providing the correct include flags have been set in the build environment), by distinguishing between `#include <name>` and `#include "name"` where the first will only consider system libraries and the latter only local.
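As a stop-gap diagnostic available today, importlib.util.find_spec reports where a name would be imported from, which makes an accidental shadowing file visible:

```python
import importlib.util

# Where would `os` resolve from right now?  The spec's origin shows
# which file on sys.path wins, so a local shadowing file stands out.
spec = importlib.util.find_spec("os")
print(spec.origin)

# A name that resolves nowhere returns None instead of raising:
print(importlib.util.find_spec("no_such_module_hopefully"))  # None
```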
I am not sure what the notation would look like, (and of course the user would still have to take steps to avoid namespace clashes), but possibly something like:
    from python import os, sys, glob  # Only system libraries as candidates
    from installed import fred
    from local import os as myos  # Personally I think that this would be clearer than from .package import ...
    import some_other_package  # The current behaviour
Hopefully nobody would have a file or directory called os but you never know.
Throwing this idea out there for discussion.
Steve (Gadget) Barnes
Currently, the `in` operator (also known as `__contains__`) always uses
the rightmost argument's implementation.

    status = obj in "xylophone"

is similar to:

    status = "xylophone".__contains__(obj)
The current implementation of `__contains__` is similar to the way that
`+` used to look only to the leftmost argument for an implementation:

    total = 4 + obj
    total = int.__add__(4, obj)

However, these days, `__radd__` gives us the following:

    try:
        total = type(4).__add__(4, obj)
    except NotImplementedError:
        total = type(obj).__radd__(obj, 4)
We propose something similar for `__contains__`: that a new dunder/magic
method `__lcontains__` be created and that the `in` operator be implemented
similarly to the following:
    # IMPLEMENTATION OF
    # status = obj in "xylophone"
    try:
        status = "xylophone".__contains__(obj)
    except NotImplementedError:
        status = False
    if not status:
        try:
            status = obj.__lcontains__("xylophone")
        except AttributeError:
            # type(obj) does not have an `__lcontains__` method
            with io.StringIO() as string_stream:
                print("unsupported operand type(s) for `in`:",
                      repr(type("xylophone").__name__), "and",
                      repr(type(obj).__name__),
                      file=string_stream)
                msg = string_stream.getvalue()
            raise TypeError(msg) from None
The proposed enhancement would be backwards compatible except in the event
that a user already wrote a class having an `__lcontains__` method.
With our running example of the string "xylophone", writers of
user-defined classes would be able to decide whether their objects are
elements of "xylophone" or not. Programmers would do this by writing an
`__lcontains__` method on their class.
As an example application, one might develop a tree in which each node
represents a string (the strings being unique within the tree). A property
of the tree might be that node `n` is a descendant of node `m` if and only
if `n` is a sub-string of `m`. For example, the string "yell" is a
descendant of "yellow". We might want the root node of the tree to be a
special object, `root`, such that every string is in `root` and that `root`
is in no string. That is, the code `root in "yellow"` should return
`False`. If `__lcontains__` were implemented, then we could implement the
node as follows:
    class RootNode(Node):
        def __contains__(container, element):
            return True
        def __lcontains__(element, container):
            return False
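Since no such hook exists today, the proposed lookup can be emulated with a plain helper function (the helper and class names here are hypothetical, for illustration only):

```python
# Hypothetical helper emulating the proposed dispatch for `in`:
# try the container's __contains__ first, then fall back to the
# element's __lcontains__ (the made-up name from the proposal).
def contains(element, container):
    try:
        return type(container).__contains__(container, element)
    except (TypeError, AttributeError):
        pass
    lcontains = getattr(type(element), "__lcontains__", None)
    if lcontains is not None:
        return lcontains(element, container)
    raise TypeError(
        f"unsupported operand type(s) for `in`: "
        f"{type(container).__name__!r} and {type(element).__name__!r}"
    )

class Root:
    def __contains__(self, element):     # everything is in root
        return True
    def __lcontains__(self, container):  # root is in nothing
        return False

root = Root()
print(contains("yellow", root))  # True: "yellow" in root
print(contains(root, "yellow"))  # False, via __lcontains__
```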
Many fields develop their own specific short-hand notation and vocabulary
for communicating the complex yet recurring ideas within that field easily.
In many cases, this is a modification of an existing language or, in some
cases, many different languages.
This idea is a bit half-baked, but what if we could define a meta-language
that allows people to describe how to alter Python into some
domain-specific language? Code written in the meta-language would be called
a "domain specification", and it would describe new reserved words and
syntax for that DSL. It might even release some reserved words from
their reserved status.
This might provide an interesting way to try new syntax proposals. I don't
know how something written in a DSL would interoperate with regular Python;
they may have to use separate interpreters and pass messages between the
two. I also don't know how you would link a script to a domain
specification. Maybe this idea is less than half-baked. 1/4th baked?
That way someone could write a version of Python where the assignment
operator is "→" or "->" or "(╯°□°)╯︵ ┻━┻".
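For a very rough flavor of the idea, one can already experiment with naive source rewriting (a toy, not a serious mechanism; a real domain specification would need to hook the tokenizer/parser):

```python
# Crude source-to-source transform: rewrite "→" to "=" before
# handing the text to exec().  String replacement only; this would
# break on "→" inside string literals, for instance.
def run_dsl(source):
    namespace = {}
    exec(source.replace("→", "="), namespace)
    return namespace

ns = run_dsl("answer → 6 * 7")
print(ns["answer"])  # 42
```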
Per Guido's suggestion, I am starting a new thread on this.
The itertools module documentation has a bunch of recipes that give various
ways to combine the existing functions for useful tasks. 
The issue is that these recipes are part of the documentation, and although
IANAL, as far as I can tell this means they fall under the Python-2.0
license. Normally using the Python-2.0 license is not a big deal
because it is non-copyleft. You have to include the license text and
explain what changes you made, which isn't a problem for any sizable use of
the code.
The problem occurs for such small code snippets. Again IANAL, but it seems
you have to add the license text to the project, identify which parts of
the code fall under that license, and document any changes you made to it.
This is a lot of work to use, in many cases, just one or two lines of
code.
I personally use one of the projects, like more-itertools, that implement
these recipes together under the Python-2.0 license and thus segregate the
license issues from the rest of my code base, but at least in my opinion
this mostly defeats the purpose of making code snippets like these
available.
As I am not a lawyer, I don't know the best approach to deal with this
issue (if it is even desirable to deal with it). And I know that there are
other modules with recipes and other sorts of documentation with useful
code, although the itertools ones are the ones I see mentioned the most.
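To give a sense of the snippet sizes involved, here is the pairwise recipe from the itertools documentation, the kind of few-line function one would rather just copy:

```python
import itertools

# One of the documentation recipes in question, reproduced for
# scale: pairwise() is only a few lines built on tee() and zip().
def pairwise(iterable):
    "s -> (s0, s1), (s1, s2), (s2, s3), ..."
    a, b = itertools.tee(iterable)
    next(b, None)
    return zip(a, b)

print(list(pairwise("abcd")))  # [('a', 'b'), ('b', 'c'), ('c', 'd')]
```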
In recent years, Python has become very popular due to the rise of data science and machine learning. This is mainly because Python is easy to learn and has a large number of third-party libraries, and it has thus accumulated a large number of users.
When Python is applied to scientific computing, there are two problems. One is that Python itself is not fast enough, and the other is that the matrix is not a basic data type. The first problem can be well solved by rewriting key code in C/C++, or by using numba. For the second one, people have invented NumPy, which has become the de facto matrix-computing standard in Python. Although it can do linear algebra, limited by the syntax of Python, using NumPy to initialize a matrix is never quite simple enough. We have to do it like this:

    import numpy as np
    a = np.array([1, 2, 3])
    b = np.array([[1, 2, 3], [4, 5, 6]])
While, you know, in Matlab and Julia (a new, ambitious, and interesting language) it is done this way:

    a = [1,2,3] or a = [1 2 3]
    b = [1,2,3;4,5,6] or b = [1 2 3;4 5 6]
Of course, Python, as a general-purpose language, is not limited to scientific computing; it is also used for crawlers, web development, and even GUI programs. Therefore, many developers do not need matrix operations, nor do they need numpy as a standard library.
Since numpy has become the cornerstone of Python scientific computing, there is no need to reinvent the wheel by designing a new matrix data type. I suggest adding some parsing rules to the List data type to facilitate the initialization of a matrix.
(1) Keeping the original syntax of List unchanged, for example:

    a = [1,2,3] # will be parsed to a normal list.
    b = [[1,2,3],[4,5,6]] # will be parsed to a normal list, too.

Simply put, all the original list syntax remains unchanged.
(2) Using a semicolon as a flag for a tighter integration between a list-like literal and NumPy. The Python interpreter will check whether the numpy library is installed; if not, it will stop running and remind the user to install it. The expected syntax:
    c = [1,2,3;] or c = [1 2 3;] or c = [1 2 3]

Notice the semicolon after the last number. If numpy is found, c will be parsed as a NumPy ndarray. All these forms are equivalent to c = np.array([1,2,3]). For a vector, the semicolon is the key for Python to parse it as a NumPy ndarray.

    d = [1,2,3;4,5,6] or d = [1,2,3;4,5,6;] or d = [1 2 3;4 5 6] or d = [1 2 3;4 5 6;]

Notice the semicolons. If numpy is found, d will be parsed as a NumPy ndarray. All these forms are equivalent to d = np.array([[1,2,3],[4,5,6]]).
You see, for defining a matrix or a vector, it would be nearly as simple as in Matlab or Julia!
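A toy sketch of the parsing rule the proposal implies, using a string in place of new literal syntax and nested lists in place of an ndarray (so numpy is not required to run it):

```python
# Hypothetical parser for the proposed "rows separated by
# semicolons, elements by commas or spaces" notation.
def parse_matrix(text):
    rows = [row for row in text.strip().split(";") if row.strip()]
    return [
        [float(x) for x in row.replace(",", " ").split()]
        for row in rows
    ]

print(parse_matrix("1 2 3; 4 5 6"))  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
print(parse_matrix("1,2,3;"))        # [[1.0, 2.0, 3.0]]
```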
What should one do when one wants to put a zero width space or other invisible character in code?
These are often not displayed in editors, which can lead to confusion. I see two solutions:
- include it but add a comment noting it
- use the chr function to get the character (and add a comment saying what it is)
Where should a guideline for this go (PEP 8?)? What should the guideline be (one of these or something else?)?
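The second option from the list above can look like this (the constant name is just a suggestion):

```python
# Name the invisible character explicitly instead of embedding it
# literally, so readers can see it is there.
ZWSP = chr(0x200B)  # ZERO WIDTH SPACE (U+200B)

label = f"word{ZWSP}break"
print(len(label))  # 10: the ZWSP is present even if no editor shows it
```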
This is my first time contributing in any way to Python, please excuse me if I've done it in the wrong place/format.
Files are frequently referenced when calling subprocesses.
Just recently, I wanted to execute a C program to efficiently process an image, and what I had in my program was a Path.
This idea would allow turning this (imports omitted):

    Popen(('/path/to/program', '-o', fspath(outputPath), fspath(inputPath)))

into this:

    Popen(('/path/to/program', '-o', outputPath, inputPath))

I believe the second one is more readable and easier to use than the first one.
This idea can also be expanded to allow a Path as the executable.
This change is supposed to affect all the convenience functions that end up calling Popen in their pipeline.
When str arguments are provided, no changes should happen.
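A sketch of the coercion Popen could apply internally before building the command (the helper name is hypothetical):

```python
import os
from pathlib import Path

# Coerce any path-like argument with os.fspath() and pass
# everything else (e.g. str flags) through unchanged.
def coerce_args(args):
    return tuple(
        os.fspath(a) if isinstance(a, os.PathLike) else a
        for a in args
    )

cmd = coerce_args((Path("prog"), "-o", Path("out.png"), "in.png"))
print(cmd)  # ('prog', '-o', 'out.png', 'in.png')
```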