Having followed Yury Selivanov's proposal to add async/await to Python (PEP 492: Coroutines with async and await syntax, and PEP 525: Asynchronous Generators), and especially the discussion about PEP 530: Asynchronous Comprehensions, I would like to add some concerns about the direction Python is taking on this.
As Sven R. Kunze mentions, there is a risk of having to duplicate a lot of methods/functions to provide an async implementation. Just look at the mess in .NET when Microsoft introduced async/await into their library: a huge number of functions had to be implemented with an async version of each member. Definitely not the DRY principle.
While I think parallelism and concurrency are very important features in a language, I feel the direction Python is taking right now is getting too complicated, making it difficult to understand and implement correctly.
I thought it might be worth looking at using async at a higher level. Instead of making methods, generators, and lists async, why not make the object itself async? That would mean the method call (the message to the object) is async.
Example:
class SomeClass(object):
    def some_method(self):
        return 42

o = async SomeClass()  # Indicating that the user wants an async version of the object
r = o.some_method()    # Will implicitly be an async/await "wrapped" method, no matter the impl.
# Here other code could execute, until the result (r) is referenced
print r
I think the above code is easier to implement, use, and understand, while it handles some of the use cases covered by defining a lot of methods as async/await.
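As a rough illustration (this is not PYWORKS itself, and the names AsyncProxy/call are invented for this sketch), the idea can be approximated today with a proxy that runs each method call in a thread pool and returns a future:

```python
import concurrent.futures


class AsyncProxy:
    """Hypothetical sketch: every method call on the wrapped object runs
    in a thread pool and immediately returns a Future."""

    def __init__(self, obj):
        self._obj = obj
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def __getattr__(self, name):
        attr = getattr(self._obj, name)
        if not callable(attr):
            return attr

        def call(*args, **kwargs):
            # submit the real method; caller gets a Future right away
            return self._pool.submit(attr, *args, **kwargs)

        return call


class SomeClass:
    def some_method(self):
        return 42


o = AsyncProxy(SomeClass())  # stands in for the proposed "async SomeClass()"
r = o.some_method()          # returns immediately with a Future
# other code could run here...
print(r.result())            # blocks until the answer is ready -> 42
```

Here the explicit r.result() plays the role of the implicit "referencing r" in the proposal; making that resolution fully transparent would need more machinery than this sketch shows.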
I have made a small implementation called PYWORKS (https://github.com/pylots/pyworks), somewhat based on the idea above. PYWORKS has been used in several real-world implementations and seems to be fairly easy for developers to understand and use.
br
/Rene
PS. This is my first post to python-ideas, please be gentle :-)
I once again had a use for heaps, and after rewrapping the heapq.heap*
methods for the umpteenth time, figured I'd try my hand at freezing
off that wart.
Some research turned up an older thread by Facundo Batista
(https://mail.python.org/pipermail/python-ideas/2009-April/004173.html),
but it seems like interest petered out. I shoved an initial pass at a
spec, implementation, and tests (robbed from
<cpython>/Lib/test/test_heapq.py mostly) into a repo at
https://github.com/nicktimko/heapo. My spec is basically:
1. Provide all existing heapq.heap* functions of the heapq module as
methods with identical semantics
2. Provide limited magic methods on the underlying heap structure:
   a. __len__ to see how big it is, also for boolean'ing
   b. __iter__ to allow reading out to something else (doesn't consume elements)
3. Add a peek method to show, but not consume, the lowest heap value
4. Allow a custom comparison/key operation (to be implemented/copy-pasted)
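For concreteness, a minimal sketch of points 1-3 wrapping the stdlib heapq functions might look like this (class and method names are my assumption, not a final API):

```python
import heapq


class Heap:
    """Minimal sketch of an object wrapper around the heapq functions."""

    def __init__(self, iterable=()):
        self._items = list(iterable)  # shallow copy; see the open questions
        heapq.heapify(self._items)

    def push(self, item):
        heapq.heappush(self._items, item)

    def pop(self):
        return heapq.heappop(self._items)

    def pushpop(self, item):
        return heapq.heappushpop(self._items, item)

    def replace(self, item):
        return heapq.heapreplace(self._items, item)

    def peek(self):
        # show, but do not consume, the lowest value
        return self._items[0]

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        # non-consuming read-out; note this is heap order, not sorted order
        return iter(self._items)


h = Heap([5, 1, 3])
h.push(0)
print(h.peek())  # -> 0
print(h.pop())   # -> 0
print(len(h))    # -> 3
```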
Open Questions
* Should __init__ shallow-copy the list or leave that up to the
caller? Less memory if the heap object just co-opts it, but user might
accidentally reuse the reference and ruin the heap. If we make our own
list then it's easier to just suck in any arbitrary iterable.
* How much should the underlying list be exposed? Is there a use case
for __setitem__, __delitem__?
* Should there be a method to alter the priority of elements while
preserving the heap invariant? Daniel Stutzbach mentioned dynamically
increasing/decreasing priority of some list elements...but I'm
inclined to let that be a later addition.
* Add some iterable method to consume the heap in an ordered fashion?
Cheers,
Nick
Hello all,
I want to share my thoughts about syntax improvements regarding
character representation in Python.
I am new to the list so if such a discussion or a PEP exists already,
please let me know.
So in short:
Currently Python uses hexadecimal notation
for characters for input and output.
For example, let's take a Unicode string "абв.txt"
(a file named with the first three Cyrillic letters).
Now printing it we get:
u'\u0430\u0431\u0432.txt'
So one sees that we have hex numbers here.
The same goes for typing in strings, which obviously also uses hex.
The same goes for some parts of the Python documentation,
especially those about Unicode strings.
PROPOSAL:
1. Remove all hex notation from printing functions, typing, and
documentation.
Leave hex as an "option" for printing functions,
for example for those who feel the need for hex representation
(which is strange IMO).
2. Replace it with decimal notation, in this case e.g.:
u'\u0430\u0431\u0432.txt' becomes
u'\u1072\u1073\u1074.txt'
and similarly for other cases where raw bytes must be printed/input.
So to summarize: make the decimal notation standard for all cases.
I am not going to go deeper, such as what number of digits (leading
zeros) to use, since that's quite a secondary decision.
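For reference, the escape \u0430 is hex for decimal code point 1072, which is how the proposed u'\u1072...' spelling lines up with the current one; today the decimal values are only reachable through ord() and chr():

```python
s = u'\u0430\u0431\u0432.txt'             # hex escapes for the Cyrillic letters
print([ord(c) for c in s[:3]])            # decimal code points -> [1072, 1073, 1074]
print(chr(1072) + chr(1073) + chr(1074))  # build the same letters from decimal
```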
MOTIVATION:
1. Hex notation is hardly readable. It was not designed with readability
in mind, so it is not an appropriate system for reading, at least with
the current character set, which is a mix of digits and letters (curious
who was the wise person who invented such a set?).
2. Mixing two notations (hex and decimal) is a _very_ bad idea;
I hope there is no need to explain why.
So that's it, in short.
Feel free to discuss and comment.
Regards,
Mikhail
On 16 October 2016 at 04:10, Steve Dower <steve.dower(a)python.org> wrote:
>> I posted output with Python2 and Windows 7
>> BTW , In Windows 10 'print' won't work in cmd console at all by default
>> with unicode but thats another story, let us not go into that.
>> I think you get my idea right, it is not only about printing.
> FWIW, Python 3.6 should print this in the console just fine. Feel free to
> upgrade whenever you're ready.
>
> Cheers,
> Steve
Thanks, that is good, sure I'll do that since I need that
right now (a lot of work with Cyrillic data).
Mikhail
I have an idea to improve the indenting guidelines for dictionaries for better
readability: if a value in a dictionary literal is placed on a new line, it
should have (or at least be allowed to have) an additional hanging indent.
Below is an example:
mydict = {'mykey':
              'a very very very very very long value',
          'secondkey': 'a short value',
          'thirdkey': 'a very very very '
                      'long value that continues on the next line',
          }
As opposed to this IMHO much less readable version:
mydict = {'mykey':
          'a very very very very very long value',
          'secondkey': 'a short value',
          'thirdkey': 'a very very very '
          'long value that continues on the next line',
          }
As you can see it is much harder in the second version to distinguish
between keys and values.
On Wed, Oct 12, 2016 at 5:41 PM, Nick Coghlan <ncoghlan(a)gmail.com> wrote:
> However, set builder notation doesn't inherently include the notion of
> flattening lists-of-lists. Instead, that's a *consumption* operation
> that happens externally after the initial list-of-lists has been
> built, and that's exactly how it's currently spelled in Python:
> "itertools.chain.from_iterable(subiter for subiter in iterable)".
On Wed, Oct 12, 2016 at 5:42 PM, Steven D'Aprano <steve(a)pearwood.info> wrote:
> The fundamental design principle of list comps is that they are
> equivalent to a for-loop with a single append per loop:
>
> [expr for t in iterable]
>
> is equivalent to:
>
> result = []
> for t in iterable:
> result.append(expr)
>
>
> If I had seen a list comprehension with an unpacked loop variable:
>
> [t for t in [(1, 'a'), (2, 'b'), (3, 'c')]]
>
>
As it happens, Python does have an external consumption operation with an iteration implied:
for t in iterable:
    yield t
For your example [t for t in [(1, 'a'), (2, 'b'), (3, 'c')]] that would mean:
for t in [(1, 'a'), (2, 'b'), (3, 'c')]:
    yield t
And accordingly, for the latter case [*t for t in [(1, 'a'), (2, 'b'),
(3, 'c')]] it would be:
for item in [(1, 'a'), (2, 'b'), (3, 'c')]:
    for t in item:
        yield t
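Wrapped in a generator function, that hand-expanded form runs today (flatten_gen is just a name picked for this sketch):

```python
def flatten_gen(iterable_of_iters):
    # the hand-expanded form of the proposed [*t for t in ...]
    for item in iterable_of_iters:
        for t in item:
            yield t


print(list(flatten_gen([(1, 'a'), (2, 'b'), (3, 'c')])))
# -> [1, 'a', 2, 'b', 3, 'c']
```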
cheers!
mar77i
On Oct 15, 2016 6:42 PM, "Steven D'Aprano" <steve(a)pearwood.info> wrote:
> doesn't make sense, it is invalid. Call it something else: the new
> "flatten" operator:
>
> [^t for t in iterable]
>
> for example, which magically adds a second invisible for-loop to your
> list comps:
This thread is a lot of work to try to save 8 characters in the spelling of
`flatten(it)`. Let's just use the obvious and intuitive spelling.
We really don't need to be Perl. Folks who want to write Perl have a
perfectly good interpreter available already.
The recipes in itertools give a nice implementation:
from itertools import chain

def flatten(listOfLists):
    "Flatten one level of nesting"
    return chain.from_iterable(listOfLists)
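Quoted self-contained, with the import the recipe needs, it does exactly what the proposed syntax would:

```python
from itertools import chain


def flatten(listOfLists):
    "Flatten one level of nesting"
    return chain.from_iterable(listOfLists)


print(list(flatten([(1, 'a'), (2, 'b'), (3, 'c')])))
# -> [1, 'a', 2, 'b', 3, 'c']
```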
Hello,
In the Code lay-out\Indentation section of PEP8 it is stated that
"
The closing brace/bracket/parenthesis on multi-line constructs may either
line up under the first non-whitespace character of the last line of list,
as in:
my_list = [
    1, 2, 3,
    4, 5, 6,
    ]
result = some_function_that_takes_arguments(
    'a', 'b', 'c',
    'd', 'e', 'f',
    )
or it may be lined up under the first character of the line that starts the
multi-line construct, as in:
my_list = [
    1, 2, 3,
    4, 5, 6,
]
result = some_function_that_takes_arguments(
    'a', 'b', 'c',
    'd', 'e', 'f',
)
"
however, right before that location, there are several examples that do not
comply, like these:
"
# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)

# More indentation included to distinguish this from the rest.
def long_function_name(
        var_one, var_two, var_three,
        var_four):
    print(var_one)

# Hanging indents should add a level.
foo = long_function_name(
    var_one, var_two,
    var_three, var_four)
"
That should be corrected but it isn't the main point of this topic.
Assuming that a multi-line function definition is considered a multi-line
construct, I would like to propose an exception to the closing
brace/bracket/parenthesis indentation rule for multi-line constructs in
PEP8.
In my view, all multi-line function definitions should only allow the
"usual" and "acceptable" options shown below, due to better readability.
I present 3 examples (usual, acceptable, horrible) where only the last 2
comply with the current rule:
def do_something(parameter_one, parameter_two, parameter_three,
                 parameter_four, parameter_five, parameter_six,
                 parameter_seven, last_parameter):
    """Do something."""
    pass
def do_something(parameter_one, parameter_two, parameter_three,
                 parameter_four, parameter_five, parameter_six,
                 parameter_seven, last_parameter
                 ):
    """Do something."""
    pass
def do_something(parameter_one, parameter_two, parameter_three,
                 parameter_four, parameter_five, parameter_six,
                 parameter_seven, last_parameter
):
    """Do something."""
    pass
The same 3 examples in the new 3.5 typing style:
def do_something(parameter_one: List[str], parameter_two: List[str],
                 parameter_three: List[str], parameter_four: List[str],
                 parameter_five: List[str], parameter_six: List[str],
                 parameter_seven: List[str],
                 last_parameter: List[str]) -> bool:
    """Do something."""
    pass
def do_something(parameter_one: List[str], parameter_two: List[str],
                 parameter_three: List[str], parameter_four: List[str],
                 parameter_five: List[str], parameter_six: List[str],
                 parameter_seven: List[str], last_parameter: List[str]
                 ) -> bool:
    """Do something."""
    pass
def do_something(parameter_one: List[str], parameter_two: List[str],
                 parameter_three: List[str], parameter_four: List[str],
                 parameter_five: List[str], parameter_six: List[str],
                 parameter_seven: List[str], last_parameter: List[str]
) -> bool:
    """Do something."""
    pass
Best regards,
JM
On Sat, Oct 15, 2016 at 10:09 AM, Steven D'Aprano <steve(a)pearwood.info> wrote:
> Not everything is a function. What's your point?
>
> As far as I can see, in *every* other use of sequence unpacking, *t is
> conceptually replaced by a comma-separated sequence of items from t. If
> the starred item is on the left-hand side of the = sign, we might call
> it "sequence packing" rather than unpacking, and it operates to collect
> unused items, just like *args does in function parameter lists.
>
You brush over the fact that *t is not just replaced by a
comma-separated sequence of items from t; it is replaced by that
sequence INTO an external context. For func(*t) to work, all the
elements of t are kind of "leaked externally" into the function
argument list's context, and for {**{'a': 1, 'b': 2, ...}} the inner
dictionary's items are kind of "leaked externally" into the outer
dict's context.
You can think of the */** operators as a promotion from append to
extend, but another way to see this is as a promotion from yield to
yield from. So if, instead of appending items to a comprehension as is
done with [yield_me for yield_me in iterator], you want yield-from
behaviour, you can see this new piece as [*yield_from_me for
yield_from_me in iterator]. FWIW, I think it's a bit confusing that
yield needs a different keyword if these asterisk operators already
have this outspoken promotion effect.
Besides, [*thing for thing in iterable_of_iters if cond] has this cool
potential with the existing any() and all() builtins as cond, where a
decision can be made based on the composition of the iterable thing
itself.
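The append-to-extend / yield-to-yield-from parallel can be written out as two small generators (the names appended/extended are invented for this sketch):

```python
def appended(iterable):
    # [t for t in iterable]: one plain yield (append) per loop
    for t in iterable:
        yield t


def extended(iterable_of_iters):
    # the proposed [*t for t in ...]: yield from (extend) per loop
    for t in iterable_of_iters:
        yield from t


pairs = [(1, 'a'), (2, 'b')]
print(list(appended(pairs)))  # -> [(1, 'a'), (2, 'b')]
print(list(extended(pairs)))  # -> [1, 'a', 2, 'b']
```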
cheers!
mar77i