Hi all,
I would very much appreciate your opinion on my proposal for improving the
*concurrent.futures* package.
Compared to other languages such as Scala and C#, Python's futures fall
significantly behind in functionality, especially in the ability to chain
computations and to compose different futures without blocking and waiting
for a result. New packages continue to emerge (*asyncio*) that provide
their own futures implementation, making composition even more difficult.
The proposed improvement implements a Scala-like Future as a monadic
construct. It allows performing multiple kinds of operations on a Future's
result without blocking, enabling reactive programming in Python. It
implements the common pattern of separating the *Future* and *Promise*
interfaces, making it very easy for third-party systems to use futures in
their APIs.
Please have a look at the PEP draft
(https://rawgithub.com/mikhtonyuk/rxpython/master/pep-0000.html) and at the
reference implementation (https://github.com/mikhtonyuk/rxpython, provided
as a separate library).
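To illustrate the kind of non-blocking composition meant here, a minimal
sketch built on the stdlib Future's add_done_callback (the name map_future
and this structure are my illustration, not the reference implementation's
actual API):

    from concurrent.futures import Future, ThreadPoolExecutor

    def map_future(future, fn):
        # Produce a new Future that completes with fn(result),
        # without ever blocking the calling thread.
        mapped = Future()
        def on_done(f):
            try:
                mapped.set_result(fn(f.result()))
            except Exception as exc:
                mapped.set_exception(exc)
        future.add_done_callback(on_done)
        return mapped

    with ThreadPoolExecutor() as pool:
        f = pool.submit(lambda: 21)
        g = map_future(f, lambda x: x * 2)  # chained, no blocking wait
        print(g.result())                   # 42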
I’m very interested in:
- How PEPable is this?
- What are your thoughts on backward compatibility (current implementation
does not sacrifice any design points for it, but better compatibility can
be achieved)?
- Thoughts on Future-based APIs in other packages?
Thanks,
Sergii
On Thu Dec 26 2013 at 8:57:42 PM, Gregory P. Smith <greg(a)krypto.org> wrote:
> Such idioms are common. Though I don't think we should encourage their
> use.
Is this a case of "we shouldn't encourage optional dependencies" or "we
shouldn't encourage this kind of idiom as a means of implementing optional
dependencies" - and if the latter, what alternative would you point people
at instead?
Hi,
I wonder why nobody has asked on bugs.python.org for the rather obvious
functionality of being able to reflow a paragraph onto one line.
Meaning that any paragraph would be stripped of all whitespace
(etc., whatever is configured by the additional parameters of
the TextWrapper class) and then joined into one long line. I know
that
' '.join(text.splitlines())
does something similar, but
a) it doesn't handle all whitespace munging,
b) it just seems like an obvious functionality for
TextWrapper to have.
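(For the record, a stdlib-only one-liner that handles the basic case by
collapsing every run of whitespace, including newlines, into a single
space; it still ignores everything TextWrapper is configurable for:)

    text = "Meaning, that any\n  paragraph   would be\n   joined."
    one_line = ' '.join(text.split())
    # -> 'Meaning, that any paragraph would be joined.'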
Any thoughts on it? Should I just file a bug?
Best,
Matěj
--
http://www.ceplovi.cz/matej/, Jabber: mcepl<at>ceplovi.cz
GPG Finger: 89EF 4BC6 288A BF43 1BAB 25C3 E09F EF25 D964 84AC
[...] a superior pilot uses his superior judgment to avoid having to exercise
his superior skill.
-- http://www.jwz.org/blog/2009/09/that-duct-tape-silliness/#comment-10653
One of the tools to reduce language complexity is "explicitness", or
a direct link from a concept to its help/tutorial/documentation. The
problem with most concepts in computer languages is that they don't have
distinct markers by which you can recognize one feature or another.
For example, you can't recognize that code is generator-based or uses
metaclass magic without searching for yield or for references to a
metaclass throughout the source file.
One of the ways to reduce language complexity for new people who read
your code is to prepare them for the advanced concepts that your code
uses beforehand. For example, with the following section:
using generators as yield

"using generators" -- the name of this language feature, and also a
                      help reference
"yield"            -- the distinct keyword and feature marker that you
                      enable
--
anatoly t.
First of all: thank you, Steven and everyone else involved, for taking on
the task of starting to implement this long-missed (at least by me)
feature!
I really hope the module will be a success and grow over time.
I have two thoughts at the moment about the implementation that I think may
be worth discussing, if it hasn't happened yet (I have to admit I did not
go through all previous posts on this topic, only read the PEP):
First: I am not entirely convinced by when the module raises errors. In
some places it's undoubtedly justified to raise StatisticsError (like when
an empty sequence is passed to mean()).
On the other hand, should there really be an error when, for example, no
unique value for the mode can be found?
Effectively, that would force users to guard every (!) call to the function
with try/except. In my opinion, a better choice would be to return
float('nan'), or even better a module-specific object (call it Undefined or
something) that one can check for. This behavior could, in general, be
implemented for cases where the input can actually be handled and a result
calculated (like a list of values in the mode example), but the result is
considered "undefined" by the algorithm.
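To make the idea concrete, a rough sketch (the name Undefined and this toy
mode are my own illustration, not the statistics module's code):

    class _Undefined:
        """Sentinel for results the algorithm cannot decide."""
        def __repr__(self):
            return 'Undefined'
        def __bool__(self):
            return False

    Undefined = _Undefined()

    def mode(data):
        counts = {}
        for x in data:
            counts[x] = counts.get(x, 0) + 1
        if not counts:
            return Undefined
        top = max(counts.values())
        modes = [x for x, c in counts.items() if c == top]
        # ambiguous (several modes) -> Undefined instead of raising
        return modes[0] if len(modes) == 1 else Undefined

    mode([1, 1, 2])      # -> 1
    mode([1, 1, 2, 2])   # -> Undefined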
Second: I am not entirely happy with the three different flavors of the
median function. I *do* know that this has been discussed before, but I'm
not sure whether *all* alternatives have been considered (the PEP only
talks about the median.low, median.high syntax, which, in fact, I wouldn't
like that much either). My suggestion would be to have a resolve parameter
by which the behavior of a single median function can be modified.
My main argument here is that, as the module grows in the future, there
will be many more such situations in which different ways of calculating a
statistic are all perfectly acceptable and you would want to leave the
choice to the user (the mode function can already be considered an
example: maybe the user would want the list of "modes" returned in case
no unambiguous value can be calculated; actually, the current code seems
to be prepared for a later implementation of this feature because it does
generate the list, it just doesn't return it). Now if, in all such
situations, the solution is to add extra functions, the module will soon
end up completely cluttered with them. If, on the other hand, every
function that will foreseeably have to handle ambiguous situations had a
resolve parameter, the module structure would be much clearer. In the
median example you would then call median(data) for the default behavior,
arguably the interpolation, but median(data, resolve='low') or
median(data, resolve='high') for the alternative calculations.
Statistically educated users could then guess relatively easily which
functions have the resolve parameter, and a quick look at the function's
help could tell them which arguments are accepted.
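As a rough sketch of what I have in mind (the resolve parameter, its
accepted values, and this simplified median are illustrative only, not
actual module code):

    def median(data, resolve='interpolate'):
        data = sorted(data)
        n = len(data)
        if n == 0:
            raise ValueError('no median for empty data')
        mid = n // 2
        if n % 2 == 1:
            return data[mid]
        if resolve == 'low':
            return data[mid - 1]
        if resolve == 'high':
            return data[mid]
        return (data[mid - 1] + data[mid]) / 2  # default: interpolate

    median([1, 3, 5, 7])                  # -> 4.0
    median([1, 3, 5, 7], resolve='low')   # -> 3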
Finally, let me just point out that these are really just first thoughts,
and I do understand that these are design decisions about which different
people will have different opinions. But I think now is still a good time
to discuss them, while with an established and (hopefully :)) much larger
module it won't be possible to change things that easily anymore.
Hoping for a lively discussion,
Wolfgang
It seems to me that POSITIONAL_OR_KEYWORD is the most often used kind of
Parameter (after all, this is the "default" kind of Parameter), so perhaps
the constructor for Parameter could be changed from
def __init__(self, name, kind, *, default=_empty, annotation=_empty,
             _partial_kwarg=False):

to

def __init__(self, name, kind=Parameter.POSITIONAL_OR_KEYWORD, *,
             default=_empty, annotation=_empty, _partial_kwarg=False):
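For illustration, what the change would buy at call sites:

    from inspect import Parameter

    p = Parameter('x', Parameter.POSITIONAL_OR_KEYWORD)  # required today
    # with the proposed default, this would shorten to:
    # p = Parameter('x')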
Any thoughts on that?
Best,
Antony
Hello,
usually logger instances are retrieved and initialized with the module
name, using the well-known pattern:
logger = logging.getLogger(__name__)
In Java's very popular log4j library (which is cited as an influence by PEP
282), loggers are usually retrieved and initialized in basically an
identical way:
private final static Logger LOG = Logger.getLogger(<name-of-class>.class);
However, in the upcoming log4j 2 library, a new way is available:
private final static Logger LOG = LogManager.getLogger(); // Returns a
Logger with the name of the calling class. [1]
Basically, the method throws an exception, catches it, and fishes the
class name out of the stack trace. This is a little less explicit, but
still a more convenient (and less annoying) way of accomplishing an
extremely common pattern.
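In Python, the same convenience could presumably be had without the
exception trick, by inspecting the caller's frame (get_logger here is a
hypothetical helper, not an existing logging API):

    import inspect
    import logging

    def get_logger():
        # take the module name from the caller's frame
        caller = inspect.currentframe().f_back
        name = caller.f_globals.get('__name__', 'root')
        return logging.getLogger(name)

    log = get_logger()  # equivalent to logging.getLogger(__name__)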
I was wondering what the core devs make of it:
- is it a good idea? (In general, and in Python) Is it worth it?
- is it feasible in Python? (taking into account other implementations
too)
- are there any gotchas that would make it worse than the current
standard?
[1]:
https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/log4j-api/src/m…
After seeing yet another person asking how to do this on #python (and
having needed to do it in the past myself), I'm wondering why itertools
doesn't have a function to break an iterator up into N-sized chunks.
Existing possible solutions include both the "clever" but somewhat
unreadable...
batched_iter = zip(*[iter(input_iter)]*n)
...and the long-form...
from itertools import islice

def batch(input_iter, n):
    input_iter = iter(input_iter)
    while True:
        chunk = list(islice(input_iter, n))  # up to n items; final batch may be shorter
        if not chunk:  # iterator exhausted
            return
        yield chunk
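For example:

    list(batch(range(7), 3))  # -> [[0, 1, 2], [3, 4, 5], [6]]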
There doesn't seem, however, to be one clear "right" way to do this. Every
time I come up against this task, I go back to itertools expecting one of
the grouping functions there to cover it, but they don't.
It seems like it would be a natural fit for itertools, and it would
simplify things like processing of file formats that use a consistent
number of lines per entry, et cetera.
~Amber