There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract} {'normal'|static|class} {method|property|non-callable attribute}.
concreteness | implicit first arg | type                   | name                                                | comments
{unadorned}  | {unadorned}        | method                 | def foo():                                          | exists now
{unadorned}  | {unadorned}        | property               | @property                                           | exists now
{unadorned}  | {unadorned}        | non-callable attribute | x = 2                                               | exists now
{unadorned}  | static             | method                 | @staticmethod                                       | exists now
{unadorned}  | static             | property               | @staticproperty                                     | proposing
{unadorned}  | static             | non-callable attribute | {degenerate case - variables don't have arguments}  | unnecessary
{unadorned}  | class              | method                 | @classmethod                                        | exists now
{unadorned}  | class              | property               | @classproperty or @classmethod;@property            | proposing
{unadorned}  | class              | non-callable attribute | {degenerate case - variables don't have arguments}  | unnecessary
abc.abstract | {unadorned}        | method                 | @abc.abstractmethod                                 | exists now
abc.abstract | {unadorned}        | property               | @abc.abstractproperty                               | exists now
abc.abstract | {unadorned}        | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute  | proposing
abc.abstract | static             | method                 | @abc.abstractstaticmethod                           | exists now
abc.abstract | static             | property               | @abc.abstractstaticproperty                         | proposing
abc.abstract | static             | non-callable attribute | {degenerate case - variables don't have arguments}  | unnecessary
abc.abstract | class              | method                 | @abc.abstractclassmethod                            | exists now
abc.abstract | class              | property               | @abc.abstractclassproperty                          | proposing
abc.abstract | class              | non-callable attribute | {degenerate case - variables don't have arguments}  | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses.
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty.
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty.
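For illustration, a minimal sketch of what a @classproperty descriptor could
look like (the class and names below are assumptions for this example, not
part of the proposal's wording):

import functools

class classproperty:
    """Sketch of a read-only property whose implicit first argument is the class."""
    def __init__(self, fget):
        self.fget = fget
        functools.update_wrapper(self, fget)

    def __get__(self, obj, cls=None):
        if cls is None:
            cls = type(obj)
        return self.fget(cls)

class Foo:
    @classproperty
    def bar(cls):
        return cls.__name__.lower()

print(Foo.bar)    # 'foo' -- no throw-away instance needed
print(Foo().bar)  # also 'foo'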
--rich
At the moment, the array module of the standard library allows to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items), especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow one
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array construction in such a way that you
could pass an iterable (as now) as the second argument, but if you pass a
single integer value, it would be treated as the number of items to
allocate.
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    with n entries, initialized to the given value (default 0).
    """
    a = array.array(typecode, [value] * bsize)
    x = array.array(typecode)
    r = n
    while r >= bsize:
        x.extend(a)
        r -= bsize
    x.extend([value] * r)
    return x
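For what it's worth, here is a hedged sketch of a faster workaround for the
common value=0 case: build the array directly from a zero-initialized buffer
instead of extending it in chunks (zero_filled_array is a hypothetical helper
name, not an existing API):

import array

def zero_filled_array(typecode, n):
    # Hypothetical helper: one zeroed buffer of n * itemsize bytes is
    # reinterpreted as n machine values of the requested typecode.
    itemsize = array.array(typecode).itemsize
    return array.array(typecode, bytes(n * itemsize))

big = zero_filled_array('l', 10**6)   # one million zeroed long ints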
I just spent a few minutes staring at a bug caused by a missing comma
-- I got a mysterious argument count error because instead of foo('a',
'b') I had written foo('a' 'b').
This is a fairly common mistake, and IIRC at Google we even had a lint
rule against this (there was also a Python dialect used for some
specific purpose where this was explicitly forbidden).
Now, with modern compiler technology, we can (and in fact do) evaluate
compile-time string literal concatenation with the '+' operator, so
there's really no reason to support 'a' 'b' any more. (The reason was
always rather flimsy; I copied it from C but the reason why it's
needed there doesn't really apply to Python, as it is mostly useful
inside macros.)
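(For reference, a quick sketch showing CPython's compile-time folding of
'a' + 'b' -- the bytecode just loads the single constant 'ab', the same cost
as the implicit 'a' 'b' concatenation:)

import dis

# The '+' between two string literals is evaluated at compile time, so the
# disassembly shows one LOAD_CONST of 'ab' rather than a runtime addition.
dis.dis(compile("x = 'a' + 'b'", '<demo>', 'exec'))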
Would it be reasonable to start deprecating this and eventually remove
it from the language?
--
--Guido van Rossum (python.org/~guido)
tl;dr Let's exploit multiple cores by fixing up subinterpreters,
exposing them in Python, and adding a mechanism to safely share
objects between them.
This proposal is meant to be a shot over the bow, so to speak. I plan
on putting together a more complete PEP some time in the future, with
content that is more refined along with references to the appropriate
online resources.
Feedback appreciated! Offers to help even more so! :)
-eric
--------
Python's multi-core story is murky at best. Not only can we be more
clear on the matter, we can improve Python's support. The result of
any effort must make multi-core (i.e. parallelism) support in Python
obvious, unmistakable, and undeniable (and keep it Pythonic).
Currently we have several concurrency models represented via
threading, multiprocessing, asyncio, concurrent.futures (plus others
in the cheeseshop). However, in CPython the GIL means that we don't
have parallelism, except through multiprocessing which requires
trade-offs. (See Dave Beazley's talk at PyCon US 2015.)
This is a situation I'd like us to solve once and for all for a couple
of reasons. Firstly, it is a technical roadblock for some Python
developers, though I don't see that as a huge factor. Regardless,
secondly, it is especially a turnoff to folks looking into Python and
ultimately a PR issue. The solution boils down to natively supporting
multiple cores in Python code.
This is not a new topic. For a long time many have clamored for death
to the GIL. Several attempts have been made over the years and failed
to do it without sacrificing single-threaded performance.
Furthermore, removing the GIL is perhaps an obvious solution, but it is not
the only one. Others include Trent Nelson's PyParallel, STM, and
other Python implementations.
Proposal
=======
In some personal correspondence, Nick Coghlan summarized my
preferred approach as "the data storage separation of multiprocessing,
with the low message passing overhead of threading".
For Python 3.6:
* expose subinterpreters to Python in a new stdlib module: "subinterpreters"
* add a new SubinterpreterExecutor to concurrent.futures
* add a queue.Queue-like type that will be used to explicitly share
objects between subinterpreters
This is less simple than it might sound, but presents what I consider
the best option for getting a meaningful improvement into Python 3.6.
Also, I'm not convinced that the word "subinterpreter" properly
conveys the intent, for which subinterpreters is only part of the
picture. So I'm open to a better name.
Influences
========
Note that I'm drawing quite a bit of inspiration from elsewhere. The
idea of using subinterpreters to get this (more) efficient isolated
execution is not my own (I heard it from Nick). I have also spent
quite a bit of time and effort researching for this proposal. As part
of that, a number of people have provided invaluable insight and
encouragement as I've prepared, including Guido, Nick, Brett Cannon,
Barry Warsaw, and Larry Hastings.
Additionally, Hoare's "Communicating Sequential Processes" (CSP) has
been a big influence on this proposal. FYI, CSP is also the
inspiration for Go's concurrency model (e.g. goroutines, channels,
select). Dr. Sarah Mount, who has expertise in this area, has been
kind enough to agree to collaborate and even co-author the PEP that I
hope comes out of this proposal.
My interest in this improvement has been building for several years.
Recent events, including this year's language summit, have driven me
to push for something concrete in Python 3.6.
The subinterpreters Module
=====================
The subinterpreters module would look something like this (a la
threading/multiprocessing):
settrace()
setprofile()
stack_size()
active_count()
enumerate()
get_ident()
current_subinterpreter()

Subinterpreter(...)
    id
    is_alive()
    running() -> Task or None
    run(...) -> Task  # wrapper around PyRun_*, auto-calls Task.start()
    destroy()

Task(...)  # analogous to a CSP process
    id
    exception()
    # other stuff?

    # for compatibility with threading.Thread:
    name
    ident
    is_alive()
    start()
    run()
    join()

Channel(...)  # shared by passing as an arg to the subinterpreter-running func
              # this API is a bit uncooked still...
    pop()
    push()
    poison()  # maybe
    select()  # maybe
Note that Channel objects will necessarily be shared between
subinterpreters (where bound). This sharing will happen when one
or more of the parameters to the function passed to Task() is a
Channel. Thus the channel would be open to the (sub)interpreter
calling Task() (or Subinterpreter.run()) and to the new
subinterpreter. Also, other channels could be fed into such a shared
channel, whereby those channels would then likewise be shared between
the interpreters.
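To make that sharing rule concrete, here is a purely hypothetical usage
sketch. None of these names exist yet; the module, class, and method
signatures simply mirror the outline above and are illustrative only:

import subinterpreters  # proposed stdlib module -- does not exist yet

def worker(chan):
    # Runs inside the new subinterpreter; chan is shared because it was
    # passed as a parameter of the function given to run().
    total = 0
    for _ in range(3):
        total += chan.pop()
    chan.push(total)

interp = subinterpreters.Subinterpreter()
chan = subinterpreters.Channel()
task = interp.run(worker, chan)   # wraps PyRun_*, auto-starts the Task

for n in (1, 2, 3):
    chan.push(n)

task.join()
print(chan.pop())   # -> 6
interp.destroy()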
I don't know yet if this module should include *all* the essential
pieces to implement a complete CSP library. Given the inspiration
that CSP is providing, it may make sense to support it fully. It
would be interesting then if the implementation here allowed the
(complete?) formalisms provided by CSP (thus, e.g. rigorous proofs of
concurrent system models).
I expect there will also be a _subinterpreters module with low-level
implementation-specific details.
Related Ideas and Details Under Consideration
====================================
Some of these are details that need to be sorted out. Some are
secondary ideas that may be appropriate to address in this proposal or
may need to be tabled. I have some others but these should be
sufficient to demonstrate the range of points to consider.
* further coalesce the (concurrency/parallelism) abstractions between
threading, multiprocessing, asyncio, and this proposal
* only allow one running Task at a time per subinterpreter
* disallow threading within subinterpreters (with legacy support in C)
+ ignore/remove the GIL within subinterpreters (since they would be
single-threaded)
* use the GIL only in the main interpreter and for interaction between
subinterpreters (and a "Local Interpreter Lock" for within a
subinterpreter)
* disallow forking within subinterpreters
* only allow passing plain functions to Task() and
Subinterpreter.run() (exclude closures, other callables)
* object ownership model
+ read-only in all but 1 subinterpreter
+ RW in all subinterpreters
+ only allow 1 subinterpreter to have any refcounts to an object
(except for channels)
* only allow immutable objects to be shared between subinterpreters
* for better immutability, move object ref counts into a separate table
* freeze (new machinery or memcopy or something) objects to make them
(at least temporarily) immutable
* expose a more complete CSP implementation in the stdlib (or make the
subinterpreters module more compliant)
* treat the main interpreter differently than subinterpreters (or
treat it exactly the same)
* add subinterpreter support to asyncio (the interplay between them
could be interesting)
Key Dependencies
================
There are a few related tasks/projects that will likely need to be
resolved before subinterpreters in CPython can be used in the proposed
manner. The proposal could be implemented either way, but it will help
the multi-core effort if these are addressed first.
* fixes to subinterpreter support (there are a couple of individuals who
should be able to provide the necessary insight)
* PEP 432 (will simplify several key implementation details)
* improvements to isolation between subinterpreters (file descriptors,
env vars, others)
Beyond those, the scale and technical scope of this project means that
I am unlikely to be able to do all the work myself to land this in
Python 3.6 (though I'd still give it my best shot). That will require
the involvement of various experts. I expect that the project is
divisible into multiple mostly independent pieces, so that will help.
Python Implementations
===================
They can correct me if I'm wrong, but from what I understand both
Jython and IronPython already have subinterpreter support. I'll be
soliciting feedback from the different Python implementors about
subinterpreter support.
C Extension Modules
=================
Subinterpreters already isolate extension modules (and built-in
modules, including sys). PEP 384 provides some help too. However,
global state in C can easily leak data between subinterpreters,
breaking the desired data isolation. This is something that will need
to be addressed as part of the effort.
This idea was already mentioned casually, but it sank deep into the threads
of the discussion, so I'm raising it again here.
Currently reprs of classes and functions look as:
>>> int
<class 'int'>
>>> int.from_bytes
<built-in method from_bytes of type object at 0x826cf60>
>>> open
<built-in function open>
>>> import collections
>>> collections.Counter
<class 'collections.Counter'>
>>> collections.Counter.fromkeys
<bound method Counter.fromkeys of <class 'collections.Counter'>>
>>> collections.namedtuple
<function namedtuple at 0xb6fc4adc>
What if we changed the default reprs of classes and functions to just the
fully qualified name, __module__ + '.' + __qualname__ (or just __qualname__
if __module__ is builtins)? This would look neater. And such reprs are
evaluable.
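A minimal sketch of the proposed rule (proposed_repr is a hypothetical
helper for illustration, not an existing API or the exact implementation):

import collections

def proposed_repr(obj):
    # Fully qualified name, dropping the module prefix for builtins.
    module = getattr(obj, '__module__', None)
    qualname = obj.__qualname__
    if module is None or module == 'builtins':
        return qualname
    return module + '.' + qualname

print(proposed_repr(int))                     # int
print(proposed_repr(collections.Counter))     # collections.Counter
print(proposed_repr(collections.namedtuple))  # collections.namedtuple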
Hi Python-Ideas ML,
To summarize the idea quickly: I wish to add an "extra" attribute to LogRecord,
to facilitate structured log generation.
For more details, with a use case and example, you can read the message below.
Before pushing the patch to bugs.python.org, I'm interested in your
opinions: the patch seems too simple to be honest.
Regards.
--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/
---------- Forwarded message ----------
From: Guido van Rossum <guido(a)python.org>
Date: 2015-05-24 23:44 GMT+02:00
Subject: Re: [Python-Dev] An yocto change proposal in logging module to
simplify structured logs support
To: Ludovic Gasc <gmludo(a)gmail.com>
Ehh, python-ideas?
On Sun, May 24, 2015 at 10:22 AM, Ludovic Gasc <gmludo(a)gmail.com> wrote:
> Hi,
>
> 1. The problem
>
> For now, when you want to write a log message, you concatenate the data
> from your context to generate a string: In fact, you convert your
> structured data to a string.
> When a sysadmin needs to debug your logs when something is wrong, he must
> write regular expressions to extract interesting data.
>
> Often, he must find the beginning of the interesting log and follow the
> path. Sometimes, several requests are interleaved in the log at the same
> time, which makes it harder to find the interesting lines.
> In fact, with regular expressions, the sysadmin tries to convert the log
> line strings back into structured data.
>
> 2. A possible solution
>
> You could provide a set of regular expressions to your sysadmins to help
> them find the right logs; however, another approach is possible:
> structured logs.
> Instead of breaking up your data structure to push it into the log message,
> the idea is to keep the data structure and attach it as metadata of the log
> message.
> For now, I know of at least Logstash and Journald that can handle structured
> logs and provide a query tool to extract logs easily.
>
> 3. A concrete example with structured logs
>
> Like most Web developers, we build HTTP daemons used by several different
> human clients at the same time.
> In the Python source code, supporting structured logs doesn't require a
> big change; you can use the "extra" parameter for that, for example:
>
> [handle HTTP request]
> LOG.debug('Receive a create_or_update request',
>           extra={'request_id': request.request_id,
>                  'account_id': account_id,
>                  'aiohttp_request': request,
>                  'payload': str(payload)})
> [create data in database]
> LOG.debug('Callflow created',
>           extra={'account_id': account_id,
>                  'request_id': request.request_id,
>                  'aiopg_cursor': cur,
>                  'results': row})
>
> Now, if you want, you can enhance the structured log with a custom logging
> Handler, because the standard journald handler doesn't know how to handle
> aiohttp_request or aiopg_cursor.
> My example is based on journald, but you can write an equivalent version
> with python-logstash:
> ####
> from systemdream.journal.handler import JournalHandler
>
> class Handler(JournalHandler):
>     # Tip: on a system without journald, use socat to test:
>     # socat UNIX-RECV:/run/systemd/journal/socket STDIN
>     def emit(self, record):
>         if record.extra:
>             # import ipdb; ipdb.set_trace()
>             if 'aiohttp_request' in record.extra:
>                 record.extra['http_method'] = record.extra['aiohttp_request'].method
>                 record.extra['http_path'] = record.extra['aiohttp_request'].path
>                 record.extra['http_headers'] = str(record.extra['aiohttp_request'].headers)
>                 del(record.extra['aiohttp_request'])
>             if 'aiopg_cursor' in record.extra:
>                 record.extra['pg_query'] = record.extra['aiopg_cursor'].query.decode('utf-8')
>                 record.extra['pg_status_message'] = record.extra['aiopg_cursor'].statusmessage
>                 record.extra['pg_rows_count'] = record.extra['aiopg_cursor'].rowcount
>                 del(record.extra['aiopg_cursor'])
>         super().emit(record)
> ####
>
> And you can enable this custom handler in your logging config file like
> this:
> [handler_journald]
> class=XXXXXXXXXX.utils.logs.Handler
> args=()
> formatter=detailed
>
> And now, with journalctl, you can easily extract logs, some examples:
> Logs messages from 'lg' account:
> journalctl ACCOUNT_ID=lg
> All HTTP requests that modify the 'lg' account (PUT, POST and DELETE):
> journalctl ACCOUNT_ID=lg HTTP_METHOD=PUT
> HTTP_METHOD=POST HTTP_METHOD=DELETE
> Retrieve all logs from one specific HTTP request:
> journalctl REQUEST_ID=130b8fa0-6576-43b6-a624-4a4265a2fbdd
> All HTTP requests with a specific path:
> journalctl HTTP_PATH=/v1/accounts/lg/callflows
> All logs of "create" function in the file "example.py"
> journalctl CODE_FUNC=create CODE_FILE=/path/example.py
>
> If you have already done troubleshooting on a production system, you should
> understand the interest of this:
> in fact, it's like having SQL query capabilities, but logging
> oriented.
> We have been using this for a short time on one of our critical daemons that
> handles a lot of requests across several servers, and it has already been
> adopted by our support team.
>
> 4. The yocto issue with the Python logging module
>
> I'm not recounting a small part of my professional life here for my pleasure,
> but to help you understand the context and the usages, because my patch
> for logging is very small.
> If you're an expert in Python logging, you already know that the Handler
> class example I provided above can't run on classical Python logging,
> because LogRecord doesn't have an extra attribute.
>
> The extra parameter exists on the Logger, but in the LogRecord it's merged
> into the attributes of the LogRecord:
> https://github.com/python/cpython/blob/master/Lib/logging/__init__.py#L1386
>
> It means that when the LogRecord is sent to the Handler, you can't
> retrieve the dict from the extra parameter of the logger.
> The only way to do that without patching Python logging is to rebuild the
> dict yourself from a list of the official attributes of LogRecord, as is
> done in python-logstash:
>
> https://github.com/vklochan/python-logstash/blob/master/logstash/formatter.…
> At least to me, it's a little bit dirty.
>
> The quick'n'dirty patch I use for now on our CPython in production:
>
> diff --git a/Lib/logging/__init__.py b/Lib/logging/__init__.py
> index 104b0be..30fa6ef 100644
> --- a/Lib/logging/__init__.py
> +++ b/Lib/logging/__init__.py
> @@ -1382,6 +1382,7 @@ class Logger(Filterer):
>          """
>          rv = _logRecordFactory(name, level, fn, lno, msg, args, exc_info, func,
>                                 sinfo)
> +        rv.extra = extra
>          if extra is not None:
>              for key in extra:
>                  if (key in ["message", "asctime"]) or (key in rv.__dict__):
>
> At least to me, it would be cleaner to add "extra" as a parameter
> of _logRecordFactory, but I have no idea of the side effects. I understand that
> the logging module is critical, because it's used everywhere.
> However, except in python-logstash, to my knowledge, the extra parameter
> isn't massively used.
> The only backward incompatibility I see with a new extra attribute of
> LogRecord is that if you have a log call like this:
> LOG.debug('message', extra={'extra': 'example'})
> it will raise a KeyError("Attempt to overwrite 'extra' in LogRecord")
> exception, but, at least to me, the probability of this use case is near
> 0.
>
> Instead of "maintaining" this yocto patch, even though it's very small, I would
> prefer to have a clean solution in Python directly.
>
> Thanks for your remarks.
>
> Regards.
> --
> Ludovic Gasc (GMLudo)
> http://www.gmludo.eu/
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev(a)python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>
>
--
--Guido van Rossum (python.org/~guido)
Hi Python Ideas folks,
(I previously posted a similar message on Python-Dev, but it's a
better fit for this list. See that thread here:
https://mail.python.org/pipermail/python-dev/2015-May/140063.html)
Enabling access to the AST for compiled code would make some cool
things possible (C# LINQ-style ORMs, for example), and not knowing too
much about this part of Python internals, I'm wondering how possible
and practical this would be.
Context: PonyORM (http://ponyorm.com/) allows you to write regular
Python generator expressions like this:
select(c for c in Customer if sum(c.orders.price) > 1000)
which compile into and run SQL like this:
SELECT "c"."id"
FROM "Customer" "c"
LEFT JOIN "Order" "order-1" ON "c"."id" = "order-1"."customer"
GROUP BY "c"."id"
HAVING coalesce(SUM("order-1"."total_price"), 0) > 1000
I think the Pythonic syntax here is beautiful. But the tricks PonyORM
has to go through to get it are ... not quite so beautiful. Because the AST is
not available, PonyORM decompiles Python bytecode into an AST first,
and then converts that to SQL. (More details on all that in the author's
EuroPython talk at http://pyvideo.org/video/2968)
PonyORM needs the AST just for generator expressions and
lambda functions, but obviously if this kind of AST access feature
were in Python it'd probably be more general.
I believe C#'s LINQ provides something similar, where if you're
developing a LINQ converter library (say LINQ to SQL), you essentially
get the AST of the code ("expression tree") and the library can do
what it wants with that.
(I know that there's the "ast" module and ast.parse(), which can give
you an AST given a *source string*, but that's not very convenient
here.)
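(For comparison, here's roughly what the ast module can do today with a
source string; nothing is executed, select and Customer are just part of the
text being parsed:)

import ast

# Parse the example query as an expression and pull out the GeneratorExp
# node -- the piece an ORM would want to get for *compiled* code as well.
tree = ast.parse(
    "select(c for c in Customer if sum(c.orders.price) > 1000)",
    mode='eval')
genexp = tree.body.args[0]   # the GeneratorExp node inside the select() call
print(ast.dump(genexp))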
What would it take to enable this kind of AST access in Python? Is it
possible? Is it a good idea?
-Ben
Hello,
I had a generator producing pairs of values and wanted to feed all the
first members of the pairs to one consumer and all the second members to
another consumer. For example:
def pairs():
    for i in range(4):
        yield (i, i ** 2)

biconsumer(sum, list)(pairs()) -> (6, [0, 1, 4, 9])
The point is I wanted the consumers to be suspended and resumed in a
coordinated manner: the first consumer is invoked and wants its first
element. The coordinator implemented by the biconsumer function invokes
pairs(), gets the first pair and yields its first member to the first
consumer. Then it wants the next element, but now it's the second
consumer's turn, so the first consumer is suspended and the second consumer
is invoked and fed with the second member of the first pair. Then the
second consumer wants the next element, but it's the first consumer's turn…
and so on. In the end, when the stream of pairs is exhausted, StopIteration
is thrown to both consumers and their results are combined.
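(As an aside, the interleaving described above can be sketched today if the
consumers are written as generator-based "sinks" rather than the plain
sum()/list() callables; this is only an illustration of the intended
coordination under that assumption, not the asynchronous solution asked
about below:)

# Sentinel telling a sink that the stream is finished and it should yield its result.
DONE = object()

def summing_sink():
    total = 0
    while True:
        value = yield
        if value is DONE:
            yield total
            return
        total += value

def listing_sink():
    items = []
    while True:
        value = yield
        if value is DONE:
            yield items
            return
        items.append(value)

def biconsumer(sink_a, sink_b):
    def run(pair_iterable):
        a, b = sink_a(), sink_b()
        next(a); next(b)              # prime both sinks to their first yield
        for first, second in pair_iterable:
            a.send(first)             # alternate between the two sinks
            b.send(second)
        return a.send(DONE), b.send(DONE)
    return run

def pairs():
    for i in range(4):
        yield (i, i ** 2)

print(biconsumer(summing_sink, listing_sink)(pairs()))   # (6, [0, 1, 4, 9])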
The cooperative asynchronous nature of the execution reminded me of asyncio
and coroutines, so I thought that biconsumer might be implemented using them.
However, it seems that it is impossible to write an "asynchronous generator"
since the "yielding pipe" is already used for the communication with the
scheduler. And even if it were possible to make an asynchronous generator,
it is not clear how to feed it to a synchronous consumer like the sum() or
list() function.
With PEP 492 the concepts of generators and coroutines were separated, so
asynchronous generators may be possible in theory. An ordinary function has
just the returning pipe – for returning the result to the caller. A
generator also has a yielding pipe – used for yielding the values during
iteration, and its return pipe is used to finish the iteration. A native
coroutine has a returning pipe – to return the result to a caller just like
an ordinary function, and also an async pipe – used for communication with
a scheduler and execution suspension. An asynchronous generator would just
have both a yielding pipe and an async pipe.
So my question is: was code like the following considered? Does it make
sense? Or are there not enough use cases for such code? I found only a
short mention in
https://www.python.org/dev/peps/pep-0492/#coroutine-generators, so possibly
these coroutine-generators are the same idea.
async def f():
    number_string = await fetch_data()
    for n in number_string.split():
        yield int(n)

async def g():
    result = async/await? sum(f())
    return result

async def h():
    the_sum = await g()
As for the explanation of the execution of h() by an event loop: h is a
native coroutine called by the event loop, having both a returning pipe and
an async pipe. The returning pipe leads to the end of the task, the async pipe
is used for communication with the scheduler. Then, g() is called
asynchronously – using the await keyword means that access to the async
pipe is given to the callee. Then g() invokes the asynchronous generator f()
and gives it access to its async pipe, so when f() is yielding values
to sum, it can also yield a future to the scheduler via the async pipe and
suspend the whole task.
Regards, Adam Bartoš
Could we do that? Is there a reason it's not already a namedtuple?
I always forget what the read-end and what the write-end of the pipe is,
and I use it quite regularly.
Jonathan
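(For illustration, a hypothetical sketch of what a named result could look
like; today os.pipe() returns a plain (read_fd, write_fd) tuple, and the
PipeFds name below is made up for this example:)

import os
from collections import namedtuple

# Hypothetical named wrapper around the two file descriptors.
PipeFds = namedtuple('PipeFds', ['read_fd', 'write_fd'])

fds = PipeFds(*os.pipe())
os.write(fds.write_fd, b'hi')
print(os.read(fds.read_fd, 2))   # b'hi'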
Hello from MicroPython, a lean Python implementation
scaling down to run even on microcontrollers
(https://github.com/micropython/micropython).
Our target hardware base oftentimes lacks floating point support, and
using software emulation is expensive. So, we would like to have
versions of some timing functions, taking/returning millisecond and/or
microsecond values as integers.
The main functionality we're interested in:
1. Delays
2. Relative time (from an arbitrary starting point, expected to be
wrapped)
3. Calculating time differences, with immunity to wrap-around.
The initial assumption is to use "time.sleep()" for delays and
"time.monotonic()" for relative time as the base. Would somebody give
alternative/better suggestions?
Second question is how to modify their names for
millisecond/microsecond versions. For sleep(), "msleep" and "usleep"
would be concise possibilities, but that doesn't map well to
monotonic(), leading to "mmonotonic". So, a better idea is to use "_ms"
and "_us" suffixes:
sleep_ms()
sleep_us()
monotonic_ms()
monotonic_us()
Point 3 above isn't currently addressed by the time module at all.
https://www.python.org/dev/peps/pep-0418/ mentions some internal
workarounds for overflows/wrap-arounds on some systems. Due to the
lean-ness of our hardware base, we'd like to make this matter explicit
to the applications and avoid internal workarounds. The proposed solution
is to have a time.elapsed(time1, time2) function, which can take values
as returned by monotonic_ms() or monotonic_us(). Assuming that the results of
both functions are encoded and wrap consistently (this is a reasonable
assumption), there's no need for two separate elapsed_ms() and elapsed_us()
functions.
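(For illustration, a rough sketch of such a wrap-around-safe difference,
assuming the tick values wrap at a known power-of-two period; the 2**30
modulus and the elapsed() signature below are purely illustrative:)

_TICKS_PERIOD = 1 << 30   # assumed wrap-around period of the tick counter

def elapsed(time1, time2):
    """Return the number of ticks from time1 to time2, tolerating a single
    wrap-around of the underlying counter."""
    return (time2 - time1) % _TICKS_PERIOD

# e.g. just before and just after the counter wraps:
print(elapsed(_TICKS_PERIOD - 5, 10))   # -> 15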
So, the above are rough ideas we (well, I) have. We'd like to get wider
Python community feedback on them, see if there are better/alternative
ideas, how Pythonic this is, etc. To clarify, this should not be construed
as a proposal to add the above functions to CPython.
--
Best regards,
Paul mailto:pmiscml@gmail.com