Frequently, while globbing, one needs to work with multiple extensions. I’d
like to propose that fnmatch.filter handle a tuple of patterns (while
preserving the single-str-argument functionality, à la str.endswith), as a
first step toward glob.i?glob accepting multiple patterns as well.
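For illustration, a call under the proposal might look like this (the tuple
form is hypothetical, mirroring how str.endswith accepts a tuple):

import fnmatch

names = ["a.py", "b.txt", "c.rst", "setup.cfg"]
fnmatch.filter(names, "*.py")              # works today: ['a.py']
fnmatch.filter(names, ("*.txt", "*.rst"))  # proposed: ['b.txt', 'c.rst']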
Here is the implementation I came up with:
https://github.com/python/cpython/compare/master...andresdelfino:fnmatch-mu…
If this is deemed reasonable, I’ll write tests and documentation updates.
Any opinion?
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
from dataclasses import dataclass, field

@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int
this results in the error:
      5 @dataclass
----> 6 class Bar(Foo):
      7     other_field: int
      8

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in dataclass(_cls, init, repr, eq, order, hash, frozen)
    751
    752     # We're called as @dataclass, with a class.
--> 753     return wrap(_cls)
    754
    755

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in wrap(cls)
    743
    744     def wrap(cls):
--> 745         return _process_class(cls, repr, eq, order, hash, init, frozen)
    746
    747     # See if we're being called as @dataclass or @dataclass().

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _process_class(cls, repr, eq, order, hash, init, frozen)
    675                          # in __init__. Use "self" if possible.
    676                          '__dataclass_self__' if 'self' in fields
--> 677                          else 'self',
    678         ))
    679     if repr:

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _init_fn(fields, frozen, has_post_init, self_name)
    422             seen_default = True
    423         elif seen_default:
--> 424             raise TypeError(f'non-default argument {f.name!r} '
    425                             'follows default argument')
    426

TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword-only arguments allow an entirely reasonable solution to
this problem:

def __init__(self, *, some_default: dict = <something>, other_field: int):

And they have the added benefit of making the fields in the __init__ call
entirely explicit.
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword-only arguments:

@dataclass(keyword_only=True)
class Bar(Foo):
    other_field: int
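Construction would then be fully explicit (hypothetical behavior under the
proposal):

bar = Bar(other_field=1)  # some_default falls back to its default_factory
bar = Bar(1)              # would raise TypeError: no positional arguments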
--George Leslie-Waksman
Consider the following example:
import unittest

def foo():
    for x in [1, 2, 'oops', 4]:
        print(x + 100)

class TestFoo(unittest.TestCase):
    def test_foo(self):
        self.assertIs(foo(), None)

if __name__ == '__main__':
    unittest.main()
If we were calling `foo` directly we could enter post-mortem debugging via `python -m pdb test.py`.
However, since `foo` is wrapped in a test case, `unittest` eats the exception and thus prevents post-mortem debugging. `--failfast` doesn't help; the exception is still swallowed.
Since I am not aware of a solution that enables post-mortem debugging in such a case (without modifying the test scripts, please correct me if one exists), I propose adding a command-line option to `unittest` for [running test cases in debug mode](https://docs.python.org/3/library/unittest.html#unittest.TestCase.deb… so that post-mortem debugging can be used.
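For illustration, the proposed option could behave like running each test via `TestCase.debug()` and dropping into the debugger on failure; a rough sketch of the mechanism, using only existing APIs:

import pdb
try:
    TestFoo('test_foo').debug()  # runs the test without collecting the exception
except Exception:
    pdb.post_mortem()

Today such a driver snippet has to be written by hand per test, which is exactly what a command-line option would avoid.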
P.S.: There is also [this SO question](https://stackoverflow.com/q/4398967/3767239) on a similar topic.
There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract} x {'normal'|static|class} x {method|property|non-callable attribute}.
concreteness  implicit first arg  type                    name                                                comments
------------  ------------------  ----------------------  --------------------------------------------------  -----------
{unadorned}   {unadorned}         method                  def foo():                                          exists now
{unadorned}   {unadorned}         property                @property                                           exists now
{unadorned}   {unadorned}         non-callable attribute  x = 2                                               exists now
{unadorned}   static              method                  @staticmethod                                       exists now
{unadorned}   static              property                @staticproperty                                     proposing
{unadorned}   static              non-callable attribute  {degenerate case - variables don't have arguments}  unnecessary
{unadorned}   class               method                  @classmethod                                        exists now
{unadorned}   class               property                @classproperty or @classmethod;@property            proposing
{unadorned}   class               non-callable attribute  {degenerate case - variables don't have arguments}  unnecessary
abc.abstract  {unadorned}         method                  @abc.abstractmethod                                 exists now
abc.abstract  {unadorned}         property                @abc.abstractproperty                               exists now
abc.abstract  {unadorned}         non-callable attribute  @abc.abstractattribute or @abc.abstract;@attribute  proposing
abc.abstract  static              method                  @abc.abstractstaticmethod                           exists now
abc.abstract  static              property                @abc.abstractstaticproperty                         proposing
abc.abstract  static              non-callable attribute  {degenerate case - variables don't have arguments}  unnecessary
abc.abstract  class               method                  @abc.abstractclassmethod                            exists now
abc.abstract  class               property                @abc.abstractclassproperty                          proposing
abc.abstract  class               non-callable attribute  {degenerate case - variables don't have arguments}  unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
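For what it's worth, the semantics of @classproperty can be sketched today
with a small descriptor (an illustration only, not stdlib code):

class classproperty:
    def __init__(self, fget):
        self.fget = fget
    def __get__(self, obj, owner=None):
        # pass the class, not an instance, as the implicit first argument
        return self.fget(owner)

class Foo:
    @classproperty
    def bar(cls):
        return cls.__name__.upper()

print(Foo.bar)  # 'FOO' - no throw-away instance required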
--rich
Hi,
In Python, there are multiple [compound
statements](https://docs.python.org/3/reference/compound_stmts.html)
with the `else` keyword.
For example:
```
for x in iterable:
    if x == sentinel:
        break
else:
    print("Sentinel not found.")
```
or:
```
try:
    do_something_sensitive()
except MyError:
    print("Oops!")
else:
    print("We're all safe.")
```
In my situation, I would like to combine the `with` statement with `else`:
if no exception is raised within the `with` block, the `else` part would
run.
For example:
```
with my_context():
    do_something_sensitive()
else:
    print("We're all safe.")
```
Now imagine that in my `try .. except` block I have some heavy setup
to do before `do_something_sensitive()` and some heavy cleanup when
an exception occurs.
I'd like my context manager to do the preparation work, execute the
body, and clean up; or to execute my else block only if there is no
exception.
Is there already a way to accomplish this in Python, or could this be a
nice-to-have?
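For comparison, the closest emulation I know of folds the else logic into
the manager itself, which hides it from the call site (a sketch;
heavy_setup and heavy_cleanup are hypothetical placeholders):
```
from contextlib import contextmanager

@contextmanager
def my_context():
    heavy_setup()            # hypothetical heavy preparation
    try:
        yield
    except Exception:
        heavy_cleanup()      # hypothetical heavy cleanup, only on error
        raise
    else:
        print("We're all safe.")  # the would-be `else` block
```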
Regards,
Jimmy
TL;DR: should we make `del x` an expression that returns the value of `x`?
## Motivation
I noticed yesterday that `itertools.combinations` has an optimization for when the returned tuple has no remaining ref-counts, and reuses it - namely, the following code:
>>> for v in itertools.combinations([1, 2, 3], 1):
...     print(id(v))
...     del v  # without this, the optimization can't take place
2500926199840
2500926199840
2500926199840
will print the same id three times. However, when used in a list comprehension, the optimization can't step in, and I have no way of using the `del` keyword:
>>> [id(v) for v in itertools.combinations([1, 2, 3], 1)]
[2500926200992, 2500926199072, 2500926200992]
`itertools.combinations` is not the only place to make this optimization - parts of numpy use it too, allowing
a = (b * c) + d
to elide the temporary `b*c`. This elision can't happen with the spelling
bc = b * c
a = bc + d
My suggestion would be to make `del x` an expression, with semantics "unbind the name `x`, and evaluate to its value".
This would allow:
>>> [id(del v) for v in itertools.combinations([1, 2, 3], 1)]
[2500926200992, 2500926200992, 2500926200992]
and
bc = b * c
a = (del bc) + d # in C++, this would be `std::move(bc) + d`
## Why a keyword
Unbinding a name is not something that a user would expect a function to do. Functions operate on values, not names, so `move(bc)` would be surprising.
## Why `del`
`del` already has "unbind this name" semantics, and allowing it to be used as an expression would break no existing code.
`move x` might be more understandable, but adding new keywords is expensive.
## Optional extension
For consistency, `x = (del foo.attr)` and `x = (del foo[i])` could also become legal expressions, and `__delete__`, `__delattr__`, and `__delitem__` would now have return values. Existing types would be free to continue to return `None`.
Another idea I've had that may be of use:
PYTHONLOGGING environment variable.
Setting PYTHONLOGGING to any log level or level name would initialize
logging.basicConfig() with that level.
Another option would be for -X dev, or a separate -X logging, to
initialize the basic config.
This would be useful mostly for debugging purposes, instead of temporarily
modifying the code.
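A sketch of what interpreter startup could do under this proposal (the
variable name is the proposed one; logging.basicConfig already accepts
both numeric levels and level names):

import logging
import os

level = os.environ.get("PYTHONLOGGING")
if level:
    logging.basicConfig(level=int(level) if level.isdigit() else level.upper())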
Kinda surprised it doesn't exist tbh.
Bar Harel
Hello,
1. Intro
--------
It is a well-known anti-pattern to use a string as a string buffer, to
construct a long (perhaps very long) string piece-wise. A running
example is:
buf = ""
for i in range(50000):
buf += "foo"
print(buf)
An alternative is to use a buffer-like object explicitly designed for
incremental updates, which for Python is io.StringIO:
buf = io.StringIO()
for i in range(50000):
    buf.write("foo")
print(buf.getvalue())
As can be seen, this requires changing the way the buffer is constructed
(usually in one place) and the way the buffer value is taken (usually in
one place), but more importantly, it requires changing each line which
adds content to the buffer, and there can be many of those for more
complex algorithms. This leads to code less clear than the original,
requires noise-like changes, and complicates updates to 3rd-party code
which needs the optimization.
To address this, this RFC proposes to add an __iadd__ method (i.e.
implementing the "+=" operator) to io.StringIO and io.BytesIO objects,
making it an exact alias of the .write() method. This will allow for
code very parallel to the original str-using code:
buf = io.StringIO()
for i in range(50000):
    buf += "foo"
print(buf.getvalue())
This will still require updates for buffer construction/getting the value,
but that's usually 2 lines. It will leave the rest of the code intact,
and not obfuscate the original content-construction algorithm.
2. Performance Discussion
-------------------------
The motivation for this change (promoting usage of io.StringIO by
making it look&feel more like str) is performance. But is it really a
problem? It turns out this is such a pervasive anti-pattern that recent
versions of CPython3 have a special optimization for it. Let's use the
following script for testing:
---------
import timeit
import io

def string():
    sb = u""
    for i in range(50000):
        sb += u"a"

def strio():
    sb = io.StringIO()
    for i in range(50000):
        sb.write(u"a")

print(timeit.timeit(string, number=10))
print(timeit.timeit(strio, number=10))
---------
With CPython3.6 the result is:
$ python3.6 str_iadd-vs-StringIO_write.py
0.03350826998939738
0.033480543992482126
In other words, there's no difference between usage of str vs StringIO.
But it wasn't always like that, with CPython2.7.17:
$ python2.7 str_iadd-vs-StringIO_write.py
2.10510993004
0.0399420261383
But Python2 is dead, right? Ok, let's see how Jython3 and IronPython3
fare. To my surprise, there are no (public releases of) such; both
projects sit firmly in Python2 territory. So, let's try those:
$ java -jar jython-standalone-2.7.2.jar str_iadd-vs-StringIO_write.py
10.8869998455
1.74700021744
Sadly, I wasn't able to get IronPython.2.7.9.zip to run on my Linux
system, so I used the online version at https://tio.run/#python2-iron
(after discovering that https://ironpython.net/try/ is dead).
2.7.9 (IronPython 2.7.9 (2.7.9.0) on Mono 4.0.30319.42000 (64-bit))
26.2704391479
1.55628967285
So, it seems that rumors of Python2 being dead are somewhat exaggerated.
Let's try a project which tries to provide a "missing migration path"
between Python2 and Python3 - https://github.com/naftaliharris/tauthon
Tauthon 2.8.1+ (heads/master:7da5b76f5b, Mar 29 2020, 18:05:05)
$ tauthon str_iadd-vs-StringIO_write.py
0.792158126831
0.0467159748077
Whoa, Tauthon seems to be faithful to its promise of being half-way
between CPython2 and CPython3.
Anyway, let's get back to Python3. Fortunately, there's PyPy3, so let's
try that:
$ ./pypy3.6-v7.3.0-linux64/bin/pypy3 str_iadd-vs-StringIO_write.py
0.5423258490045555
0.01754526497097686
Let's not forget little Python brothers and continue with
MicroPython 1.12 (https://github.com/micropython/micropython):
$ micropython str_iadd-vs-StringIO_write.py
41.63419413566589
0.08073711395263672
Pycopy 3.0.6 (https://github.com/pfalcon/pycopy):
$ pycopy str_iadd-vs-StringIO_write.py
25.03198313713074
0.0713810920715332
I also wanted to include TinyPy (http://tinypy.org/) and Snek
(https://github.com/keith-packard/snek) in the shootout, but both
(seem to) lack a StringIO object.
These results can be summarized as follows: of the more than half-dozen
Python implementations tested, CPython3 is the only one which
optimizes for the dubious usage of an immutable string type as an
accumulating character buffer. For all other implementations, unintended
usage of str incurs an overhead of about one order of magnitude, or 2
orders of magnitude for implementations optimized for particular usecases
(this includes PyPy optimized for speed vs MicroPython/Pycopy optimized
for small code size and memory usage).
Consequently, other implementations have 2 choices:
1. Succumb to applying the same mis-optimization for string type as
CPython3. (With the understanding that for speed-optimized projects,
implementing mis-optimizations will eat into performance budget, and
for memory-optimized projects, it likely will lead to noticeable
memory bloat.)
2. Struggle against inefficient-by-concept usage, and promote usage of
the correct object types for incremental construction of string content.
This would require improving the ergonomics of the existing string buffer
object, to make its usage less painful for both writing new code and
refactoring existing code.
As you may imagine, the purpose of this RFC is to raise awareness and
try to make headway with the choice 2.
3. Scope Creep, aka "Possible Future Work"
------------------------------------------
The purpose of this RFC is specifically to propose a *single*, simple,
obvious change: .__iadd__ is just an alias for .write, period.
However, for completeness, it makes sense to consider both alternatives
and where the path of adding "str-like functionality" may lead us.
1. One alternative to patching StringIO would be to introduce a
completely different type, e.g. StringBuf. But that would largely be
"creating more entities without necessity", given that StringIO
already offers the needed buffering functionality, and just needs a
little touch of interface polish. If two classes like StringIO and
StringBuf existed, it would be an extra quiz to explain the difference
between them and why they both exist.
2. On the other hand, this RFC fixates on output buffering. But just
imagine how much fun could be had re: input buffers! E.g., we could
define "buf[0]" to have the semantics of "tmp = buf.tell(); res =
buf.read(1); buf.seek(tmp); return res". Ditto for .startswith(), etc.,
etc. So, well... This RFC is about making .__iadd__ an alias for
.write, covering the output buffering usecase. Whoever has an interest
in input buffer shortcuts would need to provide a separate RFC, with
separate usecases and argumentation.
4. (Self-)Criticism and Risks.
------------------------------
1. The biggest "criticism" I see is a response a-la "there's no problem
with CPython3, so there's nothing to fix". This is related to the bigger
question of "whether a life outside CPython exists", or, put more
formally, where the border between Python-the-language and
CPython-the-implementation lies. To address this point, I tried to
collect performance stats for a pretty wide array of Python
implementations.
2. Another potential criticism is that this may open scope creep to
add more str-like functionality to classes which classically expose
a stream interface. Paragraph 3.2 is dedicated specifically to
addressing this point, by invoking what is hopefully the best-practice
approach: a request to focus on the currently proposed feature, which
requires very little change for arguably noticeable improvement. (At
the same time, should this change be found positive in the abstract,
it may influence further proposals from interested parties.)
3. Implementing the required functionality is pretty easy with a user
subclass:
class MyStringBuf(io.StringIO):
    def __iadd__(self, s):
        self.write(s)
        return self
Voila. The problem is performance: calling such an .__iadd__() method
implemented in Python is 3 times slower than calling .write()
directly (with CPython3.6). But the paradigmatic problem is even bigger:
this RFC seeks to establish the best practice of using a type explicitly
designed for the purpose, with an ergonomic interface. Saying "we
lack such a clearly designated type out of the box, but if you figure out
that it's a problem (if you do figure that out), you can easily resolve
it on your side, albeit with a performance hit compared with it
being provided out of the box" - that's not really welcoming or
ergonomic.
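For completeness, with that subclass the original loop reads exactly like
the str version (a small sketch):

buf = MyStringBuf()
for i in range(50000):
    buf += "foo"
print(buf.getvalue())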
5. Prior Art
------------
As with many things related to Python, the idea is not new. I found a
thread from 2006 dedicated to it:
https://mail.python.org/pipermail/python-list/2006-January/403480.html
(strangely, there's some mixup in the archives, and the "thread view"
shows another message as the thread starter, though it isn't:
https://mail.python.org/pipermail/python-list/2006-January/357453.html) .
The discussion there seemed to end without clear resolution and was
swamped in discussion of unicode handling complexities in Python2 and
the implementation details of the "StringIO" vs "cStringIO" modules
(both of which have since been deprecated in favor of "io").
--
Best regards,
Paul mailto:pmiscml@gmail.com
I'd like to propose an improvement to `concurrent.futures`. The library's
ThreadPoolExecutor and ProcessPoolExecutor are excellent tools, but there
is currently no mechanism for configuring which type of executor you want.
Also, there is no duck-typed class that behaves like an executor but does
its processing serially. Oftentimes a developer will want to run a task in
parallel, but depending on the environment they may want to disable
threading or process execution. To address this I use a utility called a
`SerialExecutor`, which shares an API with
ThreadPoolExecutor/ProcessPoolExecutor but executes tasks sequentially
in the same Python thread:
```python
import concurrent.futures
class SerialFuture(concurrent.futures.Future):
    """
    Non-threading / multiprocessing version of Future for drop-in
    compatibility with concurrent.futures.
    """
    def __init__(self, func, *args, **kw):
        super(SerialFuture, self).__init__()
        self.func = func
        self.args = args
        self.kw = kw
        self._run_count = 0
        # fake being finished to cause __get_result to be called
        self._state = concurrent.futures._base.FINISHED

    def _run(self):
        result = self.func(*self.args, **self.kw)
        self.set_result(result)
        self._run_count += 1

    def set_result(self, result):
        """
        Overrides the implementation to revert to pre-Python-3.8 behavior
        """
        with self._condition:
            self._result = result
            self._state = concurrent.futures._base.FINISHED
            for waiter in self._waiters:
                waiter.add_result(self)
            self._condition.notify_all()
        self._invoke_callbacks()

    def _Future__get_result(self):
        # overrides the private __get_result method
        if not self._run_count:
            self._run()
        return self._result


class SerialExecutor(object):
    """
    Implements the concurrent.futures API around a single-threaded backend

    Example:
        >>> with SerialExecutor() as executor:
        ...     futures = []
        ...     for i in range(100):
        ...         f = executor.submit(lambda x: x + 1, i)
        ...         futures.append(f)
        ...     for f in concurrent.futures.as_completed(futures):
        ...         assert f.result() > 0
        ...     for i, f in enumerate(futures):
        ...         assert i + 1 == f.result()
    """
    def __enter__(self):
        return self

    def __exit__(self, ex_type, ex_value, tb):
        pass

    def submit(self, func, *args, **kw):
        return SerialFuture(func, *args, **kw)

    def shutdown(self):
        pass
```
In order to make it easy to choose the type of parallel (or serial) backend
with minimal code changes, I use the following "Executor" wrapper class
(although if this were integrated into concurrent.futures the name would
need to change to something better):
```python
class Executor(object):
    """
    Wrapper around a specific executor.

    Abstracts Serial, Thread, and Process Executor via arguments.

    Args:
        mode (str, default='thread'): either thread, serial, or process
        max_workers (int, default=0): number of workers. If 0, serial is
            forced.
    """
    def __init__(self, mode='thread', max_workers=0):
        from concurrent import futures
        if mode == 'serial' or max_workers == 0:
            backend = SerialExecutor()
        elif mode == 'thread':
            backend = futures.ThreadPoolExecutor(max_workers=max_workers)
        elif mode == 'process':
            backend = futures.ProcessPoolExecutor(max_workers=max_workers)
        else:
            raise KeyError(mode)
        self.backend = backend

    def __enter__(self):
        return self.backend.__enter__()

    def __exit__(self, ex_type, ex_value, tb):
        return self.backend.__exit__(ex_type, ex_value, tb)

    def submit(self, func, *args, **kw):
        return self.backend.submit(func, *args, **kw)

    def shutdown(self):
        return self.backend.shutdown()
```
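For illustration, usage might look like this, with the backend chosen in
exactly one place (a sketch using the classes above):

```python
# switching 'thread' to 'process' or 'serial' requires no other changes
with Executor(mode='thread', max_workers=4) as executor:
    futures = [executor.submit(pow, 2, n) for n in range(10)]
    results = [f.result() for f in futures]
print(results)
```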
So in summary, I'm proposing to add a SerialExecutor and SerialFuture class
as an alternative to the ThreadPool / ProcessPool executors, and I'm also
advocating for some sort of "ParametrizedExecutor", where the user can
construct it in "thread", "process", or "serial" mode.
--
-Jon
(needs a sponsor)
latest version at
https://github.com/gerritholl/peps/blob/animal-friendly/pep-9999.rst
PEP: 9999
Title: Retire animal-unfriendly language
Author: Gerrit Holl <gerrit.holl(a)gmail.com>
Discussions-To: python-ideas(a)python.org
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 01-Apr-2020
Post-History: 01-Apr-2020
Sponsor:
Abstract
========
Python has long used metasyntactic variables that are based on the
consumption of meat and dairy products, such as "spam", "ham", and
"eggs".
This language is not considerate to pigs or chickens and violates the
spirit of the Code of Conduct. This PEP proposes to retire the use
of those names in official Python documentation and source code and to
recommend that users of Python do the same.
Motivation and Rationale
========================
Estimates for the number of animals slaughtered for meat every year
vary, but `worldindata`_ estimates around 80 billion individuals.
Farmed animals are often kept in small cages with little to no access
to daylight, suffer stress during life and slaughter, or are otherwise
systematically mistreated.
The `Python Code of Conduct`_ describes that community members are
open, considerate, and respectful. The Python standard library and
documentation contain numerous references to meat- or dairy-based food
products that are not respectful to our fellow inhabitants of planet
Earth. Examples include "spam", "bacon", and "eggs".

To align the language use in the standard library and documentation
with the Code of Conduct, use of such language should be retired.
Current practice
================
There is a widespread tradition in the Python standard library, the
documentation, and the wider community, to include references to Monty
Python's Flying Circus. The use of "spam", "bacon", "sausage", and
"eggs" can be traced back to the `"Spam" sketch`_ originally broadcast
by the British Broadcasting Corporation (BBC) on 8 September 1972.
In this sketch, a couple are trying to order food in a diner where all
items contain spam. The woman does not like spam and wants to order
food without spam. A group of horned Vikings then sings about the
wonderful spam.
To get an overview of the usage in the current standard library, the
command ``cat $(find . -name '*.py') | grep -oi term | wc -l`` was used.
This showed 2615 occurrences for spam, 593 for ham (these include some
false positives, among other reasons due to references to people whose
names innocuously contain the substring ham), 517 for eggs, 57 for bacon,
and 10 for sausage. Searching ``*.rst`` in the documentation revealed
391 occurrences for spam, 82 for ham, 96 for eggs, 28 for bacon, and
10 for sausage. The source code for cpython revealed just 2 usages for
spam and 1 for eggs.
Proposed alternatives
=====================
Keeping with the good practice of referencing sketches from Monty
Python's Flying Circus, this PEP proposes to adopt the fruits mentioned
in the `"Self-Defence Against Fresh Fruit" sketch`_:
* raspberry (not currently in use)
* banana (68 times in standard library)
* apricot (not currently in use)
* pineapple (8 times in standard library)
* peach (once in standard library)
* redcurrant (not currently in use)
* damson (not currently in use)
* prune (23 times in standard library)
Other possible alternatives keeping with food items:
* salad (occurs once in standard library)
* aubergine (referred to in the spam sketch)
* shallot (the same)
* tofu (vegan protein alternative)
Specification
=============
For the reasons mentioned in the rationale, all references to meat or
dairy products shall be removed from the Python standard library, the
documentation, and the cpython source code. The wider Python community
is recommended to follow this practice. In core Python:
* Programmers SHALL NOT use the metasyntactic variables "spam", "ham",
  "bacon", or "sausage", neither as variable names, nor in example
  strings, nor in documentation.
* Programmers SHALL NOT use the metasyntactic variable "eggs" in context
  with food items, but may still use it in context of other body parts.
  Prohibited: ``["salad", "eggs"]``. Allowed: ``["ovaries", "pouch", "eggs"]``.
* Programmers SHALL NOT use any other metasyntactic variable that is
  unfriendly to animals.
The wider Python community is encouraged to adopt these practices as
well, but the continued use of animal-unfriendly metasyntactic variables
will not be considered a violation of the code of conduct.
Rejected ideas
==============
The authors carefully considered the widespread use of the word "bug"
in the meaning of a source code error. Insects including bugs play
a crucial role in ecosystems around the world, and it is not fair to
blame them for an error that can only be the programmer's. However,
the use of the word "bug" for a source code error is too ingrained
into daily use, far predates the Python community, is not limited to
the Python community, and the word "bug" is less unfriendly than
"spam", "ham", or "bacon". Therefore, the word "bug" may still be used.
Reference Implementation
========================
The author promises to provide a reference implementation for Python
3.10, should this PEP be accepted.
References
==========
.. _worldindata: https://ourworldindata.org/meat-production
.. _Python code of conduct: https://www.python.org/psf/conduct/
.. _"Spam" sketch: http://www.montypython.net/scripts/spam.php
.. _"Self-Defence Against Fresh Fruit" sketch:
http://www.montypython.net/scripts/fruit.php
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End: