Hello Python-devs,
The csv module is probably heavily used by newcomers to Python, since CSV
is a very popular data exchange format.
Although there are better tools for processing tabular data, such as
SQLite or pandas, I suspect this is still a very popular module.
There are many examples floating around of how one can read and process
CSV with the csv module.
Quite a few tutorials show how to use namedtuple to gain memory savings
and speed over DictReader.
Python's own documentation has a recipe in the collections module [1].
Hence, I was wondering: why not go the extra step and add a new class,
NamedTupleReader, to the csv module?
Such a class would do a good service for Python's users, especially
newcomers who are not yet aware of modules like collections.
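To make the idea concrete, here is a minimal sketch of the kind of reader
I have in mind (the function name and signature are just my proposal, not
an existing API):

import csv
from collections import namedtuple

def namedtuple_reader(csvfile, rename=False, **fmtparams):
    # Build a namedtuple class from the header row, then yield one
    # instance per data row; attribute access replaces dict lookups.
    reader = csv.reader(csvfile, **fmtparams)
    Row = namedtuple('Row', next(reader), rename=rename)
    for row in reader:
        yield Row(*row)

Compared to DictReader, each row is then a lightweight tuple with named
attribute access instead of a freshly built dict per row.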
Would someone be willing to sponsor and review such a PR from me?
As a smaller change, we could simply add a link from the csv module's
documentation to the recipe in the collections module.
What do you think?
Best regards
Oz
[1]:
https://docs.python.org/3/library/collections.html?highlight=namedtuple%20c…
---
Imagine there's no countries
it isn't hard to do
Nothing to kill or die for
And no religion too
Imagine all the people
Living life in peace
On behalf of the Python development community, I'm relieved to announce
the availability of Python 3.5.8.
Python 3.5 is in "security fixes only" mode. This new version only
contains security fixes, not conventional bug fixes, and it is a
source-only release.
You can find Python 3.5.8 here:
https://www.python.org/downloads/release/python-358/
Oh what fun,
//arry/
Recently, Brett updated the developer log in the devguide
(https://devguide.python.org/developers/) to fetch the names of each core
developer and the date they were given commit privileges from the private
python-committers repository.
I think it would also be quite useful to include GitHub usernames on that list.
Currently, the only place where contributors can find the GitHub
username of each core developer is the committers list on bpo. Since we
will be moving away from bpo (PEP 581), we should have a comprehensive
list that is separate from that platform.
The motivation behind creating a new topic for this issue was Brett's
response to my comment in the PR that updated the devguide
(https://github.com/python/devguide/pull/533#issuecomment-532405907).
Essentially, if no core developers have an issue with having their GitHub
username posted on the devguide, we can move forward with adding it.
Another related but more long term project is adding the GitHub usernames
to the experts index (https://devguide.python.org/experts/). This is more
involved because the bpo nosy list currently pulls from the experts index,
meaning the nosy list is dependent on the specific formatting used.
To address this, I opened a PR a couple of months ago which would add a .json
file containing the data from the experts index
(https://github.com/python/devguide/pull/517), based on the discussion in the
related issue (https://github.com/python/devguide/issues/507). If any available
core developers are experienced with structuring .json files, I would greatly
appreciate any feedback.
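To give a concrete picture, the structure could look something like this
(the field names and entries here are a strawman for discussion, not
necessarily what the PR ends up with):

{
  "asyncio": {
    "maintainers": [
      {"name": "Jane Doe", "bpo": "janedoe", "github": "janedoe"}
    ]
  }
}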
The next step would be converting the nosy list script to use the new .json
file instead of the experts index page, so that we could adjust the page
to also include GitHub usernames. Ideally, the contents of the experts
index would be pulled from the .json file automatically, so any changes
only have to be made in a single location.
Hi CPython maintainers,
I need to test my CORS setup and am looking for a way to set a custom
Access-Control-Allow-Origin header in http.server. As of now, there is
no such feature. Are you interested in me writing a patch to contribute
a feature for setting custom headers directly in `http.server`?
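For reference, the workaround today is to subclass the handler and inject
the header yourself; a minimal sketch (the origin value is just an
example):

from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Add the CORS header before the header block is flushed.
        self.send_header('Access-Control-Allow-Origin', 'https://example.com')
        super().end_headers()

HTTPServer(('localhost', 8000), CORSRequestHandler).serve_forever()

A built-in hook would avoid every user having to write this subclass.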
Best,
- Alex
Hi,
I just posted a new PEP for comments; please reply there rather than by email:
https://discuss.python.org/t/rfc-pep-608-coordinated-python-release/2539
PEP 608: Coordinated Python release
https://www.python.org/dev/peps/pep-0608/
Abstract:
Block a Python release until a compatible version of selected projects
is available.
The Python release manager can decide to release Python even if a
project is not compatible, if they decide that the project is going to
be fixed soon enough, or if the issue severity is low enough.
Victor
--
Night gathers, and now my watch begins. It shall not end until my death.
Hi everyone,
I've found myself recently writing Python code that dynamically generates
bytecode.¹ I now have yet another case where I'm having to do this, in
which my nice situation of being able to easily precompute all the jump
addresses no longer holds. So I'm starting to write a helper to make it
easy to write bytecode from Python, with its basic API calls being
write(opcode, arg) and nextLine(optional label). The argument can be an
int, name, local name, constant, label, etc., depending on the opcode, and
it maintains all the appropriate tables and finally dumps a code object at
the end.
All of which is well and good and makes life much easier, but... I am
*not* looking forward to writing the logic that basically duplicates
assemble() in compile.c: splitting all of this into basic blocks,
computing the correct jump positions, and so on, before finally dumping
out the bytecode.
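For the curious, the bookkeeping in question is easy to see with the
standard dis module; the jump targets in the output are exactly what
assemble() computed when laying out the basic blocks:

import dis

# Disassemble a trivial loop: the jump offsets shown in the output
# are the targets assemble() resolved from the basic blocks.
dis.dis(compile("while x:\n    x -= 1", "<demo>", "exec"))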
Has anyone already done this that people know of? (Searching the
Internetz didn't turn anything up.) Failing that, to what extent is it
reasonable to either consider assemble() as some kind of sane API point
into compile.c, and/or add some new API in compile.h that implements all
of the stuff described above in C?
(I'm fully expecting the answer to these latter questions to be "you
have got to be kidding me," but figured it was wiser to check than to
reinvent this particular wheel if it isn't necessary.)
Yonatan
¹ Not out of masochism, in case you're wondering; there was a real use
case. A storage system would receive a read request that specified a
bunch of (key, condition) pairs, where each condition was either "return
any value", "return an exact value", or "return values in a range". It
would then receive
between 1 and 1M (depending on the request parameters) candidate cells from
the underlying storage layers, each of which had a tuple of bytes as its
actual key values; it had to compare each of those tuples against the
request parameters, and yield the values which matched. Because it's an
inner loop and can easily be called 1M times, doing this in pure Python
slows things down by a lot. Because it's also only called once, doing some
really expensive overhead like synthesizing Python code and calling
compile() on it would also slow things down a lot. But converting a bunch
of (key, condition) pairs to a really efficient function from tuples of
bytes to bools was pretty easy.
Dear Sir / Madam,
I'm Lasan Nishshanka, CEO and founder of Clenontec. We are creating an
IDE named Bacend Studio for the Windows operating system. This IDE is
designed to support special functions such as app development, software
development, game development, machine learning, artificial
intelligence, etc.
We are sending you this letter to ask for permission to add the Python
programming language to our IDE. If you can grant permission, please
tell us. We await a speedy reply from you.
Company website: https://www.clenontec.com/
E-mail: Clenontec(a)gmail.com
Thank You.
Hi,
Right now, there are 14 open issues with "test_asyncio" in the title.
Many test_asyncio tests have race conditions. I'm trying to fix them
one by one, but it takes time, and then new tests are added with new
race conditions :-( For example, the following new test, "Windows:
test_asyncio: test_huge_content_recvinto() fails randomly with
ProactorEventLoop", has been failing randomly on Windows for 6 months:
https://bugs.python.org/issue36732
test_asyncio uses more and more functional tests, which is a good
thing. In the early days of asyncio, most tests mocked more than half
of asyncio to really be "unit tests". But in the end, the tests
exercised more mocks than asyncio... The problem with functional tests
is that it's hard to design them properly to avoid all race conditions,
especially when you consider multiple platforms (Windows, macOS, Linux,
FreeBSD, etc.).
It would help me if someone could try to investigate these issues,
provide a reliable way to reproduce them, and propose a fix. (Simply
saying that you can reproduce the failure and that you would like to
work on an issue doesn't really help, sorry.)
Recently, I started to experiment with "./python -m test [options] -F
-j100" to attempt to reproduce some tricky race conditions: -j100
spawns 100 worker processes in parallel, and -F stands for --forever
(run the tests in a loop and stop at the first failure). I was
surprised that my Fedora 30 didn't go up in flames. In fact, the GNOME
desktop remains responsive even with a system load higher than 100. The
Linux kernel (5.2) is impressive! Under such a high system load (my
laptop has 8 logical CPUs), race conditions are way more likely.
The problem with test_asyncio is that it's made up of 2160 tests; see:
./python -m test test_asyncio --list-cases
You may want to run only a single test case (class) or even a single
test method: see the --match option, which can be used multiple times
to run only selected test classes or selected test methods. See also
--matchfile, which is similar but reads patterns from a file. Example:
$ ./python -m test test_asyncio --list-cases > cases
# edit cases
$ ./python -m test test_asyncio --matchfile=cases
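For example, to hammer a single flaky test with the flags above (using
the test method from the issue mentioned earlier):

$ ./python -m test test_asyncio -F -j100 --match=test_huge_content_recvinto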
test_asyncio is one of the most unstable tests: I'm getting more and
more buildbot-status emails about test_asyncio... likely because we
fixed most of the other race conditions, which is a good thing ;-)
Some issues look to be specific to Windows, but it should be possible
to reproduce most of them on Linux as well. Sometimes, it's just that
some specific Windows buildbot workers are slower than other buildbot
workers.
Good luck ;-)
Victor
--
Night gathers, and now my watch begins. It shall not end until my death.