Hi, I’m working on the tarfile module to add support for file objects
whose size is not known beforehand (https://bugs.python.org/issue35227).
In doing so, I need to adapt `tarfile.copyfileobj` to return the length
of the file after it has been copied.
Calling this function with `length=None` currently copies the data but
does not add the necessary padding. This seems weird to me: I do not
understand why it would be needed, and this behaviour is currently not
used.
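For context, the padding in question follows tar's convention of writing
archives in 512-byte blocks: after the file data, NUL bytes must be
written up to the next block boundary. A minimal sketch of that rule (in
C purely for illustration; `tarfile` itself is pure Python, and
TAR_BLOCKSIZE/tar_padding are made-up names):

#include <stddef.h>

#define TAR_BLOCKSIZE 512  /* same value as tarfile.BLOCKSIZE */

/* Number of NUL padding bytes needed after `length` bytes of data. */
static size_t
tar_padding(size_t length)
{
    size_t remainder = length % TAR_BLOCKSIZE;
    return remainder ? TAR_BLOCKSIZE - remainder : 0;
}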
This function is not documented in the Python documentation, so it is
unlikely that anybody is using it.
Can I safely change `tarfile.copyfileobj` to make it write the padding when
`length=None`?
Thanks,
Rémi
Armin Rigo wrote:
> The C API would change a lot, so it's not reasonable to do that in the
> CPython repo. But it could be a third-party project, attempting to
> define an API like this and implement it well on top of both CPython
> and PyPy. IMHO this might be a better idea than just changing the API
> of functions defined long ago to make them more regular (e.g. stop
> returning borrowed references); by now this would mostly mean creating
> more work for the PyPy team to track and adapt to the changes, with no
> real benefits.
I like this idea. Such a third-party project would help, for example,
when writing two versions of a C module: one that uses CPython internals
indiscriminately and another that uses a "clean" API.
I'd also be more motivated to write two versions if I knew that the
project was supported by the PyPy devs.
Do you think that such an API might be faster than CFFI on PyPy?
Stefan Krah
Overall, I support the efforts to improve the C API, but over the last few weeks I have become worried. I don't want to hold up progress with fear, uncertainty, and doubt. Yet, I would like to be more comfortable that we're all aware of what is occurring and what the potential benefits and risks are.
* Inline functions are great. They provide true local variables, better separation of concerns, are far less kludgy than text-based macro substitution, and will typically generate the same code as the equivalent macro. This is good tech when used within a single source file, where it has predictable results.
However, I'm not at all confident about moving these into header files that are included in multiple target .c files, which need to be compiled into separate .o files and linked to other existing libraries.
With a macro, I know for sure that the substitution is taking place. This happens at all levels of optimization and in a debug mode. The effects are 100% predictable and have a well-established track record in our mature, battle-tested code base. With cross-module function calls, I'm less confident about what is happening, partly because compilers are free to ignore inline directives and partly because the semantics of inlining are less clear when crossing module boundaries (see the sketch after these points).
* Other categories of changes that we make tend to have only a shallow reach. However, these C API changes will likely touch every C extension that has ever been written, some of which are highly tuned but not actively re-examined. If any mistakes are made, they will likely be pervasive. Accordingly, caution is warranted.
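To make the concern concrete, here is a hypothetical, simplified example
of the kind of transformation being discussed (MyObject, MY_REFCNT, and
my_refcnt are made-up names, not actual CPython code):

/* A made-up object header, standing in for the real thing. */
typedef struct { long ob_refcnt; } MyObject;

/* Before: a text-substitution macro in a header.  The substitution is
   guaranteed to happen at every call site, at every optimization level
   and in debug builds. */
#define MY_REFCNT(op) (((MyObject *)(op))->ob_refcnt)

/* After: a static inline function in the same header.  Every .c file
   that includes the header gets its own private copy; the compiler
   *may* inline it, but it is free to emit a real function call
   instead (for example at -O0). */
static inline long
my_refcnt(MyObject *op)
{
    return op->ob_refcnt;
}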
My expectation was that the changes would be conducted in experimental branches. But extensive changes are already being made (or are about to be made) on the 3.8 master. If, a year from now, we decide that the changes were destabilizing or that the promised benefits didn't materialize, they will be difficult to undo because there are so many of them and because they will be interleaved with other changes.
The original motivation was to achieve a 2x speedup in return for significantly churning the C API. However, the current rearranging of the include files and macro-to-inline-function changes only give us churn. At the very best, they will be performance neutral. At worst, formerly cheap macro calls will become expensive in places that we haven't thought to run timings on. Given that compilers don't have to honor an inline directive, we can't really know for sure -- perhaps today it works out fine, and perhaps tomorrow the compilers opt for a different behavior.
Maybe everything that is going on is fine. Maybe it's not. I am not expert enough to know for sure, but we should be careful before green-lighting such an extensive series of changes directly to master. Reasonable questions to ask are: 1) What are the risks to third-party modules? 2) Do we really know that the macro-to-inline-function transformations are semantically neutral? 3) If there is no performance benefit (none has been seen so far, nor is any promised in the pending PRs), is it worth it?
We do know that the PyPy folks have had their share of issues with the C API, but I'm not sure that we can make any of this go away without changing the foundations of the whole ecosystem. It is inconvenient for a full GC environment to interact with the API for a reference-counted environment -- I don't think we can make this challenge go away without giving up reference counting. It is inconvenient for a system that manifests objects on demand to interact with an API that assumes that objects have identity and never move once they are created -- I don't think we can make this go away either. It is inconvenient for a system that uses unboxed data to interact with our API, where everything is an object that includes a type pointer and reference count -- we have provided an API for boxing and unboxing, but the trip back and forth is inconveniently expensive -- I don't think we can make that go away either, because too much of the ecosystem depends on that API. There are some things that can be mitigated, such as the challenges with borrowed references, but that doesn't seem to have been the focus of any of the PRs.
In short, I'm somewhat concerned about the extensive changes that are occurring. I do know they will touch substantially every C module in the entire ecosystem. I don't know whether they are safe or whether they will give any real benefit.
FWIW, none of this is a criticism of the work being done. Someone needs to think deeply about the C API or else progress will never be made. That said, it is a high-risk project with many PRs going directly into master, so it does warrant having buy-in that the churn isn't destabilizing and will actually produce a benefit that is worth it.
Raymond
Victor Stinner wrote:
> Moreover, I failed to find anyone who can explain to me how the C API is
> used in the wild, which functions are important or not, what the C API is, etc.
In practice people desperately *have* to use whatever is there, including
functions with underscores that are not even officially in the C-API.
I have to use _PyFloat_Pack* in order to be compatible with CPython; I
need PySlice_Unpack() etc.; I need PyUnicode_KIND() and
PyUnicode_AsUTF8AndSize(); and I *wish* there were PyUnicode_AsAsciiAndSize().
In general, in daily use of the C-API I wish it were *larger* and not smaller.
I often want functions that return C values instead of Python values, or
functions that take C values instead of Python values.
The ideal situation for me would be a lower-layer library, say libcpython.a,
that has all those functions like _PyFloat_Pack*.
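For instance, the kind of call such a library would make public (a
sketch; _PyFloat_Pack8 is private, and the signature shown is the one I
know from the CPython 3.7 headers, so treat it as an assumption):

#include <Python.h>

/* Pack a C double into the 8-byte IEEE 754 format that marshal and
 * pickle use on the wire.  There is no public equivalent of
 * _PyFloat_Pack8. */
static int
pack_double_le(double x, unsigned char out[8])
{
    return _PyFloat_Pack8(x, out, 1);  /* 1 = little-endian */
}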
It would be an enormous amount of work though, especially since the status quo
kind of works.
Stefan Krah
Hi,
The current C API of Python is both a strength and a weakness of the
Python ecosystem as a whole. It's a strength because it makes it
possible to quickly reuse a huge number of existing libraries by writing
glue code for them. It made numpy possible, and that project is a big
success!
It's a weakness because of its maintenance cost, because it prevents
optimizations, and more generally because it prevents experimenting with
Python internals.
For example, CPython cannot use tagged pointers, because the existing
C API is heavily based on the ability to dereference a PyObject*
pointer and directly access members of objects (like PyTupleObject).
For example, Py_INCREF() modifies PyObject.ob_refcnt *directly*. Nor is
it possible to use a Python compiled in debug mode with C extensions
compiled in release mode, because the ABI is different in debug mode.
As a consequence, nobody uses the debug mode, even though it is very
helpful for developing C extensions and investigating bugs.
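To illustrate (a simplified sketch, not the real definitions; MY_INCREF
and my_incref are made-up names, and the real Py_INCREF also updates
debug counters in some builds):

#include <stdint.h>
#include <Python.h>

/* Approximately what the macro does today: dereference the pointer
 * and touch ob_refcnt directly, baking the PyObject layout into
 * every compiled extension. */
#define MY_INCREF(op) (((PyObject *)(op))->ob_refcnt++)

/* With a tagged pointer such as ((42 << 1) | 1) encoding a small int,
 * the dereference above would crash.  A hypothetical opaque function
 * could check the tag bit first: */
static inline void
my_incref(PyObject *op)
{
    if (((uintptr_t)op & 1) == 0) {  /* not a tagged immediate */
        op->ob_refcnt++;
    }
}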
I also consider that the C API creates too much work for PyPy (for
their "cpyext" module). A better C API (one not leaking implementation
details) would make PyPy more efficient (and simplify its
implementation in the long term, once support for the old C API can be
removed). For example, PyList_GetItem(list, 0) currently converts all
items of the list to PyObject* in PyPy, which can waste memory if only
the first item of the list is needed. PyPy has much more efficient
storage for lists than an array of PyObject*.
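For illustration, here is the usual pattern on the extension side (real
API; the comment describes the cpyext cost as I understand it):

#include <Python.h>

static PyObject *
first_item(PyObject *list)
{
    /* PyList_GetItem returns a *borrowed* reference into the list's
     * internal PyObject* array.  On CPython this is a cheap array
     * read; on PyPy's cpyext, honoring this contract means the whole
     * list gets mirrored as an array of PyObject*, even though only
     * item 0 is requested. */
    PyObject *first = PyList_GetItem(list, 0);
    if (first == NULL) {
        return NULL;       /* error is set, e.g. index out of range */
    }
    Py_INCREF(first);      /* upgrade to a strong reference */
    return first;
}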
I wrote a website to explain all these issues with much more details:
https://pythoncapi.readthedocs.io/
I identified "bad APIs" like using borrowed references or giving
access to PyObject** (ex: PySequence_Fast_ITEMS).
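For example, the PySequence_Fast_ITEMS pattern looks like this (real
API):

#include <Python.h>

static Py_ssize_t
count_none(PyObject *iterable)
{
    PyObject *fast = PySequence_Fast(iterable, "expected a sequence");
    if (fast == NULL) {
        return -1;
    }

    /* PySequence_Fast_ITEMS hands out a raw pointer into the object's
     * internal storage.  An implementation that does not store items
     * as a contiguous PyObject* array (PyPy, or a CPython with tagged
     * pointers) cannot support this cheaply. */
    PyObject **items = PySequence_Fast_ITEMS(fast);
    Py_ssize_t n = PySequence_Fast_GET_SIZE(fast);

    Py_ssize_t count = 0;
    for (Py_ssize_t i = 0; i < n; i++) {
        if (items[i] == Py_None) {
            count++;
        }
    }
    Py_DECREF(fast);
    return count;
}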
I already wrote an (incomplete) implementation of a new C API which
doesn't leak implementation details:
https://github.com/pythoncapi/pythoncapi
It uses an opt-in define (Py_NEWCAPI -- I'm not sure about the name)
to get the new API. The current C API is unchanged.
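From an extension author's point of view, opting in would look
something like this (a sketch; Py_NEWCAPI is the tentative name, and
the exact mechanism may change):

/* Define Py_NEWCAPI before including Python.h to select the new,
 * stricter API; without it, nothing changes.  Under the new API,
 * functions that return borrowed references or expose PyObject**
 * would simply not be declared, so code relying on them fails at
 * compile time instead of silently depending on implementation
 * details. */
#define Py_NEWCAPI
#include <Python.h>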
Ah, important points. I don't want to touch the current C API nor make
it less efficient. And compatibility in both directions (current C API
<=> new C API) is very important to me. There is no plan for a
"Python 4" that would break the world and *force* everybody to upgrade
to the new C API or stay on Python 3 forever. No. The new C API must
be an opt-in option; the current C API remains the default and will
not be changed.
I have different ideas for the compatibility part, but I'm not sure
yet which options are best.
My short-term goal for the new C API is to ease experimentation with
projects like tagged pointers. Currently, I have to maintain the
implementation of a new C API, which is not really convenient.
--
Today I tried to abuse the Py_DEBUG define for the new C API, but it
seems to be a bad idea:
https://github.com/python/cpython/pull/10435
A *new* define is needed to opt in to the new C API.
Victor
In this PR [https://github.com/python/cpython/pull/3382], "Remove reference to
address from the docs, as it only causes confusion", opened by Chris
Angelico, there is a discussion about the right term to use for the
address of an object in memory.
If you are interested in the topic, you could comment on it.
If there are no comments, then I think we could close the PR.
Thank you
Stéphane
--
Stéphane Wirtel - https://wirtel.be - @matrixise
Hi all,
When we receive a PR about the documentation, I think it could be
interesting to have a running instance of the doc on a subdomain
of python.org.
For example, pr-10000-doc.python.org or whatever; that way the
reviewers could see the result online.
The workflow would be like that:
New PR -> build the doc (done by Travis) -> publish it to a server ->
once published, the PR is notified with "doc is available at URL".
Once merged -> we remove the doc and the link (hello bedevere).
I am interested in this feature; if you are also interested, tell me.
I would like to discuss a solution with Julien Palard and
Ernest W. Durbin III as soon as possible.
Have a nice day,
Stéphane
--
Stéphane Wirtel - https://wirtel.be - @matrixise