Paul Moore wrote:
Here's an example. PIL handles images (in various formats) in memory, as blocks of binary image data. NumPy provides methods for manipulating in-memory blocks of data. Now, if I want to use NumPy to manipulate that data in place (for example, to cap the red component at 128, and equalise the range of the green component), my code needs to know the format of the memory block that PIL exposes. I am assuming that in-place manipulation is preferable because it avoids making repeated copies of the data, which matters for large images.
Thanks, that looks like a good example. Could you elaborate on it? E.g. what specific image format would I use (could this work for JPEG, even though that format is compressed?), and what specific NumPy routines would I use to implement the capping and equalising? What would the datatype description look like that those tools need to exchange?
Looking at this in more detail, PIL in-memory images (ImagingCore objects) have either an image8 field (UINT8 **) or an image32 field (INT32 **); they have separate fields for pixelsize and linesize.

In the image8 case, there are three options:
- each value is an 8-bit integer (IMAGING_TYPE_UINT8) (1)
- each value is a 16-bit integer, either little-endian (2) or big-endian (3) (IMAGING_TYPE_SPECIAL, mode either I;16 or I;16B)

In the image32 case, there are five options:
- two 8-bit values per four bytes, namely bytes 0 and 3 (4)
- three 8-bit values (bytes 0, 1, 2) (5)
- four 8-bit values (6)
- a single 32-bit int (7)
- a single 32-bit float (8)
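For illustration, several of those layouts can be spelled as NumPy structured dtypes today. This is only a sketch; the field names ('r', 'g', 'b', 'a') are my own choice, not anything PIL defines, and the point is just that NumPy's dtype machinery can already describe padded and endian-explicit layouts like these:

```python
import numpy as np

# Case (6): four 8-bit values packed into each 32-bit word (e.g. RGBA).
rgba = np.dtype([('r', 'u1'), ('g', 'u1'), ('b', 'u1'), ('a', 'u1')])

# Case (5): three 8-bit values in bytes 0, 1, 2, with byte 3 as padding.
# The dict form lets us give explicit offsets and an overall itemsize.
rgb_padded = np.dtype({'names': ['r', 'g', 'b'],
                       'formats': ['u1', 'u1', 'u1'],
                       'offsets': [0, 1, 2],
                       'itemsize': 4})

# Cases (2) and (3): 16-bit integers with explicit endianness
# (corresponding to PIL's I;16 and I;16B modes).
i16_le = np.dtype('<u2')
i16_be = np.dtype('>u2')
```

Both 32-bit layouts come out with itemsize 4, so either can overlay the image32 block without copying.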
Now, what would be the algorithm in NumPy that I could use to implement capping and equalising?
If PIL could expose a descriptor for its data structure, NumPy code could manipulate it in place without fear of corrupting it. Of course, this can be done by the end user reading the PIL documentation and transcribing the documented format into the NumPy code. But I would argue that it's better if the PIL block is self-describing in a way that avoids the need for a manual transcription of the format.
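A minimal sketch of the status quo being described: the layout is transcribed by hand on the NumPy side, and nothing verifies that it matches what PIL actually produced (the buffer contents and shape below are made up for illustration):

```python
import numpy as np

# Stand-in for a raw 2x2 RGBA buffer obtained from PIL (e.g. Image.tobytes()).
raw = bytes(range(16))

# The dtype and shape are hard-coded from reading the PIL documentation;
# if PIL's internal layout differed, this would silently misinterpret it.
arr = np.frombuffer(raw, dtype=np.uint8).reshape(2, 2, 4)
```

With a self-describing buffer, the dtype, shape, and strides would be supplied by the exporting object rather than repeated by hand here.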
Without digging further, I think some of the formats simply don't allow for the kind of manipulation you suggest, namely all palette formats (which are the single-valued ones, plus the two-band version with a palette number and an alpha value), and greyscale images. So in any case, the application has to look at the mode of the image to find out whether the operation is even meaningful. And then, the application has to tell NumPy somehow what fields to operate on.
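That mode check might be as simple as the following sketch (the set of modes listed is illustrative, not an exhaustive or authoritative list of where the operation makes sense):

```python
def can_cap_red(mode):
    """Return True if capping the 'red' band is meaningful for this mode.

    Palette ('P') and greyscale ('L') images store indices or luminance,
    not colour components, so the operation is undefined for them.
    """
    return mode in ('RGB', 'RGBA', 'RGBX')
```

Only after a check like this would the application hand NumPy a field description saying which byte of each pixel to operate on.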
To do this *without* needing the PIL and NumPy developers to co-operate needs an independent standard, which is what I assume this PEP is intended to provide.
OK, I now understand the goal, although I'd still like to understand this use case better.